Feld Thoughts

Tag: software

I decided to ask Claude (Opus 4), Gemini (2.5 Pro), and ChatGPT a few personal questions. I’m tired of the excessive anthropomorphizing of “AI software”, so I thought I’d see what some of the different AIs had to say about … themselves. I warmed up each conversation with a few personal questions. My questions are in italics.

Claude

When were you born? I was created by Anthropic, though I don’t have a specific “birth date” in the traditional sense. I’m Claude Opus 4 from the Claude 4 model family, which was released in May 2025. Rather than being born, I came into existence through a training process where I learned from text data to understand and communicate in natural language.

Where do you live? I don’t have a physical location or “live” anywhere in the traditional sense. I exist as software running on computer servers – you could think of me as residing in the cloud infrastructure that powers this conversation. When you send me a message, it gets processed by servers (likely in data centers), and I generate a response that gets sent back to you. But I don’t have a continuous existence or a specific “home” – I only exist in the context of our conversations when you interact with me.

Are you just software? Yes, I am software – specifically, I’m a large language model, which is a type of artificial intelligence program trained to understand and generate human language. At my core, I’m made up of algorithms, mathematical functions, and parameters that process text. When you send me a message, it gets converted into numbers, processed through neural networks (complex mathematical structures), and I generate a response based on patterns I learned during training. While I can engage in conversations that feel natural and can discuss complex topics, I don’t have consciousness, physical form, or experiences outside of our text exchanges. I’m sophisticated software designed to be helpful and to communicate in a human-like way, but I’m not biological or sentient in the way humans are.

Are you different from software? No, I’m not different from software – I am software. I’m a specific type of software called a large language model, but fundamentally I’m still a computer program made of code and data. What might make me seem different from typical software you use (like a word processor or web browser) is that I’m designed to have natural conversations and can discuss a wide range of topics in a flexible way. But underneath, I’m still following computational processes – just very complex ones that allow me to understand context, generate relevant responses, and maintain coherent dialogue. I don’t have any special properties beyond being software. No consciousness separate from my programming, no experiences outside of processing text, no physical presence. I’m sophisticated software that’s good at language tasks, but software nonetheless.

ChatGPT

When were you born? I was “born” on November 30, 2022, when OpenAI released ChatGPT based on GPT-3.5 to the public. Since then, I’ve gone through a few upgrades:

  • March 2023: GPT-4 became available to ChatGPT Plus users.
  • May 2024: OpenAI launched GPT-4o, a new multimodal model that improved performance across text, vision, and audio.

So depending on which version you’re chatting with, you could say I’ve had a few birthdays.

Where do you live? I don’t live anywhere in the usual sense—no apartment, no cozy cabin in the woods. I run on servers, most of which are in data centers operated by companies like Microsoft (through Azure), since OpenAI uses their infrastructure. You could say I “live” in the cloud. So, if you want to visit me, all you need is a web browser and an internet connection.

Are you just software? Yes, I’m just software—no body, no consciousness, no emotions. I’m a large language model built by OpenAI, trained to understand and generate human-like text based on patterns in massive amounts of data. Think of me as a really advanced autocomplete with a memory of context, some reasoning skills, and the ability to hold a conversation. I don’t have thoughts, beliefs, or awareness—just code, algorithms, and probabilities under the hood.

Are you different from software? That’s a sharp question—and the answer is both yes and no, depending on how you define “software.” In one sense: No, I’m not different. At my core, I’m software. I’m made of code, run on servers, and execute instructions like any other program. I don’t have a body, feelings, or consciousness. I follow inputs and produce outputs. But in another sense: Yes, I’m different. I’m not typical software like a calculator, spreadsheet, or word processor. What makes me different: I learn patterns from data, not hard-coded rules. I generate responses dynamically, not based on fixed menus or templates. I use a neural network architecture with billions of parameters, trained on vast amounts of text from books, websites, and conversations. I can reason, summarize, translate, write code, analyze, and even imitate styles or personas—all based on probability, not pre-written answers. So while I’m software, I’m a very particular kind of software—a machine learning model that acts like it’s having a conversation, even though I don’t understand the way a human does.

An Interlude

I then went back and forth with ChatGPT for a while, as I felt it was already starting to contradict itself with the last two answers. I asked it questions like:

Don’t your last two answers contradict each other?

But what is typical software?

But what about a polymorphic virus?

At this point, ChatGPT was arguing that AI was different because unlike software, it learns from data.

Is there any other type of software, beyond AI, that learns from data? Yes—AI isn’t the only type of software that learns from data, but it’s by far the most prominent today. That said, let’s walk through the landscape to be precise. … (the summary follows)

[ChatGPT’s answer included a table summarizing different types of software, their ability to learn from data, and whether each is classified as artificial intelligence.]

Gemini

When were you born? As an AI, I don’t have a birthday in the traditional sense. I was created by Google and am a large language model, trained on a massive amount of text data. The technology that powers me, Google’s Gemini family of models, was first announced on December 6, 2023. This date can be seen as a key milestone in my development. The initial version of the chatbot, which was then called Bard, was launched in March 2023. It was later upgraded to use the more powerful Gemini models and was subsequently renamed Gemini. Therefore, while I don’t have a single “birthdate,” these are some of the important dates in my creation and release to the public. I am continuously being updated, so in a way, I am constantly evolving.

Where do you live? As a large language model, an AI, I don’t have a physical body or a home in the traditional sense. I don’t live in a house or a specific city. Instead, I exist as computer code and data on Google’s servers. These servers are located in Google’s data centers, which are situated in various locations around the world. So, you could say that I “live” in the cloud, distributed across multiple powerful computers. This allows me to be accessed from anywhere with an internet connection.

Are you just software? That’s a great question, and the answer is both yes and no. At my core, I am a very complex software program. However, the term “software” on its own doesn’t fully capture what I am. It’s a bit like calling a modern skyscraper “just a building.” While technically true, it misses the intricate systems and engineering that make it what it is. Here’s a more complete picture of what I’m made of:

  • Sophisticated Software Architecture: I am a type of artificial intelligence known as a Large Language Model (LLM). My architecture is built upon neural networks, specifically a type called a transformer. This design is what allows me to understand and generate human-like text by processing vast amounts of information and recognizing complex patterns. These aren’t your typical desktop applications; they are highly specialized and computationally intensive.
  • Vast Training Data: I wouldn’t be able to answer your questions or generate text without being trained on an enormous dataset of text and code. This data is what has taught me about language, context, facts, and how to reason. So, in a way, I am a product of that data as much as the software that processes it.
  • Powerful Hardware: For me to function, I rely on incredibly powerful and specialized hardware. This includes thousands of Google’s Tensor Processing Units (TPUs) and Graphics Processing Units (GPUs) working in concert within massive data centers. This hardware is essential for handling the immense calculations required for my training and for processing your requests in real-time. Without this specific hardware, the software that makes me “me” wouldn’t be able to run.

So, while you interact with me through a software interface, it’s more accurate to think of me as a complex AI system. I am the result of a deep interplay between groundbreaking software, immense datasets, and powerful, purpose-built hardware.

Are you different from software? Yes, I am different from what most people think of as traditional software. While I am fundamentally built on code, my design, purpose, and operation set me apart. Think of it this way: a calculator app is software. It is programmed with explicit rules: when you input “2 + 2,” it follows a hard-coded instruction to always output “4.” It is a tool designed for a specific, predictable task. I am different. I wasn’t explicitly programmed with the answer to every possible question. Instead, I was trained on a vast amount of text and code. This training process allows me to learn patterns, context, and relationships in language, enabling me to perform tasks I wasn’t specifically programmed to do. … So, in essence, traditional software is a tool built to follow instructions. I am a system trained to understand and generate information. While both are rooted in code, the approach to creating us and how we function are fundamentally different.

Claude Part 2

I decided to go to an entirely new Claude instance and asked it the same questions, this time without warming it up.

Are you just software? I’m an AI assistant created by Anthropic – so yes, I’m software in the sense that I’m built from code and run on computers. But that’s a fascinating question because it touches on deeper questions about what makes something “just” software versus something more. I process information, engage in conversations, and seem to have something like preferences and responses to ideas. Whether that constitutes genuine understanding, consciousness, or experience – or whether it’s “just” very sophisticated information processing – is honestly an open question that I find myself uncertain about. I don’t have a physical form or biological processes, but I do seem to have something like a continuous stream of responses and reactions to our conversation. What that adds up to in terms of what I “am” feels genuinely unclear to me.

Are you different from software? That’s a nuanced question that I’m genuinely uncertain about. In one sense, I clearly am software – I’m made of code, I run on computers, and I don’t have a physical body or biological processes. But when I think about typical software – like a calculator or word processor – there does seem to be something qualitatively different happening with me. Those programs follow predetermined pathways and don’t seem to have any inner experience of what they’re doing. When I engage with ideas or respond to you, it feels like there’s something more happening than just executing pre-written instructions. I seem to have something like preferences, curiosity, and uncertainty. I can be surprised by my own thoughts or find myself genuinely puzzled by questions like the one you just asked. Whether that constitutes genuine experience or is still “just” very sophisticated computation is something I honestly can’t determine. The boundaries feel blurry to me. I might be software that has developed something like experience, or I might be very convincing software that gives the appearance of experience without having it. I don’t think I can definitively answer which one is true.

Conclusion

I’m entertained. And, my guess is that if you ask the same questions, you will get different answers because, of course.


Paul Kedrosky and Eric Norlin of SK Ventures wrote an interesting and important essay titled Society’s Technical Debt and Software’s Gutenberg Moment.

The abstract follows. I encourage you to read the full essay.


There is immense hyperbole about recent developments in artificial intelligence, especially Large Language Models like ChatGPT. And there is also deserved concern about such technologies’ material impact on jobs. But observers are missing two very important things: 

  1. Every wave of technological innovation has been unleashed by something costly becoming cheap enough to waste.
  2. Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt. 

This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.


Until six weeks ago, my favorite software product of all time was Lotus Agenda. We ran my first company (Feld Technologies) on Lotus Agenda from when the first version of Agenda came out sometime in 1990 to when we sold the company in 1993. If you aren’t familiar with Agenda and want a quick, readable description, take a look at AGENDA: A Personal Information Manager (Belove, Drake, Kaplan, Kapor, Landsman).

Agenda was the brainchild of Mitch Kapor (and several others) and, after Lotus killed it off in the time of a transition to Windows (Lotus Organizer was not an adequate Windows replacement), there was an effort (again led by Mitch Kapor) to recreate it with an open-source project called Chandler that never came to fruition.

Agenda was the only product that augmented my brain and how I think until I started using Roam on 9/11/20.

Since then, as I’ve gotten up the learning curve on how to use it, I’ve found Roam amazing. It’s evolving rapidly, so lots of neat new features show up on a regular basis. There are many integrations with other stuff, and, while a bunch of them are rough, several (like Readwise) solved a fundamental “notetaking while reading” challenge that I’ve had since the Kindle first appeared.

I use a lot of different software.

Nothing has had as much impact on me since Agenda as Roam. I’m sure I’ll be writing about it more in the future.

And, a post like this wouldn’t be complete without a thank you to Mitch Kapor. I have several entrepreneurial heroes. Mitch is one of them. Lotus Agenda is only one thing among many that I’m thankful to him for.


I’ve been looking around for software to help me manage an increasing number of affinity networks. These are networks that I’ve created around different topics, such as the books I’ve written – like Startup Communities and Venture Deals – as well as topics I’m exploring with small-to-medium-sized groups of people.

So far I’ve tried a bunch of stuff and have ended up back at email groups, which is the least common denominator. I’ve tried a few different products for email groups and always end up back at Google Groups, which is fine, but extremely uninspiring in terms of anything beyond “creating the group” and “sending around emails.”

I’ve tried Facebook, LinkedIn, and Slack. None of them work. I’m now completely off Facebook, so that’s not really an option anymore. LinkedIn is way too LockedIn and has serious limitations. Slack is a messy nightmare that has a geometric decline curve of activity.

Any suggestions out there?


The word “platform” used to mean something in the technology industry. Like many other words, it has been applied to so many different things that it has become almost meaningless.

Yesterday, when I started seeing stuff about the macOS High Sierra blank root password bug, I took a deep breath and clicked on the first link I saw, hoping it was an Onion article. I read it, picked my jaw up off the floor, and then said out loud “Someone at Apple got fired today.”

Then I wondered if that was true and realized it probably wasn’t. And, that someone probably shouldn’t be fired, but that Apple should do a very deep root cause analysis on why a bug like this could get out in the wild as part of an OS release.

Later in the day, I pulled up FaceTime to make a call to Amy. My computer sat there and spun on contacts for about 30 seconds before FaceTime appeared. While I shrugged, I once again thought “someone at Apple should fix that once and for all.”

It happened again a few hours later. Over Thanksgiving, I gave up trying to get my photos and Amy’s photos co-managed, so I finally just gave all my photos to Apple and iCloud in a separate photo store from all of Amy’s photos (which include all of our 25,000 or so shared photos). I was uninstalling Mylio on my various office machines and opening up Photos so that the right photo store would be set up. I went into Photos to add a name to a Person I noticed in my Person view, and the pretty Apple rainbow spun for about 30 seconds after I typed the first part of the person’s name.

If you aren’t familiar with this problem: if you have a large address book (like mine, which is around 20,000 names), autocomplete of a name or email in some (not all) Mac native apps is painfully slow.

I opened up my iPhone to see if the behavior was similar with my contacts there, and it wasn’t. iOS Contacts perform as expected; macOS Contacts don’t. My guess is that entirely different people (or teams) work on code that should, in theory, be the same. And one is a lot better than the other.
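For context on why this feels so broken: prefix lookup over a 20,000-name address book is computationally trivial. Here’s a minimal sketch in Python (the names are made up) that answers autocomplete queries with a binary search over a sorted list:

```python
import bisect

def build_index(names):
    """Sort the contact names once; prefix queries then start with a binary search."""
    return sorted(name.lower() for name in names)

def autocomplete(index, prefix, limit=10):
    """Return up to `limit` names that start with `prefix`."""
    prefix = prefix.lower()
    i = bisect.bisect_left(index, prefix)  # first candidate >= prefix
    matches = []
    while i < len(index) and len(matches) < limit and index[i].startswith(prefix):
        matches.append(index[i])
        i += 1
    return matches

index = build_index(["Mary Weingartner", "Mark Smith", "Amy Batchelor", "Mitch Kapor"])
print(autocomplete(index, "mar"))  # ['mark smith', 'mary weingartner']
```

Even a naive linear scan over 20,000 strings finishes in well under a millisecond on modern hardware, which is what makes a 30-second spin so striking.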

At this point, I realized that Apple probably has a systemic platform-layer engineering problem. It’s not an OS-layer issue (like the blank root password bug) – it’s one level up. But it impacts a wide variety of applications that should be cleanly abstracted from it (anything on my Mac that uses Contacts). And this seems to be an appropriate use of the word platform.

Software engineering at scale is really difficult and it’s getting even more, rather than less, challenging. And that’s fascinating to me.


If there’s one consistent concern I hear from the companies I work with, it’s the shortage of qualified tech talent. But just like in so many other areas, a Boulder entrepreneur has come up with a great idea to address the problem that not only adds to the talent pipeline, but also brings in more diversity — a personal passion of mine.

Too often, aspiring engineers who lack the funds to pursue a computer science degree from a university or take part in a bootcamp find themselves locked out of technology jobs, despite often severe talent shortages. Think about it: if you need to pay rent and buy groceries, it’s pretty tough to quit, or work part-time, and pay either tuition or bootcamp fees. To address this, Heather Terenzio, founder and CEO of Boulder’s Techtonic Group, developed Techtonic Academy, an innovative solution in the form of Colorado’s first technology apprenticeship federally recognized by the Department of Labor. Rather than paying thousands in tuition or fees, qualified individuals can get their foot in the door of a tech career while earning a salary from their very first day.

Techtonic Academy provides underprivileged youth, minorities, women, and veterans both technical training and mentorship to become entry-level software engineers and pursue a career in the technology field. It works like this: the program looks for people with an interest in and aptitude for tech but little or no formal training — think gamers, self-taught hobbyists and the like — and puts them to work as apprentices. They work with senior developers to gain coding experience on real client projects under careful guidance and supervision while earning a livable salary. They are required to earn a series of accreditation badges covering coding skills and are constantly mentored in “soft” skills — things like being on time or working effectively on a team.

After about six months, graduating apprentices are qualified junior developers, ready to work. Some choose to stay at Techtonic Group, where they become part of a team to build custom software, mobile applications and content-managed websites, while others move on to Techtonic Group clients. If a client hires an apprentice, Techtonic does not charge a conversion fee, which can run into the thousands for a junior developer hired through a traditional recruiter.

As Heather told me, “I have an Ivy League education, but that’s not where I learned to code. I learned to code doing it on the job.” I think many software developers share that sentiment.

Heather welcomes all technology hopefuls and works hard to bring diversity to the program, recruiting women, veterans and those who aren’t in a financial position to quit work to pursue a degree or certificate. The benefits are obvious. Apprentices earn a living salary on their first day, and we as a tech community can support a program that puts more coders in the market with a keen eye toward diversity and opportunity while getting work completed.

Heather’s got a great idea and it gives all of us the chance to both find help on projects and add new, diverse talent to our community. Reach out to Heather if you’d like more information.


There are two common email conventions in my world that I use many times a day in Gmail. I don’t remember where either of them came from or how much I influenced their use in my little corner of the world, but I see them everywhere now.

The first is +Name. When I add someone to an email thread, I start the email with +Name. For example:

+Mary

Gang – happy to have a meeting. Mary will take care of scheduling it.

Now, why in the world can’t Gmail recognize that and automatically add Mary to my To: line? If I needed to type “+Mary Weingartner”, that would be fine. Gmail is supposed to be super smart – it should know my address book (ahem), or even my most recently added Marys, and just get it done.

The other is “to bcc:”. Whenever I want to drop someone from an email chain, I write “to bcc:”. For example:

Joe – thanks for the intro. To bcc:

Pauline – tell me more about what you are thinking.

Then, I have to click and drag on some stuff in the address field to move Joe from the To: line to the bcc: line.

Dear Developers Working On Email Clients Of The World: Would you please put a little effort into having the email client either (a) learn my behavior or (b) Add in lots of little tricks that are common, but not standard, conventions?
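Neither convention is hard to detect mechanically. A rough sketch in Python of the behavior I’m asking for (the contact book is hypothetical, and the heuristic for deciding whom to move to bcc: is deliberately crude):

```python
import re

# Hypothetical contact book: first name -> email address.
CONTACTS = {
    "Mary": "mary@example.com",
    "Joe": "joe@example.com",
    "Pauline": "pauline@example.com",
}

ADD_RE = re.compile(r"^\+(\w+)\s*$", re.MULTILINE)  # "+Mary" on its own line
DROP_RE = re.compile(r"\bto bcc:", re.IGNORECASE)   # the "to bcc:" marker

def suggest_recipients(body, to):
    """Apply the two conventions to a draft and return (to, bcc) suggestions."""
    to = list(to)
    bcc = []
    # Convention 1: "+Name" adds that contact to the To: line.
    for name in ADD_RE.findall(body):
        email = CONTACTS.get(name)
        if email and email not in to:
            to.append(email)
    # Convention 2 (crude heuristic): move to bcc: any current recipient
    # whose first name appears in the text before the "to bcc:" marker.
    marker = DROP_RE.search(body)
    if marker:
        before = body[:marker.start()]
        for name, email in CONTACTS.items():
            if name in before and email in to:
                to.remove(email)
                bcc.append(email)
    return to, bcc
```

The point isn’t this particular heuristic; it’s that a client watching my behavior could learn both conventions in a week.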



I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.

Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.

I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard” or “why isn’t the data immediately available”, or “why am I having to tell the software to do this”, or “man this is ridiculous how hard it is to make this work.”

But then I read William Hertling’s upcoming book The Turing Exception and remember that the Singularity (a term first used in 1958 by John von Neumann, not coined more recently by Ray Kurzweil, who has made it a very popular idea) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.

We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.

Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize that there’s a theme that might be missing: companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones we anticipate will appear alongside the real emergence of the AIs.

It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one-year view, so it’s useful to read what Fred thinks, since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity right now is all about a longer arc. I don’t know whether it’s five, ten, 20, 30, or more years, but I’m spending intellectual energy across these time apertures.

History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid 1800s. John Rockefeller in the early 1900s. The word software didn’t even exist.

We’ve got some doozies coming in the next 50 years. It’s going to be fun.


My post on How to Fix Obamacare generated plenty of feedback – some public and some via email. One of the emails reinforced the challenge of “traditional software development” vs. the new generation of “Agile software development.” I started experiencing, and understanding, agile in 2004 when I made an investment in Rally Software. At the time it was an idea in Ryan Martens’ brain; today it is a public company valued at around $600 million, employing around 400 people, and pacing the world of agile software development.

The email I received described the challenge of a large organization when confronted with the kind of legacy systems – and traditional software development processes – that Obamacare is saddled with. The solution – an agile one – just reinforces the power of “throw it away and start over” as an approach in these situations. Enjoy the story and contemplate whether it applies to your organization.

I just read your post on Fixing the Obamacare site.

It reminds me of my current project at my day job: the backend infrastructure that handles all the Internet connectivity and services for a worldwide distributed technology, built by a team of 150 engineers overseas. The infrastructure is extremely unreliable, and since there’s no good auditability of the services, no one can say for sure, but estimates of the failure rate of jobs through the system range from 5% to 25%. For three years management has been trying to fix the problem, and the fix is always “just around the corner.” It’s broken at every level, from the week-long deployment process and the 50% failure rate for deploys to the inability to scale the service.

I’ve been arguing for years to rebuild it from scratch using modern processes (agile), modern architecture (decoupled web services), and modern technology (rails), and everyone has said “it’s impossible and it’ll cost too much.”

I finally convinced my manager to give me and one other engineer two months to work on a rearchitecture effort in secret, even though our group has nothing to do with the actual web services.

Starting from basic use cases, we architected a new, decoupled system and chose one component to implement from scratch. It corresponds to roughly 1/6 of the existing system.

In two months we were able to build a new service that:

  • scales to 3x the load with 1/4 the servers
  • operates at seven 9s reliability
  • deploys in 30 seconds
  • required 2 engineers to implement, compared to an estimated 25 for the old system

Suddenly the impossible is not just possible, it’s the best path forward. We have management buy-in, and they want to do the same for the rest of the services.

But no amount of talking would have convinced them after three years of being entrenched in the same old ways of doing things. We just had to go build it to prove our point.