AI Thoughts

Posted on March 4, 2026

Bear cases

Every other podcast these days is talking up the wonders of AI and how it is just a matter of time before everything radically changes. Meanwhile, as soon as I peel my eyes and ears away from my phone and look out the window, I see that really nothing has changed. The singularity, whenever and wherever it arrives, isn’t here yet.

As a fun intellectual exercise, I thought I’d think through some bear cases for AI companies.

Model Stagnation. If models stop advancing past a certain point (or users perceive them to have stopped advancing), the competitive environment can shift to a race to the bottom. Those tens of billions of dollars of investment and the high operating expenses become quite a liability.

Chip Shortage. This could take the form of supply chain issues (remember that term from the Covid years?) or simply of AI’s scale exceeding the world’s chip production capacity. Chip prices get driven up to the point that additional investment has a negative expected ROI. Either training slows, or inference costs go up. That helps profits in the short term, but puts the AI companies in a risky position long term if someone else can still make chips.

Model Supremacy. If one lab’s model starts outstripping all the other models, they’ll get the lion’s share of the revenue, and thus the investment, and be able to spend the most on training the next model, maintaining their lead. All the other labs’ revenue dries up.

Investment Dries Up. AI companies will lose their luster at some point as surely as social media companies did. Investors will start having more interest in the return on their investment than in making more investments. When the music stops, not everyone will have a chair to sit in. Will each lab be able to make the transition to living off of what they’ve built so far?

Cheap Models. Models that are cheap to run (no licensing fees, cheap hardware) have been getting good. A cheap model could plausibly get good enough that everyone stops paying $20 or $200 a month for their subscriptions. The SOTA models will of course be better, but when only the most intensive workloads need them, it becomes very hard to recoup the training cost.

Legal. At this point it seems that the copyright ship has sailed and poses no risk to AI labs. Still, you could imagine laws limiting their power use or requiring them to pay well above market for their power. Or laws governing the safety of models which are so severe that the models become useless. Real danger from the legal side seems unlikely; still, Anthropic’s recent woes prove that it is not negligible.

It’s interesting to think of which of these bear cases do and do not apply to Google, which has a large source of income outside of AI.

Expected in the next decade

Some of these will probably happen much sooner than a decade from now.

A good robot model. It seems reasonable to expect that a model will be developed which you can plunk into some off-the-shelf humanoid robot hardware, and it will start being practically useful. They’ll get used in cases where the robot can either charge often or be tethered to a power source. Or maybe swapping batteries will be practical. Presumably the first uses will be cases where the cost of using humans (wages, danger, safety equipment) is high.

Continual Learning. You can teach someone who has never played the guitar before how to make a C chord. You’ll have to give them very detailed instructions and corrections about what to do with each finger and how to fix buzzing strings. It might take them a minute or two to get it to sound good. Compare that to an experienced guitar player, who can play a C chord without even thinking: their brain is hundreds of times more efficient at that task because they’ve practiced it. AI models don’t get a chance to learn from their practice. They can keep knowledge in their context window the way the guitar student can keep your instructions in their head, but they never become faster or more efficient at a task over time. They don’t “compress” what they’ve experienced into updates to their weights.

Bicycles for the mind. If computers are bicycles for the mind, why aren’t they bicycles for the artificial mind? Agents are starting to make really good use of tools that have been built for humans - particularly CLI tools - but we’ve seen fairly little (besides MCP servers) in the way of code for von Neumann machines written specifically for making AI more powerful.

New Chip Fabs. At some point it will start to make sense to start a new chip fab company to rival TSMC. Or maybe one of the existing fab companies will step up their game and be competitive once again. It seems unlikely, but it is also possible that a new entrant could be more vertically integrated and build its own lithography machines instead of relying on ASML.

A trillion lines of code. Google has some billions of lines of code. That’s the unit we talk about code size in: no one cares how many assembly instructions that is, because assembly isn’t what we interact with. If we interact less directly with code, we’ll stop caring how much of it there is. It will also become easy to produce lots and lots of it. Some company may hit the trillion-line mark. (Based on a quick Google search, the world probably has at least 10 trillion lines of code.) At that scale, how do you check for security issues? We might have to invent faster compilers and other tools to have any hope of even building that much code. When code becomes free, Amdahl’s law says that shipping software doesn’t become free, because all the other costs start to dominate. We’ll have to invest more than ever in making good, fast developer tools.
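The Amdahl's law point can be made concrete with a toy calculation. The 30% cost share below is a made-up illustrative assumption, not data; the point is only that making one slice of the work free bounds the total savings by the size of that slice.

```python
def amdahl_speedup(fraction_improved: float, speedup: float) -> float:
    """Amdahl's law: overall speedup when only part of the work gets faster."""
    return 1.0 / ((1.0 - fraction_improved) + fraction_improved / speedup)

# Hypothetical: writing code is 30% of the cost of shipping software;
# the rest is design, review, testing, deployment, and support.
coding_share = 0.30

# Even if AI makes coding effectively free (infinite speedup),
# shipping only gets about 1.43x cheaper: 1 / (1 - 0.30).
overall = amdahl_speedup(coding_share, float("inf"))
print(f"Overall speedup: {overall:.2f}x")
```

The remaining 70% of the work dominates, which is the argument for investing in the rest of the toolchain rather than just code generation.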

More efficient training. Someone pointed out that when you are working your way through a math textbook, you aren’t trying to predict the next word in the text. Most of your learning comes from doing the exercises. You can innately feel when you don’t know what step to take next in solving a problem and explicitly direct your reading in search of what you’re missing. If you get a problem wrong, you can step back over your own work and figure out what the problem was. You learn through a very different process than an LLM does. That’s a good thing, because LLM training needs far more words than you’ll ever hear or read. The way training works today is fundamentally very inefficient. Models don’t compress reality all that much. They never feel an “aha” moment when they realize that many different observations can be explained with a new mental model. Some lab is sure to figure out how to fix this.

Hopes

There are a few things I hope AI will bring in the not-too-distant future:

More cures. We might not tell AI “go fix migraines. make no mistakes.”, but Google’s DeepMind has already proven that AI can be very useful in understanding biology (e.g. AlphaFold). AI should allow us to find more promising molecules much faster, and have a much higher success rate by sussing out problems in simulation.

Build better businesses. If AI can listen in on every meeting and read every document (which it basically already can), it would have an amazing amount of insight into problems in businesses and be able to solve them. We forgot to tell the team in India about the priority change because they weren’t in that meeting? No worries, the AI noticed this and flagged the issue. Maybe it just shared that context with them without needing to be told. Someone is overloaded and burning out, but not doing a good job of communicating what’s on their plate because they are too stressed to think straight? AI could easily be trained to spot this and help. Decisions aren’t getting made? Well guess what, AI knows all the best organization practices and has read all the management books. Responsibilities shift to a new team? AI has all the history of where the skeletons are buried and which random script you have to run to fix things.

Actually help humans. If algorithms can create echo chambers and make everyone mad at each other, surely AI could be used to help people understand each other (which requires intention, not mere information - which is why the internet by itself didn’t help here). If people want to hang out and make friends, AI could suggest times when people are available, things to do that everyone likes, and goad people out of the house.

Runaway

I don’t fundamentally believe the runaway intelligence (or the “city of geniuses”) case. Partially this is because the amount of electricity we can generate and the number of chips we can make is pretty limited. AI is bits that run on atoms, and atoms are a lot harder to scale than bits.

Models will certainly be useful in creating the next model (I’m sure labs already use one model to prepare training data for the next), the way that computers are a useful tool when designing the next generation of computers. Can an AI be trained that is better than any human at coming up with ways to make the next model better? Probably! (I wonder when it becomes the only logical choice for a lab to start devoting all their effort to building this.)

Will that create AI models that are smarter than humans? I mean, computers have always been smarter than humans in a lot of ways. A computer from 80 years ago can do way more math than I can. One from 40 years ago can remember far more information than I can. Algorithms have been able to design circuits, optimize routes, and predict weather better than any human for a while now. AI models make computers better than humans at many more things, and quite good at many others. It seems reasonable to expect that we’ll eventually make an AI that is better than any human at everything. The singularity appears imminent.

And maybe once you are smarter than humans you can work your way past all the bottlenecks: power, manufacturing, rare minerals, speed of light latency, and keep making more computational power and use it to build smarter models. I think this process doesn’t go on forever, for one of two reasons:

  1. It plateaus. Human brains have some upper limit of intelligence, so it seems that silicon brains should as well. Show me something without an upper limit and I’ll show you a crypto scam.

  2. It keeps getting more intelligent, but each iteration takes longer than the last. The advancements in computation power don’t keep up with the increased needs of each subsequent model, thanks to the pesky limits of physics. The advancements get further and further apart until progress stalls for all practical purposes (or, more realistically, until we decide to stop investing the world’s resources in letting it advance itself).