[ROUNDTABLE] Is something ‘big’ actually happening with AI right now?
On February 10, 2026, Matt Shumer, CEO of Otherside AI, posted his version of an AI manifesto: “Something Big Is Happening”.
Four days later, the X post has 81M impressions, 107K likes, 37K reposts, and 5.9K comments. A big deal for the tech/X community!
It frames the current AI moment as analogous to early COVID: A huge shift is underway, but most people are not tracking it yet.
Some key ideas:
AI has crossed from ‘assistant’ to ‘cognitive substitute.’
AI is now contributing to building better…AI, compressing the timeline between model generations.
Even if it’s unsettling, people should start using AI tools at least an hour a day because the adjustment period to an AI-first work world will be short.
What now?
Naturally, this article generated dozens of thought pieces, X reactions, and takes.
But I know who I want to get initial reactions from: Cities Decoded contributors.
Hearing the perspectives below really sharpened my understanding of the piece’s shortcomings (the bottlenecks it underplays), its highlights (building as the real learning path), and whether this AI wave is really like COVID or not. I hope they can do the same for you.
Ladies and gentlemen of Cities Decoded, welcome back to our roundtable edition: Is something big really happening with AI right now?
The Daily TEA creator (AI and crypto news, AI memes)
This article resonates.
The future is already here, but not evenly distributed.
Anthropic CEO Dario Amodei often emphasizes how fast AI is evolving, and how “most people don’t know how good it has gotten, fast.” Just because you don’t know how advanced AI is doesn’t mean it isn’t. Just because you don’t know how to use it better doesn’t mean those uses don’t exist.
We keep seeing this pattern: teams are designing products around the timing of model improvements. Manus’s chief scientist, Peak Ji, recently shared that while building Manus, the team waited for an upcoming Claude model upgrade and timed their work around it, because they knew it would unlock new functionality—and it did. Same with OpenClaw.
In mid-2025, Peter Steinberger went viral with “Claude Code Is My Computer,” describing Claude Code as his primary interface. At the time, that level of autonomy still wasn’t viable as a product: the models were too brittle, too expensive, and too unreliable.
Less than a year later, users can ask OpenClaw to help check into flights, trade crypto, or even fight insurance claims end-to-end. There’s more to come as new capabilities unlock.
Learning and building are different. Learning through building is more efficient.
Reading to stay up to date and building to stay up to date are completely different. I’ve noticed this in myself. I read so many articles daily about which model is best, what automations AI can do, and best practices for setting up agents.
But it’s only when I actually do it—whether vibe-coding, using n8n to automate my workflow, or coding with Claude Code—that the articles start to register. That’s when the ah-ha moment comes. It comes not when you read, but when you build.
Learning and building go hand in hand. You read to create a knowledge base and concepts that guide your direction. But only when you build do those concepts take on real meaning.
Learn to build. Build to learn.
Driven by passion, not fear.
In an era where everything moves so fast, it’s easy to feel FOMO. It can feel like missing one article means falling behind.
Last year, I read The Last Economy by Emad Mostaque (founder of Stability AI). He argued that you have to constantly run just to stay in place, or you’re already behind. Even thinking about that makes me anxious.
But I believe we should be driven by passion and love, not fear.
“The truth about the American technology business is that it is in fact a sacred artifact of all Man’s existence. Few involved from the inside will see its true archetypal power. But it is one of the most important engines of Man’s destiny. It should be treated as sacred, not merely profitable. It should be cherished as incredibly rare and precious, like a Promethean flame in a fragile fennel stalk carried on into the dark.” — a16z
So beautifully written. What we’re experiencing is extraordinary. With AI, the democratization of knowledge means anyone can learn anything at any time at no cost—as long as you’re willing to learn. “Ask and you shall receive; knock and it will be answered.”
If the purpose of life on earth is to “know thyself”—to know ourselves—then there couldn’t be a better time than now to know the world, understand the universe, and pursue the meaning of life. This is fun and honorable.
We do this not because we’re afraid of losing the race or falling behind, but because we appreciate humanity so much. We value and honor this life so much that we want to make the best use of our time on earth: to know more, to build more, and to know “thyself.”
Let’s do this not out of fear, but out of love. Because at the end of the day, love is the only way and the only answer.
Read the latest from the Daily TEA, ‘Agents, Wallets, and Interplanetary AI?,’ here.
Hillary Zeng — Scaffold (Critical thinking and AI)
I’ll lay my cards out on the table: I am an AI skeptic. I get particularly peeved by articles that condescendingly assert that the revolution is coming and that you—the average person—just don’t know it yet.
Needless to say, the article largely fits into this category. For my critique, though, I’ll focus on the introductory framing: that the AI revolution (crisis?) is comparable to the COVID pandemic.
COVID hit the world like a storm—I still vividly remember being unable to conceptualize the meaning of “quarantine,” and then spending the next three months in exactly that state. What was formerly unimaginable became my everyday reality. But COVID is a virus, and our helplessness stemmed from our biological vulnerability to it.
In contrast, we—as individuals, but also as a society—can choose how we want AI to shape our realities. Accepting unbridled disruption implicitly embraces a wholly laissez-faire approach to regulation. Governments can impose regulations on tech companies. Tech companies can make decisions about how they deploy their products. Implementers have the power to design their operations.
It is not the case that the AI revolution will simply happen, as the COVID pandemic did. COVID and AI are not equivalent as disruptors, and conflating the two undermines human agency.
Read the latest from Hill, ‘Refining the Prompt, Exploring State Machines, Reviewing a Prototype, and Facilitating a Satisfying Ending,’ here.
Justin Curl — AI x Law
I directionally agree with many of Matt’s takes. Too few lawyers, for example, are experimenting with AI given how capable the models have become.
That said, I think he underestimates the non-technical bottlenecks that slow the pace of AI disruption. It’s possible his essay is an intentional overcorrection aimed at shocking people into action (he does acknowledge industries with “licensed accountability” and higher “regulatory barriers” will have more time).
Either way, it’s worth recapping the short- and medium-term bottlenecks to AI disrupting the legal industry.
Organizations don’t know enough about AI use cases to see how it could make them more productive.
AI doesn’t actually save time once you account for how long it takes to check outputs: verification costs are too high.
Regulatory barriers can prevent non-lawyers from accessing capabilities or chill experimentation in business models.
For some, these bottlenecks are a source of comfort. To me, they offer an important place to intervene and shape the development of AI.
Read the latest from Justin, ‘AI Won’t Automatically Make Legal Services Cheaper,’ here. And catch us talking more about AI and the law this Monday here.
Austin Nelleson — COMPUTE/COMPETE (geopolitical AI)
In the same way that, every second, Earth experiences approximately 100 lightning strikes, 4,000,000 emails are sent (really??), 4 babies are born, and the planet moves nearly 18.5 miles through space, every few weeks someone writes a manifesto proclaiming the dawn of the Machine God. It is one of our world’s most sacred truths.
The question is: Are they crazy, or are we for ignoring them? Probably both?
A self-admitted tech acolyte, the author wrote the piece to warn the world (or perhaps just Twitter) of AI’s exponential capability growth. And he’s right that the technology is rapidly encroaching on not just the CS industry, but every digital service-sector job out there. It’s undeniable that anyone and everyone should be, as he suggests, learning not just to use AI chatbots as Google 2.0, but to build their workflows around AI’s strengths and weaknesses.
Yet I hesitate to agree that we’ll see replacement over reinforcement in the next few years. You see, the bottleneck to deployment isn’t AI capability, but human ego. The “AI Race” of the next 5 years will center on which organizations can restructure themselves around AI’s analytical productivity boost and which will be preserved by good ol’ public-sector inefficiency. Humans are, for good or bad, likely here to stay.
My take: The author is looking in the wrong direction. AI won’t break the capitalist system; it’ll just amplify its best and worst. Rather than mass replacement, we should be concerned that members of OpenAI’s Superalignment team have quit, warning:
“The company [emphasizes] financial gain over minimizing the dangers of building ‘AI systems much smarter than us.’”
There will be exploitation, there will be inequity, and there will be profit. So, things will remain much the same—just different.
Read the latest from Austin, ‘COMPUTE/COMPETE #1,’ here.
Myself (Cities Decoded)
Manifestos in the AI space are not new.
You have Altman’s ‘The Gentle Singularity.’
You have Andreessen’s ‘Techno-Optimist Manifesto.’
You even have startups writing manifestos, with Cluely taking a stab.
What’s different about Shumer’s vision-painting is that he articulates a clear and realistic path forward with AI. I guarantee at least one item or development will resonate enough to have you nodding along.
That said, like most people, I don’t agree with everything Shumer says. But that almost doesn’t matter.
What matters far more to me is that there will be a growing gap between people who at least understand the five most likely scenarios of AI development in the next few years and those who don’t. Very little separates these two groups besides the time and effort they can spend staying up to date and building with AI.
So my question after reading a well-written and thought-provoking piece like this one: How do we get 50 more pieces like it, all from different angles?
Maybe we can do more with this publication to support that vision.
Sounds like an idea worth pursuing. Maybe I’ll even write a manifesto about it one day. 🙂
Read the latest from me, ‘How we are celebrating one full year of Cities Decoded,’ here.