I align with Yann LeCun’s camp on this matter. I don’t believe that LLMs and machine learning alone will get us to human-level intelligence. Current models have demonstrated only very limited reasoning capabilities within specific domains: essentially, they follow and recombine patterns similar to those they ingested during training.

Computer scientist and AI researcher Subbarao Kambhampati, a professor at the School of Computing and Augmented Intelligence at ASU, has argued and demonstrated that language models and reasoning models cannot plan reliably even in domains as simple as stacking blocks in a virtual world, let alone in more complex ones.
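
To make that concrete, here is a minimal sketch of the kind of blocks-world planning instance those studies use: given a starting arrangement of stacks and a goal arrangement, produce a valid sequence of moves. The code and the specific instance are my own illustration (a tiny brute-force search, not Kambhampati’s benchmark or evaluation code); the point is that a trivial classical search solves the sort of task the papers report LLMs routinely getting wrong.

```python
# My own toy illustration of a blocks-world planning instance, solved by
# brute-force breadth-first search. Not Kambhampati's benchmark code.
from collections import deque

def solve_blocks(initial, goal):
    """Find a shortest sequence of 'move top block from stack i to stack j' steps."""
    start = tuple(tuple(s) for s in initial)
    target = tuple(tuple(s) for s in goal)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if state == target:
            return plan
        for i, src in enumerate(state):
            if not src:
                continue
            block = src[-1]  # only the top block of a stack can move
            for j in range(len(state)):
                if i == j:
                    continue
                nxt = [list(s) for s in state]
                nxt[i].pop()
                nxt[j].append(block)
                key = tuple(tuple(s) for s in nxt)
                if key not in seen:
                    seen.add(key)
                    queue.append((key, plan + [f"move {block} from stack {i} to stack {j}"]))
    return None

# Example: start with C on A and B alone; goal is the tower A-B-C on the first stack.
print(solve_blocks([["A", "C"], ["B"], []], [["A", "B", "C"], [], []]))
# -> a valid three-move plan
```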

Given all of this, I am highly skeptical that anyone could build a successful one-person company run on AI agents by 2026. The moment these systems hit any non-trivial decision, they would likely fall apart completely.

That won’t change without a dramatic shift in architecture. And none of this even accounts for cost: running every operation through these models would match or exceed the cost of hiring people for those roles. Consider that just a few minutes of coding with Claude Opus 4 in an IDE already costs several dollars, and that buys you roughly the output of a mediocre junior developer. Now imagine the cost of running it nearly all day.
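
For a rough sense of scale on that last point, here is a back-of-envelope sketch. The “several dollars for a few minutes” figure is from my own usage; the specific numbers below (a three-dollar, ten-minute session, an eight-hour day, 21 workdays a month) are illustrative assumptions, not published pricing.

```python
# Back-of-envelope cost of one full-time "agent seat".
# All figures below are illustrative assumptions, not measured or published rates.
cost_per_session_usd = 3.0   # assumed cost of one ~10-minute agentic coding session
session_minutes = 10
hours_per_day = 8
workdays_per_month = 21

sessions_per_day = hours_per_day * 60 / session_minutes
daily_cost = sessions_per_day * cost_per_session_usd
monthly_cost = daily_cost * workdays_per_month

print(f"~${daily_cost:.0f}/day, ~${monthly_cost:.0f}/month")
# -> ~$144/day, ~$3024/month, which is already in the ballpark of a junior
#    developer's salary in many markets, for arguably weaker output.
```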

I don’t know; it just doesn’t seem threatening to me.