AI's Pitfalls

AI is not what the marketing material tells you it is. The companies building it are losing billions. The models make things up. And the path from "impressive demo" to "reliable tool" is far longer than anyone wants to admit.

Beginner · 10 min read

The Money Problem

The AI industry has a dirty secret: almost nobody is making money.

$14 Billion

OpenAI's projected losses in 2026 — despite $20 billion in annualised revenue and 900 million weekly ChatGPT users. The company does not expect to be profitable until 2030.

Here is the structural problem: only 5.5% of ChatGPT's users pay for a subscription. The other 94.5% use it for free — while OpenAI bears the compute cost of every single query. Every free user is a cost centre.

OpenAI is projecting cash burn of approximately $9 billion in 2025 and $17 billion in 2026. To put that in perspective, they need to nearly double revenue every year until 2030 — reaching roughly $275 billion — just to break even.
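If you want to sanity-check that compounding, it is easy to reproduce. Here is a minimal sketch using the article's figures; the 1.9x multiplier is my assumption standing in for "nearly double":

```python
revenue = 20.0   # $bn annualised revenue today (per the article)
target = 275.0   # $bn reportedly needed to break even
growth = 1.9     # assumed annual multiplier, i.e. "nearly double"

for year in range(2026, 2031):
    revenue *= growth
    marker = "  <- past the break-even level" if revenue >= target else ""
    print(f"{year}: ~${revenue:,.0f}bn{marker}")
```

Doubling from a $20 billion base only clears the $275 billion mark in the final year, which is why any slowdown in growth pushes profitability past 2030.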

Anthropic, the company behind Claude, is on a different trajectory, with annualised revenue projected to hit $19 billion by March 2026 and slight profitability expected this year. But they are the exception, not the rule.

The companies building on top of AI are in an even worse position. They pay the model providers for API access, add their own development and infrastructure costs, then try to charge customers enough to cover all of it. Most cannot.

This matters to you because it means the AI tools you depend on today may not exist tomorrow. Companies burning billions of dollars per year are not stable foundations for your workflow. And when the funding runs out, the tools change, the pricing changes, or the companies disappear entirely.

Hallucinations: Making Things Up With Confidence

Large language models do not "know" things. They predict the next most likely word in a sequence based on patterns in their training data. This means they can — and regularly do — generate plausible-sounding information that is completely false.
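To make that concrete, here is a deliberately tiny sketch of the idea (illustrative only; production models use neural networks over tokens, not word counts): a bigram model that always emits the most frequent continuation. It reproduces the majority claim in its training data whether or not that claim is true.

```python
from collections import Counter, defaultdict

# Toy training data: the popular myth outnumbers the true statement 2 to 1.
corpus = (
    "the wall is visible from space . "
    "the wall is visible from orbit . "
    "the wall is not visible from space ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

# Generate by always choosing the most likely next word.
word, output = "the", ["the"]
for _ in range(6):
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # "the wall is visible from space ." -- common, not true
```

The model never weighs evidence; it weighs frequency. Scale the same objective up by hundreds of billions of parameters and you get far more fluent text with exactly the same blind spot.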

Research shows LLMs hallucinate between 3% and 27% of the time, depending on the task. That might sound small until you consider what it means in practice:

  • A lawyer filed AI-generated legal citations in a real court case. The cases did not exist. The judge sanctioned the lawyer.
  • Google's Bard incorrectly stated findings from the James Webb Space Telescope in its very first public demonstration, wiping $100 billion from Alphabet's market value in a single day.
  • Medical AI systems have recommended treatments that contradict clinical guidelines, with the same unearned confidence they use to state correct facts.
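Even the low end of that range compounds quickly. A rough back-of-the-envelope check, under the simplifying assumption that each query fails independently at a constant rate:

```python
rate = 0.03   # the low end of the reported 3%-27% range
n = 100       # queries in, say, a week or two of regular use

# Probability that at least one answer contains a hallucination.
p_at_least_one = 1 - (1 - rate) ** n
print(f"{p_at_least_one:.0%}")  # ~95%
```

At a "mere" 3% error rate, someone who runs a hundred queries is almost guaranteed to receive at least one fabrication, and nothing in the output marks which answer it is.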

This is not a bug that will be fixed in the next update. Hallucination is a structural feature of how large language models work. They do not have an internal model of truth. They have an internal model of what text typically follows other text. These are not the same thing.

This is exactly why augmented intelligence matters. When a human stays in the loop — reviewing, verifying, applying judgment — hallucinations get caught before they cause damage. Remove the human, and you are trusting a system that cannot tell the difference between fact and plausible fiction.

The Black Box Problem

When an AI gives you an answer, can you ask it why?

Not really. Large language models cannot explain their reasoning because they do not reason in the way humans understand that word. They process patterns. They can generate text that looks like an explanation, but that text is itself a prediction — another sequence of likely words — not an actual trace of the computation that produced the answer.
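You can see the shape of the problem in any chat interface. Here is a minimal sketch using the OpenAI Python client (the model name and prompts are illustrative choices, not the article's): asking "why?" does not retrieve the computation behind the first answer; it triggers a fresh round of next-token prediction conditioned on the conversation so far.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = "Is the Great Wall of China visible from space?"

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# This second call does not inspect how `answer` was produced. It simply
# generates more likely-looking text, now conditioned on the earlier turns.
why = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Why did you answer that?"},
    ],
).choices[0].message.content

print(why)  # an explanation-shaped prediction, not a trace
```

There is nothing else the interface could return: it produces text, and only text.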

This creates a fundamental trust problem. In medicine, law, finance, engineering — in any field where decisions have consequences — you need to understand why a recommendation was made, not just what it was. You need to be able to challenge it, verify it, and take responsibility for it.

An AI that cannot show its working is an AI you cannot properly oversee. And an AI you cannot properly oversee is a liability, not an asset.

Automation Bias: The Danger of Trusting Too Much

There is a well-documented psychological phenomenon called automation bias: humans tend to over-rely on automated systems, especially when those systems present information confidently.

AI makes this worse because it speaks with absolute certainty whether it is right or wrong. There is no hesitation, no hedging, no "I'm not sure about this." It presents fabricated information with the same tone and structure as verified facts.

Studies of early AI tool adoption show a pattern: new users are sceptical. Then they see the AI get things right several times in a row. They start trusting it. Then they stop checking. And then the AI makes a critical error that goes uncaught because no one was looking any more.

The bounded rationality of human cognition means we are already prone to taking shortcuts. AI amplifies this tendency by giving us a seemingly authoritative source we can defer to — without the cognitive cost of verification.

The Job Displacement Reality

The numbers are real, and they are large:

92 Million

The World Economic Forum projects that 92 million roles will be displaced by 2030 and that 170 million new ones will emerge. But they are not the same jobs, do not require the same skills, and are not in the same places.

Goldman Sachs estimates approximately 300 million full-time jobs globally will be affected by generative AI. In the US alone, that is roughly 11 million workers who will need to transition to different occupations.

The impact is not evenly distributed. Younger workers are disproportionately affected: those aged 18-24 are 129% more likely than older workers to fear AI will make their jobs obsolete. Among 20- to 30-year-olds in tech-exposed roles, unemployment has increased by nearly 3 percentage points since early 2025.

The hardest-hit categories are clear: clerical and administrative work (6.1 million US workers at high risk), manual data entry (95% automation risk), content writing, customer service, and entry-level coding.

But here is what the displacement narrative misses: the people who learn to work with AI are not getting displaced. They are getting promoted. A Harvard/BCG study found that consultants using AI completed 12.2% more tasks, 25.1% faster, with 40% higher quality — but only when the tasks fell within AI's capability frontier and the human knew how to direct it.

That "knowing how to direct it" is the entire point of augmented intelligence. It is the skill that separates the displaced from the amplified.

So What Do You Do?

None of this means you should ignore AI. That would be as foolish as ignoring the internet in 1998. The technology is real, the capabilities are genuine, and the impact will be enormous.

But it does mean you should approach AI with clear eyes:

  1. Stay in the loop. Never let AI make decisions without human review. Use it to generate, research, and analyse, but you make the final call. (A minimal sketch of this gate follows this list.)
  2. Verify everything. Assume AI output contains errors until you have confirmed otherwise. Build verification into your workflow, not as an afterthought but as a core step.
  3. Build skills, not dependencies. Learn how AI works well enough to know when it is reliable and when it is not. Develop the judgment to use it effectively rather than the habit of accepting its output uncritically.
  4. Diversify your tools. Do not build your entire workflow on a single AI provider. The industry is volatile, pricing changes constantly, and companies disappear.
  5. Learn augmented intelligence. The gap between "using AI" and "using AI well" is enormous. The people who bridge that gap will thrive. Everyone else is rolling dice.
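Here is the gate promised in point 1, as a deliberately minimal Python sketch. Every name in it is illustrative; what matters is the shape: the AI drafts, a human explicitly approves, and nothing ships otherwise.

```python
def human_review(draft: str) -> bool:
    """Show an AI draft to a person and require explicit sign-off."""
    print("--- AI DRAFT ---")
    print(draft)
    return input("Approve? [y/N] ").strip().lower() == "y"


def publish(text: str) -> None:
    """Stand-in for whatever 'shipping' means in your workflow."""
    print("Published.")


draft = "Q3 revenue grew 40% year over year, driven by..."  # imagine AI output

if human_review(draft):   # the human makes the final call (point 1)
    publish(draft)
else:
    print("Rejected: verify the claims, then regenerate.")   # point 2
```

In practice this gate might be a pull-request review, an editorial pass, or a sign-off field in a ticket. The point is that approval is a deliberate human act, not a default.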

The future belongs to people who can think with machines, not people who are replaced by them. That is the premise of augmented intelligence, and that is what we teach here.

Continue Learning

Understand the pitfalls. Now learn how to build the skills that make you irreplaceable.