After AI, Investors Are Eyeing Quantum Computing. Here’s What Actually Matters.
Which stocks to watch and why the next great computing breakthrough could reward patience more than hype
Disclaimer: This publication and its authors are not licensed investment professionals. Nothing posted on this blog should be construed as investment advice. Do your own research.
Quantum computing sits in a strange place. It’s early, it’s hard, and it’s absolutely not following a clean startup-to-scale story. At the same time, it’s one of the few areas in tech where the upside isn’t just better software, but access to entirely new classes of problems.
That combination makes it frustrating and compelling at the same time. If you’re an investor, the question usually isn’t “does this work?” but “where does value realistically accumulate while this is still messy?”
I’m going to explain quantum computing in simple terms, describe the risks and opportunities from a technical perspective, and, of course, mention some stocks worth looking into if you want to participate in the future of quantum computing.
What quantum computing actually is
Quantum computing is an attempt to compute using physical systems that behave fundamentally differently from classical electronics. Instead of abstracting everything down to bits, quantum machines lean directly into quantum mechanical behaviour.
The important implication is that progress is constrained by nature, not just engineering talent. If quantum computing ever becomes commercially useful at scale, it won’t be because someone shipped a clever app. It will be because a small number of teams solved problems that most companies cannot even meaningfully attempt.
From an investment perspective, that matters. Hard problems tend to concentrate value.
How quantum computing works, in very simple terms
A classical computer explores one path at a time, very quickly.
A quantum computer explores many possible paths at once, but only for very specific kinds of problems. It does this with qubits. Qubits can exist in overlapping states (superposition), and groups of qubits can be linked (entangled) in ways that allow correlations classical systems cannot replicate.
When everything lines up, this lets quantum machines search complex solution spaces far more efficiently than brute-force classical approaches.
That’s why areas like materials science, chemistry simulation, cryptography, and optimization keep coming up.
The downside: qubits don’t like the real world. Keeping them coherent requires extreme conditions and constant error correction. That makes quantum computing very tricky to control and develop, and it’s why many of the brightest minds in the field are actively working on these problems.
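To make that less abstract, here’s a toy sketch in plain NumPy of the linear algebra behind those words: a single qubit in superposition, and a two-qubit entangled state whose measurement outcomes are perfectly correlated. This is illustration only, not how real hardware is programmed.

```python
# A toy statevector illustration in plain NumPy (no quantum SDK assumed).
import numpy as np

rng = np.random.default_rng(0)

# A single qubit in equal superposition: amplitudes for |0> and |1>.
plus = np.array([1, 1]) / np.sqrt(2)
print("P(0), P(1):", np.abs(plus) ** 2)  # [0.5, 0.5]

# Two entangled qubits (a Bell state): amplitudes for |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2

# Sampling "measurements": outcomes are always 00 or 11, never 01 or 10.
# That correlation is the kind classical bits cannot reproduce on their own.
print(rng.choice(["00", "01", "10", "11"], size=10, p=probs))
```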
Quantum computing and AI: complementary, not competitive
Quantum computing isn’t going to replace GPUs for training AI models. That’s not its role.
Where quantum could matter is in areas adjacent to AI: optimization, sampling, and simulation problems that are hard even for large classical clusters. Think better materials for chips, better batteries, more efficient logistics, or improved models for physical systems.
There’s also a feedback loop forming. AI is increasingly used to design quantum experiments, tune control systems, and search for better quantum algorithms. As quantum hardware improves, that loop tightens.
This is less about one technology eating the other, and more about both expanding the set of problems we can realistically attempt.
Why quantum computing is still risky, but less fragile than it looks
Investing in quantum computing is risky, but not in the way most people think.
The risk isn’t that quantum is fake or that progress will suddenly stop. The real risk is mistaking technical progress for commercial readiness, or assuming timelines compress the way software timelines do.
That said, there’s a more positive angle that often gets overlooked. Much of the foundational work has already been done. Error rates are improving. Tooling is better, and hybrid classical-quantum workflows are becoming more practical. Governments and large enterprises are no longer just funding research; they’re testing early use cases.
This doesn’t mean revenue explodes this or next year. It does mean the probability that quantum becomes useful keeps going up, even if adoption stays slow and uneven.
For patient value investors, that distinction matters.
The real risk: quantum becoming the next place speculation hides
The biggest risk around quantum computing isn’t that the technology is imaginary, it’s that capital is very good at outrunning reality. Over the last few decades, markets have repeatedly shown that hype often produces better short-term returns than legitimate businesses, at least until the story collapses under its own weight. Dot-coms, crypto, NFTs, and now generative AI all followed a similar pattern: plausible technology, enormous expectations, weak economics, and capital piling in long before usefulness was proven.
AI is now starting to show those cracks. Despite hundreds of billions in investment, productivity gains remain narrow and difficult to generalize, while costs scale aggressively. Several studies, including a widely cited MIT report, suggest that most AI pilots fail to deliver meaningful profitability or output improvements, and even widely adopted products struggle to break even without optimistic pricing assumptions. As improvement curves flatten and unit economics remain uncomfortable, capital naturally looks for the next narrative that can absorb belief without immediate verification.
Quantum computing fits that role almost too well. It is real, genuinely hard, deeply technical, and extremely difficult for outsiders to falsify. Hardware progress is visible and easy to market, while the true bottleneck, useful software and algorithms, remains slow, uncertain, and largely resistant to brute-force investment. Even with functional quantum hardware, quantum computers cannot run conventional code and only outperform classical systems on very specific problem classes. For many of the areas they are most often associated with, including AI training and general machine learning, there is still no clear evidence that relevant quantum algorithms even exist.
This creates a familiar mismatch. Scientific progress continues, but commercial usefulness lags far behind expectations. Timelines stretch into decades, while public markets price outcomes years in advance. In that gap, speculation thrives. The danger for investors isn’t that quantum fails, but that it gets used as an escape hatch for disappointed AI expectations, pulling future possibilities forward to justify present valuations.
Quantum computing can still become important, even transformative, without rewarding most early capital. That’s the uncomfortable part. The technology doesn’t owe investors a clean adoption curve, and history suggests it rarely provides one.
How investors can participate
This is where nuance helps. The most visible quantum stocks are not interchangeable; they represent very different philosophies. Here are some of the better-known ones:
IonQ (IONQ): betting on physics elegance and long-term scalability
IonQ is often seen as the “cleanest” quantum story. Their approach uses trapped ions, which tend to have very high qubit fidelity and long coherence times.
From the outside, IonQ looks slow. From the inside, they’re optimising for scalability and error reduction rather than rushing qubit counts. That’s appealing if you believe stable qubits matter more than flashy numbers.
IonQ also benefits from a relatively hardware-light model compared to some competitors. Their systems integrate well with cloud platforms, which makes experimentation easier for customers and lowers friction for adoption.
As an investment, IonQ is a long-duration bet on doing things the hard way early so scaling hurts less later. It won’t pay off quickly, but if trapped-ion architectures win, the payoff could be meaningful.
Rigetti Computing (RGTI): vertical integration and iteration speed
Rigetti Computing takes a different approach. They build superconducting qubit systems and control much more of the stack themselves, from chip fabrication to software.
That vertical integration gives Rigetti speed and flexibility. They can iterate quickly, experiment with architectures, and optimize across layers. The downside is cost and complexity. Superconducting systems are demanding, and scaling them reliably is non-trivial.
Rigetti feels more like a traditional hardware startup: faster feedback loops, higher burn, more visible stumbles. For investors, this is a higher-volatility bet, but also one with clearer signals along the way as systems improve or don’t.
If quantum adoption accelerates through cloud-accessible, general-purpose systems, Rigetti is positioned to benefit.
D-Wave Quantum (QBTS): the most commercially grounded path
D-Wave Quantum is often misunderstood because it doesn’t build universal, gate-based quantum computers the way IonQ or Rigetti do. Instead, D-Wave focuses on quantum annealing, a specialized approach that is well suited to optimization problems.
This specialization is a feature, not a flaw. D-Wave already has customers using their systems for real-world optimization tasks. Revenue exists. Use cases are concrete. The tradeoff is that annealing won’t solve everything quantum can theoretically do.
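To make “real-world optimization tasks” concrete, here’s a minimal sketch using dimod, D-Wave’s open-source Python library for formulating these problems. The toy objective below is invented for illustration, and it’s solved with a local brute-force solver rather than an actual annealer.

```python
# A toy QUBO formulated with dimod (pip install dimod). Real workloads would
# swap ExactSolver for a sampler backed by D-Wave hardware.
import dimod

# Minimize -x - y + 2xy: choosing either option alone is rewarded,
# choosing both is penalized. A stand-in for "pick one of two resources".
bqm = dimod.BinaryQuadraticModel(
    {"x": -1.0, "y": -1.0},   # linear terms
    {("x", "y"): 2.0},        # quadratic (coupling) term
    0.0,                      # constant offset
    dimod.BINARY,
)

best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)  # e.g. {'x': 1, 'y': 0} with energy -1.0
```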
From an investor’s perspective, D-Wave is less about moonshots and more about pragmatic adoption. It’s a bet that early, narrow utility can compound into broader relevance over time.
NVIDIA (NVDA): quantum doesn’t replace GPUs, it leans on them
NVIDIA is not a quantum computing company in the traditional sense, but it’s deeply embedded in the quantum ecosystem.
Most quantum work today still depends heavily on classical computing. You simulate quantum systems, design quantum circuits, run error-correction layers, and coordinate experiments using massive amounts of classical compute. GPUs sit right in the middle of that workflow.
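A back-of-the-envelope sketch shows why. Simulating n qubits as a dense statevector takes 2^n complex amplitudes, so memory (and the matrix math on top of it) grows exponentially; the 16-bytes-per-amplitude figure below assumes complex128. This is exactly the kind of dense linear algebra that GPU libraries such as NVIDIA’s cuQuantum are built to accelerate.

```python
# Memory needed to hold an n-qubit statevector, assuming one complex128
# (16 bytes) per amplitude. Around 30 qubits fits on a large GPU;
# 50 qubits fits nowhere, which is why simulation alone eats so much compute.
for n in (20, 30, 40, 50):
    gib = (2 ** n) * 16 / 2 ** 30
    print(f"{n} qubits -> {gib:,.2f} GiB of amplitudes")
```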
NVIDIA’s role is subtle but powerful. They sell the picks and shovels for quantum research, from simulation libraries to high-performance hardware used by labs, startups, and hyperscalers. Even if large-scale quantum computing takes longer than expected, NVIDIA still benefits as long as money keeps flowing into research and experimentation.
From an investor perspective, this is attractive because NVIDIA doesn’t need a quantum breakthrough. Incremental progress is enough to justify continued spending on tooling and infrastructure.
Microsoft (MSFT): quantum as a platform feature, not a product
Microsoft approaches quantum the same way it approaches most infrastructure problems: abstract it, wrap it in tooling, and make it accessible through a platform.
Azure Quantum is less about owning the “best” quantum hardware and more about being the place where quantum experiments happen. Microsoft connects multiple quantum hardware providers, classical compute, developer tools, and enterprise workflows under one roof.
What’s interesting here is the incentive structure. Microsoft doesn’t need to guess which quantum architecture wins. If quantum becomes useful at all, customers are likely to access it through existing cloud relationships. That gives Microsoft leverage without forcing them to bet everything on one technical path.
This makes quantum at Microsoft feel less like a moonshot and more like a long-dated platform option that fits naturally into their cloud strategy.
IBM (IBM): slow, steady, and deeply committed
IBM has been working on quantum longer than almost anyone else, and it shows in how methodical their approach is.
IBM treats quantum as an extension of its research-first DNA. They publish roadmaps, expose systems to developers, and push heavily on education and ecosystem building. Their superconducting qubit approach is well understood, even if it’s technically demanding.
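That ecosystem push is easiest to see in Qiskit, IBM’s open-source Python SDK. Here’s a minimal sketch, the same Bell state as the NumPy toy earlier, built as a circuit and evaluated on a local statevector; submitting to real IBM hardware requires an account and extra setup not shown here.

```python
# A minimal Bell-state circuit in Qiskit (pip install qiskit),
# evaluated locally rather than on real IBM hardware.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into superposition
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}
```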
Viewed through an investment lens, IBM’s strength is credibility. Enterprises, governments, and universities already trust IBM with long-term infrastructure projects. Quantum fits naturally into that relationship model.
IBM is unlikely to surprise anyone with sudden breakthroughs, but it’s also unlikely to abandon the field. If quantum adoption happens through conservative, institutional channels, IBM is positioned to be a core beneficiary.
Google (GOOG): quantum as a long-term research advantage
Google treats quantum computing as fundamental research with potential strategic upside, not as a near-term business line.
Google’s quantum work is deeply tied to their strengths in physics, systems engineering, and large-scale infrastructure. Their focus has been on proving hard technical milestones, even when those milestones don’t immediately translate into products.
What makes Google interesting is optionality. If quantum ends up mattering for materials science, optimization, or future compute workloads, Google has the talent and infrastructure to integrate it internally before it ever becomes a commercial offering.
For investors, this means quantum success would likely show up indirectly, through better internal tools, improved efficiency, or future services layered into Google Cloud rather than as a standalone revenue line.
Closing thoughts
Quantum computing is neither a punchline nor a saviour. It’s a real technological frontier with legitimate long-term potential, operating under constraints that don’t bend to capital or narrative pressure. The physics is difficult, the engineering is slow, and the software problem is still largely unsolved. None of that makes quantum uninteresting. It just makes it incompatible with the way markets like to price stories.
What complicates things now is timing. As confidence in near-term AI payoffs softens, quantum risks absorbing expectations that have nowhere else to go. That doesn’t change what quantum can eventually become, but it does increase the chance that valuations get ahead of usefulness, especially for companies whose entire identity rests on long-dated breakthroughs.
For investors, the challenge isn’t deciding whether quantum matters. It’s recognising where value can realistically accrue while timelines stretch and narratives rotate.
Quantum computing will likely matter in ways that don’t look obvious today. The mistake is assuming that inevitability translates cleanly into investable returns, or that the market will wait patiently for reality to catch up.