Watch as quantum hype gradually replaces AI hype. Like AI, not all of it will be hype, but there will be an 80:20 ratio of hype to benefit. And the greatest benefit will go to those who trade on the hype. Stating the obvious.
Quantum hype will not replace AI hype. Quantum has already had its time of overblown expectations and has cooled off to realistic expectations. It won’t be cool or hyped again until general purpose quantum computing becomes a reality.
The quantum industry now knows (more or less) its own limitations for the next 5-10 years. That knowledge may not extend to investors, especially if they're looking for a new speculative bubble rather than the actual outcome of quantum R&D.
Rigetti, a company with an $11B market cap, is playing in a smaller league. OpenAI will have that much in revenue this year and has a market cap almost 50x higher. The hype is not the same.
Yeah, that's fair. I agree RGTI (and QUBT, +100% in the last 6 months) both have much smaller market caps, but they also seem (maybe?) more reflective of quantum hype than companies which have a quantum department but aren't defined by their quantum capabilities right now.
>It won’t be cool or hyped again until general purpose quantum computing becomes a reality.
Which, to be clear, could be never. Aside from factoring numbers, solving certain optimization problems, and simulating quantum systems, there are very few known applications where quantum computers even theoretically outperform classical computers.
Of course, it's possible that as quantum computers become more powerful/robust, it will inspire discovery of new classes of problems/algorithms that they excel at. But I'm not holding my breath.
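To make the "few known applications" point concrete, here's a toy sketch (my own illustration, not from the thread) of why the speedup class matters: Grover's unstructured search gives only a quadratic speedup, roughly O(sqrt(N)) queries versus O(N) classically, so it pays off only at large problem sizes and only for problems that fit the query model. The constants and function names here are illustrative.

```python
import math

def classical_queries(n: int) -> int:
    """Worst-case queries for classical unstructured search over n items."""
    return n

def grover_queries(n: int) -> int:
    """Order-of-magnitude query count for Grover's search, ~sqrt(n).
    (The true count carries a pi/4 constant factor; omitted for clarity.)"""
    return math.ceil(math.sqrt(n))

n = 1_000_000
print(classical_queries(n), grover_queries(n))  # 1000000 1000
```

A quadratic gap like this is real but far less dramatic than the exponential speedups of factoring or quantum simulation, which is part of why the list of compelling applications stays short.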
I’m not saying that AI is definitely here to stay this time, but it won’t be quantum replacing it.
AI has cooled in the past, even generating the term “AI winter,” but it’s hot now. Quantum could be big again, but this current wave of specialized computers is played out. There won’t be hype at the level that AI currently gets until there is general purpose quantum computing.
Huh, I've actually been pleasantly surprised at how much hype there isn't every time one of these companies with a quantum side gig (Google, Microsoft) announces some new paper or finding. They're usually accompanied by levelheaded press releases announcing what was done and why, not breathless hyperbole. There are generic optimistic statements from executives, but nothing like what happens with AI. I dunno, it feels to me like they're pretty realistic about what QC might and might not accomplish in the coming decade.
For QC startups it's a different story, but of course startups have to hype themselves into the stratosphere to survive, so I don't really hold it against them (or at least, not any more than I would any other startup).
I doubt quantum hype will go very far. I sometimes talk to a guy who works in quantum computing. He claims that even if quantum computing becomes feasible, it will be mainly for very specialized applications. The wider public probably won’t be very excited about encryption or fluid dynamic simulation.
That’s very different from AI, which will be applicable to much more general and broad areas.
On a more serious note, I’m not convinced that solving NISQ scaling with ML can overcome interconnect scaling. Interconnects are needed to make logical qubits, but introduce increased error rates.
Even with better hardware, the scaling problem remains hard. It’s like getting LLM agents through complex agentic paths: the hard problem is driving down per-step error rates.
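The analogy can be made concrete with a toy calculation (my own illustration, assuming independent steps, which is a simplification): if each step in a chain succeeds with probability p, the whole chain succeeds with probability p^n, so even very reliable steps compound into frequent end-to-end failure.

```python
def chain_success(per_step_success: float, steps: int) -> float:
    """Probability an n-step chain completes with every step correct,
    assuming steps fail independently (a simplifying assumption)."""
    return per_step_success ** steps

# A 99%-reliable step still fails often over long chains:
for steps in (10, 50, 100):
    print(steps, round(chain_success(0.99, steps), 3))
```

This is the shared scaling headache: whether the "step" is a two-qubit gate across an interconnect or one hop in an agentic pipeline, long chains demand per-step error rates far below what feels intuitively "good enough."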
Lack of foresight in AI?
They own one of the world's largest GPU/TPU fleets. They have some of the most advanced research and a path to deploy products. What exactly else should they have foreseen?
Doubling down on LLMs specifically (which came out of their own research), and being first to commercialize them. Their TPU fleet wasn’t really optimized for LLMs until recently, afaik.
LLMs are only 3-5 years old (NLP is much older, OFC); for all we know they'll be a research dead end like LSTMs are today, and LLMs/multimodal models just look super hot right now. "Attention Is All You Need" was released in 2017, and it took 5 years to prove it was useful. For all we know, the new hot thing has already been published and LLMs are obsolete; Google might have been right to wait.
Besides, I don't think the top people at Google's DeepMind - and I can only "infer" this from watching them speak online - actually think LLMs are "the one".
Is the goal AGI or to add as much as possible to the market cap? I was specifically talking about the latter, and why they got leapfrogged by OpenAI, which has been valued at a substantial fraction of Google’s overall value despite having a fraction of the revenue. If Google had managed to generate this much value for themselves, they’d be respected differently, but for now it seems like they missed the AI tech stack of today and will be playing catch-up for the next 5 years regardless of where AI evolves later.
Larry's original goal for Google was always to be a revenue vehicle to reach AGI although I don't think Sundar is interested in anything except revenue/profit.
Note also that many of Google's previous attempts with LLMs generated significant press controversy, and it was in Google's interest to let other groups take the heat for a while as the Overton window shifted.
Not OP, but maybe lack of achievements in AI is a better way to phrase it. They had one of the earliest serious AI programs and did an enormous amount of work, but for whatever reason one would like to attribute, they've consistently fallen behind other competitors. I mean, Grok, which was started quite recently, is better than Gemini at most tasks and arguably has a stronger market position. Google should have mopped the floor with them, given their funding and the age of their team, but that never happened.
Google has led AI research for years with Google Brain and DeepMind, but they have been overtaken by OpenAI now, and perhaps soon also by other players who are building gigantic data centers. Google's core mistake was arguably that they were far too late in coming up with a model that was competitive with GPT-4. Now ChatGPT has likely more than an order of magnitude more subscribers than Gemini. At the same time, Google's search engine is increasingly replaced by ChatGPT, drying up their main source of revenue: search ads.
Recently one of their top reasoning researchers switched to OpenAI, likely leaking Google's internal reasoning secret sauce, if they even knew anything that OpenAI didn't. I doubt they will catch up.
You can look at comparables to see how that'll work out.
Arguably the most mainstream and well-known subscription company is Netflix. People all around the world know about it, most have subscriptions and yet it hasn't "taken over" the world.
That's before we consider that any other company (Anthropic, Google) can undercut OpenAI on price or simply be just as good.
Unless something drastically changes, OpenAI will have a tough time justifying its high valuation.
Google invented the transformer architecture, so I genuinely have no clue what you're talking about. Do you actually think they should have kept it a secret from the world? Do you think Google would genuinely be a better company if they monopolized the tech?
In this day-and-age, it feels like HN refuses to advocate for anything that isn't a monopoly. Are we incapable of imagining competitive markets now?
Well, I don't think it's particularly related to AI, but Google has been known to make... interesting claims regarding quantum performance, at least once per year, to justify some kind of advance.
To this day, I wonder if Google knew that they couldn't be the ones to unleash AI unto the world. They clearly had the wherewithal and expertise to do it (Vaswani et al, 2017), but were under so much antitrust pressure at the time that it seemed inconceivable that they could be the ones to introduce such a polarizing technology to the world. What kind of firestorm would have rained down on them if they were the first.
Or, you might think, if Google had the technology, and they knew how to turn it into a trillion-dollar product, it's beyond ridiculous to think they would just hand the win over to someone else.
I think they just saw it as slop; they were working to make it more reliable and accurate. Releasing it first would have tarnished their name, and it just wasn't ready. OpenAI had no name to tarnish, so people were more willing to deal with the subpar experience while they refined it.