If AGI is accomplished, there’s unlikely to be a “secret sauce” to it (or a patentable sauce), and accomplishing AGI won’t by itself constitute a moat.
Maybe. Moats are often surprising. Google’s moat is just that people think of Google when they think of search. Bing could be significantly better than Google (and in fact a lot of people think it is) and still not get anywhere.
A lot of people said Microsoft’s Windows moat in desktop operating systems was gone when you could do most of the things that a program did inside a browser instead, but it’s been decades now and they still have a 70% market share.
If you establish a lead in a product, it’s usually not that hard to find a moat.
Google’s moat is their search index and infrastructure (which is significantly larger-scale than an LLM), and the fact that non-Google/Microsoft web crawlers are being blocked by most websites.
Windows’ moat is enterprise integration, and the sheer amount of software targeting it (despite appearances, the whole world doesn’t run on the web), including hardware drivers (which, among other things, make it the gaming platform that it is).
OpenAI could build a moat on integrations, as I mentioned.
Eh, Bing’s index and infrastructure are perfectly adequate and they’ve still got a single-digit market share. One might argue other people don’t have them (others once did) because Google’s brand moat drowned the competition and means nobody else bothers.
OpenAI could build a moat in a lot of different ways including ones that haven’t been thought of yet.
Aren't they proving the opposite of your proposed alternative already? A limited AI is not making them money and since every new model becomes obsolete within a year, they can't just stop and enjoy the benefits of the current model.
The fact that it isn’t making money now isn’t indicative it never will. I can think of a lot of very large tech companies who people once said the same about.
That's the thing: nothing points to a world with a single winner in AI models. I get what you are saying, but I'm not sure OpenAI can survive the burn unless they build an unmatchable AGI. And that's pure speculation at this point.
I mean, someone needs to rise to the top, unless society as a whole just says "There's no value here." and frankly there's too much real value right now for that. So someone's surviving, at least at the service level. Maybe they just end up building off of open source models, but I can't see how the best brains in the business don't find a way to get paid to make these models. Am I missing something?
There’s definitely a future for LLMs from an enterprise point of view. Even current-capability models will be widely used by companies. But it seems that will be a highly commoditized space, and OpenAI lacks the deep pockets and infrastructure capabilities of Meta and Google to distribute that commodity at the lowest cost.
OpenAI’s valuation is reliant, IMO, on 1) AGI being possible through NNs, 2) them developing AGI first, and 3) it being somewhat hard to replicate. Personally I’d probably stick 10%, 40%, and 10% on those, but I’m sure others would have very different opinions or even disagree with my whole premise.
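(If you read those as chained estimates, they multiply out to 0.1 × 0.4 × 0.1 = 0.004, i.e. roughly a 0.4% chance that all three hold.)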
I am not saying that LLMs don't provide value, just that this value might not be captured exclusively by OpenAI in the future. If the idea is that OpenAI will have an unmatched competitive advantage over everyone else in this area, then that has already been proven wrong. The rest is speculation about AGI, the genius of Altman, etc.
That’s not the definition of AGI that has been in wide use within the research community for two decades prior to the founding of OpenAI.
The founders of OpenAI were drawn from an intellectual movement that made very specific, falsifiable predictions about the pipeline from AGI (original definition) to superintelligence, predictions which have since been entirely falsified. OpenAI talks about AGI as if it were ASI, because in their minds AGI inevitably leads to ASI in very short order (weeks or months was the standard assumption). That has proven not to be the case.
General: able to solve problem instances drawn from arbitrary domains.
Intelligence: definitions vary, but the application of existing knowledge to the solution of posed problems works here.
Artificial. General. Intelligence. AGI.
In contrast to narrow intelligence, like AlphaGo or Deep Blue or air traffic control expert systems, ChatGPT is a general intelligence. It is an AGI.
What you are talking about is, I assume, a superintelligence (ASI). Bostrom is careful to distinguish these in his writing. Bostrom, Yudkowsky, et al. make some implicit assumptions that led them to believe that any AGI would very quickly lead to ASI. This is why, for example, Yudkowsky had a very public meltdown two years ago, declaring the sky is falling:
(Ignore the date. This was released on April 1st to give plausible deniability. It has since become clear this really represents his view.)
The sky is not falling. ChatGPT is artificial general intelligence, but it is not superintelligence. The theoretical model used by Bostrom et al to model AGI behavior does not match reality.
Your assumptions about AGI and superintelligence are almost certainly downstream from Bostrom and Yudkowsky. The model upon which those predictions were made has been falsified. I would recommend reconsidering your views and adjusting your expectations accordingly.
I appreciate these definitions and distinctions. Thanks for sharing. You've helped me understand that I need a better, more precise vocabulary about this topic. I think on an abstract level I would think of AGI as "the brain that's capable of understanding", but I really then have no way to truly define "understanding" in the context of something artificial. Maybe ChatGPT "understands" well enough, if the output is the same.
It does understand to a certain degree, for sure. Sometimes it understands impressively well. Sometimes it seems like a special needs case. Ultimately its understanding is different from a human’s.
The issue with the “once OpenAI achieves AGI [sic], everything changes” narrative is that it is based on models with infinite integrals in them. If you assume infinite compute capability, anything becomes easy. In reality, as we’ve seen, applying GPT-like intelligence to achieve superhuman capabilities, where it is possible at all, is actually quite difficult, field-specific, and time intensive.
If they fall short of AGI there are still many ways a more limited but still quite useful AI might make them worth far more than Meta.
I don’t know how to handicap the odds of them doing either of these at all, but they would seem to have the best chance at it of anyone right now.