That's natural, given that they mostly produce hardware several layers of abstraction removed from end-user value: companies need to buy the hardware before they can start delivering their own value. AI model training has no value in itself if there's no use case for the model that can be charged for.
I see it playing out one of two ways. Either Nvidia are selling shovels in a gold rush, the rush will end, and the business will dry up (after they have made a lot of money!). Or AI sticks, and Nvidia end up selling a commodity too far from the value, like most electronic-component manufacturers: they'll maintain significant market share but see their margins reduced to a fraction of what they were before (after they made a lot of money!).
The human value doesn't come from ML training or inference, it comes from taking a better photo. The business value comes from drafting a better email. Those companies closer to that value will likely do better in the long run, as they always have done.
Nvidia created CUDA and seeded the ML industry for a decade before ChatGPT. They aren't given enough credit for their foresight and strategy. Most companies would have choked the community to death with greed before it ever took off.
There is a reason why CUDA works on every Nvidia GPU while ROCm support is spotty at best and only guaranteed on data-center GPUs.
AMD's and Intel's shovels (the hardware) are fine; the ecosystem is the problem. The fundamental difference is that AMD and Intel see software as an upsell, whereas Nvidia is willing to invest in long-term organic growth. The problem is the C-suite, and the difference between companies run by founders and those run by bean counters.
We're actually in agreement; it's just that analogies are a blunt instrument.
I'm saying that Intel and AMD made single-purpose GPUs useful only for graphics. Whether that's because of the software or the hardware is immaterial. Effectively, it's one product in the same sense that an iPhone is one product to a consumer, even though technically it's the iPhone device, plus iOS the software, plus Apple services such as iCloud, Music, etc.
I have yet to hear of anyone actually using AI for anything properly.
The only exception I'm excited about is non-main characters in video games, where a lot of the random NPCs can now actually bring some more fun to the game.
I run a production system that uses LLM translation and summarization across hundreds of sources in dozens of languages. Users are extremely satisfied with the results, which are far cheaper and far higher quality than what was available before.
It is an in-house system in a niche market, not available for sale. I use the OpenAI API for now with very good results, though long term I would prefer an on-premise solution if quality and language coverage can be maintained. Code-wise, as you can imagine, the AI is a very small part of the codebase, but without it the system would be pretty useless.
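For a sense of how thin that AI layer can be: a translation-plus-summarization step like the one described can be sketched as a single call to the OpenAI chat-completions HTTP endpoint. This is a minimal illustration, not the commenter's actual code; the model name, prompt wording, and temperature are my assumptions.

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_request(text: str, source_lang: str, target_lang: str = "English",
                  model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completions payload that translates and summarizes in one call.

    Model and prompt are illustrative assumptions, not the system described above.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": (f"Translate the user's {source_lang} text into "
                         f"{target_lang}, then append a one-paragraph summary.")},
            {"role": "user", "content": text},
        ],
        "temperature": 0.2,  # low temperature keeps the translation faithful
    }

def call_openai(payload: dict, api_key: str) -> str:
    """POST the payload to the OpenAI API and return the model's reply text."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The first choice holds the combined translation + summary
    return body["choices"][0]["message"]["content"]
```

The point in the comment holds here too: the "AI" is a few dozen lines; the rest of such a system is ingestion, storage, and presentation.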
I think many underestimate the true usefulness the current generation of AI has already achieved, because a lot of it sits in traditional, boring, bespoke, or in-house LoB systems, whereas the press always focuses on public B2C products.
I have seen plenty of very good internal AI demos that we are adding to our products: from GenAI features to image analysis to lightweight agents that answer proper questions.
I used ChatGPT three days ago to generate a script for me. It saved me probably an hour, too.
We also use it in my startup for tasks we wouldn't even have tried without ML models, because the quality of the old libraries was too bad: PDF-catalog-to-text conversion, image classification, and segmentation.
Claiming no one is using ML models "properly" despite the various scientific and industrial use cases (vision systems, robots, protein folding, drug simulation, etc.), while being "excited" about something as pathetically trivial as a text generator with text-to-speech tacked on for your mass-produced open-world games. Truly peak HN.