Not a vendor per se, but since they pretty much prevent you from installing any updated drivers I'll also say: Apple's OpenGL support is horrid. Like, buggy, entirely behind the times, incredibly slow. I tend to reboot my mac into windows whenever I need to do actual graphics coding, which is a shame.
A friend of mine was working on a cross platform OpenGL project and wanted to use some OpenGL extension that was originally proposed by a couple of people from Apple and somebody at Nvidia.
You would assume that having proposed it, Apple would have implemented it. Nope. Windows only.
Apple advertises their support for OpenGL 4 in Mac OS 10.9, but they're only up to 4.1. That's the version of the spec from 2010. No tessellation or compute shaders, among other things that were added in 4.2, 4.3, or 4.4. In particular, 4.3 would be a big step because of parity with OpenGL ES 3.
Ah, you're right. Still, compute shaders are a pretty big deal to be missing, especially for offloading calculations like particle systems and realtime global illumination to the GPU.
10.10 announcements should be starting up in not too long, so maybe we'll see improvements.
Larger (proven) market share is my guess. I have no stats on that, by the way; it's a pure guess. The closest thing I have is that I looked at the Humble Bundle stats and scrolled down to the first games bundle, which is:
http://support.humblebundle.com/customer/portal/attachments/...
Well, I see a lot of developers bitching about how hard it is to port stuff to Linux, but I don't recall so much bad-mouthing when it came to porting stuff to Mac in the first place.
A friend of mine works on ports for both platforms, and he vastly prefers the graphics driver situation on Linux, both in terms of driver quality and vendor support.
Sheesh, has it really been the same for that long? I dropped out (only) about 5 years ago and had a similar thought. I enjoyed how the author didn't name names but made it pretty obvious to anyone who's dealt with this stuff before, even tangentially.
To be fair, I've never dealt with these drivers before, but I used to read Phoronix regularly and the companies were easy to identify from that knowledge alone.
Vendor C #1 = Intel on Linux
Vendor C #2 = Intel on Windows
Given the "open source wiz kids to keep driver #1 plodding forward" and "GL on this platform is totally a second class citizen" comments in each description.
I can't imagine anyone taking it up, as it'd be a monstrous project, but it'd be interesting to see someone port the Linux OSS Intel drivers to Windows and/or OSX.
You'd need access to the Windows OpenGL ICD DDK, which is one of the few things Microsoft doesn't hand out freely and without signing an NDA (unless the situation has somehow changed within the past few years).
OTOH the ICD API should be easy enough to reverse engineer. The registry keys where the OpenGL ICD is registered are well known, and there are plenty of drivers in plenty of versions you can dissect to learn how to talk "ICD".
I don't know why you were downvoted for this. The OpenGL spec is a mess. I think a lot of the driver issues arise from the fact that OpenGL has so much backwards compatibility and so much complexity.
Not to mention how obtuse it is to learn. Even without the backwards compatibility issues (which are very real), the entire mental model of how OpenGL works is completely messed up.
Things like having a texture ID, which you then bind to a particular target on a particular texture unit in order to use it make so little sense to someone learning it for the first time. On the CPU, I just pass the pointer to my image to a function to manipulate it. I don't have to put it into a special slot of a special structure in a special place in memory! It took me years to understand many of these things, and I see others struggling in the exact same way on Stack Overflow, for example. So sad.
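To make the binding model being complained about concrete, here's a minimal C sketch of the dance (these are real GL calls, but this assumes a current GL context and an extension loader; `samplerLoc`, `w`, `h`, and `pixels` are placeholders obtained elsewhere):

```c
#include <GL/gl.h>
#include <GL/glext.h>  /* in practice, load GL 2.0+ entry points via GLEW or similar */

/* Sketch only: assumes a current GL context; samplerLoc, w, h and
 * pixels are placeholders obtained elsewhere. */
void upload_and_use_texture(GLint samplerLoc, int w, int h, const void *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);             /* an opaque ID, not a pointer       */
    glActiveTexture(GL_TEXTURE0);       /* first select a texture *unit*...  */
    glBindTexture(GL_TEXTURE_2D, tex);  /* ...then bind the ID to a *target* */

    /* every call below acts on "whatever is currently bound",
     * not on the object whose ID you hold: */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    glUniform1i(samplerLoc, 0);  /* the shader receives the unit number,
                                    not the texture ID */
}
```

Note that the shader never sees the texture ID at all, only the unit number, which is exactly the indirection that trips up newcomers.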
Yeah, the API is basically insane. Not to mention how many "best" practices aren't. For instance, I remember a few years ago the advice was always "use VBOs, don't use display lists, they're deprecated". OK, but when I benchmarked them against each other, display lists were still twice as fast for the geometry I was rendering than very carefully constructed VBOs. Wtf.
If I'm not mistaken there are hardware reasons why VBOs will never be as fast as other methods; at least that is what I dimly recall hearing in a talk by a guy at Valve / Nvidia.
Texture binding in GL is utterly insane. The D3D model, as I understand it is really straightforward - textures are basically pointers to some information on an image buffer, and you can store a texture directly into a texture uniform. So to 'bind' a texture in D3D you just store it directly into the sampler. I forget whether sampling options like filtering mode are part of that, or part of a separate structure - either way, in GL sampling is part of the texture, while in D3D it is separately configurable which is a GODSEND.
IIRC in modern versions of D3D (10+? 11+?) they've expanded on this and just have the general concepts of views and buffers, so that you can treat textures and vertex buffers as if they share underlying properties, and shader code can manipulate them in similar ways. This is great for compute and GPU-accelerated processing and GPU feedback.
You can separate Sampler and Texture in OpenGL too (and they were informally separate since multitexturing extension circa GL 1.2). But when not using a Sampler, legacy Texture settings still apply of course :)
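A sketch of that separate-sampler path, for reference (sampler objects are core since GL 3.3 via ARB_sampler_objects; assumes a current context and loaded entry points):

```c
#include <GL/gl.h>
#include <GL/glext.h>  /* sampler objects: GL 3.3 core / ARB_sampler_objects */

/* Sketch only: assumes a current GL 3.3+ context. */
void bind_separate_sampler(void)
{
    GLuint smp;
    glGenSamplers(1, &smp);
    glSamplerParameteri(smp, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(smp, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindSampler(0, smp);  /* on unit 0, this sampler now overrides the
                               texture's own filtering state */

    /* glBindSampler(0, 0) would unbind it, at which point the legacy
       per-texture settings apply again, as noted above */
}
```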
The difficulty of implementation is at most half the problem, IMO. Driver vendors can afford to maintain a 'merged' version of the spec with extension diffs applied, and can pay experts to acquire and retain knowledge on the breadth of the spec.
The challenges arise in validation and actual development. Validating that a driver works correctly is VERY difficult due to the complexity of the spec, and you can't really afford to have a huge test team with as much experience and knowledge as your driver development team. Even once you've validated and shipped your driver, you can't know how end-users are going to exercise it.
Then as a developer, not only do you have poor knowledge of the spec, but you have no knowledge of how each vendor interpreted the spec and whether or not their implementation matches their expectations.
As the surface area of OpenGL and the complexity of each entry point grow, this is only getting worse.
It's not just affecting 3D applications that need OpenGL though. I find it sad that we have no modern standard akin to VESA / VBE -- to get any reasonable graphics at all at your native display resolution, you have to live with an incredibly complicated (and unreliable) graphics stack. Purely CPU based rendering (which was fast enough in the 90s) is no longer a choice really.
> to get any reasonable graphics at all at your native display resolution, you have to live with an incredibly complicated (and unreliable) graphics stack.
Actually no. OpenGL in no way deals with setting up the display or creating a window on the screen. That has always been the responsibility of the underlying graphics infrastructure (KMS, X11, GDI, etc.)
> Purely CPU based rendering (which was fast enough in the 90s) is no longer a choice really.
In fact, on Linux you can use KMS and the fbdev without making use of OpenGL. Heck, mplayer and ffmpeg can even operate directly on the fbdev without going through a windowing system – just naked writes to the graphics framebuffer.
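A "naked write to the framebuffer" is genuinely this small; here's a sketch (needs /dev/fb0 to exist, suitable permissions, and no display server painting over it; stride handling via line_length is simplified away):

```c
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Sketch: map the Linux framebuffer device and fill it directly. */
int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0)
        return 1;

    struct fb_var_screeninfo vi;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vi) < 0)
        return 1;

    size_t len = (size_t)vi.yres * vi.xres * (vi.bits_per_pixel / 8);
    uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED)
        return 1;

    memset(fb, 0xff, len);  /* paint the visible screen white */

    munmap(fb, len);
    close(fd);
    return 0;
}
```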
> I find it sad that we have no modern standard akin to VESA / VBE
Actually, there is such a standard: it's called EGL. However EGL by itself is graphics stack agnostic and has been designed to be usable on a wide range of platforms and graphics infrastructures. So you still have to use some kind of operating system dependent API to open the display device, but then you can use that display device handle with EGL to create abstract surfaces that OpenGL, OpenVG and other APIs can use to draw on.
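A sketch of that flow, where only obtaining the native handle is OS-specific (the `native_window` parameter stands in for whatever X11, Wayland, GDI, etc. gives you):

```c
#include <EGL/egl.h>

/* Sketch: the OS-dependent part is reduced to producing native_window;
 * everything after that is portable EGL. */
void egl_setup(EGLNativeWindowType native_window)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    const EGLint attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, attribs, &cfg, 1, &n);

    /* the display handle plus a config yields an abstract surface
       that client APIs (GL ES, OpenVG, ...) can draw on: */
    EGLSurface surf = eglCreateWindowSurface(dpy, cfg, native_window, NULL);
    EGLContext ctx  = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surf, surf, ctx);
}
```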
EGL is completely different from VBE. VBE was a hardware interface that got you modesetting, a framebuffer, and vsync for any hardware with a single driver that was always there as a fallback. EGL's just an API for applications to talk to the drivers the same way regardless of the window system, but you still need the multitude of drivers. The modern analogue to VBE would be one of the EFI graphics standards, if there weren't two of the damn things, and if anybody would bother properly implementing EFI.
True, but it's a single API you can ideally use across a multitude of platforms. That's a huge step forward compared to the plethora of APIs you have/had to deal with: WGL, GLX/X11, AGL, Cocoa, Carbon, etc.
From a user space process programmer's perspective the graphics device is some abstract thing, represented by the operating system through a unified API.
When it comes to actually setting the framebuffer mode on the hardware, well, in theory it sounds nice to have a common hardware standard like VESA to support this. But then such a low level interface was of little use to user space applications running in memory protected environments.
For a long time the X server was required to be SUID root because it drilled a hole through memory protection using ioperm so that it could talk to the graphics chip directly. But talking VESA required executing code from the Video BIOS, which technically requires a real mode environment, so the X server also included an 8086 emulator to run the Video BIOS code in. We had to live with this mess until KMS came along.
From a programmer's perspective KMS is the far nicer, much less complex solution, even at the low level. Yes, it requires dedicated code for each kind of GPU; yes, there is some code duplication. But the advantage is a huge reduction in complexity: not interacting with a Video BIOS (or an EFI driver) means that you don't have to provide a runtime or execution interface in your kernel for them to operate in. Writing a universal emulator/VM and verifying that it always does the correct thing is much harder than punching out a few dozen lines per GPU class to deal with the low-level mode setting stuff.
To expound on this comment: SDL1 defaults to fully software rendered output. You can hack in opengl hardware rendering, but it's fairly common for SDL1 games and such to be fully software rendered and run fine.
SDL2 in general is hardware rendered.
SDL1 is a good counter-example to "software rendering isn't quick enough anymore". You can make perfectly performant 2d games with sdl1's software rendering... 3d, not so much.
People usually use SDL1 with OpenGL, and I don't understand what part of it is "hack". Is using OpenGL with a manually created context with tens of platform ifdefs less hacky? Or is EGL/glut/<your favorite wrapper> the one true way of doing things?
It's not a hack in that it's held together by tape, precarious, or any such thing; it's a hack because it's going around the back of SDL to do the graphics, even though SDL is supposed to do all the graphics for you.
Perhaps my wording was poor. It's more of a hack than in SDL2, where all the SDL functions support hardware rendering with no need to touch OpenGL directly ever.
The general point still stands that you can write performant software rendering in SDL1.
I am talking about the layers below SDL. The layers that give rise to questions like: "will the graphics card be supported on my favorite OS if I buy this new computer?". These layers are responsible for detecting and configuring your adapters & monitors, setting the resolution, colour depth, interrupts for vsync, DMA for fast frame copying, etc. If these things do not work right, you are in for a sad, sad desktop experience, even if your CPU were fast enough to do all the 3D magic you want.
Hilariously, using software SDL for your framebuffer (at least on X.org) will be much slower than using OpenGL PBOs. Don't talk about hardware acceleration here -- all we're doing is transferring a finished frame to the display. At this point, we already need much more than the "small" amount of code to implement KMS. Can it break? Yes it can. Does it break? Yes it does. Graphics driver issues are still among the most common problems I witness people struggle with, as far as getting their OS to run smoothly goes.
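For the curious, the PBO path being referred to looks roughly like this (sketch only: PBOs are GL 2.1+ / ARB_pixel_buffer_object, loaded via GLEW or similar; `tex` and `frame`, a CPU-rendered w*h RGBA image, come from elsewhere):

```c
#include <GL/gl.h>
#include <GL/glext.h>  /* PBOs: GL 2.1+ / ARB_pixel_buffer_object */
#include <string.h>

/* Sketch: stream a CPU-rendered frame to the GPU through a pixel
 * buffer object. Assumes a current context. */
void upload_frame_via_pbo(GLuint tex, const void *frame, int w, int h)
{
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, (GLsizeiptr)w * h * 4, NULL,
                 GL_STREAM_DRAW);

    void *p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    memcpy(p, frame, (size_t)w * h * 4);  /* CPU fills driver-owned memory */
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    /* with a PBO bound, the data argument becomes an offset into the
       PBO, and the transfer can be async DMA instead of a blocking copy: */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_BGRA, GL_UNSIGNED_BYTE, (const void *)0);
    /* ...then draw a fullscreen textured quad */
}
```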
Most of those things pretty much stem from the fact that OpenGL is nowhere near a priority for those companies - D3D game performance in benchmarks is what sells the accelerators. Additional features like HW video encoders come second.
I'm completely missing the mobile space in this writeup, but I bet the situation there is even worse than ATI's (no way to update the GPU drivers, short of a full OS upgrade on Android).
It absolutely is from what I've heard. There are devices where the Mali GPU works well in some games/apps with one version of the OS and are severely broken in a newer version because the driver broke a lot of things to make other apps work.
It's one of the big reasons I'm hoping that with efforts like the driver for GPU for the RPI being open sourced that we'll start to see some more consistent drivers for mobile chipsets.
Or move the whole rendering to a server in the cloud, where any awfulness is at least going to be consistent and predictable. This is what NV were (are?) pushing as "Grid", though I haven't heard all that much about it. They say they can keep latency at acceptable levels, but it strains credulity a little bit.
I have the misfortune of developing OpenGL ES stuff. And I cannot overstate how bad the drivers are.
It's pure bliss to use normal desktop GL implementations because they don't constantly crash on you. They don't have weird breaking bugs. They don't leak memory like there's no tomorrow. They don't play fast and loose with precision. They don't have absurd performance regressions.
Basically the drivers only work if you use Unity or some other well known engine. The driver makers don't really bother making real ES drivers, they just make something that doesn't crash Unity.
Speaking of Unity + Android, our mobile game is seeing a huge increase in crashes on Android 4.4.2 (the latest) on Samsung GPUs (PowerVR). Apparently it's due to a bug in the driver. So, I guess the mantra of "don't crash Unity" doesn't always succeed =)
I have been wondering about this for a long time. The biggest cost in GPU development isn't the hardware itself but the drivers. Nvidia famously said they have far more software engineers than hardware guys. And as history tells us, having great hardware on paper means nothing when your drivers aren't up to standard (S3, PowerVR on desktop). Hence the smaller GPU makers were forced out because they didn't have the resources to compete on the software front.
And yet over the decades nothing has improved. When browsers wanted to do hardware acceleration, many laptop GPUs were blacklisted simply because they didn't have any driver updates. The situation is much better on Mac because the drivers and the testing happen to be done by the same people.
There are people who wanted the GPU to be just another CPU (Intel's Larrabee), but none of them has succeeded.
I would have thought that with GPU IP making the rounds, driver quality being in the hands of vendors would have improved the situation a bit. However, it seems no one wants to invest in it.
So do we have no solution for this? Rumor has it Apple is designing their own GPU. Maybe they are tackling the driver problem themselves by getting rid of it?
Not really, at least for those of us who sat in Intel's talk about Larrabee at GDCE back in 2009, on how Larrabee would revolutionize graphics programming.
ATI drivers are horrible on the Mac OS X platform too, even though Apple controls all the drivers on the platform. Even though Intel's GPUs are slow, their drivers are very stable and work most of the time.
It's a bit of both. I believe Apple maintain the app-facing side of the GL implementation. Backend plugins are provided by IHVs to talk to their specific hardware, but with Apple still controlling the release channels.
AFAIK there's no route for an IHV to get an updated driver to a user without going through Apple, put it that way.
I think you misunderstood something. If you read the info text for the driver on the linked page, it explicitly mentions that you need a _separate_ driver for CUDA.
That, and sometimes device A is x% faster than competing device B because the driver for device A is faster, not because of the silicon. Open that up, and you give away your competitive advantage.
Also, there are always rumors about $LOW_COST_PRODUCT being the same silicon as $HIGH_COST_PRODUCT from the same manufacturer, just with a few features turned off in the driver. This probably isn't true, but nobody can rule it out.
While technically the same hardware, isn't it the case that the "higher end" products are the units off the production line that met a higher QA bar?
That's my understanding of how it works for CPUs. A four-core CPU with questionable functionality on the 4th core may be sold as a "3 core" CPU. Depending on how "questionable" the 4th was though, it might be possible to use it anyway.
I don't think production quality is the only reason. Sometimes it's cheaper to design one product and sell it as multiple products to capture more value. For example, some people are willing to pay 1k for a graphics card while others are only willing to pay 300. So you sell the same graphics card to both groups, but for one you artificially lower its capability. This allows you to capitalize on the market a lot more efficiently than selling your product at either a low price or a high price.
With Tesla cards, you get a professional-oriented driver that is good for Maya and CUDA, but not optimized for games. There are also a few hardware features that are important for pros but are not an issue for gamers --stuff like ECC RAM, double-precision fp, better handling of multiple 3D viewports.
But, it's my understanding that most of what you pay for when you buy a Tesla card is support. If you call Nvidia saying Maya has a driver problem with your Tesla card, they will pay attention. If Maya has a problem running on a GeForce card, they will direct you to the forums.
Especially since you have only one product to produce on the highly expensive PCB/chip manufacturing lines - if you experience demand shifts, just reflash the BIOS and change the packaging.
Way cheaper (and more flexible!) than ramping up different production lines.
Lots of chip companies do this. I worked for a company that sold a whole line of different chips at different price points that all used the same die. They had different packages and they all had different internal pads on the die connected to ground so that the chip could detect what mode it was in. The firmware could read this and detect which hardware was enabled or disabled.
I remember the management being very secretive about this since they didn't want their customers to think they were being ripped off by buying the "expensive" chip…
I don't mean to sound critical of this practice… From a cost perspective it makes a lot of sense to do it this way - the cost to lay out, test, and create all the masks for a custom chip is huge. So it makes sense to cram as much as possible into one chip instead of making 2 or 3 or 4. That way the one-time costs of creating the masks and tooling up at the foundry are amortized over all the products that use the die.
> IIRC one could also flash a consumer-grade card with a Tesla BIOS and "convert" a couple-hundred-dollars-card into a thousand-dollars-card.
No, this guy was soldering resistors on his $1000 card to spoof the PCI vendor:device IDs and fool the driver into enabling a software feature (4 displays at once). The same could have been done by patching the kernel.
IIRC Nvidia fixed the "bug" that made this work but enabled the feature on the consumer cards (it was available on Windows but not Linux for reasons unknown).
But he did not get access to the hardware features which are fused off.
And as others have said, every chip manufacturer out there does the same. Intel has 20+ models of their most recent CPUs, which are probably all the same silicon, or perhaps a few different designs. i5's are "crippled" i7's (perhaps ones that were not 100% successfully manufactured), but you get them at a discount.
And some of the drivers will blatantly cheat the benchmarks. I remember stories of one driver that behaved differently if called from an executable named QUAKE.EXE - if you renamed Quake you got slower, more "correct" rendering.
It's not a rumor, I mean nvidia advertised the same chip in e.g. the GTX 680 and the Quadro K5000. The K5000 got ECC RAM and a lower clock rate, but the GPU die was the same. All the difference was in the drivers.
I think that's actually happened a few times before. I can remember some hardware hacks that caused the gfx card to be recognized as a higher end card, causing its features to become enabled.
Whether that's in the ROM or the driver is up for debate, though.
There are also game / application specific optimizations and tricks in those drivers that show up as those "30% increase of performance in game X" changelogs.
Embedded [nVidia] developers "optimizing" games by rewriting and hand compiling shaders to replace shaders in shipped games isn't enough of a clue as to why these drivers aren't OSS for you?
The article is rich with the reasons behind this. Some of it is IP encumbered, some of it is benchmark hubris. All of it is the shroud of secrecy that the graphics market prided itself on coming back to bite them in the ass, as their driver stacks are now massive piles of virtually unmaintainable code - code that can't even be rewritten because the companies themselves (well, at least AMD and Intel) are hiding details about how the hardware works from themselves!
Also, those drivers are huge (really, HUGE) and have incorporated pieces of code with widely differing IP, patents from previous companies, and other sources for which open-sourcing would probably be a major legal hassle and lawsuit risk.
Also... if you're nVidia / ATi... what's the gain in giving optimization paths and software optimization database to the competitor? Those drivers are full of application / game specific optimizations to make them look better / run faster / workaround bugs.
When left to their own devices companies have little tendency to cooperate on that level. The proprietary drivers represent an enormous investment, it would be near-impossible for a new company to enter the market and develop competitive proprietary driver stacks for DirectX and OpenGL.
Maybe some day in the future the various GPUs will have code gen backends in LLVM/Microsoft compilers without needing secret & broken vendor drivers and we'll be down to one set of bugs per platform, instead of <dx-win|ogl-win|ogl-mac|ogl-linux-proprietary|ogl-linux-free> x <amd|nvidia|intel> = 15 combos.
Along with all the good responses to your question, it's worth pointing out that Intel has completely open-sourced their driver and GPU, and ATI has open-sourced their GPU and produced a driver that can do about 90% of what the closed-source driver can do.
Primary issue is IP rights. For example, DXTC (S3TC) is patent-encumbered, but many GPU features are not only patent-encumbered but are trade secrets - vendors like NVIDIA have in the past had trade secret, vendor-only extensions for things like a special texture filtering algorithm for shadow maps. So in many cases a driver vendor may not even be able to open their source if they want to, because it contains proprietary trade secrets from a third party, where they're using third party code to compile shaders or whatever.
In my experience of writing a GL Windows desktop app alone, driver bugs have been the #1 cause of stability problems for real end users. It's a complete nightmare, with pretty much every manufacturer. I guess that's what happens when hardware companies need to write software.
No time for a startup, and we had previously used D3D and had other annoying problems (horrible text rendering, annoying redist issues, etc), so it just had its own set of problems.
Is it practical to create a driver abstraction that masks cross vendor issues and provides a consistent interface to the dev? Like what jquery did for browsers.
> Is it practical to create a driver abstraction that masks cross vendor issues and provides a consistent interface to the dev? Like what jquery did for browsers.
Yes and no. The situation with OpenGL and drivers is quite different to browsers and jQuery.
OpenGL drivers are generally quite good in implementing the API as it is specified and this is tested with a huge bunch of conformance suites. It's not like ancient browsers where one vendor's understanding of the CSS box model is different from the others'.
However, OpenGL has a huge number of different versions, some require hardware support (major versions like GL 4.x vs. 3.x), while others might be software only additions (minor versions, GL 3.2 to 3.3). And then there are lots of API extensions that may or may not be available. This is a relatively simple problem to solve and we have tools like Regal and ANGLE to patch the little things.
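Even the "relatively simple" extension-availability part has classic pitfalls. The traditional way is to scan the space-separated string returned by glGetString(GL_EXTENSIONS), and a naive strstr() match is subtly wrong because extension names can be prefixes of one another. A hypothetical helper (the function name is mine) that does the check correctly:

```c
#include <string.h>

/* Hypothetical helper: correct presence test for a name in a
 * space-separated extension string. The classic bug is a plain
 * strstr(), which "finds" GL_ARB_texture when the list only
 * contains GL_ARB_texture_float. */
int has_extension(const char *extlist, const char *name)
{
    size_t len = strlen(name);
    const char *p = extlist;

    while ((p = strstr(p, name)) != NULL) {
        /* a real match must start at the list start or after a space,
           and end at a space or the end of the string */
        int starts = (p == extlist) || (p[-1] == ' ');
        int ends   = (p[len] == '\0') || (p[len] == ' ');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}
```

(Core-profile GL 3.0+ sidesteps this by exposing glGetStringi per extension, but a lot of shipping code still does string scanning.)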
But the real problem is functional bugs: incorrect pixels on the screen, ranging from a minor annoyance to a completely destroyed image, and more severe issues like application crashes, a completely corrupted display/desktop, blue screens, and kernel hangs. This is something an abstraction layer cannot fix.
A lot of engines do this to some extent (eg. OGRE3D, Irrlicht, et al.) but there are pitfalls there as well. While jquery does a good job of shimming in hacks for older browsers the world of graphics programming isn't really that flexible and even newer drivers can still be a total grab bag of what they support - think of it like this: you have the latest version of Firefox and you can support everything but rendering text in italics. How would you shim that in? Can you? What if it tells you it supports it but it actually slants text the other way? What if it renders correctly but doing it the normal way takes an hour but there's a hack for only this operating system and version of Gecko that works. Also there really isn't a manual and StackOverflow is full of questions that contain all your search terms but are actually for a completely unrelated matter.
So you drop it from the common interface that your abstraction presents because it's just not consistent enough....
There are also a TONNE of shader abstraction languages that translate to HLSL or GLSL on the fly for the same reason.
tldr; Engine developers stare into the abyss and somehow the abyss gives back a projection matrix.
I assume driver incompatibility and things of this nature are one reason why so many games are built on engines like Unreal, Crytek, Unity, etc.? At some point you need to be working on shipping a game, not writing driver compatibility code that a framework could/should handle for you.
What's horrible is I've done almost zero OpenGL dev and I could pick these out immediately based on anecdotal experiences with drivers on Windows. Vendor B might be getting better (supposedly), but I still have a tendency to shy away from their drivers (thus, them) due to past experiences.
Well let me put it like this. I bought a Radeon 9600 back in 2003 when ATi still existed as an independent company. In the time since then, I haven't seen enough improvement to justify buying anything other than nVidia.
To be fair, they are better - I haven't heard them lately crashing the kernel so hard the alt-sysrq combos don't work.
Not covert at all. Nvidia engineers come to your office and work on your code for you. They know more secret sauce and contacts inside their company, so they can help you get your performance up on their chips.
Even though this is offtopic I'll answer: Not going to happen.
Nvidia hates OpenCL as it eats into their CUDA business, and Intel performs market segmentation: their Linux implementation supports only the CPU and Xeon Phi, while Windows supports the GPU. They don't want server builders to purchase their desktop chips with GPUs. They want them to purchase the massively expensive Xeon Phi and use normal Xeon CPUs.