The joys of working with Optimus technology..

Have you ever heard about Optimus technology? It’s a software/hardware solution where both an Intel and an NVIDIA GPU sit on the same board. From a strictly practical point of view, the main idea is to let the user enjoy the power-efficient work of the Intel GPU (and.. err, that’s about it 🙂 ), with the NVIDIA GPU kicking in when needed. The kicking-in is handled by introducing an additional routing layer on top of the GL and DX stacks, which is responsible for forwarding all the calls to one implementation or the other, depending on how resource-demanding the user’s actions are.
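Since that routing layer is transparent to the application, the easiest way to see which side of it you actually landed on is to ask the driver itself. Here’s a minimal sketch; using GLFW for context creation is my assumption, any way of getting a current GL context will do:

```cpp
// Minimal sketch: query which vendor's implementation is actually
// servicing our OpenGL context. Context creation via GLFW is an
// assumption; any method of obtaining a current GL context works.
#include <cstdio>
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return 1;

    // An invisible window is enough; we only need a current context.
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
    if (!window)
        return 1;

    glfwMakeContextCurrent(window);

    // On an Optimus laptop, GL_VENDOR tells you which GPU the routing
    // layer picked for this process: "Intel" or "NVIDIA Corporation".
    printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```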

As far as theory is concerned, that’s about it.

The trickiest part for the driver is to discover when it is the right time to switch to the other GPU, without having the user hurl his laptop onto the floor because of the rather unpleasant blast of excessive heat suddenly hitting his lap. And that’s where things get ugly. To cut a long story short – if you expected complex heuristics or neural networks trying to work out where the threshold is, you should go back to the drawing board. There is no such thing, and that can send a gazillion hours of your development time straight down the drain, as there’s a HUGE difference between the capabilities of the two GPUs (not to mention the quality of their OpenGL implementations, but that’s another story..).

So. If you ever find yourself trying to resolve a bug occurring on one specific laptop only, with the laptop theoretically capable of handling a given OpenGL or DirectX version (“At least that’s what the specs say, m’kay!“), you’d better check if it’s Optimus-enabled. If it is, either go to the NVIDIA Control Panel and disable the devil OR read the following link. It could really save you a man-day 🙂
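For what it’s worth, NVIDIA also documents a programmatic opt-in: exporting the `NvOptimusEnablement` variable from your executable (not from a DLL) tells the routing layer to prefer the discrete GPU for that process, so your app doesn’t depend on a driver profile or the user fiddling with the Control Panel. Whether that’s what the linked article covers is my assumption, but the export itself is the documented mechanism. A minimal Windows/MSVC sketch:

```cpp
// Sketch of the documented Optimus opt-in (Windows/MSVC): exporting
// this symbol from the *executable* tells the Optimus routing layer
// to prefer the discrete NVIDIA GPU for this process.
#include <windows.h>

extern "C" {
    // 0x00000001 = request high-performance (NVIDIA) graphics.
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

int main()
{
    // Create your GL/DX context as usual; with the export above,
    // the NVIDIA GPU should service it even without a driver profile.
    return 0;
}
```

The nice thing about this knob compared to the Control Panel route is that the choice is per-application and ships with the binary, so your users never have to touch the driver settings.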
