Is it possible to force a GPU to use highp floats in shaders when the OpenGL version being used is 2.0? I know that precision qualifiers are essentially a GLES concept, but floats do seem to default to mediump on a desktop machine I was given for testing, and it's driving me nuts.
I stumbled across this problem while running my GL project under Win10 with a GTX 1070 (driver version 23.21.13-9135). I don't even set #version in my shaders, so I expected the precision to be automatically maxed out.
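For reference, here's roughly what I'd like to be able to put at the top of every shader (just a sketch with a dummy body, not something I expect to compile under desktop GL 2.0 — as far as I know, precision statements only became legal desktop syntax in GLSL 1.30, and even there they're parsed purely for GLES compatibility and have no effect):

    // Wishful thinking: ask for full-precision floats explicitly.
    // As far as I know this isn't legal GLSL 1.10/1.20 syntax, and from
    // GLSL 1.30 onwards it's only accepted for GLES compatibility, with
    // no semantic effect on desktop.
    precision highp float;

    void main()
    {
        // Dummy body just so this sketch is a complete fragment shader.
        gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
    }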
AFAIK, the OpenGL 2.x spec does not define any way to hint to the GPU that at least 20 of the 24 mantissa bits are significant (I'm passing integers through floats, since GL 2.0 cannot operate on integers natively), but this is the first time in 3+ years that I've seen a desktop GPU do such a thing, so I never even thought it possible.
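To make it concrete, the pattern that breaks is along these lines (the names and the 1024 grid size are just placeholders for illustration, not my actual code):

    // Vertex shader sketch: a 2D grid index packed into one float on the
    // CPU side as row * 1024 + col, with row and col both below 1024.
    // Recovering it exactly needs about 20 mantissa bits; if the driver
    // quietly drops to mediump, the unpacked row/col come back wrong.
    attribute float a_packedIndex;
    varying vec2 v_cell;

    void main()
    {
        float row = floor(a_packedIndex / 1024.0);
        float col = a_packedIndex - row * 1024.0;
        v_cell = vec2(col, row);
        gl_Position = ftransform();
    }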