Makes it possible to have both debug and release libraries installed. If
both are present when finding the package, the proper variant is used
based on the configuration of the depending project.
Until we have a proper extension loader implemented (this caused
FramebufferGLTest to assert).
On a related note, NVidia drivers 334.21 support *a lot* of ES2
extensions. Wow.
With ARB_multi_bind the texture needs to be associated with some target
before calling glBindTextures(), otherwise it is treated as invalid.
This would cause random weird issues with texture configuration/upload
if ARB_multi_bind is available and EXT_direct_state_access is not.
Probably not an issue in practice, since EXT_direct_state_access is
likely available on all drivers that also support ARB_multi_bind.
Use the ColorFormat.h header and the ColorFormat and ColorType enums
instead. Amazingly enough, there was a bug in Magnum.h (ColorType was
typedef'd to itself instead of to ImageType).
Until now the textures were bound to layers, which was rather confusing,
especially when binding layered textures to layers (gaah). The wording
also might have implied that each texture must be in some layer in order
to be usable in a shader. This is no longer the case with (yet
unimplemented) bindless textures, so that's another reason to remove the
confusion.
All occurrences of texture layers were replaced with texture binding
units to follow OpenGL naming. It was mostly in the docs, except for the
already-deprecated *Layer enums in shaders, but those will be removed
soon anyway.
Compiling fragment and vertex shaders simultaneously, at least. Nothing
more can be done for now.
Also removed weird duplicate compile/link calls from MeshVisualizer,
which went unnoticed since b9a72bd3d1. Why did I
do that?!
As g_truc said long ago:
https://twitter.com/g_truc/status/352778836657700866
Currently there is not much use of this as the stock shaders are
compiled one by one (and doing it differently would make things
needlessly overcomplicated), but the users can do parallel compilation
of their own shaders.
Also removed a bunch of now-unneeded TODOs and made the linker/compiler
code nearly identical. The whole Shader::compile() call now also does
two allocations in total instead of two allocations per shader.
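The pattern boils down to issuing all glCompileShader() calls before
querying any GL_COMPILE_STATUS, so a driver with a threaded compiler can
work on the shaders concurrently. A rough fragment (assumes a live GL
context and already-created shader objects `vert` and `frag`):

```cpp
/* Fire off both compilations without checking the status in between --
   a driver with a threaded compiler can process them in parallel */
glCompileShader(vert);
glCompileShader(frag);

/* Only now query the results; this is the point where the driver may
   block until compilation actually finishes */
GLint vertSuccess, fragSuccess;
glGetShaderiv(vert, GL_COMPILE_STATUS, &vertSuccess);
glGetShaderiv(frag, GL_COMPILE_STATUS, &fragSuccess);
```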
Due to crappy JavaScript design which doesn't have any integer types at
all, integers need to be "emulated" inside the 52-bit mantissa of
doubles, which means that only 32-bit integers can reliably fit there
(not to mention various issues with 32-bit overflow, which needs to be
emulated somehow to work properly).