Because ARB_DSA doesn't have any way to extract the image of a single
cube map face, we have to use ARB_get_texture_sub_image instead, thus
for cube maps the whole thing is different. That was implemented, but
wasn't mentioned in the docs and wasn't properly accounted for in the
implementation switcher (I was under the assumption that ARB_DSA is
equivalent to ARB_get_texture_sub_image, which it is not).
ARB_DSA is now preferred in single-bind cases, as it is easier to use
than passing pointers to ARB_multi_bind. ARB_multi_bind was previously
preferred even for single binds simply because EXT_DSA was not in core.
Because there is a lot to say about feature selection for each function,
I took this as an opportunity to remove redundant documentation blocks,
refer to the Texture documentation from everywhere and add extension
requirements and deprecation notes where needed, so it's clear for each
class what needs what.
ARB_DSA also took the opportunity to finally remove all target enum
values from function calls, and because of that CubeMapTexture has to
handle a bunch of special cases. In order:
- CubeMapTexture::imageSize() now doesn't take a face parameter and
returns one value for all faces. I'm thus now also assuming that the
user is sane and called either setStorage() or setImage() with the
same size for all faces. In the non-ARB_DSA path I'm thus querying
only the size of the +X face and returning it as the size for all
faces. The old imageSize(Coordinate, Int) overload is still present,
but ignores the first parameter and calls imageSize(Int). It is
marked as deprecated and will be removed in some future release.
- CubeMapTexture::image() now needs to call glGetTextureSubImage() in
the ARB_DSA path to make it possible to extract a single face. Other
code paths (EXT_DSA, Robustness and "default") remain the same.
- CubeMapTexture::setSubImage() calls glTextureSubImage3D() in the
ARB_DSA path, because it is not possible to specify the face index in
glTextureSubImage2D(). Other code paths (EXT_DSA and "default")
remain the same.
The implementation of these special cases is extracted into the
CubeMapTexture class to avoid polluting AbstractTexture with
incompatible nonsense.
ARB_direct_state_access doesn't have an equivalent for glTexImage*D(),
which indicates that these calls should not be used anymore. Also
removed the EXT_direct_state_access code path and kept just the plain
glBindTexture() + glTexImage*D(), as I assume that all implementations
which have EXT_DSA also have ARB_texture_storage, so this alternative
would have no use.
Until now the textures were bound to layers, which was rather confusing,
especially when binding layered textures to layers (gaah). The wording
also might have implied that each texture must be in some layer in order
to be usable in a shader. This is no longer the case with the (yet
unimplemented) bindless textures, so that's another reason to remove the
confusion.
All occurrences of texture layers were replaced with texture binding
units to follow the OpenGL naming. It was mostly in the docs, except for
the already-deprecated *Layer enums in shaders, but those will be
removed soon anyway.
Everything that was in src/ is now in src/Magnum, everything from
src/Plugins is now in src/MagnumPlugins and everything from external/ is
now in src/MagnumExternal. Added a new CMakeLists.txt file and updated
the other ones for the moves; no other change was made. If
MAGNUM_BUILD_DEPRECATED is set, everything compiles and installs like
previously, except for the plugins, which are now in MagnumPlugins and
not in Magnum/Plugins.
Finally a non-confusing name, hopefully. Sorry it took me so long. The
original Sampler::maxAnisotropy() (and also
Sampler::maxSupportedAnisotropy()) are now aliases to the new one,
marked as deprecated and will be removed in a future release.
Renamed AbstractTexture::maxSupportedLayerCount() to maxLayers(), which
is in fact an alias to Shader::maxCombinedTextureImageUnits(). Also
renamed Sampler::maxSupportedAnisotropy() to maxAnisotropy(). It now has
slightly confusing naming, will fix that later. Both original functions
are now aliases to the new ones to retain source compatibility and will
be removed in future releases.
Also printing the values in magnum-info.
* The light didn't catch camera transformation changes, so it was
returning a wrong position most of the time.
* The multiplication was in the wrong order, the position should be
multiplied by the camera matrix from the left.
I need to find a solution for this, because now there is one redundant
matrix*vector multiplication per object per frame again.
This reverts commit 0443bbe286.
Conflicts:
src/Light.cpp
src/Test/LightTest.cpp
Object::setClean() now computes the absolute transformation while
traversing through the object's parents and passes it as a parameter to
clean(), which is now virtual and meant to be reimplemented instead of
setClean().
Updated and greatly improved the unit test.
Saves one matrix*vector multiplication per object per frame. The
position can now be a Vector3 like before, because it won't be
multiplied by anything on the draw call. Added a unit test.
Removed the functions at(), set() and add(); everything (and more) can
now be done using operator[]. Accessing matrix elements is now done
through column vectors, e.g.:
Matrix4 a;
a.at(row, col); // before
a[col][row]; // now
Note that because operator[] on Matrix returns a column vector (there is
nothing like a row vector), the parameter order is now swapped.
It was overengineered and unnecessarily complicated. Now the camera is
specified only in Scene::draw(), which eliminates any need for
recalculating absolute object transformations on each camera
transformation change. The absolute object transformation is now
computed relative to the root object or relative to the camera object
passed as a parameter. Because of that it is now also possible to draw
the scene using multiple cameras at once.