I expect the drivers to return the size of *one* face when I'm querying
that particular face using the pre-DSA or EXT_DSA API, and the size of
*all* six faces when I'm querying the whole texture using the DSA API.
One can dream, eh?
It appears that, at least on my NVidia driver, the returned value does
not depend on whether I'm querying all faces or a single one, but RATHER
on whether the texture is immutable or not. How's that predictable at
all?! Workaround in the next commit.
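
For illustration, a minimal sketch of the two queries in question,
assuming a compressed cube map with GL name texture is already uploaded
(variable names are mine):

    GLint faceSize{}, wholeSize{};

    /* Pre-DSA: querying one particular face */
    glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
    glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0,
        GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &faceSize);

    /* ARB_DSA: querying the whole texture */
    glGetTextureLevelParameteriv(texture, 0,
        GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &wholeSize);

    /* Expected: wholeSize == 6*faceSize. Observed on the NVidia driver:
       both return the same value, chosen based on immutability of the
       texture rather than on what was asked. */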
Pre-DSA code paths need to specify which face we are querying the level
parameters for, which meant that all other calls had to specify the
(implicit) target too. I'm also preparing to put a cubemap-specific
workaround into the level parameter query and that really shouldn't be
present in the generic implementation for all texture types.
The other place where a specific target is needed is in the setImage()
implementations, but these are rather big chunks of code and I don't
feel like copying them verbatim into the cubemap implementation just to
isolate the workaround in one place.
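
To show what the explicit target means in practice, a hypothetical
shape of the shared query (the helper name and signature are mine, not
the actual Magnum internals):

    /* Generic pre-DSA level parameter query; CubeMapTexture passes a
       concrete face such as GL_TEXTURE_CUBE_MAP_POSITIVE_X, all other
       texture types pass their single implicit target */
    GLint levelParameter(const GLenum target, const GLint level,
        const GLenum parameter) {
        GLint value{};
        glGetTexLevelParameteriv(target, level, parameter, &value);
        return value;
    }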
The pre-DSA code path needs to pass a specific face of a cube map to
all getters instead of just GL_TEXTURE_CUBE_MAP. I did that properly
for the image size query, which, weirdly enough, had its own
implementation, but forgot to do it in the compressed image getters
and, because I have DSA drivers, never tested that on pre-DSA contexts.
Using a single implementation of the image size query with an explicit
target parameter now.
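
For example, the fixed compressed image getter in the pre-DSA path,
with the texture bound and coordinate being the face index (a sketch,
data allocation elided):

    /* GL_TEXTURE_CUBE_MAP itself is not a valid target for image
       getters, a concrete face has to be passed. The six face enums
       are consecutive, so the index can simply be added to +X. */
    glGetCompressedTexImage(GL_TEXTURE_CUBE_MAP_POSITIVE_X + coordinate,
        level, data);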
Because ARB_DSA doesn't have any way to extract the image of a single
cube map coordinate, we have to use ARB_get_texture_sub_image instead,
so for cube maps the whole thing is different. That was implemented,
but wasn't mentioned in the docs and wasn't properly accounted for in
the implementation switcher (I was under the assumption that ARB_DSA is
equivalent to ARB_get_texture_sub_image, which it is not).
ARB_DSA is now preferred in single-bind cases, as it is easier to use
than passing pointers to ARB_multi_bind. ARB_multi_bind was previously
preferred for single binds simply because EXT_DSA was not in core.
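
A sketch of the single-face extraction under ARB_get_texture_sub_image,
where the cube map behaves as a six-layer array and the face index goes
into the Z offset (format, size and buffer handling elided, variable
names mine):

    glGetTextureSubImage(texture, level,
        0, 0, face,        /* xoffset, yoffset, zoffset = face index */
        width, height, 1,  /* depth of 1 = a single face */
        format, type, bufferSize, data);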
Because there is a lot to say about feature selection for each
function, I took this as an opportunity to remove redundant
documentation blocks, refer to the Texture documentation from
everywhere and add extension requirements and deprecation notes where
needed, so it's clear for each class what needs what.
ARB_DSA also took the opportunity to finally remove all target enum
values from function calls, and because of that CubeMapTexture has to
handle a bunch of special cases. In order:
- CubeMapTexture::imageSize() no longer takes a face parameter and
  returns one value for all faces. I'm thus now also assuming that the
  user is sane and called either setStorage() or setImage() with the
  same size for all faces. In the non-ARB_DSA path I therefore query
  only the size of the +X face and return it as the size for all faces.
  The old imageSize(Coordinate, Int) overload is still present, but
  ignores the first parameter and calls imageSize(Int). It is marked as
  deprecated and will be removed in some future release.
- CubeMapTexture::image() now needs to call glGetTextureSubImage() in
  the ARB_DSA path to make it possible to extract a single face. Other
  code paths (EXT_DSA, Robustness and "default") remain the same.
- CubeMapTexture::setSubImage() calls glTextureSubImage3D() in the
  ARB_DSA path, because it is not possible to specify a face index in
  glTextureSubImage2D(). Other code paths (EXT_DSA and "default")
  remain the same. Both this call and the imageSize() fallback are
  sketched below.
The implementation of these special cases is extracted into the
CubeMapTexture class to avoid polluting AbstractTexture with
incompatible nonsense.
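
A condensed sketch of the first and last special case from the list
above, with variable names mine and setup elided:

    /* imageSize() fallback on non-ARB_DSA contexts: query just the +X
       face and assume all six faces have the same size */
    GLint width{}, height{};
    glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP_POSITIVE_X, level,
        GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP_POSITIVE_X, level,
        GL_TEXTURE_HEIGHT, &height);

    /* setSubImage() on ARB_DSA: the face index goes into the Z offset
       of the 3D variant, with depth of 1 */
    glTextureSubImage3D(texture, level,
        offsetX, offsetY, face, width, height, 1, format, type, data);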
ARB_direct_state_access doesn't have an equivalent for glTexImage*D(),
which indicates that these calls should not be used anymore. Also
removed the EXT_direct_state_access code path and kept just the plain
glBindTexture() + glTexImage*D(), as I assume that all implementations
which have EXT_DSA also have ARB_texture_storage, so this alternative
would have no use.
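
The single remaining mutable-storage path then boils down to the
following (a sketch; target and the other parameters come from the
caller):

    /* ARB_DSA has no equivalent and the EXT_DSA variant would matter
       only on implementations lacking ARB_texture_storage, so plain
       bind + glTexImage2D() stays as the only code path */
    glBindTexture(target, texture);
    glTexImage2D(target, level, internalFormat,
        width, height, 0, format, type, data);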
Until now, textures were bound to layers, which was rather confusing,
especially when binding layered textures to layers (gaah). Also the
wording might have implied that each texture must be in some layer in
order to be usable in a shader. This is no longer the case with the
(yet unimplemented) bindless textures, which is another reason to
remove the confusion.
All occurrences of texture layers were replaced with texture binding
units to follow the OpenGL naming. The change was mostly in the docs,
except for the already-deprecated *Layer enums in shaders, but those
will be removed soon anyway.
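
For reference, the GL calls the new naming follows; a texture is
attached to a texture binding unit either the classic way or via
ARB_multi_bind:

    /* Classic way: select binding unit 3, then bind */
    glActiveTexture(GL_TEXTURE0 + 3);
    glBindTexture(GL_TEXTURE_2D, texture);

    /* ARB_multi_bind: bind one (or more) textures starting at unit 3 */
    glBindTextures(3, 1, &texture);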
Everything that was in src/ is now in src/Magnum, everything from
src/Plugins is now in src/MagnumPlugins and everything from external/
is in src/MagnumExternal. Added a new CMakeLists.txt file and updated
the other ones for the moves; no other change was made. If
MAGNUM_BUILD_DEPRECATED is set, everything compiles and installs like
previously, except for the plugins, which are now in MagnumPlugins and
not in Magnum/Plugins.
Finally a non-confusing name, hopefully. Sorry it took me so long. The
original Sampler::maxAnisotropy() (and also
Sampler::maxSupportedAnisotropy()) is now an alias to the new one,
marked as deprecated, and will be removed in a future release.
Renamed AbstractTexture::maxSupportedLayerCount() to maxLayers(), which
is in fact an alias to Shader::maxCombinedTextureImageUnits(). Also
renamed Sampler::maxSupportedAnisotropy() to maxAnisotropy(). It now
has slightly confusing naming; will fix that later. Both original
functions are now aliases to the new ones to retain source
compatibility and will be removed in future releases.
Also printing the values in magnum-info.
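
The compatibility aliases follow roughly this pattern; a sketch using
Corrade's CORRADE_DEPRECATED() macro and Magnum's Int typedef, not the
literal declarations:

    class AbstractTexture {
        public:
            /* the new name */
            static Int maxLayers();

            /* old name kept as a deprecated inline alias so existing
               code still compiles, just with a compiler warning */
            CORRADE_DEPRECATED("use maxLayers() instead")
            static Int maxSupportedLayerCount() { return maxLayers(); }
    };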