SVGA3D has broken handling of glTex[ture][Sub]Image*D() for 1D arrays,
2D arrays, 3D textures and cube map textures where it uploads just the
first slice in the last dimension. This happens only with copies from
host memory, not with buffer images. It seems to be fixed in Mesa 13,
but I have no such system to verify that on. Relevant commit in the
Mesa sources:
2aa9ff0cda
This is one of the uglier workarounds -- I had to reintroduce multiple
code paths for glTexImage() that were removed when implementing ARB_DSA,
and the workaround basically consists of a bunch of functions that slice
the image and call the original implementations with each slice.
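Roughly, the per-slice upload looks like this (a raw-GL sketch with
illustrative names; it assumes tightly packed data and already allocated
storage, and is not the actual implementation):

/* Upload each slice separately, sliceSize being the byte size of one slice */
for(GLint z = 0; z != depth; ++z)
    glTexSubImage3D(GL_TEXTURE_3D, level, 0, 0, z, width, height, 1,
        format, type, static_cast<const char*>(data) + z*sliceSize);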
The GL queries are needed only if the user-provided pixel storage
doesn't contain enough information, in particular the dimensions and
byte size of the compression block.
So if that information is present, no query for the compressed image
size is done for full image queries (saving one API call) and no
queries for block dimensions and byte size are done for subimage
queries (saving four API calls and not requiring
ARB_internalformat_query2).
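For reference, these are the kinds of queries that get skipped (raw GL,
illustrative variable names):

/* Full image query, skipped if the storage specifies the data size */
glGetTexLevelParameteriv(target, level, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &dataSize);
/* Subimage queries, skipped if the storage specifies the block properties */
glGetInternalformativ(target, format, GL_TEXTURE_COMPRESSED_BLOCK_WIDTH, 1, &blockWidth);
glGetInternalformativ(target, format, GL_TEXTURE_COMPRESSED_BLOCK_HEIGHT, 1, &blockHeight);
glGetInternalformativ(target, format, GL_TEXTURE_COMPRESSED_BLOCK_SIZE, 1, &blockSize);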
The documentation was fixed and tests were updated to take the
ARB_internalformat_query2 requirement into account.
Followup to the previous commit -- links to opengl.org are now
redirected to khronos.org and the extension links have the same format
for both GL and GLES. That allows me to remove some of the Doxygen
aliases and use just a single set of functions for both GL and GLES.
This allows breaking the dependency on the <Magnum/CubeMapTexture.h>
header in Framebuffer, TextureState and elsewhere. The old
CubeMapTexture::Coordinate enum is now just an alias, marked as
deprecated, and will be removed in a future release.
I expect the drivers to return the size of *one* face when I'm querying
that particular face using the pre-DSA or EXT_DSA API and the size of
*all* six faces when I'm querying the whole texture using the DSA API.
One can dream, eh?
It appears that, at least on my NVidia, the returned value does not
depend on whether I'm querying all faces or a single one, but rather on
whether the texture is immutable or not. How is that predictable at
all?! Workaround in the next commit.
Pre-DSA code paths need to specify for which face we are querying the
level parameters, which meant that all other calls had to specify the
(implicit) target too. I'm also preparing to put a cubemap-specific
workaround in the level parameter query and that really shouldn't be
present in the generic implementation for all texture types.
The other place where a specific target is needed is in the setImage()
implementations, but those are rather big chunks of code and I don't
feel like copying them verbatim into the cubemap implementation just to
isolate the workaround in one place.
The pre-DSA code path needs to pass a specific slice of a cube map to
all getters instead of just GL_TEXTURE_CUBE_MAP. I did that properly for
the image size query, which, weirdly enough, had its own implementation,
but forgot to do that in the compressed image getters and, because I
have DSA drivers, never tested them on pre-DSA contexts.
A single implementation of the image size query with an explicit target
parameter is used now.
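In other words, all the pre-DSA getters have to name the concrete face,
not GL_TEXTURE_CUBE_MAP itself (a raw-GL sketch with illustrative
names):

/* Level parameter and compressed image getters for a single face */
glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP_POSITIVE_X + coordinate, level,
    GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &dataSize);
glGetCompressedTexImage(GL_TEXTURE_CUBE_MAP_POSITIVE_X + coordinate, level, data);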
Pain and misery. The majority of functionality for 3D compressed images
now suddenly fails the tests -- either this is very vaguely specified,
or I am very bad at understanding things, or there are bugs in my NVidia
drivers.
This was an awful feature. Kill me now.
Yeah, sorry, I know, the enums are renamed for the second or third time
in a row, first they were Image::Format, then ImageFormat, then
ColorFormat and now PixelFormat. But this time it's final, it's the last
time they are renamed, and now everything is finally consistent:
* ColorFormat::DepthComponent -- depth is not a color, thus
PixelFormat::DepthComponent makes a lot more sense.
* There will be PixelStorage classes, which will be stored in images
alongside PixelFormat/PixelType enums, making everything nicely
aligned.
* The GL documentation about glTexImage2D() etc. denotes the <format>
and <type> parameters as format and type of *pixel* data, so now we
are _finally_ consistent with the official naming.
I wonder why I did not choose PixelFormat originally. Anyway, the old
<Magnum/ColorFormat.h> header, ColorFormat, ColorType and
CompressedColorFormat types are now aliases to the new ones, are marked
as deprecated and will be removed in some future release (as always, I'm
waiting at least six months before removing the deprecated
functionality).
With pixel pack/unpack support it will be possible to create views onto
sub-images, so the class is renamed to reflect that.
The old Magnum/ImageReference.h header and the ImageReference types are
now aliases to the ImageView.h header and ImageView types, are marked as
deprecated and will be removed in a future release.
Similarly to what's now done with NoInit tags for Containers::Array and
all math types such as Vector, there's now a NoCreate tag for creating
wrappers without actually creating the underlying OpenGL object. The
instance is then equivalent to a moved-from state. Useful to avoid
needless creation/deletion of an OpenGL object in case you would
overwrite the instance later anyway:

Mesh mesh{NoCreate};
std::unique_ptr<Buffer> indices, vertices;
std::tie(mesh, indices, vertices) = MeshTools::compile(...);
The original problem was that I was using 3D wrapping mode (S, T, R) for
cube map textures, but GL_TEXTURE_WRAP_R was not defined on ES2 so it
seemed rather suspicious.
Google seems to be lost on this and most of the online tutorials seem to
be setting GL_TEXTURE_WRAP_R even for cube map textures, so I need to
investigate myself. As seen in the rather old ARB_texture_cube_map
extension, there is no new GL_TEXTURE_WRAP_R token added; it is defined
only for 3D textures and there is no apparent dependency between the
two. The wrap mode for cube maps is defined as follows:
* The sampler determines one of the six faces and then employs
conventional 2D texture mapping on the given face.
Thus wrapping mode for CubeMapTexture is now changed to be only
two-dimensional (instead of 3D).
For texture arrays the mode is also only one- or two-dimensional (not
two- or three-dimensional), because, as said in the (also rather old)
EXT_texture_array extension, the texture layer is _always_
(independently of any sampling state) selected as follows:
l = clamp(round(t), 0, num_layers - 1)
Thus wrapping mode for Texture1DArray is now changed to be only
one-dimensional (instead of 2D) and for Texture2DArray only
two-dimensional (instead of 3D).
Wrapping for CubeMapTextureArray is now also two-dimensional instead of
3D (with the original way of thinking it would have needed to be 4D!).
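In raw GL terms, this is all the wrapping state a cube map needs (an
illustrative sketch; texture arrays analogously get only S, or S and T):

/* Only the S and T coordinates wrap -- the face (or array layer) is
   selected by the sampler before any wrapping applies */
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);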
Because ARB_DSA doesn't have any way to extract an image of a single
cube map coordinate, we have to use ARB_get_texture_sub_image instead,
thus for cube maps the whole thing is different. That was implemented,
but wasn't mentioned in the docs and wasn't properly accounted for in
the implementation switcher (I was under the assumption that having
ARB_DSA is equivalent to having ARB_get_texture_sub_image, which it is
not).
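The ARB_get_texture_sub_image path for a single face then looks roughly
like this (illustrative names, not the exact implementation):

/* The face index goes into the Z offset, with depth 1 */
glGetTextureSubImage(id, level, 0, 0, coordinate, width, height, 1,
    format, type, dataSize, data);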
Similarly to Framebuffer::read(), it's now possible to get a texture
image in a single statement as well:
Image2D image = texture.image(0, {ColorFormat::RGBA, ColorType::UnsignedByte});
in comparison to the previous way:
Image2D image{ColorFormat::RGBA, ColorType::UnsignedByte};
texture.image(0, image);
The previous way is still kept in the API and not deprecated, as it
might be more usable in some cases.
ARB_DSA is now preferred in single-bind cases, as it is easier to use
than passing pointers to ARB_multi_bind. ARB_multi_bind was preferred
for single-bind previously simply because EXT_DSA was not in core.
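For a single texture the difference is basically this (illustrative):

glBindTextureUnit(unit, id);    /* ARB_direct_state_access */
glBindTextures(unit, 1, &id);   /* ARB_multi_bind wants an array / pointer */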
Because there is a lot to say about feature selection for each function,
I took this as an opportunity to remove redundant documentation blocks,
just refer to Texture documentation from everywhere and add extension
requirements and deprecation where needed, so it's clear for each class
what needs what.
ARB_DSA also took the opportunity to finally remove all target enum
values from function calls and because of that CubeMapTexture has to
handle a bunch of special cases. In order:
- CubeMapTexture::imageSize() now doesn't take a face parameter and
returns one value for all faces. I'm thus now also assuming that the
user is sane and called either setStorage() or setImage() with the
same size for all faces. In the non-ARB_DSA path I'm thus querying
only the size of the +X face and returning it as the size for all
faces. The old imageSize(Coordinate, Int) overload is still present,
but ignores the first parameter and calls imageSize(Int). It is
marked as deprecated and will be removed in some future release.
- CubeMapTexture::image() now needs to call glGetTextureSubImage() in
the ARB_DSA path to make it possible to extract a single face. Other
code paths (EXT_DSA, Robustness and "default") remain the same.
- CubeMapTexture::setSubImage() calls glTextureSubImage3D() in the
ARB_DSA path (as sketched below), because it is not possible to
specify a face index in glTextureSubImage2D(). Other code paths
(EXT_DSA and "default") remain the same.
Implementation of these special cases is extracted into the
CubeMapTexture class to avoid polluting AbstractTexture with
incompatible nonsense.
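The face-in-Z-offset trick for the ARB_DSA setSubImage() path looks
roughly like this (illustrative names, not the exact implementation):

/* ARB_DSA upload of a single face -- the face index is the Z offset and
   the depth is 1 */
glTextureSubImage3D(id, level, offsetX, offsetY, coordinate,
    width, height, 1, format, type, data);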
ARB_direct_state_access doesn't have an equivalent for glTexImage*D(),
which indicates that these calls should not be used anymore. I also
removed the EXT_direct_state_access code path and kept just the plain
glBindTexture() + glTexImage*D(), as I assume that all implementations
that have EXT_DSA also have ARB_texture_storage, thus this alternative
would have no use.
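So the remaining non-storage path is just the classic (illustrative):

glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(GL_TEXTURE_2D, level, internalFormat, width, height, 0,
    format, type, data);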
In most cases the label is set directly from code, e.g.:
texture.setLabel("diffuse-duck");
Taking a char(&)[size] directly instead of converting to std::string
avoids one allocation and deallocation. A better solution would be to
use std::string_view everywhere, but we're not in C++17 yet.
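A minimal sketch of the idea (the signature and the internal helper are
illustrative, not the exact Magnum API):

template<std::size_t size> AbstractObject& setLabel(const char(&label)[size]) {
    /* size includes the null terminator, so pass size - 1 characters on */
    return setLabelInternal(label, size - 1);
}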