Deprecated in 2018.04, and it's been almost a year since. Whoever uses
Magnum regularly has updated already, and those who haven't can always
upgrade gradually (2018.02, 2018.04, 2018.10, 2019.01 etc.).
In particular, the AbstractTexture and AbstractQuery subclasses didn't
have that. I can't remember the rules about noexcept (and the users even
less so), so let's make it explicit.
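For illustration, this is roughly what making it explicit looks like (a
minimal sketch using Texture2D as an example subclass -- the
declarations are illustrative, not the literal header; the point is the
spelled-out noexcept on the move operations instead of relying on the
implicitly derived specification):

class Texture2D: public AbstractTexture {
    public:
        /* Explicitly noexcept instead of hoping the compiler derives it
           from the base class and members */
        Texture2D(Texture2D&& other) noexcept;
        Texture2D& operator=(Texture2D&& other) noexcept;
};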
The Sampler class was split into GL::Sampler (which is now mostly just a
placeholder for implementing OpenGL sampler objects), pairs of generic /
GL-specific SamplerFilter / GL::SamplerFilter, SamplerMipmap /
GL::SamplerMipmap, SamplerWrapping / GL::SamplerWrapping enums and the
GL-specific GL::SamplerCompareMode, GL::SamplerCompareFunction and
GL::SamplerDepthStencilMode enums.
The old Sampler class is marked as deprecated and aliases its enums to
the generic ones (or to the GL-specific ones in case a generic version
is not available).
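A minimal sketch of how the aliasing might look in the compatibility
header (CORRADE_DEPRECATED is an existing Corrade macro, the exact
member names are assumptions):

struct CORRADE_DEPRECATED("use Sampler* / GL::Sampler* enums instead") Sampler {
    typedef Magnum::SamplerFilter Filter;        /* generic version exists */
    typedef Magnum::SamplerMipmap Mipmap;        /* generic version exists */
    typedef Magnum::SamplerWrapping Wrapping;    /* generic version exists */
    typedef GL::SamplerCompareMode CompareMode;  /* GL-specific only */
    typedef GL::SamplerCompareFunction CompareFunction;
    typedef GL::SamplerDepthStencilMode DepthStencilMode;
};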
The general part stays in the root namespace, while the GL-specific
stuff goes to the GL namespace. All GL-specific documentation was also
moved to the relevant APIs in the GL namespace. This finally allows me
to build PixelStorage.cpp as part of the root namespace.
The PixelStorage::pixelSize() function and the
PixelStorage::dataProperties() overload taking a pair of GL::PixelFormat
/ GL::PixelType enums are deprecated; use GL::pixelSize() and the
dataProperties() overload taking pixel size directly instead. A lot of
code is still using these, including images; the deprecated aliases are
inlined in the header to avoid a compile-time dependency on the GL
library.
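A sketch of the migration at a call site (signatures hedged from memory;
the storage variable and image size are made up for illustration):

/* Deprecated: pixel size looked up through PixelStorage */
auto sizeOld = PixelStorage::pixelSize(GL::PixelFormat::RGBA,
    GL::PixelType::UnsignedByte);

/* Preferred: resolve the pixel size via the GL helper and pass it on */
auto size = GL::pixelSize(GL::PixelFormat::RGBA,
    GL::PixelType::UnsignedByte);
auto properties = storage.dataProperties(size, {256, 256, 1});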
At the moment this covers just the GL library itself, without the tests
and without backwards compatibility aliases. The following types were
left in the root namespace despite being in the GL/ directory, as they
will get moved back soon:
* Image, CompressedImage and their dimensional typedefs
* ImageView, CompressedImageView and their dimensional typedefs
* PixelStorage
Not PixelFormat etc. though -- those will stay in the GL namespace and a
completely new PixelFormat enum will be provided in the root namespace.
Minimal updates (just the include guards) so Git is hopefully able to
detect the rename and track the history properly. Nothing except
Magnum::GL compiles now.
The GL queries are needed only if the user-provided pixel storage
doesn't contain enough information, in particular the dimensions and
byte size of a compression block.
So in case they are present, no query for compressed image size is done
for full image queries (saving one API call) and no queries for block
dimensions and byte size are done for subimage queries (saving four API
calls and not requiring ARB_internalformat_query2).
The documentation was fixed and tests were updated to take the
ARB_internalformat_query2 requirement into account.
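To illustrate the fast path: if the block properties are supplied up
front, the implementation can skip the queries entirely (a sketch, with
S3TC block parameters used as an example):

/* Block dimensions and byte size are given, so neither the size nor the
   ARB_internalformat_query2 block queries are needed */
CompressedPixelStorage storage;
storage.setCompressedBlockSize({4, 4, 1})
    .setCompressedBlockDataSize(16); /* 16 bytes per 4x4x1 block */
CompressedImage2D image{storage};
texture.compressedImage(0, image);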
Follow-up to the previous commit -- links to opengl.org are now
redirected to khronos.org and the extension links have the same format
for both GL and GLES. That allows me to remove some of the Doxygen
aliases and use just a single set of functions for both GL and GLES.
Pre-DSA code paths need to specify for which face we are querying the
level parameters, which meant that all other calls had to specify the
(implicit) target too. I'm also preparing to put a cubemap-specific
workaround in the level parameter query and that really shouldn't be
present in the generic implementation for all texture types.
The other place where a specific target is needed is in the setImage()
implementations, but those are rather big chunks of code and I don't
feel like copying them verbatim to the cubemap implementation just to
isolate the workaround in one place.
The pre-DSA code path needs to pass a specific slice of a cube map to
all getters instead of just GL_TEXTURE_CUBE_MAP. I did that properly for
the image size query, which, weirdly enough, had its own implementation,
but forgot to do it in the compressed image getters and, because I have
DSA drivers, never tested that on pre-DSA contexts.
A single implementation of the image size query with an explicit target
parameter is used now.
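In raw GL terms the fix boils down to something like this (a sketch,
with hypothetical face and level variables):

/* Pre-DSA: query level parameters on a concrete face, not on
   GL_TEXTURE_CUBE_MAP itself */
GLint imageWidth;
glGetTexLevelParameteriv(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, level,
    GL_TEXTURE_WIDTH, &imageWidth);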
Pain and misery. The majority of functionality for 3D compressed images
now suddenly fails the test -- either this is very vaguely specified, or
I am very bad at understanding things, or there are bugs in my NVidia
drivers. This was an awful feature. Kill me now.
Yeah, sorry, I know, the enums are renamed for the second or third time
in a row -- first they were Image::Format, then ImageFormat, then
ColorFormat and now PixelFormat. But this time it's final, they won't be
renamed again, and now everything is finally consistent:
* ColorFormat::DepthComponent -- depth is not a color, thus
PixelFormat::DepthComponent makes a lot more sense.
* There will be PixelStorage classes, which will be stored in images
alongside PixelFormat/PixelType enums, making everything nicely
aligned.
* The GL documentation about glTexImage2D() etc. denotes the <format>
and <type> parameters as format and type of *pixel* data, so now we
are _finally_ consistent with the official naming.
I wonder why I didn't choose PixelFormat originally. Anyway, the old
<Magnum/ColorFormat.h> header and the ColorFormat, ColorType and
CompressedColorFormat types are now aliases to the new ones, are marked
as deprecated and will be removed in some future release (as always, I'm
waiting at least six months before removing deprecated functionality).
With pixel pack/unpack support it will be possible to create views onto
sub-images, so the class was renamed to reflect that.
The old Magnum/ImageReference.h header and the ImageReference types are
now aliases to ImageView.h and the ImageView types, are marked as
deprecated and will be removed in a future release.
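In standard C++ terms the alias amounts to roughly this (a sketch --
the actual header goes through Corrade's deprecation macros rather than
the raw attribute):

/* Magnum/ImageReference.h, kept for backwards compatibility */
using ImageReference2D [[deprecated("use ImageView2D instead")]] = ImageView2D;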
Similarly to what's now done with NoInit tags for Containers::Array and
all math types such as Vector, there's now a NoCreate tag for creating
wrappers without actually creating the underlying OpenGL object. The
instance is then equivalent to a moved-from state. Useful to avoid
needless creation/deletion of the OpenGL object in case you would
overwrite the instance later anyway:
Mesh mesh{NoCreate};
std::unique_ptr<Buffer> indices, vertices;
std::tie(mesh, indices, vertices) = MeshTools::compile(...);
The original problem was that I was using 3D wrapping mode (S, T, R) for
cube map textures, but GL_TEXTURE_WRAP_R was not defined on ES2 so it
seemed rather suspicious.
Google seems to be lost on this and most of the online tutorials seem
to be setting GL_TEXTURE_WRAP_R even for cube map textures, so I need to
investigate myself. As seen in the rather old ARB_texture_cube_map
extension, there is no new GL_TEXTURE_WRAP_R token added; it is defined
only for 3D textures and there is no apparent dependency between the
two. The wrap mode for cube maps is defined as follows:
* The sampler determines one of the six faces and then employs
conventional 2D texture mapping on the given face.
Thus wrapping mode for CubeMapTexture is now changed to be only
two-dimensional (instead of 3D).
For texture arrays the mode is also only one- or two-dimensional (not
two- or three-dimensional), because, as said in the (also rather old)
EXT_texture_array extension, the texture layer is _always_
(independently of any sampling state) selected as follows:
l = clamp(round(t), 0, num_layers - 1)
Thus wrapping mode for Texture1DArray is now changed to be only
one-dimensional (instead of 2D) and for Texture2DArray only
two-dimensional (instead of 3D).
Wrapping for CubeMapTextureArray is now also two-dimensional instead of
3D (with the original way of thinking it would have needed to be 4D!).
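In practice that means a cube map now takes a two-component wrapping
value, one per face-local coordinate (a sketch using this era's
Sampler::Wrapping naming):

CubeMapTexture texture;
/* Per-face 2D wrapping (S, T) -- there's no R coordinate anymore */
texture.setWrapping({Sampler::Wrapping::ClampToEdge,
                     Sampler::Wrapping::ClampToEdge});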
Similarly to Framebuffer::read(), it's now possible to get a texture
image in a single statement as well:
Image2D image = texture.image(0, {ColorFormat::RGBA, ColorType::UnsignedByte});
in comparison to the previous way:
Image2D image{ColorFormat::RGBA, ColorType::UnsignedByte};
texture.image(0, image);
The previous way is still kept in the API and not deprecated, as it
might be more usable in some cases.