Array texture and cubemap (array) subimage queries get
ImageFlag*D::Array, cubemap whole-image queries get ImageFlag3D::CubeMap,
and cubemap array whole-image queries get both ImageFlag3D::CubeMap and
ImageFlag3D::Array. No flags are checked when uploading images or when
downloading images into views -- using an array texture as a cube map and
vice versa is a valid use case, and others probably are as well, so
there's no point.
The flag restrictions will come into play once ImageFlag*D::YUp / YDown
etc. are a thing.
All the tests were updated to explicitly check that non-null-terminated
strings get handled properly (the GL label APIs have an explicit size,
so it *should*, but just in case). Also, because various subclasses
override the setter to return the correct type for method chaining and
the override has to be deinlined to avoid relying on a StringView
include, the tests are now done explicitly for each leaf class instead
of just the base class.
The <string> being removed from the base class for all GL objects may
affect downstream projects that relied on it being transitively
included. In the case of Magnum itself, the breakages were already fixed
in the previous commit.
Compile time improvement for the MagnumGL library alone is 0.2 seconds
or 4% (6.1 seconds before, 5.9 after). Not bad, given that there are
three more files to compile and strings are still heavily used in other
GL debug output APIs and all the shader code. For a build of just the GL
library and all tests, the time goes down from 28.9 seconds to 28.1.
Most tests also still rely quite heavily on std::stringstream for debug
output testing, so the numbers could still go further down.
Its only use was for specifying N-dimensional SamplerWrapping because,
compared to a Math::Vector, it had an implicit constructor from a single
value (whereas the Vector's is explicit). I solved that by simply adding
a few single-value overloads where it mattered. There, done. No need for
this weird thing and confusion with Containers::Array anymore.
All places that used it now use Math::VectorN<SamplerWrapping>, but the
class is still included for backwards compatibility purposes, together
with providing implicit conversion from and to a Vector.
Should make new things more discoverable, avoid confusion when a
documented API isn't there and reduce the need for maintaining multiple
separate versions of the docs.
Deprecated for 2018.04, and it's been almost a year since. Whoever is
using Magnum regularly has updated already, and those who haven't can
always upgrade gradually (2018.02, 2018.04, 2018.10, 2019.01 etc.).
In particular, the AbstractTexture and AbstractQuery subclasses didn't
have that. I can't remember the rules about noexcept (and the users even
less), so let's make that explicit.
The Sampler class was split into GL::Sampler (which is now mostly just a
placeholder for implementing OpenGL sampler objects), pairs of generic /
GL-specific SamplerFormat / GL::SamplerFormat, SamplerMipmap /
GL::SamplerMipmap, SamplerWrapping / GL::SamplerWrapping enums and the
GL-specific GL::SamplerCompareMode, GL::SamplerCompareFunction,
GL::SamplerDepthStencilMode enums.
The old Sampler class is marked as deprecated and aliases its enums to
the generic ones (or to the GL-specific ones in case the generic
versions are not available).
The general part stays in the root namespace, while the GL-specific
stuff goes to the GL namespace. Also all GL-specific documentation was
moved to relevant APIs in the GL namespace. This finally allows me to
build PixelStorage.cpp as part of the root namespace.
The PixelStorage::pixelSize() function and
PixelStorage::dataProperties() taking a pair of GL::PixelFormat /
GL::PixelType enums are deprecated; use GL::pixelSize() and
dataProperties() taking the pixel size directly instead. A lot of code
is still using these, including images; the deprecated aliases are
inlined in the header to avoid a compile-time dependency on the GL
library.
At the moment it's just the GL library itself, without the tests and
without backwards compatibility aliases. The following types were left in the
root namespace, despite being in the GL/ directory, as they will get
moved back soon:
* Image, CompressedImage and their dimensional typedefs
* ImageView, CompressedImageView and their dimensional typedefs
* PixelStorage
Not PixelFormat etc., that one will stay in the GL namespace and a
completely new PixelFormat enum will be provided in the root namespace.
Minimal updates (just the include guards) so Git is hopefully able to
detect the rename and track the history properly.
Nothing except Magnum::GL compiles now.
The GL queries are needed only if the user-provided pixel storage
doesn't contain enough information, in particular the dimensions and
byte size of the compression block.
So when they are present, no query for the compressed image size is done
for full image queries (saving one API call) and no queries for block
dimensions and byte size are done for subimage queries (saving four API
calls and not requiring ARB_internalformat_query2).
The documentation was fixed and tests were updated to take the
ARB_internalformat_query2 requirement into account.
Followup to previous commit -- links to opengl.org are now redirected to
khronos.org and the extension links have the same format for both GL and
GLES. That allows me to remove some of the Doxygen aliases and use just
a single set of the functions for both GL and GLES.
Pre-DSA code paths need to specify for which face we are querying the
level parameters, which meant that all other calls had to specify the
(implicit) target too. I'm also preparing to put a cubemap-specific
workaround in the level parameter query and that really shouldn't be
present in the generic implementation for all texture types.
The other place where a specific target is needed is in the setImage()
implementations, but these are rather big chunks of code and I don't
feel like copying them verbatim into the cubemap implementation just to
isolate the workaround in one place.
The pre-DSA code path needs to pass a specific slice of a cube map to
all getters instead of just GL_TEXTURE_CUBE_MAP. I did that properly for
the image size query, which, weirdly enough, had its own implementation,
but forgot to do that in the compressed image getters and, because I
have DSA drivers, never tested it on pre-DSA contexts.
A single implementation of the image size query with an explicit target
parameter is used now.
Yeah, sorry, I know, the enums are renamed for the second or third time
in a row -- first they were Image::Format, then ImageFormat, then
ColorFormat and now PixelFormat. But this time it's final, they're
renamed for the last time, and now everything is finally consistent:
* ColorFormat::DepthComponent -- depth is not a color, thus
PixelFormat::DepthComponent makes a lot more sense.
* There will be PixelStorage classes, which will be stored in images
alongside PixelFormat/PixelType enums, making everything nicely
aligned.
* The GL documentation about glTexImage2D() etc. denotes the <format>
and <type> parameters as format and type of *pixel* data, so now we
are _finally_ consistent with the official naming.
I wonder why I did not choose PixelFormat originally. Anyway, the old
<Magnum/ColorFormat.h> header and the ColorFormat, ColorType and
CompressedColorFormat types are now aliases to the new ones, are marked
as deprecated and will be removed in some future release (as always, I'm
waiting at least six months before removing deprecated functionality).
With pixel pack/unpack support it will be possible to create views onto
sub-images, so the class was renamed to reflect that.
The old Magnum/ImageReference.h header and the ImageReference types are
now aliases to ImageView.h and the ImageView types, are marked as
deprecated and will be removed in a future release.
Similarly to what's now done with NoInit tags for Containers::Array and
all math types such as Vector, there's now NoCreate tag for creating
wrappers without actually creating the underlying OpenGL object. The
instance is then equivalent to moved-from state. Useful to avoid
needless creation/deletion of OpenGL object in case you would overwrite
the instance later anyway:
Mesh mesh{NoCreate};
std::unique_ptr<Buffer> indices, vertices;
std::tie(mesh, indices, vertices) = MeshTools::compile(...);
The original problem was that I was using 3D wrapping mode (S, T, R) for
cube map textures, but GL_TEXTURE_WRAP_R was not defined on ES2 so it
seemed rather suspicious.
Google seems to be lost on this and most of the online tutorials seem to
be setting GL_TEXTURE_WRAP_R even for cube map textures, so I need to
investigate myself. As seen in the rather old ARB_texture_cube_map
extension, there is no new GL_TEXTURE_WRAP_R token added, it is defined
only for 3D textures and there is no apparent dependency between these
two. The wrap mode for cube maps is defined as follows:
* The sampler determines one of the six faces and then employs
conventional 2D texture mapping on given face.
Thus wrapping mode for CubeMapTexture is now changed to be only
two-dimensional (instead of 3D).
For texture arrays the mode is also only one- or two-dimensional (not
two- or three-dimensional), because, as said in the (also rather old)
EXT_texture_array extension, the texture layer is _always_
(independently of any sampling state) selected as follows:
l = clamp(round(t), 0, num_layers - 1)
Thus wrapping mode for Texture1DArray is now changed to be only
one-dimensional (instead of 2D) and for Texture2DArray only
two-dimensional (instead of 3D).
Wrapping for CubeMapTextureArray is now also two-dimensional instead of
3D (with the original way of thinking it would have needed to be 4D!).