Partially needed to avoid build breakages because Corrade itself
switched as well, partially because a cleanup is always good. Done
except for deprecated (STL-heavy) code and SceneGraph-related APIs that
are still quite full of STL as well.
"Funny" how this is the only API where I can't use glCreateTextures().
Like, it would have been so easy to just stop teaching glGen*() and
all their quirks and "this ID exists but it's not an object until you
bind it somewhere actually" to people altogether, BUT NO! FFS.
And use static functions with an explicit "self" pointer instead. Those
have half the size (8 vs 16 bytes on 64-bit x86), which in turn reduces
the state tracker memory use by about 750 bytes. On desktop GL with an
Intel GPU & Mesa this reduces the state tracker allocation size by
almost 10%, from 8.3 kB to 7.6 kB. Not bad.
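A minimal sketch of the pattern (the class and function names here are
made up, not the actual state tracker members): instead of a
pointer-to-member, the implementation slot is a plain function pointer
taking the object as an explicit first argument.

    #include <cstdio>

    class Texture {
        public:
            // Before: a pointer-to-member-function, 16 bytes with
            // GCC/Clang on 64-bit x86
            void(Texture::*storageImplementationMember)(int);

            // After: a plain function pointer with an explicit "self"
            // parameter, only 8 bytes
            void(*storageImplementation)(Texture&, int);

            static void storageImplementationDefault(Texture&, int levels) {
                std::printf("default storage, %d levels\n", levels);
            }
            static void storageImplementationDSA(Texture&, int levels) {
                std::printf("DSA storage, %d levels\n", levels);
            }
    };

    int main() {
        Texture texture;
        // Picked once based on extension availability, then called with
        // the object passed explicitly
        texture.storageImplementation = Texture::storageImplementationDSA;
        texture.storageImplementation(texture, 5);

        std::printf("%zu vs %zu bytes\n",
            sizeof(texture.storageImplementation),
            sizeof(texture.storageImplementationMember));
    }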
Apart from small memory savings, this also removes the need to include
the full class definition from the State headers on MSVC (because on
that compiler the member function pointer size differs based on whether
the type definition is known or not, IMAGINE THAT BEING A FEATURE AND
NOT A BUG), leading to fewer header dependencies and better incremental
compile times there.
This was already done in some cases (and the Vk library used this from
the beginning), and as I'm about to add some more extension-dependent
functionality it felt like a good time to finish that change, finally.
In some cases the *Implementation() could even be dropped in favor of
pointing to the GL API directly (such as is already done for various
glUniform*() calls), that'd be another step -- this is good enough for
now.
Array texture and cubemap (array) subimage queries get
ImageFlag*D::Array, cubemap whole-image queries get
ImageFlag3D::CubeMap, and cubemap array whole-image queries get both
ImageFlag3D::CubeMap and ImageFlag3D::Array. No flags are checked when
uploading images or when downloading images into views -- using an
array texture as a cube map and vice versa is a valid use case, and
other combinations probably are as well, so there's no point.
The flag restrictions will come into play once ImageFlag*D::YUp / YDown
etc. are a thing.
For consistency with how VertexFormat and other enum helpers are named.
The compressedBlockSize() and compressedBlockDataSize() helpers are
also renamed to compressedPixelFormatBlockSize() and
compressedPixelFormatBlockDataSize().
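Usage after the rename, sketched from the names above (assuming the
current signatures: block size as a Vector3i, byte counts as
UnsignedInt; BC1 has 4x4 blocks of 8 bytes, RGBA8 has 4-byte pixels):

    #include <Magnum/Math/Vector3.h>
    #include <Magnum/PixelFormat.h>

    using namespace Magnum;

    const Vector3i blockSize =
        compressedPixelFormatBlockSize(CompressedPixelFormat::Bc1RGBAUnorm);
    const UnsignedInt blockDataSize =
        compressedPixelFormatBlockDataSize(CompressedPixelFormat::Bc1RGBAUnorm);
    const UnsignedInt rgba8PixelSize = pixelFormatSize(PixelFormat::RGBA8Unorm);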
While backwards compatibility aliases are in place, a breaking change
is that Image classes now look for pixelFormatSize() instead of
pixelSize(). This is used e.g. when passing GL::PixelFormat /
GL::PixelType to the image classes, instead of the generic PixelFormat.
While that's a useful extension point, it's unlikely that any project
was defining its own pixel format enum and pixelSize() for a D3D or
Metal renderer or whatnot, so the breakage should have no practical
impact.
All the tests were updated to explicitly check that non-null-terminated
strings get handled properly (the GL label APIs have an explicit size,
so it *should*, but just in case). Also, because various subclasses
override the setter to return the correct type for method chaining and
the override has to be deinlined to avoid relying on a StringView
include, the tests are now explicitly done for each leaf class instead
of just a single subclass.
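A sketch of what the tests now check (assuming a GL::Texture2D named
texture and the Corrade string literals; the actual test macros are
omitted):

    #include <Corrade/Containers/StringView.h>
    #include <Magnum/GL/Texture.h>

    using namespace Corrade::Containers::Literals;

    void labelNonNullTerminated(Magnum::GL::Texture2D& texture) {
        // A slice of a literal isn't null-terminated; the GL label APIs
        // take an explicit size, so the stored label should come out as
        // just "mesh"
        texture.setLabel("mesh-with-garbage"_s.slice(0, 4));
    }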
The <string> being removed from the base class for all GL objects may
affect downstream projects which relied on it being included. In the
case of Magnum itself, the breakages were already fixed in the previous
commit.
Compile time improvement for the MagnumGL library alone is 0.2 seconds
or 4% (6.1 seconds before, 5.9 after). Not bad, given that there are
three more files to compile and strings are still heavily used in other
GL debug output APIs and all the shader stuff. For a build of just the
GL library and all tests, it goes down from 28.9 seconds to 28.1. Most
tests also still rely quite heavily on std::stringstream for debug
output testing, so the numbers could still go down further.
This means that instead of 12 separate allocations we have just one,
allocating everything together in a contiguous piece of memory. That
should also be a bit more cache-friendly when accessing the state as
it's not scattered around the memory like crazy.
Because there are no Pointer indirections needed anymore, the State
members are just references now. That resulted in a lot of sweeping
changes around the whole GL library, but they're all trivial, mostly
just changing `->` to `.`.
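A minimal sketch of the idea (not the actual State layout, and
alignment handling is glossed over): one block is allocated up front
and the sub-state structs are placement-new'd into it, so the members
can be plain references.

    #include <memory>
    #include <new>

    struct BufferState { /* ... */ };
    struct TextureState { /* ... */ };

    struct State {
        explicit State(char* storage):
            buffer{*new(storage) BufferState{}},
            texture{*new(storage + sizeof(BufferState)) TextureState{}} {}

        // References instead of pointers -- call sites use `.` instead
        // of `->`
        BufferState& buffer;
        TextureState& texture;
    };

    int main() {
        // One allocation for everything instead of one per sub-state; a
        // real implementation would also run the destructors explicitly
        std::unique_ptr<char[]> storage{
            new char[sizeof(BufferState) + sizeof(TextureState)]};
        State state{storage.get()};
    }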
There are two more nested allocations in the TextureState struct; I'll
take care of them in a separate commit.
This code path is not run there due to the newly added
"intel-windows-broken-dsa-for-cubemaps" workaround, but in case someone
disables it for testing purposes the code should not randomly blow up on
an assertion in compressedPixelFormatPack().
The image() queries were using getCubeLevelParameterivImplementation(),
but the subImage() ones were inherited from AbstractTexture, using a
slightly different implementation. Since we'll need to apply workarounds to all
cubemap APIs, this needs to be consistent.
This is quite big, so:
* There are new Magnum::PixelFormat and Magnum::CompressedPixelFormat
enums, which contain generic API-independent formats. In particular,
PixelFormat replaces GL::PixelFormat and GL::PixelType with a single
value.
* There's GL::pixelFormat(), GL::pixelType(),
GL::compressedPixelFormat() to convert the generic enums to
GL-specific. The mapping is only in one direction, done with a lookup
table (generic enums are indices to that table).
* GL classes taking the formats directly (such as GL::BufferImage) have
overloads that take both the GL-specific and generic format.
* The generic Image, CompressedImage, ImageView, CompressedImageView,
and Trade::ImageData classes now accept the generic formats
first-class. However, it's also possible to store an
implementation-specific value to cover cases where a generic format
enum doesn't have support for a particular format. This is done by
wrapping the value using pixelFormatWrap() or
compressedPixelFormatWrap(). Particular GPU APIs then assume it's
their implementation-specific value and extract the value back using
pixelFormatUnwrap() or compressedPixelFormatUnwrap(). There are also
isPixelFormatImplementationSpecific() and
isCompressedPixelFormatImplementationSpecific() helpers that
distinguish these values. A short usage sketch is below, after this
list.
* Many operations need pixel size and in order to have it even for
implementation-specific formats, a corresponding pixelSize() overload
is found via ADL on construction and the calculated size is stored
along with the format. Previously the pixel size was only calculated
on demand, but that's no longer possible. In case such an overload is
not available, it's possible to pass the pixel size manually as well.
* In order to support the GL format+type pair, Image, ImageView and
Trade::ImageData now have an additional untyped formatExtra() field
that holds the second value.
* The CompressedPixelStorage class is now unconditionally available on
all targets, including OpenGL ES and WebGL. However, on OpenGL ES the
GL APIs expect that it's all at default values.
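A quick usage sketch of the wrapping described above (the MyD3D
namespace and its enum are hypothetical, and the exact template
signatures of the helpers may differ slightly from what's assumed
here):

    #include <Magnum/Magnum.h>
    #include <Magnum/PixelFormat.h>

    // Hypothetical user namespace with a renderer-specific format enum
    namespace MyD3D {
        enum class Format: Magnum::UnsignedInt { RGBA8Unorm = 28 };

        // Overload found via ADL so image classes can still compute data
        // sizes for this non-generic format
        Magnum::UnsignedInt pixelSize(Format) { return 4; }
    }

    void example() {
        // Wrap the renderer-specific value into the generic enum ...
        const Magnum::PixelFormat format =
            Magnum::pixelFormatWrap(MyD3D::Format::RGBA8Unorm);

        // ... and a MyD3D-aware consumer can detect and extract it back,
        // while everything else treats it as an opaque PixelFormat value
        if(Magnum::isPixelFormatImplementationSpecific(format)) {
            const auto original =
                Magnum::pixelFormatUnwrap<MyD3D::Format>(format);
            static_cast<void>(original); /* pass to the D3D API here */
        }
    }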
I attempted to preserve backwards compatibility as much as possible:
* The PixelFormat and CompressedPixelFormat enums now contain generic
API-independent values. The GL-specific formats are present there,
but marked as deprecated. Use either the generic values or
GL::PixelFormat (together with GL::PixelType) and
GL::CompressedPixelFormat instead. There's a lot of ugliness caused
by this, but it seems to work well.
* *Image::type() functions are deprecated as they were too
GL-specific. Use formatExtra() and cast it to GL::PixelType instead.
* Image constructors take templated format or format+extra arguments,
so passing GL-specific values to them should still work.
The general part stays in the root namespace, while the GL-specific
stuff goes to the GL namespace. Also all GL-specific documentation was
moved to relevant APIs in the GL namespace. This finally allows me to
build PixelStorage.cpp as part of the root namespace.
The PixelStorage::pixelSize() function and the
PixelStorage::dataProperties() overload taking a pair of
GL::PixelFormat / GL::PixelType enums are deprecated; use
GL::pixelSize() and the dataProperties() overload taking a pixel size
directly instead. A lot of code is still using these, including the
image classes; the deprecated aliases are inlined in the header to
avoid a compile-time dependency on the GL library.
At the moment just the GL library itself w/o the tests, and without
backwards compatibility aliases. The following types were left in the
root namespace, despite being in the GL/ directory, as they will get
moved back soon:
* Image, CompressedImage and their dimensional typedefs
* ImageView, CompressedImageView and their dimensional typedefs
* PixelStorage
Not PixelFormat etc. -- that one will stay in the GL namespace and a
completely new PixelFormat enum will be provided in the root namespace.
Minimal updates (just the include guards) so Git is hopefully able to
detect the rename and track the history properly.
Nothing except Magnum::GL compiles now.
SVGA3D has broken handling of glTex[ture][Sub]Image*D() for 1D arrays,
2D arrays, 3D textures and cube map textures where it uploads just the
first slice in the last dimension. This is only with copies from host
memory, not with buffer images. Seems to be fixed in Mesa 13, but I have
no such system to verify that on. Relevant commit in the Mesa sources:
2aa9ff0cda
This is one of the uglier workarounds -- I had to reintroduce multiple
code paths for glTexImage() that were removed when implementing
ARB_DSA, and the workaround basically consists of a bunch of functions
that slice the image and call the original implementations for each
slice.
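A rough sketch of what one of those slicing functions does (plain GL
calls, not the actual Magnum implementation; assumes the texture
storage was already specified and a GL header providing
glTexSubImage3D() is included):

    #include <cstddef>

    // Upload a 3D / array image one 2D slice at a time so the broken
    // SVGA3D path that only uploads the first slice in the last
    // dimension is never hit
    void uploadSliced(GLenum target, GLint level, GLsizei width,
        GLsizei height, GLsizei depth, GLenum format, GLenum type,
        const char* data, std::size_t sliceStride)
    {
        for(GLint z = 0; z != depth; ++z)
            glTexSubImage3D(target, level, 0, 0, z, width, height, 1,
                format, type, data + z*sliceStride);
    }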
The GL queries are needed only if the user-provided pixel storage
doesn't contain enough information, in particular the dimensions and
byte size of a compression block.
So if they are present, no query for the compressed image size is done
for full image queries (saving one API call) and no queries for block
dimensions and byte size are done for subimage queries (saving four API
calls and not requiring ARB_internalformat_query2).
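For example, filling the block properties in up front should make the
queries unnecessary (a sketch using the CompressedPixelStorage setter
names from the current API; the values are for BC1/DXT1):

    #include <Magnum/PixelStorage.h>

    Magnum::CompressedPixelStorage blockPropertiesForBc1() {
        // 4x4x1 blocks, 8 bytes each -- enough information for both the
        // full-image and subimage download paths, so no GL queries are
        // needed
        return Magnum::CompressedPixelStorage{}
            .setCompressedBlockSize({4, 4, 1})
            .setCompressedBlockDataSize(8);
    }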
The documentation was fixed and tests were updated to take the
ARB_internalformat_query2 requirement into account.
For some reason I thought it was the size of one face pre-ARB_DSA and
the size of all six faces with ARB_DSA. NVidia drivers (which return a
different size based on whether the texture is immutable, which is
totally crazy) didn't help with that at all. I only realized this while
fixing test failures on Mesa. IIRC I couldn't find a clear explanation
in the spec anyway, so I'll just bite the bullet and assume *this* is
the correct way now. Mesa tests now pass instead of saying that my
provided buffer size is too small.
Allows breaking the dependency on the <Magnum/CubeMapTexture.h> header
in Framebuffer, TextureState and elsewhere. The old
CubeMapTexture::Coordinate enum is now just an alias, marked as
deprecated, and will be removed in a future release.
103% of use cases use the returned value directly without checking, so
we might as well do the check ourselves. Added a new hasCurrent()
function and deprecated backwards-compatibility conversion and ->
operators.
Wow, that crept into a lot of places.
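The new pattern, sketched (header and namespace as they are at this
point in history; what's actually done with the reference is just an
example):

    #include <Magnum/Context.h>

    using namespace Magnum;

    void useContext() {
        // current() now returns a reference and does the check itself;
        // hasCurrent() is for the rare cases that actually want to check
        // first
        if(Context::hasCurrent()) {
            Context& context = Context::current();
            static_cast<void>(context); /* query extensions, version... */
        }
    }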
Last dinosaur from the pointer age.
I expect the drivers to return the size of *one* face when querying
that particular face using the pre-DSA or EXT_DSA API and the size of
*all* six faces when querying the whole texture using the DSA API.
One can dream, eh?
It appears that, at least on my NVidia, the returned value does not
depend on whether I'm querying all faces or a single one, but RATHER
is based on whether the texture is immutable or not. How's that
predictable at all?! Workaround in the next commit.
Pre-DSA code paths need to specify for which face we are querying the
level parameters, which meant that all other calls had to specify the
(implicit) target too. I'm also preparing to put a cubemap-specific
workaround in the level parameter query and that really shouldn't be
present in the generic implementation for all texture types.
The other place where a specific target is needed is in the setImage()
implementations, but those are rather big chunks of code and I don't
feel like copying them verbatim into the cubemap implementation just to
isolate the workaround in one place.
The pre-DSA code path needs to pass a specific slice of a cube map to
all getters instead of just GL_TEXTURE_CUBE_MAP. I did that properly
for the image size query, which, weirdly enough, had its own
implementation, but forgot to do that in the compressed image getters
and, because I have DSA drivers, never tested that on pre-DSA contexts.
A single implementation of the image size query with an explicit target
parameter is used now.
Pain and misery. The majority of functionality for 3D compressed images
now suddenly fails the tests -- this is either very vaguely specified,
or I am very bad at understanding things, or there are bugs in my
NVidia drivers.
This was an awful feature. Kill me now.
Makes it far easier to detect pixel storage misconfigurations and
improperly sized data arrays. Data owning classes (Image,
Trade::ImageData) accept Containers::Array<char> while wrappers
(ImageView, BufferImage) accept Containers::ArrayView<const void>.
ImageView reinterprets the passed array as const char to enable pointer
arithmetic on the data.
The old way (constructor/setData() call accepting void*) is now marked as
deprecated and will be removed in some future release. Because decay of
fixed-size arrays to void* is preferred over calling the
Containers::ArrayView constructor, there are two more overloads to
properly handle const T(&)[n] and std::nullptr_t arguments.
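A standalone sketch of that overload-resolution problem (a hypothetical
View stand-in is used instead of the real Containers::ArrayView, so
these aren't the actual Magnum signatures):

    #include <cstddef>
    #include <cstdio>

    struct View { const void* data; std::size_t size; };

    void setData(View view) { std::printf("sized: %zu bytes\n", view.size); }
    void setData(const void*) { std::printf("deprecated void* overload\n"); }

    // Without these two, `const char data[16]; setData(data);` would pick
    // the void* overload, because array-to-pointer decay is a better
    // match than a user-defined conversion to View
    template<class T, std::size_t n> void setData(const T(&data)[n]) {
        setData(View{data, n*sizeof(T)});
    }
    void setData(std::nullptr_t) { setData(View{nullptr, 0}); }

    int main() {
        const char pixels[16]{};
        setData(pixels);  /* goes through the sized path with 16 bytes */
        setData(nullptr); /* empty data, also through the sized path */
    }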
Currently the TgaImporter and TgaImageConverter fail on images with row
length not aligned to 4; will fix that in follow-up commits.
Removed *Image::dataSize() and added *Image::dataProperties(), which
returns all properties separately for better introspection. This
function is now just a convenience alias to
PixelStorage::dataProperties(), which takes into account proper
alignment and all other storage properties.