Once the block sizes are implicitly taken from known format properties
and supplied to GL instead of being queried from GL or required to be
set by the user in CompressedPixelStorage, this commit will be reverted.
Similarly to how the pixel size is stored in uncompressed images, the block
size makes it possible to perform size checks on passed data, slice the
images and so on. It only took over a decade to get that done.
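For illustration, roughly the kind of check this enables; the names below are
made up just for the sketch, not the actual API:

#include <cstddef>

/* Illustrative names only. The stored block size and block data size are
   enough to know how much data an image of a given size has to be backed
   by. */
struct BlockProperties {
    int blockWidth, blockHeight;   /* e.g. 4x4 for BC1 / DXT1 */
    std::size_t blockDataSize;     /* e.g. 8 bytes per BC1 block */
};

std::size_t requiredDataSize(const BlockProperties& p, int width, int height) {
    /* round the image size up to whole blocks */
    const std::size_t blocksX = (width + p.blockWidth - 1)/p.blockWidth;
    const std::size_t blocksY = (height + p.blockHeight - 1)/p.blockHeight;
    return blocksX*blocksY*p.blockDataSize;
}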
The block properties coming from CompressedPixelStorage are currently
expected to be either not set at all or to exactly match what's stored in
the image for the given format. PixelStorage will eventually get deprecated
in favor of a simpler and more flexible representation, but that's another
big chunk of work, so this is how it's done for now.
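Concretely, that means something like the following is accepted; a sketch,
with the helper and setter names being what I assume the current API
provides:

CompressedPixelStorage storage;
/* Leaving the block properties unset is fine. Setting them to exactly what
   the format defines is fine as well (helper names assumed for this
   sketch): */
storage.setCompressedBlockSize(compressedPixelFormatBlockSize(format))
    .setCompressedBlockDataSize(compressedPixelFormatBlockDataSize(format));
/* Any other combination hits an assertion when constructing the image. */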
The GL library tests currently blow up on various assertions, as the library
isn't yet updated to make use of the known format properties instead of
querying them from GL. That'll be done in the following commits.
The tests now pass, but this damn GL pixel storage API is so convoluted
with so many redundant and mutually conflicting degrees of freedom that
it's impossible to be sure. Looking forward to when I can finally drop
that thing and calculate it on the fly from just size + stride, like it
should have been from the start.
In the case of compressed images the tests were absolutely insufficient, as
they always verified just a single block, and thus passed even though the
actual calculation was wrong in several ways.
And then the occupiedCompressedImageDataSize() wasn't tested at all,
even though it should return something completely different from the
function it delegates to.
Fixes to both in the next commit.
It's used only there and only to supply a silly argument to a broken API.
Unfortunately, back when adding this utility in 2015 I didn't document what
it was for, which initially made me think it's there for some suspicious
reason. Well, the reason is suspicious indeed, just an entirely different one
than I thought.
Unpack skip / row height parameters aren't supported there but alignment is,
so be sure to test that. Which, in turn, led to discovering some extremely
weird behavior, likely some shitty ANGLE bug.
This likely originates in the pre-2018 state of the repo, where the GL
wrapper was part of the main library and CompressedPixelStorage wasn't a
thing on ES simply because there was no such extension.
All places were already consistently using "expected to <stuff>" when
describing expectations backed by assertions, except for the Primitives
library. Fix that, plus random other doc fixes.
The MAGNUM_GL_ABSTRACTSHADERPROGRAM_SUBCLASS_DRAW_IMPLEMENTATION() and
MAGNUM_GL_ABSTRACTSHADERPROGRAM_SUBCLASS_DISPATCH_IMPLEMENTATION() macros
*are* documented, so those stay non-underscored.
This was quite nasty, a multi-day effort to trim this down and then growing
disappointment as I discovered it was affecting basically any use of the API.
Similarly to what was done in Aug 2024 in Corrade. When these were a part of
the function signature, they ended up being encoded into the exported symbol.
There are still cases of StridedArrayView slice() having enable_if in the
signature, which amounts to about 18 kB of symbols across all
libMagnum*-d.so libraries, but apart from that this is the state before:
$ strings libMagnum*-d.so | grep enable_if | grep -v slice | wc -c
29591
And this is after. All of those come from the STL, thus from old or
deprecated APIs that still use std::vector, std::tuple and such, and from the
few std::sort() uses.
$ strings libMagnum*-d.so | grep enable_if | grep -v slice | wc -c
4103
In a non-deprecated build it's just this, which is a 10x reduction. Can't
really do much about these except maybe for implementing my own swap()
specializations (sigh), but I think it's fine.
$ strings libMagnum*-d.so | grep enable_if | grep -v slice | wc -c
2904
I also made it consistently use
typename std::enable_if<..., int>::type = 0
instead of
class = typename std::enable_if<...>::type
because the former also works correctly in the presence of overloads and
having it used consistently everywhere makes it easier to grep & change
later. All SFINAE is now also excluded from Doxygen output, because it
doesn't make much sense there. It's better to just explain the
restriction in words than with this nasty hack.
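To illustrate the overload difference on a standalone example (plain standard
C++, nothing Magnum-specific):

#include <type_traits>

/* These two overloads coexist fine, because the type of the non-type
   template parameter differs between them: */
template<class T, typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
void foo(T) {}
template<class T, typename std::enable_if<std::is_floating_point<T>::value, int>::type = 0>
void foo(T) {}

/* With the other form the same pair is a redefinition of one and the same
   template, since default template arguments aren't part of the signature,
   so it doesn't even compile:

template<class T, class = typename std::enable_if<std::is_integral<T>::value>::type>
void bar(T) {}
template<class T, class = typename std::enable_if<std::is_floating_point<T>::value>::type>
void bar(T) {}
*/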
Less error-prone because of full type safety and much nicer overall. The test
code originates from a time before slice-to-a-member-function was a thing,
and then it just got copied everywhere without much thought.
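Roughly the pattern this got replaced with, with the concrete vector type
here being just an illustration: instead of hand-computed byte offsets and
strides, the view is sliced directly to the member function, something like

#include <Corrade/Containers/StridedArrayView.h>
#include <Magnum/Magnum.h>
#include <Magnum/Math/Vector3.h>

using namespace Magnum;

/* Offset and stride deduced from the member function, with the types
   checked; no hand-written byte arithmetic anywhere */
Containers::StridedArrayView1D<Float> xCoordinates(
        Containers::StridedArrayView1D<Vector3> positions) {
    return positions.slice(&Vector3::x);
}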
As time goes on, I have less and less patience for trying to figure these
things out. It's likely my fault for using the not-well-tested DSA APIs or
something.
Because, searching for maxSamples() and finding it only on Renderbuffer, I
got the impression that there isn't an equivalent for textures. There is, so
link to that in the docs.
For integer attributes, that is. This never worked and the MeshGLTest
even had an explicit test case for this, but it took me until now to
realize that this would be best caught at compile time. So now it is.
But since this functionality dates back to 2010, just removing the
values would likely break a lot of code (well, broken code, but still),
so the disallowed values are now deprecated, giving a compile-time warning
when used, and are removed only in non-deprecated builds.
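Roughly the mechanism, using the existing CORRADE_DEPRECATED_ENUM and
MAGNUM_BUILD_DEPRECATED machinery; the enum and value names below are only
illustrative, not the actual ones:

enum class DataType: GLenum {
    UnsignedByte = GL_UNSIGNED_BYTE,
    Int = GL_INT,
    #ifdef MAGNUM_BUILD_DEPRECATED
    /* never worked for integer attributes; kept so existing (broken) code
       still compiles, now with a compile-time warning, and removed entirely
       on non-deprecated builds */
    Half CORRADE_DEPRECATED_ENUM("can't be used with integer attributes") = GL_HALF_FLOAT,
    #endif
};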
The wording of the ARB_compressed_texture_pixel_storage requirement is
slightly different in Texture and such, likely due to being updated there
but not here.
And of course, besides the already-existing workaround for NV where it
reports size in bits instead of bytes, Mesa reports completely broken
pixel sizes in some cases. For DXT1 I get 2x2 (?!), for ASTC 10x10 I get
1x1, which is basically unfixable.
I wonder how these utilities were ever used internally with success for so
long. Or were they never actually used, because in every case I ended up
supplying the properties manually?
IIRC this still failed when I did the last changes to these tests in
October 2024 in e9b07ef4ce, so it seems
that the driver got fixed since. There are still the broken cases in
Texture3D tho, updated in the previous commit.
Not sure what was happening here, but my suspicion is that I took what
NVidia did with BPTC when used in 3D textures as the correct thing, and
then tried to shape the test around that. Or something.
Basically, instead of the blocks being 4x4x1 as the format says, the test
overrode the pixel storage parameters to four-times-larger 4x4x4 blocks, and
then went with that, trying to shape the other test variants to match that as
well. And failing really badly, because it failed on Mesa, and the only case
where it matched the (wrong) expectation was on NVidia and only when the
custom (wrong) pixel storage was used.
In all other cases this is called outside of #ifndef CORRADE_NO_ASSERT,
so this is likely some yet-undiscovered copypaste error. OTOH it's such a
corner case of a corner case that I don't think anyone ever hits it, because
it'd fail on just about anything else anyway:
GL::CubeMapTexture texture;
// nothing done here, not even a format / size specified
texture.compressedSubImage(...);
This extra call originates in ae70a62644
(2015, again) and was used in all places where DSA functions were called
directly without feature-aware dispatch. Since then, some of the cases
were turned into feature-aware dispatches, which means the call isn't
needed in those anymore, so I removed it. And to clarify why it's still
needed in the remaining cases, I added extra comments.
The uncompressed variant preserves the buffer contents if passed an empty
view, which was done in 5ee461bdbe (2015!)
but no corresponding change was ever done for the compressed variant. So
let's make both do the same, and actually test the code path, which
wasn't done until now either.
For compressed images the expected message was completely wrong, meaning
it'd fail in any case, and they were missing a SKIP on no-assert builds.
Then, for GL::[Compressed]BufferImage the assertions in setData() were
never tested at all.