The function got changed to return just two values instead of three in
7b5ef21bd9 (2018), yet the docs were never
updated.
Also clearly state the units in which row length / image height is
provided. This whole API is a mess tho, and a clear candidate for
deprecation.
The wording of the ARB_compressed_texture_pixel_storage requirement is
slightly different in Texture and such, likely due to being updated there
but not here.
And of course, besides the already-existing workaround for NV where it
reports size in bits instead of bytes, Mesa reports completely broken
pixel sizes in some cases. For DXT1 I get 2x2 (?!), for ASTC 10x10 I get
1x1, which is basically unfixable.
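For reference, a rough sketch of the kind of query involved here (assuming
ARB_internalformat_query2, the actual call site differs):
GLint blockSize[2];
glGetInternalformativ(GL_TEXTURE_2D, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
    GL_TEXTURE_COMPRESSED_BLOCK_WIDTH, 1, &blockSize[0]);
glGetInternalformativ(GL_TEXTURE_2D, GL_COMPRESSED_RGBA_S3TC_DXT1_EXT,
    GL_TEXTURE_COMPRESSED_BLOCK_HEIGHT, 1, &blockSize[1]);
// should be 4 and 4 for DXT1, Mesa gave back 2 and 2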
I wonder how these utilities were ever used internally with success for
so long. Or were they never used because in every case I ended up
supplying the properties manually?
IIRC this still failed when I did the last changes to these tests in
October 2024 in e9b07ef4ce, so it seems
that the driver got fixed since. There are still the broken cases in
Texture3D tho, updated in the previous commit.
Not sure what was happening here, but my suspicion is that I took what
NVidia did with BPTC when used in 3D textures as the correct thing, and
then tried to shape the test around that. Or something.
Basically, instead of the blocks being 4x4x1 as the format says, the test
overwrote the pixel storage parameters to be four times larger 4x4x4
blocks, and then went with that, trying to shape the other test variants
to match that as well. And failing really badly, because it failed on Mesa,
and the only case where it matched the (wrong) expectation was on
NVidia and only when the custom (wrong) pixel storage was used.
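To spell out the difference (just a sketch with the CompressedPixelStorage
setters, the actual test setup is more involved):
// what the BPTC format actually defines: 4x4x1 blocks of 16 bytes
CompressedPixelStorage correct;
correct.setCompressedBlockSize({4, 4, 1})
    .setCompressedBlockDataSize(16);
// what the test overrode it to: four times larger 4x4x4 blocks of 64 bytes
CompressedPixelStorage wrong;
wrong.setCompressedBlockSize({4, 4, 4})
    .setCompressedBlockDataSize(64);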
In all other cases this is called outside of #ifndef CORRADE_NO_ASSERT,
so this is likely some yet-undiscovered copypaste error. OTOH it's such a
corner case of a corner case that I don't think anyone ever hits it,
because it'd fail on just about anything else anyway:
GL::CubeMapTexture texture;
// nothing done here, not even a format / size specified
texture.compressedSubImage(...);
This extra call originates in ae70a62644
(2015, again) and was used in all places where DSA functions were called
directly without feature-aware dispatch. Since then, some of the cases
were turned into feature-aware dispatches, which means the call isn't
needed in those anymore, so I removed it. And to clarify why it's needed
in the remaining cases I added extra comments.
The uncompressed variant preserves the buffer contents if passed an empty
view, which was done in 5ee461bdbe (2015!)
but no corresponding change was ever done for the compressed variant. So
let's make both do the same, and actually test the code path, which
wasn't done until now either.
They got added in 6d41597d1d in 2018 to
selectively suppress a deprecation warning. But later on the deprecated
API got removed and these stayed, being useless. So just call the
ADL pixelFormatSize() APIs directly.
For compressed images the expected message was completely wrong, meaning
it'd fail in any case, and they were missing a SKIP on no-assert builds.
Then, for GL::[Compressed]BufferImage the assertions in setData() were
never tested at all.
Because there's no format that'd have more than 256-byte pixels anyway,
with the theoretically biggest one being RGBA64F or some such at 32
bytes. Nevertheless, an assert is now in place to verify the bounds as
well as ensuring the pixel size is not zero.
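Roughly the shape of the new check, with made-up names and message:
CORRADE_ASSERT(pixelSize > 0 && pixelSize <= 256,
    "expected a non-zero pixel size of at most 256 bytes but got" << pixelSize, );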
They're not parsed since 6b22a11170
(2020), so there's no point in keeping those workarounds. They're only
kept in utility application sources as they're parsed for pages, and in
tweakable implementations where it's easier to just copypaste the whole
ifdef expression from the header every time instead of modifying it to
not include DOXYGEN_GENERATING_OUTPUT.
And document that. Because the pixel size cannot be determined for it,
and one has to either pass it explicitly or use the templated overload
that figures it out implicitly via ADL. This asserted before, but only
deep inside in pixelFormatSize(), which may be confusing.
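In other words, the only thing catching the mistake before was roughly this
(a sketch, the exact wrapped value doesn't matter):
// a format wrapped with pixelFormatWrap() has no size known to the library,
// so pixelFormatSize() asserts on it, far away from the actual mistake
pixelFormatSize(pixelFormatWrap(0x8058)); /* e.g. GL_RGBA8 */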
I need to do a similar treatment for compressed images with block size
properties so let's first make it behave properly for uncompressed.
I mean, yeah, it's all bad, but at least it works now. The upcoming
internal representation will not be this silly CompressedPixelStorage
anymore and then it will become a bit less bad.
For a proper language-lawyer-safe implementation I'd explicitly call
destructors and then in-place-new the other instance and such, but
that's two more branches and thus twice as many chances to mess up.
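For illustration, per branch it would be something along these lines (T and
the names are placeholders):
// destroy the currently active instance, then construct the other one in
// its place; doing that for every branch is easy to mess up
instance.~T();
new(&instance) T{std::move(other)};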
All other image classes do that, and thus code generally assumes that
querying it is an immediate operation, not a monster switch over
hundreds of values. Plus this prepares for the future internal
representation that is just sizes + strides instead of the
overcomplicated PixelStorage madness.
SDL 2.24 started adding an SDL2::SDL2 alias, which would avoid some of the
extra branching I had to do in the Find module. Unfortunately, on CMake
before 3.18 it causes static SDL to be marked as not found, because there
the alias isn't an actual alias but rather an INTERFACE target that links
to SDL2::SDL2-static, which means it doesn't have the
INTERFACE_LINK_LIBRARIES property, and the Find module fails on that.
So I'm detecting that and undoing it in order to not fail.
Neither a driver bug nor something wrong with the code. It's just
that with a tight flush rectangle the target texture may have some random
garbage left around the edges, causing the comparison to fail, so it's
now explicitly cleared upfront.
Doesn't happen when running the offending test alone (because I suppose
the memory is coming fresh from the driver, being zeroed out for security
purposes), only when running after the others (where I suspect it's now
reusing previous partially-filled memory which it didn't before). Doesn't
happen on NVidia either.
This got deprecated in 069c81b9cb but
without any visible documentation bit, so it was almost impossible to
trace back to a particular version. Or know about it when using the API.