All other image classes do that, and thus code generally assumes that
querying it is an immediate operation, not a monster switch over
hundreds of values. Plus this prepares for the future internal
representation that is just sizes + strides instead of the
overcomplicated PixelStorage madness.
Compared to Corrade, the improvement in compile time is about a minute
cumulative across all cores, or about 8 seconds on an 8-core system (~2
minutes before, ~1:52 after). Not bad at all. And this is with a build
that has deprecated features enabled; the non-deprecated build goes from
1:48 to 1:41.
The LDR/HDR detection from 8da46ef9dc
unfortunately made Emscripten apps crash on startup if --closure was
enabled in linker flags, unless the page was run with
?magnum-disable-extensions=GL_WEBGL_compressed_texture_astc
added to the URL. The fix basically forces me to make the code not rely
on undocumented Emscripten internals anymore, which is nice; however, I
now have to duplicate it because of compiler silliness, and the
comment:code ratio isn't getting any better either.
Back in 2020 when I wrote this I didn't really expect MeshData to be
directly used for much more than uploading meshes to the GPU, mostly
because that used to be the primary use case with the old MeshData2D /
MeshData3D. So the documentation focused mainly on populating a GPU
mesh, and any docs for CPU-side access were added rather hastily. Now
that the asset processing use case is much larger, the original docs no
longer made sense. Let's hope this is better.
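For illustration, CPU-side access through the documented accessors looks
roughly like the sketch below -- the importer setup is assumed and
doSomethingWith() is a placeholder, not actual API:

    /* Assuming `mesh` is a Trade::MeshData coming from an importer */
    Containers::Array<Vector3> positions = mesh.positions3DAsArray();
    Containers::Array<UnsignedInt> indices = mesh.indicesAsArray();
    for(UnsignedInt i: indices)
        doSomethingWith(positions[i]);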
New since 7ca7e5a62b, and no, I'm not going
to switch from enums to some static constexpr int. Unless this
changed in recent standards, it still means one can take an address of
it. Which shouldn't be possible for a constant as that could
unnecessarily pessimize its perf.
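To illustrate the concern, a standalone sketch comparing the two:

    enum: int { EnumValue = 42 };     /* &EnumValue doesn't compile, an
                                         enumerator has no storage */
    constexpr int ConstantValue = 42; /* &ConstantValue is valid, so
                                         odr-use can force the compiler
                                         to emit actual storage for it */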
Neither of those is really critically important. Failure not being a
failure is fine (just don't make the shader broken in the first place),
and multiple color output bindings should be in the shader source
anyway.
This was quite a mess, with the whole array name copypasted into each
and every access. I mean, yeah, originally I thought this would be *the*
usage pattern, but oh god this resulted in SO MANY copypaste errors.
Which ultimately means the two annoying NVidia failures in
TextureGLTest related to pixel storage in compressed 3D images are gone
as a result of the cleanup.
Following the spirit of extension-based functionality, the entrypoints
are always available but do something (i.e., call the actual WebGL API)
only if the extension is advertised. Which it is only on Emscripten
3.1.66+ because older versions don't have the corresponding entrypoints,
so there it's marked as disabled.
Additionally, EXT_polygon_offset_clamp is now also working on 3.1.66+,
but there's no wrapper for it yet.
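If a wrapper eventually appears, it would presumably follow the same
pattern -- a hypothetical sketch, not code that's in the repository:

    /* Hypothetical wrapper: always compiled in, but it calls the actual
       GL entrypoint only if the extension was advertised by the
       context */
    void polygonOffsetClamp(Float factor, Float units, Float clamp) {
        if(Context::current().isExtensionSupported<
            Extensions::EXT::polygon_offset_clamp>())
            glPolygonOffsetClampEXT(factor, units, clamp);
    }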
Originally, years ago, it supplied the same input value on all targets
and just special-cased the expected output on ES2 to account for the
lower bit depth. But then SwiftShader arrived, which doesn't actually
care that there's an RGBA4 framebuffer, and performs all calculations at
the full precision. Which, well, kind of makes sense from a perf PoV, so
I adapted the input as well to be quantized to 4 bits instead of 8.
BUT THEN, some Mesa update happened, or maybe it was like this always
and I just didn't realize because I was on NV cards for so long, and the
input that made SwiftShader produce the expected result was *further
quantized*, deviating further from what was expected.
So I now ditched all that and I'm just comparing with a sufficiently
large delta. If some implementation returns exactly what was expected
for an 8-bit framebuffer even though the framebuffer is 4-bit, I don't
care.
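For context, here's roughly what the 4-bit round trip does to an 8-bit
value (a standalone sketch, `value` being the 8-bit input):

    /* An 8-bit channel stored in an RGBA4 framebuffer and read back:
       the high nibble is kept and replicated into the low one, so the
       result differs from the input by at most 0x0f -- which is what
       the comparison delta now has to account for */
    std::uint8_t quantized = (value & 0xf0) | (value >> 4);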
Some of them were marked as constexpr even though they were calling into
a deinlined function internally. This makes sure that at least the
default construction is constexpr, and tests that all constexpr
functions actually behave like that.
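The test pattern is roughly the following, with a hypothetical Foo
standing in for the actual classes:

    /* If this compiles, the default constructor is genuinely constexpr,
       as a constexpr variable forces compile-time evaluation */
    constexpr Foo a;
    CORRADE_COMPARE(a.value(), 0);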
This name isn't known to it at the time it parses the header (because no
such header gets included for it), which in turn causes Doxygen 1.12 to
generate a dummy ::Platform namespace. Which then takes priority over
Magnum::Platform when linked to, and because it's a dummy, it's reported
as an error, as it's not allowed to link to undocumented stuff.
Doxygen 1.12 no longer has a completely insane matcher and discards
those as it should. With 1.8.17, classes had to be referenced with
Corrade:: but functions, typedefs and variables didn't need to be, and
it was complete utter chaos.
Expands the test added in 789c52fd8a, for
which a fix was done in 8f6f4053fc but
which forgot to handle the case where a buffer is unbound. The test now
fails with an invalid error.
This was already done for all application libraries and then also all
contexts in 1c6f77389d, but was forgotten here for some reason. A use
case that may need it is a shared library shared (heh) by multiple test
executables.
Vertex buffer offsets are like this already (and I already had a use
case with a mesh larger than 4 GB); with index buffers I so far thought
it's not needed, but it makes sense to do that as well -- there can be
one giant index buffer backing many meshes, and even though the total
drawn element count won't reach 1 billion (or even 1 million), the
offset into that buffer can still go over the 32-bit range. Since it was
internally already stored as a pointer-sized value and some (but not
all) code was treating it as pointer-sized, this change just makes
sense.
This also fixes "warning C4244: 'return': conversion from 'const
GLintptr' to 'Magnum::Int', possible loss of data" on MSVC, although in
a very different way.
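In practice, the pointer-sized offset means a setup like the sketch
below is now representable (assuming `mesh` and a giant `indices`
buffer exist):

    /* An index buffer offset past the 4 GB boundary, which wouldn't
       fit into a 32-bit value */
    mesh.setIndexBuffer(indices, 5ull*1024*1024*1024,
        MeshIndexType::UnsignedInt);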
Originally (2012? 2013?) I expected that there would eventually be an
OpenGL ES 4.0, so it made sense to differentiate between ES2, ES3 and
some yet-unknown future ES version. But as ES4 became increasingly
unlikely to happen, the internal code treated MAGNUM_TARGET_GLES3 as a
simple inverse of MAGNUM_TARGET_GLES2, and used it in only a very few
places, which only added confusion.
Thus it's now deprecated and defined as a simple inverse of
MAGNUM_TARGET_GLES2 on MAGNUM_TARGET_GLES builds, and none of the
internal code uses it anymore.
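The compatibility define thus boils down to roughly this (a sketch, not
the verbatim header):

    #if defined(MAGNUM_TARGET_GLES) && !defined(MAGNUM_TARGET_GLES2)
    /* Deprecated, test for !defined(MAGNUM_TARGET_GLES2) instead */
    #define MAGNUM_TARGET_GLES3
    #endif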