Vertex buffer offsets are like this already (and I already had a use
case with a mesh larger than 4 GB); with index buffers I so far thought
it wasn't needed, but it makes sense to do that as well -- there can be
a giant index buffer shared by many meshes, and even though the total
drawn element count won't reach 1 billion (or even 1 million), the byte
offset into the buffer can still go over 32 bits. Since the offset was
internally already stored as a pointer-sized value and some (but not
all) code was treating it as pointer-sized, this change just makes
sense.
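
Roughly what that scenario looks like in numbers (all made up):

    #include <cstddef>

    /* millions of small meshes packed into one shared index buffer; each
       draw stays well below a million elements, yet the byte offset of a
       late mesh no longer fits into 32 bits */
    constexpr std::size_t meshCount = 3000000;
    constexpr std::size_t indicesPerMesh = 500;
    constexpr std::size_t indexTypeSize = 4; /* 32-bit indices */

    /* ~6 GB, well beyond what a 32-bit offset can address */
    constexpr std::size_t lastMeshOffset =
        (meshCount - 1)*indicesPerMesh*indexTypeSize;
    static_assert(lastMeshOffset > 0xffffffffull, "needs more than 32 bits");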
This also fixes "warning C4244: 'return': conversion from 'const
GLintptr' to 'Magnum::Int', possible loss of data" on MSVC, although in
a very different way.
Originally (2012? 2013?) I expected that there would eventually be an
OpenGL ES 4.0, so it made sense to differentiate between ES2, ES3 and
some yet-unknown future ES version. But as ES4 became increasingly
unlikely to happen, the internal code treated MAGNUM_TARGET_GLES3 as a
simple inverse of MAGNUM_TARGET_GLES2, and used it in only a very few
places, which only added confusion.
Thus it's now deprecated and defined as a simple inverse of
MAGNUM_TARGET_GLES2 on MAGNUM_TARGET_GLES builds, and none of the
internal code uses it anymore.
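
Which boils down to roughly the following (a sketch of the preprocessor
logic, not the literal header):

    /* deprecated, kept for backwards compatibility only -- new code
       should check defined(MAGNUM_TARGET_GLES) together with
       !defined(MAGNUM_TARGET_GLES2) instead */
    #if defined(MAGNUM_TARGET_GLES) && !defined(MAGNUM_TARGET_GLES2)
    #define MAGNUM_TARGET_GLES3
    #endif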
Whoops, this got silently omitted during the massive refactor in
f7a6d79aa0 (Nov 23). Buffers *do* get
destroyed, VAOs do not. If you got sudden GPU memory usage issues after
a recent Magnum update, this was why.
This "type erased std::vector member" was done in the times before
growable arrays were a thing, and kind of made sense to go the extra way
to avoid a <vector> include in the header. Except that it made rather
unportable assumptions about std::vector size, which weren't correct for
example with _GLIBXX_ASSERTIONS set.
But what was *completely* unacceptable was that the vector was of one or
another type depending on the GL feature set present in the current
context. Apart from adding a lot of extra *nasty* logic to construction,
moves and destruction, this approach led to the mesh instance asking the
current context on destruction in order to know whether a destructor
should be called on std::vector<Buffer> or std::vector<AttributeLayout>.
Ugh.
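
A reduced, self-contained sketch of that pattern -- the types and the
context query are hypothetical stand-ins, not the real internals:

    #include <new>
    #include <vector>

    struct Buffer { unsigned id; }; /* owning GL buffer stand-in */
    struct AttributeLayout { Buffer buffer; int location, stride; };

    /* stand-in for asking the current GL context about a feature */
    bool hasVertexAttribBinding();

    struct Mesh {
        Mesh() {
            if(hasVertexAttribBinding())
                new(_storage) std::vector<AttributeLayout>;
            else
                new(_storage) std::vector<Buffer>;
        }

        ~Mesh() {
            /* has to query the GL context again just to know which
               destructor to call, hoping the answer didn't change */
            if(hasVertexAttribBinding())
                reinterpret_cast<std::vector<AttributeLayout>*>(_storage)->~vector();
            else
                reinterpret_cast<std::vector<Buffer>*>(_storage)->~vector();
        }

        /* unportable assumption that a std::vector is three pointers
           large, which breaks e.g. with _GLIBCXX_ASSERTIONS */
        alignas(void*) char _storage[3*sizeof(void*)];
    };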
Now it's a regular Array member (which isn't *that* heavy to need such
type-erased treatment, although it eventually could be), and thanks to
the AttributeLayout packing improvements in previous commits it's no
longer prohibitively wasteful to just abuse AttributeLayout instances
for storing owning Buffer instances alone -- doing so now wastes only 16
bytes per buffer, compared to 36 before. Given there's usually just one
or two vertex buffers per mesh (compared to attributes, which are
usually 4 or more), it should be fine.
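
Where those numbers come from, assuming the packed AttributeLayout ended
up at 24 bytes (the pre-packing sizes appear in the commits below):

    #include <cstddef>

    /* hypothetical sizes consistent with the numbers above */
    constexpr std::size_t layoutNow = 24, bufferNow = 8;
    constexpr std::size_t layoutBefore = 48, bufferBefore = 12;

    /* an AttributeLayout entry storing just an owning Buffer wastes the
       bytes that attribute metadata would otherwise occupy */
    static_assert(layoutNow - bufferNow == 16, "");
    static_assert(layoutBefore - bufferBefore == 36, "");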
The MeshGLTest::destructMovedOutInstance() test added a few commits back
also no longer asserts when no GL context is present.
Saves 8 bytes (104 -> 96 bytes), which were previously wasted on 4 bytes
of padding before the 8-byte _indexBufferOffset member and 4 bytes after
_indexBuffer, the latter being new there due to the Buffer now being 8
instead of 12 bytes.
On ES2 there are three fewer 32-bit members, which means this change
only moved the 4-byte padding from after _indexBuffer to after
_indexType, i.e. 88 bytes before and 88 bytes now as well.
Which, thanks to a 3-byte padding being now just 1 byte, makes the
Buffer class 8 bytes large instead of 12. In turn, the internal
Mesh::AttributeLayout struct is now 40 bytes instead of 48, as there's
no longer an extra 4 bytes of padding needed to satisfy the 8-byte
alignment of the offset member. It can still go lower than that.
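
The padding mechanics in a reduced, self-contained form, on a typical
64-bit ABI -- member names and sizes are made-up stand-ins, not the
actual internals:

    #include <cstdint>

    /* a 12-byte struct followed by an 8-byte-aligned member forces 4
       bytes of padding in between */
    struct Buffer12 { std::uint32_t id, targetHint, flags; };
    struct MeshOld {
        Buffer12 _indexBuffer;           /* 12 bytes */
                                         /* 4 bytes of padding */
        std::int64_t _indexBufferOffset; /* needs 8-byte alignment */
    };
    static_assert(sizeof(MeshOld) == 24, "4 bytes lost to padding");

    /* shrink the first struct to 8 bytes and the padding disappears */
    struct Buffer8 { std::uint32_t id; std::uint8_t targetHint, flags; };
    struct MeshNew {
        Buffer8 _indexBuffer;            /* 8 bytes incl. trailing padding */
        std::int64_t _indexBufferOffset; /* follows directly */
    };
    static_assert(sizeof(MeshNew) == 16, "");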
The test added in the previous commit now passes. Besides the behavior
being different for single- and multi-bind calls, an additional
"interesting" behavior is that glBindBufferBase() / glBindBufferRange()
apparently also create the GL object if it doesn't exist yet (in
contrast to the multi-bind APIs, which *require* the GL objects to be
created beforehand).
Gotta admit I wasn't aware of this side effect at all. Once I realized
it, the fix seemed simple at first, only for me to make a second
(re)discovery: glBindBuffersRange() / glBindBuffersBase() do *not* have
this side effect.
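
A rough sketch of the difference with raw GL calls, assuming a current
GL 4.4+ context where the multi-bind entry points exist:

    GLuint buffer;
    glGenBuffers(1, &buffer); /* reserves a name, object not created yet */

    /* creates the buffer object as a side effect, same as a plain
       glBindBuffer() would */
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer);

    /* the multi-bind variant instead fails with GL_INVALID_OPERATION if
       any of the names doesn't reference an already-created object */
    glBindBuffersBase(GL_UNIFORM_BUFFER, 0, 1, &buffer);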
I get that it's all a shaky pile building on top of a long legacy, but
still, it never ceases to amaze.
If Magnum and Corrade get installed into the same directory,
target_include_directories() or target_link_libraries() with Corrade
listed before Magnum will result in the (usually stale) installed Magnum
headers being picked over the local ones. Which is unwanted, so try to
always put the local Magnum include path first.
Tested manually by installing to an arbitrary location and editing
configure.h to contain an #error. That failed for the Text library, and
with these changes it no longer fails, but that's no guarantee that I
managed to fix all such cases.
Originally GL::hasTextureFormat() returned false on ES2 for
PixelFormat::R8Unorm, RG8Unorm, RGB8Unorm and RGBA8Unorm because
glTexStorage() didn't work with the matching Luminance, LuminanceAlpha,
RGB and RGBA formats. But since the only ES2 platform nowadays is
basically just WebGL 1, which has neither EXT_texture_rg nor
EXT_texture_storage, this implicit failure made no sense and just made
textureFormat() (and the new genericPixelFormat() API) useless there.
Now it maps to them, and it's up to the caller to make sure
glTexStorage() doesn't get called with those, only glTexImage().
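
A sketch of the caller-side implication, assuming a typical Texture2D
setup with an R8Unorm image:

    #include <Magnum/ImageView.h>
    #include <Magnum/PixelFormat.h>
    #include <Magnum/GL/Texture.h>
    #include <Magnum/GL/TextureFormat.h>

    using namespace Magnum;

    void upload(const ImageView2D& image) {
        /* on ES2 this now maps to the unsized TextureFormat::Luminance
           instead of failing */
        const GL::TextureFormat format =
            GL::textureFormat(PixelFormat::R8Unorm);

        GL::Texture2D texture;
        /* fine -- setImage() goes through glTexImage2D(), which accepts
           unsized formats */
        texture.setImage(0, format, image);
        /* would fail on ES2 -- glTexStorage() from EXT_texture_storage
           doesn't accept the unsized formats, avoiding it is now the
           caller's job */
        // texture.setStorage(1, format, image.size());
    }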
Furthermore, if formats from EXT_texture_rg are used,
genericPixelFormat() now also provides an inverse mapping of them back
to the generic PixelFormat. Before, basically *no* ES2 TextureFormat
worked with either of these; now all that have a (vaguely) corresponding
PixelFormat do.
An ad-hoc solution was already done in DebugTools::screenshot(), and now
I need it in another place. While not as fast as the O(1) mapping from
the generic format to the API-specific ones, since the inverse is a
potentially linear lookup, it could definitely be useful in general.
Only noticed this now when adding the inverse mapping. Sigh. OTOH, with
the inverse mapping in place this can no longer happen, as it would
cause a compile error due to a duplicate switch case.
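
Roughly why that is -- reduced, with hypothetical enum values, and
assuming the inverse switch is produced from the same mapping list as
the forward one:

    enum class Generic { R8Unorm, RG8Unorm };
    enum class Specific { Luminance, RG8 };

    Generic genericFormat(const Specific format) {
        switch(format) {
            case Specific::Luminance: return Generic::R8Unorm;
            case Specific::RG8: return Generic::RG8Unorm;
            /* if two generic formats were accidentally mapped to the
               same specific one, the generated inverse would contain the
               same case label twice -- a hard compile error instead of a
               silently wrong mapping: */
            // case Specific::RG8: return Generic::R8Unorm;
        }
        return {}; /* unreachable for valid input */
    }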
This one was spectacular -- ALL uses of it also had an #include <tuple>
in order to std::tie() the result into separate major & minor variables.
So much compile time overhead for so little.
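
The pattern in question, with a hypothetical pair-returning version() as
a stand-in for the actual API:

    #include <tuple>   /* pulled in by every caller just for std::tie() */
    #include <utility>

    std::pair<int, int> version(); /* hypothetical stand-in */

    void check() {
        int major, minor;
        std::tie(major, minor) = version();
        /* ... */
    }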
Not that the C++ STL and exceptions are anything to take inspiration
from, but there's std::out_of_range. Python's IndexError is also
specified as "index out of range", not "bounds".
Partially needed to avoid build breakages because Corrade itself
switched as well, partially because a cleanup is always good. Done
everywhere except for (STL-heavy) deprecated code and SceneGraph-related
APIs that are still quite full of STL as well.
This reverts commit 6bb0179c65 from 2018,
which in turn reverted commit f6ba4111e1,
which in turn reverted commit 4ce2875262
from 2015. The related Emscripten PR was merged in 2018, so it's safe to
assume everything works as expected nowadays.
Which also means I can finally delete my Emscripten fork that contained
the original branch that attempted to add glDrawRangeElements() in May
2015, before WebGL 2 was even supported in Emscripten, or Firefox.
So far this was only possible by creating a temporary MeshView, while
everything else (index/vertex count, base vertex, base instance, ...)
was changeable directly on the Mesh.
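
A sketch of the before/after -- the setIndexOffset() name is my
assumption for the new setter, mirroring the MeshView API:

    #include <Magnum/GL/AbstractShaderProgram.h>
    #include <Magnum/GL/Mesh.h>
    #include <Magnum/GL/MeshView.h>

    using namespace Magnum;

    void draw(GL::Mesh& mesh, GL::AbstractShaderProgram& shader, Int offset) {
        /* before: a throwaway MeshView just for the index offset */
        GL::MeshView view{mesh};
        view.setCount(mesh.count())
            .setIndexOffset(offset);
        shader.draw(view);

        /* now, directly on the mesh like the other draw parameters */
        mesh.setIndexOffset(offset);
        shader.draw(mesh);
    }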
The web isn't broken enough yet, apparently. Support for both of those
extensions was added in early 2020 (and I think I even remember seeing
them listed as supported in some browsers), then one of them was renamed
a mere two months later, the other in January 2023.
And as I discovered just by accident, the browsers *of course* don't
even bother advertising both names to provide some transition period. Or
maybe that transition period did happen, for 3 weeks in January, and if
some developer didn't notice in that time, "it's their fault". Or maybe
it's my fault, for attempting to use an extension that was stuck in a
"draft status" for four years. THE WHOLE WEB IS EITHER IN A "DRAFT
STATUS" OR "DEPRECATED", THERE'S NOTHING IN BETWEEN, FFS!
Constant needless churn, UGH.