Following the spirit of extension-based functionality, the entrypoints
are always available but do something (i.e., call the actual WebGL API)
only if the extension is advertised. Which it is only on Emscripten
3.1.66+, because older versions don't have the corresponding
entrypoints, so there it's marked as disabled.
Additionally, EXT_polygon_offset_clamp is now also working on 3.1.66+,
but there's no wrapper for it yet.
Originally, years ago, it was supplying the same input value on all
targets, and just special-cased the output on ES2 to account for the
lower bit depth. But then SwiftShader arrived, which doesn't actually
care that there's an RGBA4 framebuffer, and performs all calculations at
full precision. Which, well, kind of makes sense from a perf PoV, so I
adapted the input as well to be quantized to 4 bits instead of 8.
BUT THEN, some Mesa update happened, or maybe it was like this always and
I just didn't realize because I was on NV cards for so long, and the
input which made SwiftShader produce the expected result was *further
quantized*, deviating even more from what was expected.
So I now ditched all that and I'm just comparing with a
sufficiently large delta. If some implementation returns exactly what was
expected for an 8-bit framebuffer even though the framebuffer is 4-bit,
I don't care.
Some of them were marked as constexpr even though they were calling into
a deinlined function internally. This makes sure that at least the
default construction is constexpr, and adds a test verifying that all
constexpr functions actually behave like that.
This name isn't known to it at the time it parses the header (because no
such header gets included for it), which in turn causes Doxygen 1.12 to
generate a dummy ::Platform namespace. That then takes priority over
Magnum::Platform when linked to, and because it's a dummy, it's reported
as an error because it's not allowed to link to undocumented stuff.
Doxygen 1.12 no longer has a completely insane matcher and discards
those as it should. With 1.8.17, classes had to be referenced with
Corrade:: but functions, typedefs and variables didn't need to be, and
it was complete utter chaos.
Expands the test added in 789c52fd8a, for
which a fix was done in 8f6f4053fc but
which forgot to handle the case where a buffer is unbound. The test now
fails with an invalid error.
This was already done for all application libraries and then also all
contexts in 1c6f77389d, was forgotten here
for some reason. A use case that may need it is a shared library shared
(heh) by multiple test executables.
Vertex buffer offsets are like this already (and I already had a use
case with a mesh larger than 4 GB), with index buffers I so far
thought it's not needed, but it makes sense to do that as well -- there
can be a giant index buffer for many meshes, and even though the total
drawn element count won't reach 1 billion (often not even 1 million),
the byte offset into the buffer can still go over the 32-bit range.
Since it was internally already stored as a pointer-sized value and
some (but not all) code was treating it as pointer-sized, this change
just makes sense.
This also fixes "warning C4244: 'return': conversion from 'const
GLintptr' to 'Magnum::Int', possible loss of data" on MSVC, although in
a very different way.
Originally (2012? 2013?) I expected that there would eventually be an
OpenGL ES 4.0, thus it made sense to differentiate between ES2, ES3 and
some yet-unknown future ES. But as ES4 became increasingly unlikely to
happen, the internal code treated MAGNUM_TARGET_GLES3 as a simple
inverse of MAGNUM_TARGET_GLES2, and used it only in a very few places,
adding nothing but confusion.
Thus it's now deprecated and defined as a simple inverse of
MAGNUM_TARGET_GLES2 on MAGNUM_TARGET_GLES builds, and none of the
internal code uses it anymore.
Whoops, this got silently omitted during the massive refactor in
f7a6d79aa0 (Nov 23). Buffers *do* get
destroyed, VAOs not. If you got sudden GPU memory usage issues after a
recent Magnum update, this was why.
This "type erased std::vector member" was done in the times before
growable arrays were a thing, and it kind of made sense to go the extra
mile to avoid a <vector> include in the header. Except that it made
rather unportable assumptions about std::vector size, which weren't
correct for example with _GLIBCXX_ASSERTIONS set.
But what was *completely* unacceptable was that the vector was of one or
another type depending on the GL feature set present in the current
context. Apart from adding a lot of extra *nasty* logic to construction,
moves and destruction, this approach led to the mesh instance asking the
current context on destruction in order to know whether a destructor
should be called on std::vector<Buffer> or std::vector<AttributeLayout>.
Ugh.
Now it's a regular Array member (which isn't *that* heavy to need such
type-erased treatment, although it eventually could be), and thanks to
the AttributeLayout packing improvements in previous commits it's no
longer prohibitively wasteful to just abuse AttributeLayout instances to
store just owning Buffer instances alone -- doing so now wastes only 16
bytes per buffer, compared to 36 before. Given there's usually just one
or two vertex buffers per mesh (compared to attributes, which are
usually 4 or more), it should be fine.
The MeshGLTest::destructMovedOutInstance() test added a few commits back
also no longer asserts on no GL context being present.
Saves 8 bytes (104 -> 96 bytes), which were previously wasted on 4
bytes of padding before the 8-byte _indexBufferOffset member and 4 bytes
after _indexBuffer, the latter being new there due to the Buffer now
being 8 instead of 12 bytes.
On ES2 there are three fewer 32-bit members, which means this change
only moved the 4-byte padding from after _indexBuffer to after
_indexType, i.e. 88 bytes before and 88 now as well.
Which, thanks to a 3-byte padding now being just 1 byte, makes the
Buffer class 8 bytes large instead of 12. And in turn, the internal
Mesh::AttributeLayout struct is now 40 bytes instead of 48, as there's
no longer an extra 4 bytes of padding to satisfy the 8-byte alignment of
the offset member. It can still go lower than that.
The test added in the previous commit now passes. Besides the behavior
being different for single- and multi-bind calls, an additional
"interesting" behavior is that glBindBufferBase() / glBindBufferRange()
apparently also create the GL object if it wasn't created already (in
contrast to the multi-bind APIs, which *require* the GL objects to be
created before).
Gotta admit I wasn't aware of this side effect, at all. Once I realized
that, a fix seemed simple at first, only for me to make a second
(re)discovery: glBindBuffersRange() / glBindBuffersBase() do *not* have
this side effect.
I get that it's all a shaky pile building on top of a long legacy, but
still, it never ceases to amaze.
If Magnum and Corrade get installed into the same directory,
target_include_directories() or target_link_libraries() with Corrade
before Magnum will result in the (usually stale) installed Magnum
headers being picked over the local ones. Which is unwanted, so try to
always put the local Magnum include path first.
Tested manually by installing to an arbitrary location and editing
configure.h to contain an #error. That failed for the Text library, and
with these changes it now doesn't fail anymore, but that's not a
guarantee that I managed to fix all such cases.
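The fix boils down to something like the following; the target name is illustrative, but the BEFORE keyword is what forces the local include path to be searched first:

```cmake
# Prepend the in-source include dir so it wins over a possibly stale
# installed copy pulled in via Corrade's include path
target_include_directories(MagnumText BEFORE PUBLIC
    $<BUILD_INTERFACE:${PROJECT_SOURCE_DIR}/src>)
```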
Originally GL::hasTextureFormat() returned false on ES2 for
PixelFormat::R8Unorm, RG8Unorm, RGB8Unorm and RGBA8Unorm because
glTexStorage() didn't work with the matching Luminance, LuminanceAlpha,
RGB and RGBA formats. But since the only ES2 platform is nowadays
basically just WebGL 1, which has neither EXT_texture_rg nor
EXT_texture_storage, this implicit failure made no sense and just made
the textureFormat() (and the new genericPixelFormat() API) useless
there.
Now it maps to them, and it's up to the caller to make sure
glTexStorage() doesn't get called with those, only glTexImage() does.
Furthermore, if formats from EXT_texture_rg are used,
genericPixelFormat() now also provides an inverse mapping of them back
to the generic PixelFormat. Before, basically *no* ES2 TextureFormat
worked with either of these; now all that have a (vaguely) corresponding
PixelFormat do.
An ad-hoc solution was already done in DebugTools::screenshot(), now I
need it in another place. While not as fast as the O(1) mapping from
the generic format to the API-specific ones due to the potentially
linear lookup, it definitely could be useful in general.
Only noticed this now when adding the inverse mapping. Sigh. OTOH, with
the inverse mapping in place this can no longer happen, as it would
cause a compile error due to a duplicate switch case.
This one was spectacular -- ALL uses of it also had #include <tuple> in
order to std::tie() the result into separate major & minor variables. So
much compile time overhead for so little.
Not that C++ STL and exceptions would be anything to take inspiration
from, but there's std::out_of_range. Python IndexError is also specified
as "index out of range", not "bounds".
Partially needed to avoid build breakages because Corrade itself
switched as well, partially because a cleanup is always good. Done
except for (STL-heavy) code that's deprecated or SceneGraph-related APIs
that are still quite full of STL as well.