The type is now extended to 32 bits. In the GL and Vk libraries it means
one can now do things like
MeshIndexType type = meshIndexTypeWrap(GL_UNSIGNED_BYTE);
and passing that to GL::Mesh or Vk::Mesh will cause it to use the value
directly, instead of doing a mapping from a generic type. The *real* use
case for this is however to allow custom index buffer representations in
Trade::MeshData. Support for that will be hooked up in the following
commit.
It may soon start happening that those will have an
implementation-specific value that is neither a GL nor a Vulkan
identifier, so people should not just pass them around without checking.
Looking at the snippets, these seem to have been written back when there
were no builtin shaders yet, not to mention MeshTools::compile(),
Trade::MeshData or any of the other high-level APIs. It's rather
overwhelming to just throw huge code snippets at the user, explaining a
workflow with a custom-made mesh that's going to be drawn with a
custom-made shader, which is like level 999 of using the GL library.
It was used just for certain corner cases that weren't covered by
{EXT,OES}_draw_elements_base_vertex already, but because of a
misunderstood extension interaction the EXT/OES multidraw entrypoint
(which isn't implemented on ANGLE) was used from these as well.
And while trying to disable the two extensions for the MeshGLTest I
discovered the test half-expects the ANGLE variant to be implemented and
so I finished it for both the single-draw and multi-draw cases.
The extension interaction that makes base-vertex multidraw calls broken
on ANGLE will be fixed in the following commit.
Yes, this is only ever provided by an ANGLE extension, and as you might
have guessed, The Great Google Overlords didn't bother to support it
outside of their browser.
Since the main point of the low-level API is to make use of the
EXT_multi_draw_arrays / ANGLE_multi_draw / WEBGL_multi_draw extensions
for a reduced driver overhead *and* the builtin gl_DrawID variable, it
just doesn't make sense to provide a fallback to draw() in a loop. When
the extensions are not available, the user has to do extra work in order
to supply a proper UBO draw offset for each draw call, and thus the
fallback path has basically no practical use.
On the other hand, draw() taking the MeshView instances is kept as-is --
this one is rather old and so breaking compatibility would be an
unnecessary pain point. Plus, since it has to allocate the contiguous
arrays internally, it's not desirable for any fast-path rendering
anyway.
Because a MeshView might not be the best thing to have when you're
submitting a batch of a thousand draws. The new API takes strided array
views to allow for more flexibility, but can also detect if the input
is already contiguous and use it as-is.
UNFORTUNATELY the GL 1.0 legacy still continues to stink and so there
has to be a 64-bit-specific overload which is the *actual* variant that
doesn't allocate because glMultiDrawElements takes a `void**` for INDEX
OFFSETS and it's IN BYTES! Which foolish soul designed such a thing back
in the 1860s, I wonder. There's no reason to not have an index offset
in elements because all indices have to have the same type ANYWAY. And
yes, I wasted about three hours debugging driver crashes because I
THOUGHT this parameter takes offset in elements, not bytes.
Also note: on 32-bit platforms this depends on latest Corrade with the
CORRADE_TARGET_32BIT definition. Spent an embarrassing amount of time
wondering why all local builds but Emscripten worked.
As usual, the old APIs are still present, but marked as deprecated.
Existing code is not updated yet to ensure I didn't break anything with
this.
This way it's much more intuitive and makes the code shorter and nicer
in many cases. Shaders are now also able to hide irrelevant
draw/dispatch APIs to avoid accidents.
Good thing I checked this -- based on WebGL I was under the assumption
that all GPUs have it at just 256, but that was really just a WebGL
limitation. Also, ugh, why wasn't this query there since the 90s?
Deprecated for 2018.04, it's been almost a year since. Whoever is using
Magnum regularly has updated already, and whoever hasn't can always
upgrade gradually (2018.02, 2018.04, 2018.10, 2019.01 etc.).
What's left is *a lot* of places taking monstrous
std::vector<std::reference_wrapper> and that can't be changed to
std::vector<Containers::Reference> in a source-compatible way. Even that
would be only a temporary change, since the goal is to fully avoid
dependency on STL in those cases.
The final version of these APIs should take
Containers::ArrayView<Containers::Reference> and be implicitly
convertible from e.g. std::vector<Containers::Reference>. That's
definitely possible, but not in time for 2019.01, so instead of forcing
users to temporarily pass a `{vec.begin(), vec.size()}` everywhere
instead of just `vec`, I'm rather keeping these APIs intact.
The change done in 680144f1c5 was not
properly handling these cases:
* Mesh(NoCreateT) and wrap() were not constructing the internal vector,
which blew up when move-assigning another instance.
* ~Mesh() was not destructing the internal vector if the VAO ID was
zero or non-owning wrap() was used.
Strangely enough none of these were causing *any* problems for me on
Linux -- ASan was totally quiet, and due to the unfortunate combination
of bugs it stayed quiet even when I assigned totally random data to the
storage vector. This however blew up on MSVC, presumably because the
implementation there does more checking.
Because it's possible to construct a Mesh with no GL context available,
the move construction and destruction need to avoid accessing Context
unless really necessary (it would also be unclear which type of vector
should be constructed if we have no context).
Extended the tests to hopefully cover all the cases.
This allows moving Buffer instances without fear of breaking stuff. I
thought this had been in for a long time, but apparently not. This
reintroduces a header dependency, but until I have Containers::Storage
or something that *can* safely store a forward-declared type, it's
inevitable.
This is actually a preparation to make buffer-owning meshes a
possibility (where I would otherwise need a union of vectors);
nevertheless it removes the dependency on a vector.