Since the main point of the low-level API is to make use of the
EXT_multi_draw_arrays / ANGLE_multi_draw / WEBGL_multi_draw extensions
for reduced driver overhead *and* the builtin gl_DrawID variable, it
just doesn't make sense to provide a fallback to draw() in a loop. When
the extensions are not available, the user has to do extra work to
supply a proper UBO draw offset for each draw call, so the fallback
path has basically no practical use.
On the other hand, draw() taking MeshView instances is kept as-is --
this one is rather old and so breaking compatibility would be an
unnecessary pain point. Plus, since it has to allocate the contiguous
arrays internally, it's not desirable for any fast-path rendering
anyway. Also, a MeshView might not be the best thing to have when you
are submitting a batch of a thousand draws. The new API takes strided
array views to allow for more flexibility, but can also detect when the
input is already contiguous and use it as-is.
UNFORTUNATELY the GL 1.0 legacy still continues to stink and so there
has to be a 64-bit-specific overload which is the *actual* variant that
doesn't allocate because glMultiDrawElements takes a `void**` for INDEX
OFFSETS and it's IN BYTES! Which foolish soul designed such a thing back
in the 1860s, I wonder. There's no reason not to have the index offset
in elements, because all indices have to have the same type ANYWAY. And
yes, I wasted about three hours debugging driver crashes because I
THOUGHT this parameter took offsets in elements, not bytes.
Also note: on 32-bit platforms this depends on the latest Corrade with
the CORRADE_TARGET_32BIT definition. I spent an embarrassing amount of
time wondering why all local builds worked but the Emscripten one
didn't.
As usual, the old APIs are still present, but marked as deprecated.
Existing code is not updated yet to ensure I didn't break anything with
this.
This way it's much more intuitive and makes the code shorter and nicer
in many cases. Shaders are now also able to hide irrelevant
draw/dispatch APIs to avoid accidents.
Good thing I checked this -- based on WebGL I was under the assumption
that all GPUs have it at just 256, but that was really just a WebGL
limitation. Also, ugh, why wasn't this query there since the 90s?
Deprecated in 2018.04, and it's been almost a year since then. Whoever
is using Magnum regularly has updated already, and whoever hasn't can
always upgrade gradually (2018.02, 2018.04, 2018.10, 2019.01 etc.).
What's left is *a lot* of places taking monstrous
std::vector<std::reference_wrapper> which can't be changed to
std::vector<Containers::Reference> in a source-compatible way. Even
that would only be a temporary change, since the goal is to fully avoid
a dependency on the STL in those cases.
The final version of these APIs should take
Containers::ArrayView<Containers::Reference> and be implicitly
convertible from e.g. std::vector<Containers::Reference>. That's
definitely possible, but not in time for 2019.01, so instead of forcing
users to temporarily pass `{vec.begin(), vec.size()}` everywhere
instead of just `vec`, I'm rather keeping these APIs intact.
The change done in 680144f1c5 did not properly handle these cases:
* Mesh(NoCreateT) and wrap() were not constructing the internal vector,
which blew up when move-assigning another instance.
* ~Mesh() was not destructing the internal vector if the VAO ID was
zero or non-owning wrap() was used.
Strangely enough, none of these were causing *any* problems for me on
Linux -- even ASan was totally quiet, and due to the unfortunate
combination of bugs it stayed quiet even when I assigned totally random
data to the storage vector. This however blew up on MSVC, presumably
because the implementation there does more checking.
Because it's possible to construct a Mesh with no GL context available,
the move construction and destruction need to avoid accessing Context
unless really necessary (it would also be unclear which type of vector
should be constructed if we have no context).
Extended the tests to handle hopefully all the cases.
This allows moving Buffer instances without fear of breaking stuff. I
thought this had been in for a long time, but apparently not. This
reintroduces a header dependency, but until I have Containers::Storage
or something that *can* store a forward-declared type safely, it's
inevitable.
This is actually a preparation to make buffer-owning meshes a
possibility (where I would otherwise need a union of vectors),
nevertheless it removes the dependency on a vector.
Similarly to pixel formats, there is now generic Magnum::MeshPrimitive
and Magnum::MeshIndexType, which is convertible to GL::MeshPrimitive and
GL::MeshIndexType using GL::meshPrimitive() and GL::meshIndexType(). In
addition, the following is done:
* The original GL::Mesh::IndexType is now GL::MeshIndexType, original
name is now just a typedef.
* GL::Mesh::indexSize() is deprecated in favor of
Magnum::meshIndexTypeSize() and GL::Mesh::indexTypeSize().
* New GL::Mesh::indexType() and GL::MeshView::mesh() getters (not sure
why they were omitted).
* GL::Mesh::indexType(), GL::Mesh::indexTypeSize(),
GL::MeshView::setIndexRange() now expect that the mesh is indexed
(useful property in my opinion, also avoids getting random results).
* The extra MeshPrimitive::LinesAdjacency etc. are still present for
backwards compatibility, but marked as deprecated. Use
GL::MeshPrimitive values instead.
At the moment just the GL library itself w/o the tests, and without
backwards compatibility aliases. The following types were left in the
root namespace, despite being in the GL/ directory, as they will get
moved back soon:
* Image, CompressedImage and their dimensional typedefs
* ImageView, CompressedImageView and their dimensional typedefs
* PixelStorage
Not PixelFormat etc. -- that one will stay in the GL namespace and a
completely new PixelFormat enum will be provided in the root namespace.
Minimal updates (just the include guards) so Git is hopefully able to
detect the rename and track the history properly.
Everything except Magnum::GL doesn't compile now.