Looking at the snippets, these seem to have been written back when there
were no builtin shaders yet, not to mention MeshTools::compile(),
Trade::MeshData or any of the other high-level APIs. It's rather
overwhelming to just throw huge code snippets at the user, explaining a
workflow with a custom-made mesh that's going to be drawn with a
custom-made shader, which is like level 999 of using the GL library.
It was rather discouraging to start "Basic usage" with a boring-ass long
snippet. On the other hand, showing just compile() first would lead
people to think it's all some opaque magic, so I'm trying to balance
that a bit.
Also, why the hell was the compile() snippet showing the horrendous GL
way of specifying attribute formats? This is not great either, but at
least it's not redundant.
For example, if we're looking for a 2D image named "cubemap", the
importer will tell us there are 0 2D images, making a potential mistake
(2D vs 3D) more obvious.
Using openMemory() instead of openData() allows the implementation to
assume the data will stay in scope for as long as needed, which can
prevent unnecessary copies in some plugin implementations.
It warranted a new flag, DataFlag::ExternallyOwned, to describe this
kind of memory. I couldn't reuse Owned, as that's used for allocations
owned by the instance, which says too little for certain future use
cases. For example, returning *Data instances referencing Owned memory
would mean the user has to assume the memory is gone when the importer
instance is gone, and that's generally not true for memory passed to
openMemory().
Originally I thought I would do this later, but then realized the
existing plugin implementations would all need to get updated again to
be aware of the new flag, with some inevitably forgotten, and it's just
easier to do the whole thing in a single step.
This makes it much less annoying to pass arbitrarily typed data, such as
std::uint8_t or char8_t and whatnot. It was already done like this for
the new shader converter plugins, where the input is often 32-bit ints
for SPIR-V.
OTOH the internal virtual API is kept at ArrayView<const char>, as
that's easier for the implementations to operate on.
This allows memory ownership to be better described and transferred,
instead of forcing the plugins to allocate their own local copy if the
import happens in-place on the imported data. Right now that's mainly
for the openFile() use case, which implicitly allocated an Array with
the file contents only to pass it to openData(), which then made a copy
because it could not make any assumptions about data scope.
In other words, certain plugins (TgaImporter, KtxImporter, DdsImporter,
CgltfImporter and possibly others) will now have their peak memory usage
*halved*.
Hah, so many overloads. Not providing mutable access to keys or layer
offsets, as that would break the invariant of the internal array always
being sorted.
Those have at least 3 pointers; my limit for passing by value is trivial
copyability and two pointers. I hope that reflects the actual HW at
least vaguely, heh.
And add a comment explaining why we don't check the pointer for empty
meshes -- otherwise empty interleaved meshes would fail with stuff like

    Trade::MeshData: attribute 0 [0xc:0xc] is not contained in passed
    vertexData array [0x0:0x0]

which ... helps nobody.
Because otherwise we wouldn't properly test all cases. Case in point --
attributeData(UnsignedInt) wasn't correctly propagating the array size,
causing the new tests to fail. Fixed in the next commit.
Long ago, during one of the initial MeshData iterations, there was no
VertexFormat but rather Trade::MeshAttributeType. I managed to get rid
of it mostly, except in this place.
Originally I wanted to show how to convert a JPEG to an EXR directly,
however after trying and miserably failing to implement that inside
OpenExrImageConverter I realized the plugin is definitely not the place
to perform such a conversion. So this will have to wait until there's
some proper API in TextureTools or somewhere.
In case of --layers and --levels this only works if the input images
have a single level, otherwise --level has to be set; the internal
implementation would be too complex otherwise. As a consequence,
combining a set of 2D mipmapped images into a 3D mipmapped image means
one first has to combine particular 2D image levels into 3D levels and
then combine all 3D levels into a 3D mipmapped image -- it can't be done
in a single step, and it also can't be done by first combining levels
and then layers.