Originally it was just assuming that any Vector3ub or Color3ub is a
normalized format. That was kinda enough for many cases, but it started
to get annoying with sRGB image comparisons, as those had to be
manually reinterpreted with an sRGB-less format in order to pass.
Now the pixel format detection looks at the expected image format as
well, and if the underlying type and component count match, it inherits
the sRGB and normalized properties from it. If not, it falls back to an
integer format for vectors and a normalized format for colors. For
vectors this is different from the previous behavior but shouldn't cause
any problems in practice -- the only result will be that the image
comparison fails with a different pixel format mismatch message than
before.
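To illustrate, a rough sketch of the case this fixes -- the concrete
pixel values and thresholds are made up, and the snippet is meant to be
called from within a TestSuite test case:

    #include <Corrade/Containers/ArrayView.h>
    #include <Corrade/Containers/StridedArrayView.h>
    #include <Corrade/TestSuite/Tester.h>
    #include <Magnum/ImageView.h>
    #include <Magnum/PixelFormat.h>
    #include <Magnum/DebugTools/CompareImage.h>
    #include <Magnum/Math/Color.h>

    using namespace Magnum;
    using namespace Magnum::Math::Literals;

    /* The actual pixels are a Color3ub view, the expected image is 8-bit
       sRGB. Previously the view was always treated as RGB8Unorm, so this
       failed with a pixel format mismatch unless the expected image was
       reinterpreted as RGB8Unorm too; now the sRGB property is inherited
       from the expected format. */
    void compareSrgbPixels() {
        const Color3ub pixels[]{
            0x33b27f_srgb, 0x66ff33_srgb, 0x3366ff_srgb, 0xff6633_srgb
        };
        ImageView2D expected{PixelFormat::RGB8Srgb, {4, 1},
            Containers::arrayCast<const char>(Containers::arrayView(pixels))};

        CORRADE_COMPARE_WITH(
            (Containers::StridedArrayView2D<const Color3ub>{pixels, {1, 4}}),
            expected, (DebugTools::CompareImage{0.0f, 0.0f}));
    }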
This now also properly and fully tests the pixelFormatFor() helper, and
adds a missing Color3 specialization of it.
Most of the testing scaffolding here is a preparation for the actually
complex formats like BC6/7 or ASTC. Also, it's great to be able to use
Magnum from Python to prepare data for testing the C++ Magnum APIs.
Importers have had multi-level mesh support since 2020, yet somehow this
plugin never exposed it. Another reason for proper test coverage. The
original triangle.ply was used by AnySceneConverter tests, so it was
moved there instead.
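For reference, a minimal sketch of the generic multi-level interface the
plugin now implements -- what the extra levels contain (per-face data,
for instance) is up to the particular plugin:

    #include <Corrade/Containers/Optional.h>
    #include <Magnum/Trade/AbstractImporter.h>
    #include <Magnum/Trade/MeshData.h>

    using namespace Magnum;

    void importAllLevels(Trade::AbstractImporter& importer) {
        /* Level 0 is the mesh itself, further levels are additional data
           associated with it */
        for(UnsignedInt level = 0, max = importer.meshLevelCount(0); level != max; ++level) {
            Containers::Optional<Trade::MeshData> mesh = importer.mesh(0, level);
            if(!mesh) break; /* import failed */
        }
    }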
Such hopeful test coverage -- almost there, and yet it doesn't test
everything. It now uses 1D, 2D and 3D KTX2 files with levels taken
directly from the KtxImporter tests; the original 1d.ktx2 and 3d.ktx2
are moved to the AnyImageConverter tests, where they're used to verify
metadata presence.
Allows me to parse array properties in glTF scene node extras and ensure
they're preserved as arrays on export as well, even if they all have
just a single item.
It was done only inside the internal addImporterContents()
implementation this function delegates to, which was too late, as the
importer was already queried before that.
It's useful there as well -- for example, the StanfordSceneConverter
implements just single-mesh conversion but is capable of having the
attributes named in a custom way (although that's not implemented at the
moment). Or a mesh optimization plugin can have specialized behavior for
custom attributes, but only if it knows what they are.
Not sure what I was thinking here -- if a field size wouldn't fit into a
32bit number, it won't fit into the memory of a 32bit system anyway, so
there's no real use for the size to be always returned as 64bit.
Internally it *is* stored as a 64bit number, yes, to have a compatible
binary layout on 32bit and 64bit systems, but that doesn't mean the
public API should return that too. And SceneData::fieldSize() is
std::size_t, so this really feels like an accidental brainfart.
The changed return type also means a lot of existing code doesn't need
to do any explicit casting to std::size_t anymore. Yay.
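A tiny sketch of the difference, assuming the getter in question is
Trade::SceneFieldData::size() -- the name is inferred from the
description above, not stated in it:

    #include <cstddef>
    #include <Corrade/Containers/ArrayView.h>
    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    std::size_t totalEntryCount(Containers::ArrayView<const Trade::SceneFieldData> fields) {
        std::size_t count = 0;
        for(const Trade::SceneFieldData& field: fields)
            count += field.size(); /* no explicit cast to std::size_t needed anymore */
        return count;
    }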
Building a mesh from scratch makes the test much easier to grasp and
allows me to test more corner cases together instead of having to add
special cases for offset-only, array and implementation-specific
attributes.
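Roughly the kind of hand-built mesh this enables, sketched here with
made-up names and sizes -- an offset-only attribute and a custom array
attribute sharing one interleaved vertex buffer:

    #include <cstddef>
    #include <Corrade/Containers/Array.h>
    #include <Magnum/Magnum.h>
    #include <Magnum/Mesh.h>
    #include <Magnum/VertexFormat.h>
    #include <Magnum/Math/Vector2.h>
    #include <Magnum/Math/Vector3.h>
    #include <Magnum/Trade/MeshData.h>

    using namespace Magnum;

    struct Vertex {
        Vector3 position;
        Vector2 uvLayers[2]; /* backing storage for the array attribute */
    };

    /* Hypothetical custom attribute name, purely for illustration */
    constexpr Trade::MeshAttribute UvLayers = Trade::meshAttributeCustom(42);

    Trade::MeshData makeTestMesh() {
        Containers::Array<char> vertexData{5*sizeof(Vertex)};

        return Trade::MeshData{MeshPrimitive::Points, std::move(vertexData), {
            /* Offset-only attribute, resolved against the owned vertex
               data only when queried */
            Trade::MeshAttributeData{Trade::MeshAttribute::Position,
                VertexFormat::Vector3, offsetof(Vertex, position), 5, sizeof(Vertex)},
            /* Offset-only *array* attribute with a custom name, two
               Vector2 items per vertex */
            Trade::MeshAttributeData{UvLayers, VertexFormat::Vector2,
                offsetof(Vertex, uvLayers), 5, sizeof(Vertex), 2}
        }};
    }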
The test now also uncovers a potential optimization opportunity where a
copy of the attribute metadata could be avoided in some cases. Not
implementing it right now, just adding a TODO and an XFAIL.
A bit sad it took me three years to invent the right name for this
utility, heh. Also moving it together with others to a new
MeshTools/Copy.h header because *this* is the mainly useful API, not
reference() / mutableReference().
MaterialTools and SceneTools will get similar copy() APIs doing the same
thing.
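For instance, getting a self-contained owned copy of a mesh that
references global constant data -- a minimal sketch of the main use
case:

    #include <Magnum/MeshTools/Copy.h>
    #include <Magnum/Primitives/Cube.h>
    #include <Magnum/Trade/MeshData.h>

    using namespace Magnum;

    int main() {
        /* cubeSolid() references global constant data; copy() turns it
           into a self-contained, owned and mutable instance */
        Trade::MeshData cube = MeshTools::copy(Primitives::cubeSolid());
    }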
A somewhat inverse / complementary utility for parentsBreadthFirst() --
while the former is useful mainly for convenient parent referencing,
this is for children and nested children. Currently the main use case is
extracting scene subtrees, which is also what the example snippet shows.
Getting a list of direct children is also possible, although for that
one can use parentsBreadthFirst() as well as the Parent field directly,
simply by scanning for all field entries with a given value.
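A sketch of the subtree use case, under the assumption that the utility
is called childrenDepthFirst() and that, mirroring parentsBreadthFirst(),
it returns (object, nested child count) pairs -- both the name and the
return type are assumptions, not stated above:

    #include <Corrade/Containers/ArrayView.h>
    #include <Corrade/Containers/Pair.h>
    #include <Magnum/Magnum.h>

    using namespace Magnum;

    /* Given the output of childrenDepthFirst(scene) (assumed name and
       return type), the slice for a subtree is the object's own entry
       plus as many entries as it has direct and nested children */
    Containers::ArrayView<const Containers::Pair<UnsignedInt, UnsignedInt>> subtree(
        Containers::ArrayView<const Containers::Pair<UnsignedInt, UnsignedInt>> children,
        UnsignedInt object)
    {
        for(std::size_t i = 0; i != children.size(); ++i)
            if(children[i].first() == object)
                return children.slice(i, i + 1 + children[i].second());
        return {};
    }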
This allows filtering individual field entries in the scene, for example
removing certain mesh assignments that were collapsed together. A
higher-level API that allows filtering all data belonging to a certain
set of objects will then be implemented on top of this one.
Same reasoning as before -- the verb suggests it's transforming the
SceneData in some way, which isn't true; it just retrieves the data in a
certain way. And if an API that actually operates on the SceneData got
added, it would be easily confused with this one.
Plus, the "order" isn't just one -- this orders objects so they're
grouped with a common parent, but what if I wanted to order depth-first
instead? Thus the name now explicitly says this is a breadth-first
order.
The API got moved to the Hierarchy.h header, removing the need for a
dedicated file and test.
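For context, roughly how the renamed API is meant to be consumed -- the
exact return type is an assumption here, the gist being an (object,
parent) pair per hierarchy entry, with parents always listed before
their children:

    #include <Corrade/Containers/Array.h>
    #include <Corrade/Containers/Pair.h>
    #include <Corrade/Utility/Debug.h>
    #include <Magnum/SceneTools/Hierarchy.h>
    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    void listParents(const Trade::SceneData& scene) {
        /* Every object that's part of the hierarchy paired with its
           parent (-1 for roots), ordered so a parent always comes before
           any of its children -- which is what makes single-pass
           processing of the hierarchy possible */
        Containers::Array<Containers::Pair<UnsignedInt, Int>> parents =
            SceneTools::parentsBreadthFirst(scene);
        for(const Containers::Pair<UnsignedInt, Int>& p: parents)
            Debug{} << "object" << p.first() << "is a child of" << p.second();
    }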
That's a second deprecation of this API in a short while, sorry. This
variant is hopefully the final one. With the previous one I still had
the problem that it contained a verb, which implied that it'd
*transform* the SceneData in some way, which (unlike combineFields(),
filterFields() etc.) it doesn't -- it just extracts some data in a
certain way. That would cause problems once there are APIs that actually
do perform hierarchy flattening.
It's also moved to a new, more general Hierarchy.h header which will
contain other hierarchy-related APIs. It doesn't make sense to have a
tiny header with just a single function, especially given it doesn't
depend on any heavy headers on its own.
Besides that, it also makes the UnsignedInt overloads the main ones and
the Trade::SceneField ones secondary, as is already done everywhere else
(the opposite way was seemingly just bad inheritance from
flattenMeshHierarchy()).
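A sketch of the intended usage -- the concrete name
absoluteFieldTransformations3D() and its signature are assumptions here,
the point being that one absolute transformation comes back per entry of
the chosen field while the SceneData itself stays untouched:

    #include <Corrade/Containers/Array.h>
    #include <Magnum/Math/Matrix4.h>
    #include <Magnum/SceneTools/Hierarchy.h>
    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    Containers::Array<Matrix4> meshTransformations(const Trade::SceneData& scene) {
        /* One matrix per mesh assignment, with the whole parent chain
           applied; the scene data is only read, not transformed */
        return SceneTools::absoluteFieldTransformations3D(scene,
            Trade::SceneField::Mesh);
    }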
Without the asserts, it'd blow up only subsequently in the SceneData
constructor, printing addresses & strides wildly different from what the
input had, causing great confusion.
There also needs to be dedicated handling for placeholder mapping views
in TRS or mesh/material fields, as simply allocating a new mapping view
for each would again trigger an assert in SceneData.
Need to do the same checking in various SceneTools. Took a few
iterations to get right without having the same code repeated in every
place this needs to be used. Still not ideal, but at the very least
adding a new enforced shared mapping (such as for mesh views) won't need
that much code and testing.
It asserted in the SceneFieldData constructor due to

    Trade::SceneFieldData: distance between string data and field data expected to fit into 48 bits but got 0x0 and 0xffffdad64ab9

Heh.