Allows me to parse array properties in glTF scene node extras and ensure
they're preserved as arrays on export as well, even if they all contain
just a single item.
It was done only inside the internal addImporterContents() implementation
this function delegates to, which was too late, as the importer was
already queried before that.
It's useful there as well -- for example the StanfordSceneConverter
implements just single-mesh conversion but could still have the
attributes named in a custom way (although that's not implemented at the
moment). Or a mesh optimization plugin can have specialized behavior for
custom attributes, but only if it knows what they are.
Not sure what I was thinking here -- if a field size wouldn't fit into a
32bit number, it won't fit into memory of a 32bit system anyway, so
there's no real use for the size to be returned as 64bit always.
Internally it *is* stored as a 64bit number, yes, to have compatible
binary layout on 32bit and 64bit systems, but that doesn't mean the
public API should return that too. And SceneData::fieldSize() is
std::size_t, so this feels really like an accidental brainfart.
The changed return type also means a lot of existing code doesn't need
to do any explicit casting to std::size_t anymore. Yay.
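A minimal sketch of the difference, assuming the getter in question is
Trade::SceneFieldData::size() -- with it being a std::size_t, it can feed
size arguments directly:

    /* Hypothetical helper; previously the 64bit return type would need an
       explicit cast to std::size_t to avoid a narrowing error in the braces */
    Containers::Array<UnsignedInt> allocateMappingStorage(const Trade::SceneFieldData& field) {
        return Containers::Array<UnsignedInt>{field.size()};
    }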
Building a mesh from scratch makes the test much easier to grasp and
allows me to test more corner cases together instead of having to add
special cases for offset-only, array and implementation-specific
attributes.
The test now also uncovers a potential optimization opportunity where a
copy of the attribute metadata could be avoided in some cases. Not
implementing it right now, just adding a TODO and an XFAIL.
A bit sad it took me three years to invent the right name for this
utility, heh. Also moving it together with others to a new
MeshTools/Copy.h header, because *this* is the API that's mainly useful,
not reference() / mutableReference().
MaterialTools and SceneTools will get similar copy() APIs doing the same
thing.
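For reference, a minimal sketch of the typical use -- getting a
self-contained, owned and thus mutable copy of an imported mesh (the
importer instance and a successful import are assumed here):

    Trade::MeshData mesh = MeshTools::copy(*importer->mesh(0));
    /* The copy owns its data, so it can be edited in place */
    MeshTools::transformPointsInPlace(Matrix4::scaling(Vector3{0.01f}),
        mesh.mutableAttribute<Vector3>(Trade::MeshAttribute::Position));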
A somewhat inverse / complementary utility for parentsBreadthFirst() --
while the former is useful mainly for convenient parent referencing,
this is for children and nested children. Currently the main use case is
extracting scene subtrees, which is also what the example snippet shows.
Getting a list of direct children is also possible, although for that one
can use parentsBreadthFirst() as well as the Parent field directly,
simply by scanning for all field entries with a given value, as sketched
below.
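A minimal sketch of the direct-children case, assuming
parentsBreadthFirst() returns (object, parent) pairs with -1 denoting
root objects, and with `scene` and `object` being given:

    Containers::Array<Containers::Pair<UnsignedInt, Int>> parents =
        SceneTools::parentsBreadthFirst(scene);

    /* Objects that have `object` as their direct parent */
    Containers::Array<UnsignedInt> children;
    for(const Containers::Pair<UnsignedInt, Int>& i: parents)
        if(i.second() == Int(object))
            Containers::arrayAppend(children, i.first());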
This allows filtering individual field entries in the scene, for example
removing certain mesh assignments that were collapsed together. A
higher-level API that allows filtering all data belonging to a certain
set of objects will then be implemented on top of this one.
Same reasoning as before -- the verb suggests it's transforming the
SceneData in some way, which isn't true, it just retrieves the data in a
certain way. And if an API that actually operates on the SceneData got
added, it would be easily confused with this one.
Plus, there isn't just one possible "order" -- this orders objects so
they're grouped with a common parent, but what if I instead wanted to
order them depth-first? Thus the name explicitly says it's a
breadth-first order.
The API got moved to the Hierarchy.h header, removing the need for a
dedicated file and test.
That's the second deprecation of this API in a short while, sorry. This
variant is hopefully the final one -- with the previous one I still had
the problem that it contained a verb, which implied that it'd
*transform* the SceneData in some way, which (unlike combineFields(),
filterFields() etc.) it doesn't -- it just extracts some data in a
certain way. This would all cause problems once there are APIs that
actually do perform hierarchy flattening.
It's also moved to a new, more general Hierarchy.h header which will
contain other hierarchy-related APIs. It doesn't make sense to have a
tiny header with just a single function, especially given it doesn't
depend on any heavy headers on its own.
Besides that, this also makes the UnsignedInt overloads the main ones and
the Trade::SceneField ones secondary, as is already done everywhere else
(the opposite order was, it seems, just badly inherited from
flattenMeshHierarchy()).
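A minimal sketch of the resulting usage, assuming the current name of the
replacement is absoluteFieldTransformations3D() and that the field-ID
overload is the primary one; `scene` is an assumed 3D Trade::SceneData:

    /* Absolute transformation for every entry of the Mesh field */
    Containers::Optional<UnsignedInt> meshField =
        scene.findFieldId(Trade::SceneField::Mesh);
    Containers::Array<Matrix4> transformations =
        SceneTools::absoluteFieldTransformations3D(scene, *meshField);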
Without the asserts, it'd blow up only subsequently in the SceneData
constructor, printing addresses & strides wildly different from what the
input had, causing great confusion.
There also needs to be dedicated handling for placeholder mapping views
in TRS or mesh/material fields, as simply allocating a new mapping view
for each would again trigger an assert in SceneData.
Need to do the same checking in various SceneTools APIs. Took a few
iterations to get right without the same code being repeated in every
place this needs to be used. Still not ideal, but at the very least
adding a new enforced shared mapping (such as for mesh views) won't need
that much code and testing.
It asserted in the SceneFieldData constructor due to

    Trade::SceneFieldData: distance between string data and field data
    expected to fit into 48 bits but got 0x0 and 0xffffdad64ab9

Heh.
I'm seeing an assert for null string data not being correctly thrown in
a SceneTools API on ARM64, and an overflow of this 48-bit offset seems to
be the culprit. So better have that covered in the constructor already.
That comment made no sense as it *doesn't* call into the header, just a
few lines above. Spent half a century looking for the constructor there
due to this.
I'm (again) postponing implementation of a robust & optimized
Utility::copy() for bit array views. Attempted something, but it got out
of hand very quickly. Since the particular use case here isn't going to
be a perf bottleneck, I think it's fine to do it with just a dumb loop
for now.
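The dumb loop, roughly -- the helper name here is hypothetical, a proper
Utility::copy() overload for (strided) bit array views would replace it
eventually:

    void copyBits(const Containers::StridedBitArrayView1D& src,
                  const Containers::MutableStridedBitArrayView1D& dst) {
        CORRADE_INTERNAL_ASSERT(src.size() == dst.size());
        /* Copy bit by bit, no batching or word-level tricks */
        for(std::size_t i = 0; i != src.size(); ++i)
            dst.set(i, src[i]);
    }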
It'll soon be needed by other tools and library users as well. The main
difference compared to the original internal API is that the detection of
shared mapping views now takes sizes and strides into account as well,
instead of relying on just the data pointer.
Some additional robustness checks are needed for fields that have mapping
sharing enforced (as otherwise it'd fail in the SceneData constructor,
which isn't nice); that'll be done in subsequent commits.
The new filterAttributes() API takes a BitArray, which makes the
internals a lot simpler -- no O(n^2) lookup, no growable arrays, and no
need to duplicate the same code in the ID- and name-based variants.
For consistency with MaterialTools and soon MeshTools, the APIs are
moved to a new Filter.h header, with the deprecated variants kept only
in the original FilterAttributes.h.
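A minimal sketch of the bit-mask pattern, using the MeshTools variant for
illustration (its BitArrayView-taking signature is assumed here), with an
already-imported `mesh`:

    /* Keep everything except texture coordinates */
    Containers::BitArray attributesToKeep{Corrade::DirectInit, mesh.attributeCount(), true};
    for(UnsignedInt i = 0; i != mesh.attributeCount(); ++i)
        if(mesh.attributeName(i) == Trade::MeshAttribute::TextureCoordinates)
            attributesToKeep.reset(i);
    Trade::MeshData filtered = MeshTools::filterAttributes(mesh, attributesToKeep);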
This is now the preferable way to set options for plugins that get
delegated to from other plugins, instead of the dedicated
[imageConverter] configuration group.
Until now there wasn't a general command line interface to pass options
to plugins that get delegated to from other plugins, such as image
converters used by scene converters. Due to that limitation, e.g.
GltfSceneConverter had to add an [imageConverter] configuration group
that it then copied over to the chosen image converter. But such
an approach obviously doesn't scale -- every converter would have to do the
same, would have to repeat the whole testing process, and basically the
same would need to be done for all importers delegating to image
plugins. Nightmare.
So there's now --set, which allows arbitrary options to be set for
arbitrary plugins, and that's the preferable way now. To avoid having
to maintain two ways to do the same thing, the [imageConverter] group
will get removed eventually.
The main use case is being able to specify what concrete plugin gets
used for a particular alias, e.g. to be able to use SpngImporter instead
of PngImporter for faster PNG image loading.
So far the option is implemented only here, as the imageconverter,
shaderconverter and other tools don't really deal with plugins
that delegate to other plugins. Yet.
When using the -P or -M options on a large scene, it can happen that
the conversion fails for a small number of outliers. For example there
can be line meshes which aren't supported by MeshOptimizer, or 16-bit
images that aren't supported by a particular image conversion plugin.
Failing the whole operation in that case may be too radical, especially
if the only alternative would be to write a Python script that deals
with the outliers in a custom way. The option simply makes the
processing step pass through the original data unchanged, which may be a
simple and good-enough solution in many of those cases.