When using the -P or -M options on a large scene, the conversion can fail
because of a small number of outliers. For example there can be some line
meshes that aren't supported by MeshOptimizer, or 16-bit images that
aren't supported by a certain image conversion plugin.
Failing the whole operation in that case may be too radical, especially
if the only alternative is writing a Python script that deals with the
outliers in a custom way. The option simply makes the processing step
pass the original data through unchanged, which may be a simple and
good-enough solution in many of those cases.
Custom material attribute names are enforced to start with a lowercase
letter, so I think it makes sense to be consistent and use the same
convention for custom scene fields and mesh attributes as well. It's not
enforced in any way, but the tests should reflect that choice so new code
written based on them inherits the practice.
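For illustration, a minimal sketch of the convention; the "pickingId"
attribute name is made up:

    #include <Magnum/Math/Color.h>
    #include <Magnum/Trade/MaterialData.h>

    using namespace Magnum;

    Trade::MaterialData material{Trade::MaterialType::PbrMetallicRoughness, {
        /* Builtin attributes use the enum */
        {Trade::MaterialAttribute::BaseColor, Color4{0.9f}},
        /* Custom attribute names start with a lowercase letter */
        {"pickingId", 4211u}
    }};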
Originally I thought I'd save plugins some time if I just gave them the
custom indices directly, without wrapping them in a SceneField /
MeshAttribute enum. But in practice that didn't really save anything, it
made the interfaces more error-prone due to there no longer being a
concrete type, and all code that delegates to nested importers or
converters had to re-wrap the IDs again, which is *again* error-prone.
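What the wrapping looks like in practice, as a sketch (the field ID 17 is
arbitrary):

    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    /* A custom field is a concrete SceneField value like any other */
    constexpr Trade::SceneField SceneFieldPickingId = Trade::sceneFieldCustom(17);

    /* Code that needs the raw ID can unwrap it again, no re-wrapping
       guesswork on the caller side */
    static_assert(Trade::isSceneFieldCustom(SceneFieldPickingId), "");
    constexpr UnsignedInt raw = Trade::sceneFieldCustom(SceneFieldPickingId); /* 17 */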
Bumping the interface strings because this is a breaking change for the
implementations. Not for users though, nothing changes for them.
Took me a while to realize that tying this to one hardcoded field isn't a
good idea. The new variant is useful for other cases as well, for example
getting absolute light positions. Besides taking a SceneField there's now
also an overload taking a field ID to avoid double lookups. The only
behavioral difference compared to the old API is that the field is now
required to exist, instead of the API being a silent no-op if it isn't
present.
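A sketch of how this could look, assuming the API in question is
SceneTools::absoluteFieldTransformations3D() (name and header inferred,
not stated above), given a Trade::SceneData scene:

    #include <Magnum/Math/Matrix4.h>
    #include <Magnum/SceneTools/Hierarchy.h>
    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    /* Absolute transformations for every entry of the Light field, in the
       same order. The field is required to exist. */
    Containers::Array<Matrix4> lights =
        SceneTools::absoluteFieldTransformations3D(scene, Trade::SceneField::Light);

    /* The field-ID overload avoids looking the field up twice */
    if(Containers::Optional<UnsignedInt> fieldId =
        scene.findFieldId(Trade::SceneField::Light))
    {
        Containers::Array<Matrix4> transformations =
            SceneTools::absoluteFieldTransformations3D(scene, *fieldId);
    }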
Eventually these APIs may get further extended to take a BitArrayView of
objects for which to calculate the transforms, for example to take only
meshes that are a part of the hierarchy, or meshes that satisfy some
other arbitrary condition. That will also resolve the remaining concerns
with the API. I'm still keeping it marked as experimental though, the
usefulness isn't set in stone yet.
The old APIs are marked as deprecated and implemented using the new
ones.
Not everything, and especially not several hundred megabytes of
animation track data. This also prepares it for recording names of
custom animation track targets that were added in the previous commits.
The `Type` suffix suggested it'd be some C++ type, definitely not values
like Scaling3D or Translation2D, resulting in a significant "brain
autocompletion error" every time I used that type.
Unfortunately, on AnimationData the trackTargetType() couldn't get
similarly renamed to trackTarget(), as there's already a trackTarget()
that contains the node ID the target points to, so it's trackTargetName()
instead. Renaming trackTarget() to trackTargetId() wasn't an option as
that would be inconsistent with everything else (TextureTools::image(),
MaterialAttribute::BaseColorTexture, SceneField::Mesh are all IDs but
don't have an `Id` suffix); renaming to AnimationTrackTargetName would
keep the enum name insanely long and wouldn't make it consistent either
(MeshAttribute, SceneField, MaterialAttribute are all referred to as
"names" yet don't have a `Name` suffix).
If it crashes or misbehaves in some other way, it'll cause the tool to
not work at all. If it fails to load due to whatever other issue, it'll
spam the output for no reason.
Exposes MaterialTools::phongToPbrMetallicRoughness() that got added some
time ago. Most of the code and tests are scaffolding needed for direct
material import and processing outside of the addImporterContents()
automagic, similar to what was already done for meshes and images.
In particular, adding more material conversion options such as
canonicalization or deduplication will be significantly easier now that
the basics are done and tested.
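A minimal usage sketch; the header name is inferred and may differ:

    #include <Magnum/MaterialTools/PhongToPbrMetallicRoughness.h>
    #include <Magnum/Trade/MaterialData.h>

    using namespace Magnum;

    /* Given a Trade::MaterialData material; returns NullOpt on failure */
    Containers::Optional<Trade::MaterialData> pbr =
        MaterialTools::phongToPbrMetallicRoughness(material);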
The number of items is known beforehand, so avoid unnecessary
reallocations. Once mesh LODs are supported this won't be as clear-cut
anymore, but it's never bad to pre-reserve.
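A generic sketch of the pattern, not the actual code from this change,
given a Trade::AbstractImporter importer:

    #include <Corrade/Containers/GrowableArray.h>
    #include <Magnum/Trade/AbstractImporter.h>
    #include <Magnum/Trade/MeshData.h>

    using namespace Magnum;

    Containers::Array<Trade::MeshData> meshes;
    /* The count is known upfront, so the array allocates just once */
    Containers::arrayReserve(meshes, importer.meshCount());
    for(UnsignedInt i = 0; i != importer.meshCount(); ++i)
        Containers::arrayAppend(meshes, *importer.mesh(i)); /* error handling omitted */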
Using the InPlaceInit constructor instead of the unnecessarily verbose
array<T>({...}) helper, and indenting the command-line arguments so
they're easier to distinguish from the rest.
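Roughly the following shape; the actual arguments here are made up:

    #include <Corrade/Containers/Array.h>
    #include <Corrade/Containers/String.h>

    using namespace Corrade;

    /* Instead of Containers::array<Containers::String>({...}) */
    Containers::Array<Containers::String> args{InPlaceInit, {
        "--mesh", "17",
        "--mesh-level", "0"}};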
The main property of this feature is that it prints the bounds *in a
canonical type*, not in the actual type that's used. Yet that was never
tested. Now it is, and the tests also cover behavior for custom
attributes, which don't get their bounds printed (because the canonical
type isn't known for those).
Sometimes it's a vital piece of information, e.g. a file having no
default scene might not get displayed correctly because some end-user
application might think it has no scene at all.
Otherwise it's really, REALLY hard to discover which data are missing in
the output. Especially for files with 1700 meshes, 800 materials, 3600
textures and such.
It doesn't discard meshes that are not a part of the hierarchy, even
though that was the plan in the beginning. Over time, however, I realized
that a better property for it is that the output is guaranteed to have
the same order and size as the mesh field in the scene. Because that's
what I relied on in every use case, and every time I had to dig that
property out of the sources, since it was deliberately left undocumented
*because* it was meant to change.
No longer. The compatibility with the mesh field ordering is now
documented, the behavior regarding loose objects as well, and if there's
ever a need to discard everything that's not in the reachable hierarchy,
it'll probably get its own API. Because that's useful for general
asset cleanup and other use cases, not just meshes.
I'm still keeping the experimental tag here though, to be sure.
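The documented guarantee, expressed as a sketch (again assuming the API
in question is SceneTools::absoluteFieldTransformations3D(), which isn't
stated above), given a Trade::SceneData scene:

    /* One transformation per mesh field entry, in the same order */
    Containers::Array<Matrix4> transformations =
        SceneTools::absoluteFieldTransformations3D(scene, Trade::SceneField::Mesh);
    CORRADE_INTERNAL_ASSERT(transformations.size() ==
        scene.fieldSize(Trade::SceneField::Mesh));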