Back when I wrote the test I didn't know yet what the most convenient
way to use the API would be, so it was unnecessarily verbose in various
places. Now I do.
Also using Utility::copy() to populate the arrays. No need to suffer
this much for no reason.
The second lookup was only needed for a type compatibility assertion,
which is a bit unfortunate. It wouldn't be a problem on a no-assert
build, but for various reasons people shouldn't be using these. Too
dangerous.
Allows me to parse array properties in glTF scene node extras and ensure
they're preserved as arrays on export as well, even if they have just a
single item.
It was done only inside the internal addImporterContents()
implementation this function delegates to, which was too late, as the
importer was already queried before that.
It's useful there as well, for example the StanfordSceneConverter
implements just single-mesh conversion but is capable of having the
attributes named in a custom way (although it's not implemented at the
moment). Or a mesh optimization plugin can have specialized behavior for
custom attributes but only if it knows what they are.
Not sure what I was thinking here -- if a field size wouldn't fit into a
32-bit number, it won't fit into the memory of a 32-bit system anyway, so
there's no real use for the size to be always returned as 64-bit.
Internally it *is* stored as a 64-bit number, yes, to have a compatible
binary layout on 32-bit and 64-bit systems, but that doesn't mean the
public API should return that too. And SceneData::fieldSize() is
std::size_t already, so this really feels like an accidental brainfart.
The changed return type also means a lot of existing code doesn't need
to do any explicit casting to std::size_t anymore. Yay.
A bit sad it took me three years to invent the right name for this
utility, heh. Also moving it together with others to a new
MeshTools/Copy.h header because *this* is the primarily useful API, not
reference() / mutableReference().
MaterialTools and SceneTools will get similar copy() APIs doing the same
thing.
Without the asserts, it'd blow up only subsequently in the SceneData
constructor, printing addresses & strides wildly different from what the
input had, causing great confusion.
There also needs to be dedicated handling for placeholder mapping views
in TRS or mesh/material fields, as simply allocating a new mapping view
for each would again trigger an assert in SceneData.
The same checking needs to be done in various SceneTools. Took a few
iterations to get right without making the same repeated code appear in
every place this needs to be used. Still not ideal, but at the very
least adding a new enforced shared mapping (such as for mesh views)
won't need that much code and testing.
I'm seeing an assert for null string data not being correctly thrown in
a SceneTools API on ARM64, and an overflow of this 48-bit offset seems
to be the culprit. So better have that covered in the constructor
already.
That comment made no sense as the code *doesn't* call into the header --
the constructor is just a few lines above. Spent what felt like half a
century looking for the constructor there due to this.
It'll soon be needed by other tools and library users as well. The main
difference compared to the original internal API is that the detection
of shared mapping views now takes sizes and strides into account as
well, instead of relying on just the data pointer.
Some additional robustness checks are needed for fields that have
mapping sharing enforced (as otherwise it'd fail in the SceneData
constructor, which isn't nice); that'll be done in subsequent commits.
Custom material attributes are enforced to start with a lowercase
letter, so I think it makes sense to be consistent and use the same for
custom scene fields and mesh attributes as well. It's not enforced in
any way but the tests should reflect that choice so new code that gets
written based on these inherits that practice.
Originally I thought I'd save plugins some time if I just give them the
custom indices directly, without being wrapped in a SceneField /
MeshAttribute enum. But in practice that didn't really save anything,
made the interfaces more error-prone due to there being no concrete type
anymore, and all code that delegates to nested importers or converters
had to re-wrap the IDs again, which is *again* error-prone.
Bumping the interface strings because this is a breaking change for the
implementations. Not for users though, nothing changes for them.
In some cases it's necessary to release (or copy) the data first and
only then access the field properties through the SceneData in order to
(optionally) re-route the field views to a new data location. But since
releaseData() was implicitly erasing the field data as well, this wasn't
possible, and the only other option was to release the field data first
and then access it through the low-level SceneFieldData API with all
convenience lost.
This makes the release*Data() APIs a bit dangerous to use, but that
should be fine -- those aren't meant to be used by regular code anyway.
A similar caveat exists with MaterialData already.
Especially the part about non-owned data was lacking, with basically no
information about what offset-only attributes and fields are actually
good for.
Instead of saying "which is not supported" in each assert, which is
vague and might imply that "it eventually will be supported", document
the actual reason in a single place, which is the MeshAttributeData docs.
It looked like it was last touched in 2012. Not great. Also, with this I
can finally stop explaining the four-byte-aligned-row defaults to people
and can just point them to docs.
The perf cost is just too great for these to be enabled always. The only
place where the assertions are always kept is the batch APIs -- there
it's assumed the function is called on large enough data to offset this
overhead, plus since it's often dealing with large blocks of data,
memory safety is more important than the various FP drifts that were the
usual reason the other assertions were firing.
The assertion message printed the begin/end range, which was extremely
uninformative as it didn't show sizes and strides. That form made sense
for reporting that the views weren't contained in the data arrays, but
not here.
Additionally, the existing assertion didn't check the stride, which
means a mapping with 2 items and stride 8 was treated as equal to a
mapping with 4 items and stride 4. On the other hand, it didn't behave
correctly for offset-only fields -- those were always treated as
different from pointer fields even if they actually matched.
A counterpart / inverse to Verbose, meant to be used by plugins to
suppress all warnings.
This flag was already used by the ShaderConverter APIs. Fonts don't have
any flags (and don't have any verbose output or warnings at the moment
either), so it isn't added there; audio importers are in maintenance
mode with no new features added, as I'll be eventually merging them with
the general importer interface in Trade.
For more robust handling of non-owning *Data instances and refcounting
in Python bindings I need to differentiate between, say, a MeshData
referencing global memory (such as Primitives::cubeSolid()) and a
MeshData referencing just some temporary allocation or another MeshData
(such as the output of MeshTools::filterOnlyAttributes()).
This means I (and people making their own plugins) don't need to go and
update each and every plugin once the version in the interface string
gets bumped after a (silent) ABI break. Such as when new virtual
functions get added, as those often lead to strange crashes if the
plugins don't get rebuilt after.
The plugins will now use this macro, which means they'll
automatically embed an interface string that was present in the base
class header at build time. However, when the base class updates, the
previous string is still embedded in the plugin binary, which will then
fail to load -- this being automatic doesn't mean the original purpose
is lost. Subsequently rebuilding the plugins from source will make them
pick up the updated interface string again.
Taking Animation::TrackViewStorage wasn't really a good idea, as it
wasn't solving anything -- in order to create it, there needs to be a
TrackView of a concrete type first anyway, and even then it required a
lot of additional verbose typing.
The new constructors take basically what TrackView takes, plus there's
additionally a constructor that takes a typeless value view together
with explicitly specified value and result types, allowing a truly
type-erased usage. On the other hand, the templated variants directly
deduce the types without any additional typing, making the construction
similarly straightforward to e.g. SceneFieldData.
In case of the type-erased constructors, if the interpolator function
isn't supplied explicitly, an implicit one is picked based on the value
type, result type and interpolation. Not all combinations make sense of
course, so this is a new assertion point. However, compared to the
previous approach where the interpolator was picked from a *typed*
TrackView, this doesn't add any new restriction -- what asserted there
asserts now as well, and additionally you can't have e.g. a Quaternion
track with a boolean result. I may also eventually add assertions to
check that the target name matches the result type -- so e.g. a rotation
isn't specified as an integer and such. Compared to newer APIs like
MeshData, MaterialData or SceneData, the AnimationData API has a
significant lack of sanity asserts like this.
Instead of storing Animation::TrackViewStorage directly, it now contains
the view pointers, strides and size (where the size is shared by both
keys and values), packing the non-pointer values into existing padding.
Together with reducing the keyframe count to 32 bits and strides to 16
bits (which is consistent with MeshData and SceneData), this reduces the
size from 80 bytes to 48.
Not using TrackViewStorage also means we can directly accept the
key/value views in constructors, significantly improving the usability.
This also makes it possible to add support for (constexpr) offset-only
track data and thus easy serializability, again similarly to
MeshAttributeData and SceneFieldData.
Given the recent issues with vertex data with size over 4 GB, I feel
this limit might get hit soon as well. So far GPUs don't support vertex
counts larger than 32 bits, so storing them in a 32-bit number matches
the limitation there. Also, a vertex is usually at least 6 bytes (for
3-component positions quantized into 16-bit ints), thus a mesh hitting
this limit would be 24 GB in size. Which fits only on the beefiest
contemporary GPUs.
However I imagine the limit might get raised eventually, for example to
support a use case of a huge sparse mesh where only sub-parts of it are
drawn, and the sub-parts have counts that fit into 32 bits.