Mostly just to avoid the return types changing to incompatible types in
the future, breaking existing code. The internals are currently not
fully ready to operate with 64-bit object IDs, especially the AsArray()
APIs -- those I will have to solve in the future somehow. Returning
64-bit values in the pairs would add four bytes of padding after
basically every value, which is way too wasteful for the common case.
The Into() APIs could eventually get 64-bit overloads though.
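For illustration, a minimal sketch of the padding problem -- the struct
names are made up here, not the actual pair types:

    #include <cstdint>

    struct Pair32 { std::uint32_t object; float field; };
    struct Pair64 { std::uint64_t object; float field; };

    static_assert(sizeof(Pair32) == 8, "tightly packed");
    static_assert(sizeof(Pair64) == 16, "4 bytes of padding after field");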
Currently used by the per-object access APIs to make the lookup
constant- or logarithmic-time instead of linear; available for use by
external data consumers as well.
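As a sketch, what the constant- and logarithmic-time paths could look
like -- the enum and function here are hypothetical, not the actual API:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    enum class MappingKind { Implicit, Ordered, Unordered };

    /* Offset of `object` in the field's object mapping, or -1 if not
       present */
    std::ptrdiff_t findObject(const std::vector<std::uint32_t>& mapping,
                              MappingKind kind, std::uint32_t object) {
        /* Mapping is an identity, mapping[i] == i: constant time */
        if(kind == MappingKind::Implicit)
            return object < mapping.size() ? std::ptrdiff_t(object) : -1;
        /* Mapping is sorted: binary search, logarithmic time */
        if(kind == MappingKind::Ordered) {
            auto it = std::lower_bound(mapping.begin(), mapping.end(),
                object);
            return it != mapping.end() && *it == object ?
                it - mapping.begin() : -1;
        }
        /* No guarantees, a linear scan is all that's left */
        auto it = std::find(mapping.begin(), mapping.end(), object);
        return it != mapping.end() ? it - mapping.begin() : -1;
    }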
Now it's a field and its corresponding object mapping, instead of
field and "objects":
- Goes better with the concept that there's not really any materialized
"object" anywhere, just fields mapped to them.
- No more weird singular/plural difference between field() and
objects(), it's field() and mapping() now.
- The objectCount() that actually wasn't really an object count is now a
mappingBound(), an upper bound for object IDs contained in the object
mapping views. Which is quite self-explanatory without having to
mention every time that the range may be sparse.
This got originally added as some sort of a kludge to make it easy to go
to the parent transformation, assuming Parent and Transformation share
the same object mapping:
parentTransformation = transformations[parents[i]]
But after some ACTUAL REAL WORLD use, I realized that there's often a
set of objects that have a Parent defined, and then another, completely
disjoint, set of objects that have a transformation (for example certain
nodes having no transformation at all because it's an identity). And so
this parent indirection is not only useless, but in fact an additional
complication. Let's say we make a map of the transformations, where
transformationMap[i] is a transformation for object i:
transformationMap = {}
for j in range(len(transformations)):
    transformationMap[transformationObjects[j]] = transformations[j]
Then, with *no* assumptions about shared object mapping, the indirection
would cause parent transformation retrieval to look like this:
parentTransformation = transformationMap[parentObjects[parents[i]]]
While *without* the indirection, it'd be just
parentTransformation = transformationMap[parents[i]]
Because that way one can query a field with *AsArray() and iterate
through it in a single expression. This also resolves the pending issue
where it was more than annoying to fetch the object mapping for TRS fields
when only a subset of the fields is available.
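For example, assuming parentsAsArray() returns (object, parent) pairs
with first() / second() accessors, the whole query and iteration fits
into a single range-for:

    for(auto&& objectParent: scene.parentsAsArray())
        process(objectParent.first(), objectParent.second());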
This has to be solved on a more systematic level, perhaps even by
switching all types to be 64-bit. In the following commit all *AsArray()
and *Into() functions will output the object IDs as well, meaning this
would need to be handled in each and every API, which is a huge
maintenance burden.
As it's very unlikely that there actually will *ever* be >4G objects,
one possible option would be to introduce some "object ID hash" field
that would provide (contiguous?) remapping of the object ID to 32-bit
values, and the Into() and AsArray() accessors would return this
remapping instead of the original. But then again it'd cause issues with
for example animation references that would still reference the original
64-bit value.
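A sketch of how such a contiguous remapping could be built, purely
illustrative and with made-up names:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    /* Assigns each distinct 64-bit ID the next free 32-bit value, in
       order of first appearance */
    std::vector<std::uint32_t> remapTo32(
            const std::vector<std::uint64_t>& ids) {
        std::unordered_map<std::uint64_t, std::uint32_t> map;
        std::vector<std::uint32_t> out;
        out.reserve(ids.size());
        for(std::uint64_t id: ids)
            out.push_back(map.emplace(id,
                std::uint32_t(map.size())).first->second);
        return out;
    }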
Another thought I had was about allowing a 2x2 / 3x3 matrix to be used
for rotation, but there's the ambiguity with it possibly containing
scaling / shear / whatever, which would then cause EXPLODING HEADACHES
when converting to quaternions / complex numbers.
Even though this API is deprecated and thus not meant to be used, most
existing code is still using the previous APIs and relying on the
backwards compatibility interfaces. And I wasted quite some time
debugging why the scene looked empty.
Using hasField() + fieldId() was a bad usage pattern leading to a double
linear lookup, so there's now findFieldId() returning an Optional which
covers both. Similarly, for finding an object offset in a field, there's
a findFieldObjectOffset() returning an Optional, fieldObjectOffset()
asserting if an object is not found (for convenience, to avoid explicit
error handling on the user side) and hasFieldObject().
The internal helpers were also renamed and the offset argument moved to
be last for consistency.
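Roughly, a before/after sketch of the usage pattern, with use() standing
in for whatever the caller does with the ID:

    /* Before: two linear lookups over the field list */
    if(scene.hasField(SceneField::Parent))
        use(scene.fieldId(SceneField::Parent));

    /* After: a single lookup, with the Optional covering the "not
       found" case as well */
    if(const auto fieldId = scene.findFieldId(SceneField::Parent))
        use(*fieldId);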
Support utilities needed for SceneTools that will operate on arbitrary
SceneData (adding/removing fields, objects...) without having to know
what each field means and how it needs to be treated.
Honestly the same would make sense for the VertexFormat enum as well.
But not now, later.
This moves the transformation type consistency checks directly into the
constructor and makes the discovered dimension count available through a
getter. As a result, a lot of duplicate assertions (and
corresponding tests) could be removed while tightening the behavior --
before it was possible to have a 2D transformation matrix complemented
with a 3D rotation quaternion and all asserts would pass, yet the user
would be required to call transformations2DAsArray() *but*
translationsRotationsScalings3DAsArray().
Since scenes can exist without any transformation field, there's both
is2D() and is3D() and they both return false in that case.
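Usage-wise, a sketch of branching on the discovered dimension count,
assuming a transformations3DAsArray() counterpart to the 2D variant
mentioned above:

    if(scene.is2D())
        use2D(scene.transformations2DAsArray());
    else if(scene.is3D())
        use3D(scene.transformations3DAsArray());
    /* else the scene has no transformation field at all */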
For lack of a better API right now, this will be used to check if an
object belongs to a certain scene. In the future, when BitArray is a
thing, there will be a dedicated field with a mask telling which objects
belong to the scene and which not, with a fallback that goes through all
field object mappings and collects objects that have at least one field.
Useful for cases when it's desirable to extract just some elements, for
example just for a particular object ID. Or when allocating an
arbitrarily large array isn't wanted for perf/memory reasons.
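For instance, a hedged sketch assuming an Into() overload that takes an
offset and fills only as much as the destination views can hold, here
extracting a single element without allocating anything:

    /* Hypothetical offset of the one element we're interested in */
    const std::size_t offset = 42;

    UnsignedInt mappingStorage[1];
    Int parentStorage[1];
    scene.parentsInto(offset, mappingStorage, parentStorage);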
These need to query the field size for allocating the output array,
which means looking up the field by name, but the same lookup was then
done again in the fooInto() implementation.