Now it's a field and its corresponding object mapping, instead of
field and "objects":
- Goes better with the concept that there's not really any materialized
"object" anywhere, just fields mapped to them.
- No more weird singular/plural difference between field() and
objects(), it's field() and mapping() now.
- The objectCount() that wasn't really an object count is now
mappingBound(), an upper bound on the object IDs contained in the object
mapping views. That name is self-explanatory without having to mention
every time that the range may be sparse.
This got originally added as some sort of a kludge to make it easy to go
to the parent transformation, assuming Parent and Transformation share
the same object mapping:
parentTransformation = transformations[parents[i]]
But after some ACTUAL REAL WORLD use, I realized that there's often a
set of objects that have a Parent defined, and then another, completely
disjoint, set of objects that have a transformation (for example certain
nodes having no transformation at all because it's an identity). And so
this parent indirection is not only useless, but in fact an additional
complication. Let's say we make a map of the transformations, where
transformationMap[i] is a transformation for object i:
transformationMap = {}
for j in range(len(transformations)):
    transformationMap[transformationObject[j]] = transformations[j]
Then, with *no* assumptions about shared object mapping, the indirection
would cause parent transformation retrieval to look like this:
parentTransformation = transformationMap[parentObjects[parents[i]]]
While *without* the indirection, it'd be just
parentTransformation = transformationMap[parents[i]]
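Putting the pieces above together, a minimal runnable sketch of the
indirection-free lookup (the data and names are hypothetical, following
the pseudocode):

```python
# Hypothetical data: objects 1 and 3 have transformations, objects 0 and
# 2 have parents -- the two sets need not overlap or share a mapping.
transformationObject = [1, 3]      # object ID for each transformation
transformations = ["T1", "T3"]     # the transformations themselves
parents = {0: 3, 2: 1}             # object ID -> parent object ID

# Build the object -> transformation map once...
transformationMap = {}
for j in range(len(transformations)):
    transformationMap[transformationObject[j]] = transformations[j]

# ...then parent transformation retrieval is a single direct lookup,
# with no assumption that Parent and Transformation share a mapping:
i = 0
parentTransformation = transformationMap[parents[i]]
assert parentTransformation == "T3"
```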
Because that way one can query a field with *AsArray() and iterate
through it in a single expression. This also resolves the pending issue
where it was more than annoying to fetch object mapping for TRS fields
when only a subset of the fields is available.
This has to be solved on a more systematic level, perhaps even by
switching all types to be 64-bit. In the following commit all *AsArray()
and *Int() functions will output the object IDs as well, meaning this
would need to be handled in each and every API, which is a huge
maintenance burden.
As it's very unlikely that there actually will *ever* be >4G objects,
one possible option would be to introduce some "object ID hash" field
that would provide (contiguous?) remapping of the object ID to 32-bit
values, and the Into() and AsArray() accessors would return this
remapping instead of the original. But then again it'd cause issues with
for example animation references that would still reference the original
64-bit value.
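The "object ID hash" idea could be sketched roughly like this (all names
and data here are hypothetical, only illustrating the remapping and its
caveat):

```python
# Hypothetical sparse 64-bit object IDs
objectIds = [0x100000001, 0x100000007, 0x100000002]

# Contiguous remapping: original 64-bit ID -> dense 32-bit index,
# which is what the Into() / AsArray() accessors would then return
remap = {objectId: index for index, objectId in enumerate(objectIds)}
assert remap[0x100000007] == 1

# The caveat: anything that stored the original 64-bit ID, such as an
# animation reference, still holds the old value and has to be
# translated through the same map, otherwise it dangles
animationTarget = 0x100000007
assert remap[animationTarget] == 1
```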
Honestly I don't care much; this change is here only because the
original PrimitiveImporter tests started to fail, as they expected
object3DForName() to return -1 for a 2D name, but that's no longer the
case. It would be possible to fix this, but I doubt anyone ever relied
on such behavior for 2D scenes, so just add a test that acknowledges
this new behavior.
As the comment says -- before the user code expected that if the scene
hierarchy is broken, particular objects will fail to import. Now the
whole scene fails to import, so we don't even get to know the actual
(expanded) deprecated 2D/3D object count. To reduce the suffering,
return at least the dimension-less object count there. It won't include
the duplicates from the single-function-object conversion but better
than reporting 0.
Another thought I had was about allowing a 2x2 / 3x3 matrix to be used
for rotation, but there's the ambiguity with it possibly containing
scaling / shear / whatever, which would then cause EXPLODING HEADACHES
when converting to quaternions / complex numbers.
Even though this API is deprecated and thus not meant to be used, most
existing code is still using the previous APIs and relying on the
backwards compatibility interfaces. And I wasted quite some time
debugging why the scene looks empty.
Using hasField() + fieldId() was a bad usage pattern leading to a double
linear lookup, so there's now findFieldId() returning an Optional which
covers both. Similarly, for finding an object offset in a field, there's
a findFieldObjectOffset() returning an Optional, fieldObjectOffset()
asserting if an object is not found (for convenience to avoid explicit
error handling on user side) and hasFieldObject().
The internal helpers were also renamed and the offset argument moved to
be last for consistency.
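A minimal sketch of why the old pattern costs two linear scans and the
new one only one (the field list and function bodies are hypothetical;
Python's None stands in for an empty Optional):

```python
fieldNames = ["translation", "rotation", "parent"]

def hasField(name):
    return name in fieldNames        # linear lookup #1

def fieldId(name):
    return fieldNames.index(name)    # linear lookup #2, same scan again

# Old pattern: two linear lookups for one answer
fid = fieldId("rotation") if hasField("rotation") else None

# New pattern: a single scan returning an "Optional"
def findFieldId(name):
    for i, n in enumerate(fieldNames):
        if n == name:
            return i
    return None

assert findFieldId("rotation") == 1
assert findFieldId("scaling") is None
```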
To become the central piece of an upcoming SceneTools library, now I
need it just to implement duplication of objects that have more than one
mesh/light/camera/... assignment.
Was waiting for this to happen, as going through all the objects would
have been too messy. Plugins implemented using the legacy API will
now only show the parent field in SceneData, but since those will all
eventually get ported, that's fine.
Same as with MeshData2D/3D, the original ObjectData API and plugin
interfaces are preserved to keep existing code as well as existing
importer implementations working. As Magnum's own importers will get
updated to the new SceneData workflow, a backward compatibility layer
is provided that translates it to the subset that the legacy ObjectData
understands.
With this commit, existing plugin code can both build and be tested
against the new workflow, and ports to the new workflow can be tested
against the legacy interfaces. Except that for now the compatibility layer doesn't
deal with objects that have more than one mesh or for example a light
and a camera attached, this will be done in a separate step.
Support utilities needed for SceneTools that will operate on arbitrary
SceneData (adding/removing fields, objects...) without having to know
what each field means and how it needs to be treated.
Honestly the same would make sense for the VertexFormat enum as well.
But not now, later.
This moves the transformation type consistency checks directly into the
constructor and makes the discovered dimension count available through a
getter. As a result, a lot of duplicate assertions (and
corresponding tests) could be removed while tightening the behavior --
before it was possible to have a 2D transformation matrix complemented
with a 3D rotation quaternion and all asserts would pass, yet the user
would be required to call transformations2DAsArray() *but*
translationsRotationsScalings3DAsArray().
Since scenes can exist without any transformation field, there's both
is2D() and is3D() and they both return false in that case.
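The constructor-time consistency check and the resulting is2D() / is3D()
behavior could be sketched like this (the type names and dimension table
are hypothetical):

```python
# Hypothetical transformation field types and their dimension count
DIMENSIONS = {"Matrix3": 2, "Rotation2D": 2, "Matrix4": 3, "Quaternion": 3}

class Scene:
    def __init__(self, transformationTypes):
        # Consistency check directly in the constructor: mixing e.g. a
        # 2D matrix with a 3D quaternion is rejected up front
        dims = {DIMENSIONS[t] for t in transformationTypes}
        assert len(dims) <= 1, "mixed 2D/3D transformation fields"
        self._dimensions = dims.pop() if dims else None

    def is2D(self):
        return self._dimensions == 2

    def is3D(self):
        return self._dimensions == 3

# A scene without any transformation field is neither 2D nor 3D
s = Scene([])
assert not s.is2D() and not s.is3D()
assert Scene(["Matrix3", "Rotation2D"]).is2D()
```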
For lack of a better API right now, this will be used to check if an
object belongs to a certain scene. In the future, when BitArray is a
thing, there will be a dedicated field with a mask telling which objects
belong to the scene and which not, with a fallback that goes through all
field object mappings and collects objects that have at least one field.
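The fallback described above, collecting objects that appear in at least
one field's object mapping, might look roughly like this (data and names
hypothetical):

```python
# Hypothetical per-field object mappings of a scene
fieldObjectMappings = {
    "parent": [0, 2],
    "transformation": [1, 3],
}

# Fallback: an object belongs to the scene if it has at least one field
def belongsToScene(objectId):
    return any(objectId in mapping
               for mapping in fieldObjectMappings.values())

assert belongsToScene(3)
assert not belongsToScene(4)
```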
Useful for cases when it's desirable to extract just some elements, for
example just for a particular object ID. Or when allocating an
arbitrarily large array isn't wanted for perf/memory reasons.
These need to query the field size to allocate the output array, which
means looking up the field by name; previously the same lookup was then
done again in the fooInto() implementation.
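One way to avoid the duplicated lookup is to let the AsArray() variant
pass the already-found field down to the Into() implementation (a sketch
only; the field storage and parameter names are hypothetical):

```python
# Hypothetical field storage, keyed by field name
fields = {"translation": [(0.0, 0.0), (1.0, 2.0)]}

def translationsInto(destination, fieldData=None):
    # The Into() variant accepts an already-found field so callers that
    # did the name lookup don't pay for it twice
    data = fields["translation"] if fieldData is None else fieldData
    destination[:len(data)] = data

def translationsAsArray():
    data = fields["translation"]     # single lookup by name...
    out = [None] * len(data)
    translationsInto(out, data)      # ...reused by the Into() call
    return out

assert translationsAsArray() == [(0.0, 0.0), (1.0, 2.0)]
```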
It was mistakenly thought to be replaced by the EXT_color_buffer_float
(which replaces WEBGL_color_buffer_float and in addition lists both 16-
and 32-bit float variants). But since there are still those stupid
patents for rendering to 32-bit float attachments, certain hardware
supports only rendering to 16-bit and not 32-bit, so the "superset"
extension isn't enough to be able to discover which hardware can
render to half-floats.
Also updated (hopefully all) docs to list this extension as being an
option on WebGL 2 as well.