It's dangerous, as in case of failure it will attempt to print them as
strings. Plus, with the latest de-std-string-ification of TestSuite, it
causes compilation to fail due to an ambiguous overload.
This should eventually be caught and disallowed directly by the Tester
class.
With really huge materials it's kinda useless to not know which layer
the error happened in -- and usually it's exactly because the layer
indices were specified wrong.
It was a clever harmless trick. Well, it was way more harmless than it
was clever, but even then it caused UBSan to complain. And that's Not A
Good Thing for various reasons, so let's just comply.
The main bad effect of this change is a *slightly* larger list of
exported symbols but until we actually get rid of the major bloats like
<iostream>, <string> and the like, this is not going to have any
measurable impact.
This mirrors what's done already for implementation-specific vertex
formats, thus:
* Ability to construct the classes without tripping up when trying to
check for type size in various asserts
* Providing a zero-size type-erased access in indices() and
mutableIndices()
* Disallowing typed and convenience access
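A minimal sketch of what this means for consumers, assuming Magnum's
MeshData API and the isMeshIndexTypeImplementationSpecific() helper; the
inspect() function is hypothetical:

    #include <Corrade/Containers/StridedArrayView.h>
    #include <Magnum/Mesh.h>
    #include <Magnum/Trade/MeshData.h>

    using namespace Magnum;

    void inspect(const Trade::MeshData& mesh) {
        if(mesh.isIndexed() &&
           isMeshIndexTypeImplementationSpecific(mesh.indexType())) {
            /* Type-erased access still works, with the second dimension
               zero-sized as the actual type isn't known */
            Containers::StridedArrayView2D<const char> raw = mesh.indices();
            static_cast<void>(raw);
            /* Typed mesh.indices<UnsignedInt>() would assert here, same
               for the convenience indicesAsArray() */
        }
    }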
Also not something the classic GPU vertex pipeline can handle, but
useful for other scenarios. Subsequently, support for array indices will
be added, making it possible to directly represent for example OBJ
files, where each attribute has its own index buffer.
This is not something the classic GPU vertex pipeline can handle
(except maybe Vulkan, which can handle zero strides for instanced
attributes?), but useful for other scenarios. This means existing code
needs to be aware of and handle the new corner case.
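For illustration, one way such a zero-stride attribute view could be
made with Corrade's broadcasted(), repeating a single value for every
vertex -- a sketch only, with the allSameColor() helper being
hypothetical:

    #include <Corrade/Containers/ArrayView.h>
    #include <Corrade/Containers/StridedArrayView.h>
    #include <Magnum/Magnum.h>
    #include <Magnum/Math/Color.h>

    using namespace Magnum;

    /* Returns a vertexCount-element view in which every element points
       to the same value, i.e. with a zero stride. The color has to stay
       in scope for as long as the view is in use. */
    Containers::StridedArrayView1D<const Color4> allSameColor(
        const Color4& color, std::size_t vertexCount)
    {
        return Containers::stridedArrayView(
            Containers::arrayView(&color, 1)).broadcasted<0>(vertexCount);
    }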
No functional change, just splitting them into two separate headers and
two separate tests. These will eventually become public SceneTools
APIs... once I figure out better naming.
Mostly just to avoid the return types changing to incompatible types in
the future, breaking existing code. The internals are currently not
fully ready to operate with 64-bit object IDs, especially the AsArray()
APIs -- those I will have to solve in the future somehow. Returning
64-bit values in the pairs would add four byte padding after basically
each value, which is way too wasteful for the common case.
The Into() APIs could eventually get 64-bit overloads though.
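To illustrate the padding argument, a standalone sketch (not actual
library code):

    #include <Corrade/Containers/Pair.h>
    #include <Magnum/Types.h>

    using namespace Corrade;
    using namespace Magnum;

    /* A 32-bit object ID next to a 32-bit value, tightly packed */
    static_assert(sizeof(Containers::Pair<UnsignedInt, Int>) == 8,
        "4 + 4 bytes");
    /* With a 64-bit object ID the 32-bit value gets padded to 8 bytes,
       doubling the size of each element */
    static_assert(sizeof(Containers::Pair<UnsignedLong, Int>) == 16,
        "8 + 4 + 4 bytes of padding");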
Not really important right now as the SceneData from these are used only
in internal deprecated APIs, but at least it might speed up the
children2D() / children3D() queries. Mainly done so I don't forget to do
this later when these APIs are published in the SceneTools library.
What's not done is the rather complex logic in the single-function
conversion utility, where a field could retain the implicit/ordered
flags in *some* scenarios. There are too many corner cases, so better to
be conservative and not preserve anything than to mark something as
ordered when it's no longer the case. The corner cases are hopefully
all checked for (and XFAIL'd) in the test.
Currently used by the per-object access APIs to make the lookup
constant- or logarithmic-time instead of linear, available for use by
external data consumers as well.
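A sketch of how an external consumer might branch on this, assuming the
SceneFieldFlag and fieldFlags() names from Magnum; the lookup() helper
is hypothetical:

    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    void lookup(const Trade::SceneData& scene, UnsignedInt fieldId) {
        const Trade::SceneFieldFlags flags = scene.fieldFlags(fieldId);
        if(flags & Trade::SceneFieldFlag::ImplicitMapping) {
            /* mapping[i] == i, constant-time direct indexing */
        } else if(flags & Trade::SceneFieldFlag::OrderedMapping) {
            /* mapping is monotonically increasing, logarithmic-time
               binary search */
        } else {
            /* no guarantees, has to be a linear scan */
        }
    }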
Now it's a field and its corresponding object mapping, instead of
field and "objects":
- Goes better with the concept that there's not really any materialized
"object" anywhere, just fields mapped to them.
- No more weird singular/plural difference between field() and
objects(), it's field() and mapping() now.
- The objectCount() that actually wasn't really an object count is now a
mappingBound(), an upper bound for object IDs contained in the object
mapping views. Which is quite self-explanatory without having to
mention every time that the range may be sparse.
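In code, assuming current Magnum signatures, the renamed APIs then look
like this (the use() helper is hypothetical):

    #include <Corrade/Containers/StridedArrayView.h>
    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    void use(const Trade::SceneData& scene, UnsignedInt fieldId) {
        /* Was scene.objects(fieldId) before */
        Containers::StridedArrayView2D<const char> mapping =
            scene.mapping(fieldId);
        /* Was scene.objectCount() before; an upper bound for IDs in the
           mapping, which may be sparse */
        UnsignedLong bound = scene.mappingBound();
        static_cast<void>(mapping);
        static_cast<void>(bound);
    }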
This got originally added as some sort of a kludge to make it easy to go
to the parent transformation, assuming Parent and Transformation share
the same object mapping:
    parentTransformation = transformations[parents[i]]
But after some ACTUAL REAL WORLD use, I realized that there's often a
set of objects that have a Parent defined, and then another, completely
disjoint, set of objects that have a transformation (for example certain
nodes having no transformation at all because it's an identity). And so
this parent indirection is not only useless, but in fact an additional
complication. Let's say we make a map of the transformations, where
transformationMap[i] is a transformation for object i:
    transformationMap = {}
    for j in range(len(transformations)):
        transformationMap[transformationObjects[j]] = transformations[j]
Then, with *no* assumptions about shared object mapping, the indirection
would cause parent transformation retrieval to look like this:
    parentTransformation = transformationMap[parentObjects[parents[i]]]
While *without* the indirection, it'd be just
    parentTransformation = transformationMap[parents[i]]
Because that way one can query a field with *AsArray() and iterate
through it in a single expression. This also resolves the pending issue
where it was more than annoying to fetch object mapping for TRS fields
when only a subset of the fields is available.
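In code, a single range-for is enough -- a sketch assuming the
parentsAsArray() accessor and its pair return type; the iterate() helper
is hypothetical:

    #include <Corrade/Containers/Array.h>
    #include <Corrade/Containers/Pair.h>
    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    void iterate(const Trade::SceneData& scene) {
        /* Object ID and its parent (or -1 for a root) together, no
           separate object mapping query needed */
        for(const Containers::Pair<UnsignedInt, Int>& parent:
            scene.parentsAsArray())
        {
            static_cast<void>(parent);
        }
    }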
This has to be solved on a more systematic level, perhaps even by
switching all types to be 64-bit. In the following commit all *AsArray()
and *Int() functions will output the object IDs as well, meaning this
would need to be handled in each and every API, which is a huge
maintenance burden.
As it's very unlikely that there actually will *ever* be >4G objects,
one possible option would be to introduce some "object ID hash" field
that would provide (contiguous?) remapping of the object ID to 32-bit
values, and the Into() and AsArray() accessors would return this
remapping instead of the original. But then again it'd cause issues with
for example animation references that would still reference the original
64-bit value.
Honestly I don't care much, this is just that the original
PrimitiveImporter tests started to fail because they expected
object3DForName() to return -1 for a 2D name, but that's no longer the
case. It would be possible to fix this, but I doubt anyone ever relied
on such behavior for 2D scenes, so just add a test that acknowledges
this new behavior.
As the comment says -- before, user code expected that if the scene
hierarchy is broken, particular objects would fail to import. Now the
whole scene fails to import, so we don't even get to know the actual
(expanded) deprecated 2D/3D object count. To reduce the suffering,
return at least the dimension-less object count there. It won't include
the duplicates from the single-function-object conversion but better
than reporting 0.
Another thought I had was about allowing a 2x2 / 3x3 matrix to be used
for rotation, but there's the ambiguity with it possibly containing
scaling / shear / whatever, which would then cause EXPLODING HEADACHES
when converting to quaternions / complex numbers.
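For context, the ambiguity sketched with Magnum math types --
Complex::fromMatrix() expects a pure rotation and has no way to tell
what part of the matrix was scaling:

    #include <Magnum/Magnum.h>
    #include <Magnum/Math/Complex.h>
    #include <Magnum/Math/Matrix3.h>

    using namespace Magnum;
    using namespace Math::Literals;

    void ambiguity() {
        /* A 2D transformation combining rotation and scaling */
        const Matrix3 a = Matrix3::rotation(45.0_degf)*
                          Matrix3::scaling({2.0f, 2.0f});
        static_cast<void>(a);
        /* The 2x2 part is no longer a pure rotation, so this would hit
           an assertion: */
        // Complex c = Complex::fromMatrix(a.rotationScaling());
    }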
Even though this API is deprecated and thus not meant to be used, most
existing code is still using the previous APIs and relying on the
backwards compatibility interfaces. And I wasted quite some time
debugging why the scene looked empty.
Using hasField() + fieldId() was a bad usage pattern leading to a double
linear lookup, so there's now findFieldId() returning an Optional which
covers both. Similarly, for finding an object offset in a field, there's
a findFieldObjectOffset() returning an Optional, fieldObjectOffset()
asserting if an object is not found (for convenience to avoid explicit
error handling on user side) and hasFieldObject().
The internal helpers were also renamed and the offset argument moved to
be last for consistency.
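In code, a sketch of the new pattern, assuming the signatures match the
names above; the query() helper and the use of SceneField::Mesh are just
for illustration:

    #include <Corrade/Containers/Optional.h>
    #include <Magnum/Trade/SceneData.h>

    using namespace Magnum;

    void query(const Trade::SceneData& scene) {
        /* One linear lookup instead of hasField() + fieldId() */
        if(const Containers::Optional<UnsignedInt> fieldId =
           scene.findFieldId(Trade::SceneField::Mesh)) {
            /* Similarly one lookup instead of hasFieldObject() +
               fieldObjectOffset() */
            if(const Containers::Optional<std::size_t> offset =
               scene.findFieldObjectOffset(*fieldId, 0)) {
                /* object 0 has a mesh at *offset in the field */
                static_cast<void>(offset);
            }
        }
    }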
This is meant to become the central piece of an upcoming SceneTools
library; right now I need it just to implement duplication of objects
that have more than one mesh/light/camera/... assignment.
Same as with MeshData2D/3D, the original ObjectData API and plugin
interfaces are preserved to keep existing code as well as existing
importer implementations working. As Magnum's own importers will get
updated to the new SceneData workflow, a backwards compatibility layer
is provided that translates it to the subset that the legacy ObjectData
understands.
With this commit, existing plugin code can build against (and be tested
with) the new workflow, and any ports to the new workflow can be tested
against the legacy interfaces. For now the compatibility layer doesn't
deal with objects that have more than one mesh or for example a light
and a camera attached; this will be done in a separate step.
Support utilities needed for SceneTools that will operate on arbitrary
SceneData (adding/removing fields, objects...) without having to know
what each field means and how it needs to be treated.
Honestly the same would make sense for the VertexFormat enum as well.
But not now, later.