Not that C++ STL and exceptions would be anything to take inspiration
from, but there's std::out_of_range. Python IndexError is also specified
as "index out of range", not "bounds".
Partially needed to avoid build breakages because Corrade itself
switched as well, partially because a cleanup is always good. Done
except for (STL-heavy) code that's deprecated or SceneGraph-related APIs
that are still quite full of STL as well.
Allows me to parse array properties in glTF scene node extras and ensure
they're preserved as arrays on export as well, even if they contain just
a single item.
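A minimal sketch of why this matters, using plain `json` with a made-up extras payload (the property names are hypothetical): a one-item array and a scalar are distinct JSON values and have to survive a round trip as such.

```python
import json

# Hypothetical glTF node extras: "weights" is an array even though it has
# a single item, "bias" is a plain scalar. The two must not be conflated.
extras = json.loads('{"weights": [0.5], "bias": 0.5}')

assert isinstance(extras["weights"], list)    # stays an array
assert not isinstance(extras["bias"], list)   # stays a scalar

# Round-tripping through export preserves the distinction
assert json.loads(json.dumps(extras)) == extras
```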
I'm seeing an assert for null string data not being correctly fired in
a SceneTools API on ARM64, and an overflow of this 48-bit offset seems
to be the culprit. So better have that covered in the constructor
already.
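A sketch of the kind of constructor-time check meant here (the function and constant names are made up, not the actual API): a 48-bit offset silently wraps when the data is too far from the base, so reject out-of-range values up front instead of corrupting the field later.

```python
OFFSET_BITS = 48
OFFSET_MAX = (1 << OFFSET_BITS) - 1  # largest value a 48-bit offset can hold

def make_offset(data_address, base_address):
    # Computed at construction time; anything outside the 48-bit range
    # would silently overflow when packed, so assert instead
    offset = data_address - base_address
    assert 0 <= offset <= OFFSET_MAX, "offset doesn't fit into 48 bits"
    return offset

assert make_offset(100, 40) == 60
assert make_offset(OFFSET_MAX, 0) == OFFSET_MAX
```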
In some cases it's needed to release (or copy) the data first and
only then access the field properties through the SceneData to
(optionally) re-route the field views to a new data location. But since
releaseData() was implicitly erasing field data as well, this wasn't
possible and the only other option was to release the field data first
and then access them through the low-level SceneFieldData API with all
convenience lost.
This makes the release*Data() APIs a bit dangerous to use, but that
should be fine -- those aren't meant to be used by regular code anyway.
A similar caveat already exists with MaterialData.
The assertion message printed the begin/end range, which was extremely
uninformative as it didn't show sizes and strides. That form made sense
for reporting that the views weren't contained in the data arrays, but
not here.
Additionally, the existing assertion didn't check the stride, which
meant that a mapping with 2 items and stride 8 was treated as equal to a
mapping with 4 items and stride 4. On the other hand, it didn't behave
correctly for offset-only fields -- those were always treated as
different from pointer fields even if they actually matched.
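A hedged sketch of the fixed comparison (representation simplified to a tuple, names hypothetical): two mappings are only the same view if pointer, item count *and* stride all match; comparing just the begin/end byte range conflates the two cases above.

```python
def same_mapping(a, b):
    # Each mapping is a (pointer, item_count, stride_in_bytes) triple;
    # all three have to match, not just the covered byte range
    return a == b

two_by_eight = (0x1000, 2, 8)
four_by_four = (0x1000, 4, 4)

# Both cover bytes 0x1000..0x1010, yet they're different views
assert not same_mapping(two_by_eight, four_by_four)
assert same_mapping(two_by_eight, (0x1000, 2, 8))
```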
Passes for SceneData but fails for MeshData due to 32-bit types used by
accident. The two also have vastly different calculations in the range
checks; those should be unified first.
Uses StridedBitArrayView underneath. Waited for this container to exist,
because implementing this using bools and wasting 8x more memory wasn't
a good option. Plus, being able to address single bits opens a
possibility to describe individual bits in enum flags, whereas the only
other option would be to take the whole flag as an opaque type
containing "some bit values".
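To illustrate the memory argument with a standalone sketch (not the actual StridedBitArrayView implementation): packing eight flags per byte is an 8x saving over one bool per flag, and individual bits stay individually addressable.

```python
def pack_bits(bools):
    # Eight flags per byte instead of one byte per bool -- 8x less memory
    out = bytearray((len(bools) + 7) // 8)
    for i, b in enumerate(bools):
        if b:
            out[i // 8] |= 1 << (i % 8)
    return bytes(out)

def get_bit(packed, i):
    # Single bits remain addressable, so each flag can be described on
    # its own instead of treating the whole value as opaque
    return bool(packed[i // 8] & (1 << (i % 8)))

flags = [True, False, True, True, False, False, False, True] * 100
packed = pack_bits(flags)
assert len(packed) == len(flags) // 8                       # 100 vs 800 bytes
assert all(get_bit(packed, i) == f for i, f in enumerate(flags))
```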
Apparently I forgot to actually test these -- in order to fit the string
data pointer without making SceneFieldData too large, it's stored as an
offset from fieldData. And that's something not expressible in a
constexpr context. Thus the only way to create a constexpr string field
is by using the offset-only constructor (which is now appropriately
tested).
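An illustrative sketch of the two approaches (not the actual SceneFieldData layout, all names made up): an offset relative to a base buffer is a plain number that can be a compile-time constant, while deriving it from a pointer requires pointer arithmetic against a base that's only known at runtime.

```python
data = b"....helloworld...."

# Offset-only variant: the offset is known up front, no pointer
# arithmetic against fieldData is needed to construct the field
field = {"string_offset": 4, "string_size": 10}

def string_view(base, field):
    # The actual string data is resolved lazily against the base buffer
    o = field["string_offset"]
    return base[o:o + field["string_size"]]

assert string_view(data, field) == b"helloworld"
```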
This change at least allowed me to move the constructor to the cpp file,
saving on header size and using more lightweight assertions.
Finally got an idea for how to store these efficiently, so it's
implemented now. Five different storage variants times four different
type sizes.
These APIs are mostly just for debugging purposes, not widely used, so
it doesn't make sense to have them as constexpr in the header. (Plus the
returned void view is useless in a constexpr context anyway.)
This header size is getting out of hand, so every stripped bit counts.
Also now that they're no longer constexpr, I can go back to using
regular assertions. The reinterpret_cast<> wasn't needed either.
I realized those are too annoying when writing a glTF exporter, which
contains a lot of switches over enums. As further shown by the diff, the
explicit value only inflicted additional pain in *all* switch
statements, with no other added value. Everywhere else the helpers are
the designated way to deal with custom values, so there's no point in
having an explicit enum value denoting the start of a "custom range".
It wasn't even that convenient to have it in the enum, as the extra
effort needed for casting actually made it *exactly* the same length as
if I'd just used a separately-defined constant.
Again not publicly documented because I don't like the naming and I
don't have the full behavior and interactions figured out yet -- for
example, an array of VertexFormats would be printed with Debug::packed
as a long string of characters without any whitespace. Not good, thus
this feature probably needs to be split in two, with this one being
named "compact" or something else.
Mostly just to avoid the return types changing to incompatible types in
the future, breaking existing code. The internals are currently not
fully ready to operate with 64-bit object IDs, especially the AsArray()
APIs -- those I will have to solve in the future somehow. Returning
64-bit values in the pairs would add four bytes of padding after
basically every value, which is way too wasteful for the common case.
The Into() APIs could eventually get 64-bit overloads though.
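The padding argument can be verified with `ctypes`, which follows C struct layout rules (the struct names here are illustrative, not the actual types): widening the ID to 64 bits forces 4 bytes of padding after each 32-bit value so the next pair starts at an 8-byte boundary.

```python
import ctypes

class Pair32(ctypes.Structure):
    # (32-bit object ID, 32-bit value): tightly packed
    _fields_ = [("object", ctypes.c_uint32), ("value", ctypes.c_uint32)]

class Pair64(ctypes.Structure):
    # (64-bit object ID, 32-bit value): the 8-byte alignment of the ID
    # adds 4 bytes of padding after the value
    _fields_ = [("object", ctypes.c_uint64), ("value", ctypes.c_uint32)]

assert ctypes.sizeof(Pair32) == 8
assert ctypes.sizeof(Pair64) == 16   # 12 bytes of payload + 4 of padding
```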
Currently used by the per-object access APIs to make the lookup
constant- or logarithmic-time instead of linear, available for use by
external data consumers as well.
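A sketch of the idea in pseudocode (names hypothetical): precompute an object ID → field entry index table once, after which per-object queries are constant-time dict lookups (or logarithmic with a sorted array) instead of a linear scan over the mapping.

```python
mapping = [7, 2, 9, 4]   # object IDs, one per field entry

# One-time precomputation, O(n)
index = {obj: i for i, obj in enumerate(mapping)}

def field_entry_for(object_id):
    # Constant-time per-object lookup instead of scanning the mapping
    return index[object_id]

assert field_entry_for(9) == 2
assert field_entry_for(7) == 0
```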
Now it's a field and its corresponding object mapping, instead of
field and "objects":
- Goes better with the concept that there's not really any materialized
"object" anywhere, just fields mapped to them.
- No more weird singular/plural difference between field() and
objects(), it's field() and mapping() now.
- The objectCount() that actually wasn't really object count is now a
mappingBound(), an upper bound for object IDs contained in the object
mapping views. Which is quite self-explanatory without having to
mention every time that the range may be sparse.
This got originally added as some sort of a kludge to make it easy to go
to the parent transformation, assuming Parent and Transformation share
the same object mapping:
parentTransformation = transformations[parents[i]]
But after some ACTUAL REAL WORLD use, I realized that there's often a
set of objects that have a Parent defined, and then another, completely
disjoint, set of objects that have a transformation (for example certain
nodes having no transformation at all because it's an identity). And so
this parent indirection is not only useless, but in fact an additional
complication. Let's say we make a map of the transformations, where
transformationMap[i] is a transformation for object i:
transformationMap = {}
for j in range(len(transformations)):
    transformationMap[transformationObjects[j]] = transformations[j]
Then, with *no* assumptions about shared object mapping, the indirection
would cause parent transformation retrieval to look like this:
parentTransformation = transformationMap[parentObjects[parents[i]]]
While *without* the indirection, it'd be just
parentTransformation = transformationMap[parents[i]]
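The sketch above can be run end to end with made-up data; note the two fields deliberately cover disjoint-ish object sets, which is exactly the case where the indirection breaks down.

```python
parents = {3: 1, 4: 1}               # object -> parent object
transformationObjects = [1, 4]       # object mapping of the field
transformations = ["T1", "T4"]       # one transformation per entry

# transformationMap[i] is the transformation for object i
transformationMap = {}
for j in range(len(transformations)):
    transformationMap[transformationObjects[j]] = transformations[j]

# Without the indirection, the parent transformation of object 3 is just
assert transformationMap[parents[3]] == "T1"
assert transformationMap[parents[4]] == "T1"
```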
Because that way one can query a field with *AsArray() and iterate
through it in a single expression. This also resolves the pending issue
where it was more than annoying to fetch object mapping for TRS fields
when only a subset of the fields is available.
This has to be solved on a more systematic level, perhaps even by
switching all types to be 64-bit. In the following commit all *AsArray()
and *Int() functions will output the object IDs as well, meaning this
would need to be handled in each and every API, which is a huge
maintenance burden.
As it's very unlikely that there actually will *ever* be >4G objects,
one possible option would be to introduce some "object ID hash" field
that would provide (contiguous?) remapping of the object ID to 32-bit
values, and the Into() and AsArray() accessors would return this
remapping instead of the original. But then again it'd cause issues with
for example animation references that would still reference the original
64-bit value.
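A sketch of what such an "object ID hash" could look like (purely hypothetical, nothing like this exists in the codebase): sparse 64-bit IDs get contiguous 32-bit values assigned on first encounter, with an inverse table kept around because, per the caveat above, anything that stored the original 64-bit ID would still need a way back.

```python
remap = {}      # 64-bit object ID -> contiguous 32-bit value
inverse = []    # contiguous value -> original 64-bit object ID

def hash_object(object_id):
    # Assign the next contiguous value on first encounter
    if object_id not in remap:
        remap[object_id] = len(inverse)
        inverse.append(object_id)
    return remap[object_id]

big_ids = [2**40 + 7, 13, 2**40 + 7, 2**55]
assert [hash_object(i) for i in big_ids] == [0, 1, 0, 2]
assert inverse[2] == 2**55   # e.g. an animation reference mapped back
```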