And doing all the automagic of unpacking packed types, converting
positions *and* normals/tangents/bitangents, and also an overload for
transforming texture coordinates.
Such a simple thing and yet so complex and nasty to test.
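For context, positions and normals can't be transformed the same way, which is part of why this is nasty to test. A minimal numpy sketch of the difference (illustrative only, not the actual API):

```python
import numpy as np

# A non-uniform scale -- transforming normals with the same matrix as
# positions would no longer keep them perpendicular to the surface
M = np.diag([2.0, 1.0, 1.0])

position = np.array([1.0, 0.0, 0.0])
normal = np.array([1.0, 1.0, 0.0])/np.sqrt(2.0)

# Positions use the matrix directly; normals need the inverse
# transpose, renormalized afterwards
transformedPosition = M @ position
n = np.linalg.inv(M).T @ normal
transformedNormal = n/np.linalg.norm(n)
```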
No functional change, just splitting them to two separate headers and
two separate tests. These will eventually become public SceneTools
APIs... once I figure out better naming.
Probably a leftover from when these dependencies were handled in a
much shittier way? For as long as I remember, enabling WITH_GL_INFO
always enabled WITH_GL and WITH_WINDOWLESSWHATEVERAPPLICATION
implicitly.
And prefer to use it over OpenEXR in most AnyImage{Converter,Importer}
tests, unless the test really needs something that only OpenEXR has
(such as the verbose output for threads, or configuration that needs to
be set on both export and import to make the import succeed).
Similar to sceneconverter's --profile option, measuring import and
conversion time. This also means that sceneconverter's --profile now
includes image import time, which wasn't done before.
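A sketch of how such per-phase timing can work (hypothetical helper, just to show the idea):

```python
import time

def profile(label, operation):
    # Measure a single phase (import, conversion, ...) in wall-clock
    # milliseconds, the way a --profile option would report it
    start = time.perf_counter()
    result = operation()
    print(f"{label}: {(time.perf_counter() - start)*1000.0:.1f} ms")
    return result

# e.g. data = profile("import (including images)",
#                     lambda: importer.open(path))
```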
Because this tool is often useful for debugging broken importers, it's
not wise to parse scenes just to gather material reference counts when
only --info-materials was specified. The new behavior is that reference
counts are listed only if the info for both the referencing and the
referenced data type is specified.
This also allowed me to remove the redundant `references` field from
the Info structures -- now just the arrays are used, and if an array is
empty, it covers both the case when refcounts were not calculated (in
which case the displayed info would be wrong) and the case when there's
no referenced data at all (in which case the reference count info would
be superfluous).
There was also an issue where the texture reference count was not
calculated when --info-materials was not set; that is now the desired
behavior (except that the invalid zero count is no longer printed at
all).
Because a texture can be referenced by multiple attributes in a single
material, the reference count might be higher than the actual material
count, which would be confusing. Since this is mainly for informational
purposes (finding unreferenced data in a file), it doesn't make sense to
spend time working on a proper material counter right now.
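To illustrate the over-counting (hypothetical data layout, not the actual material API):

```python
materials = [
    # A single material referencing the same texture via two attributes
    {"DiffuseTexture": 3, "SpecularTexture": 3},
]

textureReferences = {}
for material in materials:
    for attribute, texture in material.items():
        textureReferences[texture] = textureReferences.get(texture, 0) + 1

# Texture 3 is referenced twice even though only one material uses it
```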
Of course, discovering bugs RIGHT AFTER I merge to master. At least now
I can't have the OCD urge to squash that back to
7dc2dea45b and
dd9f4747b7, saving some time.
With the previous commits, existing plugin implementations built and
ran against the new code; however, the changes introduced several ABI
breaks, meaning that existing plugin binaries would crash. This forces
them to be recompiled to match the new version string.
Mostly just to avoid the return types changing to incompatible types in
the future, breaking existing code. The internals are currently not
fully ready to operate with 64-bit object IDs, especially the AsArray()
APIs -- those I will have to solve in the future somehow. Returning
64-bit values in the pairs would add four byte padding after basically
each value, which is way too wasteful for the common case.
The Into() APIs could eventually get 64-bit overloads though.
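The padding issue can be demonstrated with ctypes, which follows the platform C alignment rules (struct names hypothetical):

```python
import ctypes

class Pair64(ctypes.Structure):
    # A 64-bit object ID next to a 32-bit field value -- the 8-byte
    # alignment of the ID inserts 4 bytes of padding after the value
    _fields_ = [("mapping", ctypes.c_uint64), ("field", ctypes.c_uint32)]

class Pair32(ctypes.Structure):
    # With 32-bit IDs the pair is tightly packed
    _fields_ = [("mapping", ctypes.c_uint32), ("field", ctypes.c_uint32)]

print(ctypes.sizeof(Pair64))  # 16 on typical 64-bit platforms
print(ctypes.sizeof(Pair32))  # 8
```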
Not really important right now as the SceneData from these are used only
in internal deprecated APIs, but at least it might speed up the
children2D() / children3D() queries. Mainly done so I don't forget to do
this later when these APIs are published in the SceneTools library.
What's not done is the rather complex logic in the single-function
conversion utility, where a field could retain the implicit/ordered
flags in *some* scenarios. There are too many corner cases, so it's
better to be conservative and not preserve anything than to mark
something as ordered when that's no longer the case. The corner cases
are hopefully all checked for (and XFAIL'd) in the test.
Currently used by the per-object access APIs to make the lookup
constant- or logarithmic-time instead of linear, available for use by
external data consumers as well.
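In spirit (illustrative Python, not the actual API), the acceleration is either a hash map for constant time or a binary search over a sorted object mapping for logarithmic time:

```python
import bisect

mapping = [3, 7, 15, 42]         # object IDs, here already sorted
fields = ['a', 'b', 'c', 'd']    # corresponding per-object field values

# Constant-time lookup via a prebuilt hash map
byObject = dict(zip(mapping, fields))

# Logarithmic-time lookup via binary search on the sorted mapping
def find(objectId):
    i = bisect.bisect_left(mapping, objectId)
    if i < len(mapping) and mapping[i] == objectId:
        return fields[i]
    return None
```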
Now it's a field and its corresponding object mapping, instead of
field and "objects":
- Goes better with the concept that there's not really any materialized
"object" anywhere, just fields mapped to them.
- No more weird singular/plural difference between field() and
objects(), it's field() and mapping() now.
- The objectCount() that actually wasn't really object count is now a
mappingBound(), an upper bound for object IDs contained in the object
mapping views. Which is quite self-explanatory without having to
mention every time that the range may be sparse.
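I.e., with a sparse mapping (toy example, names mirroring the concept rather than the actual API):

```python
mapping = [5, 2, 40, 7]    # object mapping of some field, sparse

# mappingBound() is an upper bound for the object IDs, *not* an object
# count -- here the bound is 41 while there are only four entries
mappingBound = max(mapping) + 1
```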
This got originally added as some sort of a kludge to make it easy to go
to the parent transformation, assuming Parent and Transformation share
the same object mapping:
    parentTransformation = transformations[parents[i]]
But after some ACTUAL REAL WORLD use, I realized that there's often a
set of objects that have a Parent defined, and then another, completely
disjoint, set of objects that have a transformation (for example certain
nodes having no transformation at all because it's an identity). And so
this parent indirection is not only useless, but in fact an additional
complication. Let's say we make a map of the transformations, where
transformationMap[i] is a transformation for object i:
    transformationMap = {}
    for j in range(len(transformations)):
        transformationMap[transformationObjects[j]] = transformations[j]
Then, with *no* assumptions about shared object mapping, the indirection
would cause parent transformation retrieval to look like this:
    parentTransformation = transformationMap[parentObjects[parents[i]]]
While *without* the indirection, it'd be just
    parentTransformation = transformationMap[parents[i]]
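Putting the above together as runnable Python, with completely disjoint Parent and Transformation object mappings (sample data made up):

```python
# Objects 10 and 11 have a Parent; only objects 10 and 12 have a
# Transformation (object 11's transformation is an implicit identity)
parentObjects = [10, 11]
parents = [-1, 10]                  # object 11's parent is object 10
transformationObjects = [10, 12]
transformations = ['T10', 'T12']

transformationMap = {}
for j in range(len(transformations)):
    transformationMap[transformationObjects[j]] = transformations[j]

# Without the indirection, the parent transformation of the object at
# Parent field index i is a single map lookup
i = 1
parentTransformation = transformationMap[parents[i]]
```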