Instead of storing Animation::TrackViewStorage directly, it now contains
the view pointers, strides and size (where the size is shared by both
keys and values), with the non-pointer values packed into existing
padding. Together with reducing the keyframe count to 32 bits and
strides to 16 bits (which is consistent with MeshData and SceneData),
this reduces the size from 80 bytes to 48.
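A minimal standalone sketch of the packing idea (member names and the
exact field set are assumptions for illustration, not the actual
TrackView layout):

    #include <cstddef>
    #include <cstdint>

    /* Unpacked variant, roughly what two full strided views would
       store -- 40 bytes on a typical 64-bit ABI: */
    struct Unpacked {
        const void* keyData;
        std::size_t keyStride;
        const void* valueData;
        std::size_t valueStride;
        std::size_t size;
    };

    /* A shared 32-bit size and 16-bit strides fill what would otherwise
       be padding, so the same information fits into 24 bytes: */
    struct Packed {
        const void* keyData;
        const void* valueData;
        std::uint32_t size;       /* shared by keys and values */
        std::int16_t keyStride;
        std::int16_t valueStride;
    };

    static_assert(sizeof(Unpacked) == 40, "assuming a 64-bit ABI");
    static_assert(sizeof(Packed) == 24, "assuming a 64-bit ABI");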
Not using TrackViewStorage also means we can directly accept the
key/value views in constructors, significantly improving the usability.
This also makes it possible to add support for (constexpr) offset-only
track data and thus easy serializability, again similarly to
MeshAttributeData and SceneFieldData.
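Roughly, the offset-only idea looks like this (a hypothetical sketch
mirroring how MeshAttributeData does it, not the actual API): offsets
into an external data blob replace absolute pointers, so instances can
be compile-time constants and serialized as-is.

    #include <cstddef>
    #include <cstdint>

    /* Hypothetical offset-only representation -- offsets are resolved
       against an externally supplied data array at use time: */
    struct OffsetOnlyTrack {
        std::size_t keyOffset;
        std::size_t valueOffset;
        std::uint32_t size;       /* keyframe count */
        std::int16_t keyStride;
        std::int16_t valueStride;
    };

    /* No pointers inside, so this can be constexpr and written to disk
       verbatim: */
    constexpr OffsetOnlyTrack track{0, 1024, 64, 4, 12};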
This removes the remaining need to reinterpret_cast anything in this
class (the casts are moved to four places in TrackView instead), and the
constructor doesn't need to be templated anymore either.
It was there to allow creating const views, because the other one had
conflicting deduction for the V template parameter. The proper fix is to
use std::remove_const<V>::type instead.
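A minimal sketch of the pattern (hypothetical names, not the actual
TrackView signature):

    #include <type_traits>

    template<class K, class V> class TrackView {
        public:
            TrackView(K* keys, V* values);

            /* Lets TrackView<const K, const V> be constructed from
               TrackView<K, V>; for the mutable variant this collapses
               into a plain copy constructor, so there's no conflicting
               deduction for V and no constructor template is needed */
            TrackView(const TrackView<
                typename std::remove_const<K>::type,
                typename std::remove_const<V>::type>&);
    };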
Given the recent issues with vertex data over 4 GB in size, I feel this
limit might get hit soon as well. So far GPUs don't support vertex
counts larger than 32 bits, so storing them in a 32-bit number matches
the limitation there. Also, a vertex is usually at least 6 bytes (for
3-component positions quantized into 16-bit ints), thus a mesh hitting
this limit would be 24 GB in size, which fits only on the beefiest
contemporary GPUs.
However I imagine the limit might get raised eventually, for example to
support a use case of a huge sparse mesh where only sub-parts of it are
drawn, and the sub-parts have counts that fit into 32 bits.
Not everything, and especially not several hundred megabytes of
animation track data. This also prepares it for recording names of
custom animation track targets that were added in the previous commits.
Again, similarly to what's done for custom MeshAttribute and SceneField
values already. I'm bumping the importer interface version as adding new
virtual functions is a silent ABI breakage, but it's good to do in any
case as the AnimationTrackTarget enum was extended to 16 bits and the
values got shifted.
For consistency with what's already done for MeshAttribute and
SceneField. The ::Custom enum value is deprecated in favor of these; the
only actually breaking change is that the debug printer now subtracts
32768 for custom values (consistently with custom mesh attributes and
scene fields), while it printed the absolute enum value before.
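For example (assuming the new helpers mirror meshAttributeCustom(), with
the printed format being an assumption as well), a custom target created
from ID 42 now prints with the ID instead of the raw value 32810:

    #include <Corrade/Utility/Debug.h>
    #include <Magnum/Trade/AnimationData.h>

    using namespace Magnum;

    int main() {
        /* Hypothetical helper naming, analogous to
           meshAttributeCustom() */
        Trade::AnimationTrackTarget target =
            Trade::animationTrackTargetCustom(42);
        Debug{} << target; /* prints ...Custom(42) instead of 32810 */
    }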
The `Type` suffix was suggesting it'd be some C++ type, definitely not
values like Scaling3D or Translation2D, resulting in a significant
"brain autocompletion error" every time I used that type.
Unfortunately on AnimationData the trackTargetType() couldn't similarly
get renamed to trackTarget() as there's already trackTarget() that
contains the node ID the target points to, so it's trackTargetName()
instead. Renaming trackTarget() to trackTargetId() wasn't an option as
that would be inconsistent with everything else (TextureTools::image(),
MaterialAttribute::BaseColorTexture, SceneField::Mesh are all IDs but
they don't have an `Id` suffix); renaming to AnimationTrackTargetName
would keep it insanely long and wouldn't make it consistent either
(MeshAttribute, SceneField, MaterialAttribute are all referred to as
"names" yet they don't have a `Name` suffix).
So it has 32k values for custom targets, instead of just 127. This
makes it consistent with MeshAttribute, which also provides 32k values,
while SceneField has a whole 31-bit range to make it possible to store
arbitrary ECS identifiers as well.
Since this is an ABI break, I'm also shifting the values by 1 to have
zero used for an invalid value, consistently with SceneField,
MeshAttribute etc.
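The resulting value layout thus looks roughly like this (an illustrative
sketch, not copied from the actual header):

    #include <cstdint>

    /* Assumed 16-bit layout after the shift: zero reserved as invalid,
       builtin values starting at 1, the upper half holding custom IDs */
    enum class TrackTargetSketch: std::uint16_t {
        /* 0 reserved for an invalid/unset value */
        Translation2D = 1,
        Rotation2D,
        Scaling2D,
        /* ... */
        /* custom targets map ID i to 32768 + i, giving 32k values */
    };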
If it crashes or does some other shit, it'll cause the tool to not work
at all. If it fails to load due to some other issue, it'll spam the
output for no reason.
The code right below is querying the `log` option, which wasn't added.
Becomes a problem when Magnum is compiled without GL support, e.g. for a
custom WebGPU renderer.
Passes for SceneData but fails for MeshData due to 32-bit types used by
accident. The two also have vastly different calculations in the range
checks; that should be unified first.
Done this way only in the Phong shader; everywhere else it's just the
MAGNUM_ASSERT_GL_EXTENSION_SUPPORTED() macro. Some WIP code that I
forgot to clean up?
So it's possible to have light culling enabled on, say, 64 lights, but
with at most 3 applied per draw, allowing the shader compiler to unroll
the loop if it makes sense. This also better prepares for SSBO support,
where the total light count would be unbounded and the value thus
ignored, meaning it can be 0.
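A usage sketch of what this enables (the two-argument count and the
exact flag combination are my assumptions of how the API is meant to be
used, and constructing the shader needs an active GL context):

    #include <Magnum/Shaders/PhongGL.h>

    using namespace Magnum;

    /* 64 lights total available to the culling logic, but at most 3
       referenced by any single draw, keeping the per-draw loop short
       enough to unroll: */
    Shaders::PhongGL shader{Shaders::PhongGL::Configuration{}
        .setFlags(Shaders::PhongGL::Flag::UniformBuffers|
                  Shaders::PhongGL::Flag::LightCulling)
        .setLightCount(64, 3)};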
This prepares for SSBO support, where the total count is unbounded (and
the value thus ignored, so it can be 0).
Also regroup the doc paragraphs so it's clear what's related to UBO
usage and what applies to classic uniforms as well.