Heh, I forgot to run the full test suite after the changes in
1eb1eec271 and then the CI accidentally
had all rendering tests skipped due to missing plugins (which got fixed
in the previous commit, d1ee0b7f7e), so
that didn't catch it either. Sigh.
This is a -- long overdue -- breaking change to the rendering output of
this shader, finally adding support for lights that get darker over
distance. The attenuation equation is basically what's documented in
LightData, and the distinction between directional and point lights is
made using a newly added fourth component of position (which means
the old three-component setters are all deprecated). This allows the
shader code to be practically branchless, which I find to be nice.
This breaks basically all rendering output so all existing Phong and
MeshTools::compile() test outputs had to be regenerated.
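A minimal sketch of the branchless idea, assuming a simple inverse-square falloff -- the actual equation is the one documented in LightData, and the function name here is illustrative, not the shader's:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

/* The fourth component of the light position selects the type: w == 1
   is a point light at position.xyz, w == 0 a directional light along
   position.xyz. */
float lightIntensity(const Vec4& light, const Vec3& fragment) {
    /* Vector from the fragment to the light; for w == 0 the fragment
       position drops out and this is just the constant direction */
    Vec3 d{light.x - fragment.x*light.w,
           light.y - fragment.y*light.w,
           light.z - fragment.z*light.w};
    float distanceSq = d.x*d.x + d.y*d.y + d.z*d.z;
    /* w == 0 yields a constant 1.0, w == 1 an inverse-square falloff,
       all without a single branch */
    return 1.0f/(1.0f + light.w*distanceSq);
}
```

A point light two units away thus gets attenuated to 0.2, while a directional light stays at 1.0 no matter where the fragment is.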
This is a breaking change that changes the signature, sorry -- if you
were using concatenate() for mesh concatenation before, enjoy the new,
less strange signature; if you were using it for making the mesh owned
before (which was a strange and not very well thought out use case),
please use the recently added owned() instead. I thought about adding an
overload for backwards compatibility, but it would need to allocate to
work. This way the breakage ensures you actually switch to the right
API.
This also cleans up a lot of ugly code in the internals and resolves one
XFAIL in removeDuplicates().
The array size is always last, defaulting to 0. This makes it consistent
with the offset-only constructor and removes two unnecessary overloads.
It's a breaking change, but I don't think array attributes have many
users yet -- and better to do this now than later. In any case, sorry
about breaking your code.
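To illustrate the pattern (with a made-up type, not the actual Magnum signature): putting the array size last with a default of 0 lets one constructor cover both the scalar and array cases, so the separate overloads can go away.

```cpp
#include <cstddef>

/* Hypothetical illustration only. The scalar case AttributeData{f, o}
   and the array case AttributeData{f, o, n} both go through a single
   constructor, consistent with an offset-only constructor that also
   takes the required parameters first. */
struct AttributeData {
    explicit AttributeData(int format, std::size_t offset,
        std::size_t arraySize = 0):
        format{format}, offset{offset}, arraySize{arraySize} {}

    int format;
    std::size_t offset, arraySize;
};
```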
The old one is deprecated and will be removed in a future release.
Unfortunately, to avoid deprecation warnings, all uses of NoInit in the
Math library temporarily have to be spelled Magnum::NoInit. This will be
cleaned up when the deprecated alias is removed.
I also tried storing only a 32-bit index, with the base pointer stored
in ArrayEqual/ArrayHash instead, but that didn't really improve anything
much (probably because the allocated items are 8-byte aligned anyway) and
only made the code a lot less clear.
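For context, the rejected variant looked roughly like this -- a sketch with made-up names, showing stateful hash/equality functors that hold the base pointer so the set itself only stores 32-bit indices:

```cpp
#include <cstdint>
#include <functional>
#include <unordered_set>
#include <vector>

struct Item { std::uint64_t data; };

/* The functors carry the base pointer, so a key is just an index into
   the item array instead of a full pointer */
struct ArrayHash {
    const Item* base;
    std::size_t operator()(std::uint32_t i) const {
        return std::hash<std::uint64_t>{}(base[i].data);
    }
};
struct ArrayEqual {
    const Item* base;
    bool operator()(std::uint32_t a, std::uint32_t b) const {
        return base[a].data == base[b].data;
    }
};

std::size_t uniqueCount(const std::vector<Item>& items) {
    std::unordered_set<std::uint32_t, ArrayHash, ArrayEqual> set{
        items.size(), ArrayHash{items.data()}, ArrayEqual{items.data()}};
    for(std::uint32_t i = 0; i != std::uint32_t(items.size()); ++i)
        set.insert(i); /* duplicates compare equal through the base */
    return set.size();
}
```

The index halves the key size, but since the items are 8-byte aligned anyway, the saved bytes don't translate into a measurable win.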
One less executable to build, and we need to test more variants. The
original measured thing ("when to remove duplicates") is no longer really
relevant.
Interestingly enough the fuzzy variant isn't that much slower.
Something fishy going on in there, caused by the algorithm overwriting
the key values (and the map relying on them being immutable).
Interestingly enough, the fuzzy variant works on GCC's libstdc++ even
though the key data get changed on every entry; it fails only on libc++.
Basically using the same idea as with the discrete version -- having the
second dimension dynamic, together with restricting the implementation to
just Float and Double.
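The core idea can be sketched like this -- quantize each float onto an epsilon grid and then deduplicate the resulting integer vectors exactly, reusing the discrete logic. Names and the rounding scheme are illustrative, not the actual implementation:

```cpp
#include <cmath>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

/* Returns a remap table: remap[i] is the output index of input item i,
   with items closer than roughly epsilon mapping to the same index.
   The second dimension (data[i].size()) can be anything at runtime. */
std::vector<std::uint32_t> removeDuplicatesFuzzy(
    const std::vector<std::vector<float>>& data, float epsilon) {
    std::unordered_map<std::string, std::uint32_t> seen;
    std::vector<std::uint32_t> remap(data.size());
    std::uint32_t next = 0;
    for(std::size_t i = 0; i != data.size(); ++i) {
        /* Snap every component to an integer cell and use the packed
           cells as an exact lookup key */
        std::string key;
        for(float v: data[i]) {
            std::int32_t cell = std::int32_t(std::lround(v/epsilon));
            key.append(reinterpret_cast<const char*>(&cell), sizeof cell);
        }
        auto [it, inserted] = seen.emplace(key, next);
        if(inserted) ++next;
        remap[i] = it->second;
    }
    return remap;
}
```

Note this naive grid snapping misclassifies points that straddle a cell boundary; it's only meant to show why the fuzzy and discrete code paths can share most of the machinery.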
According to the SubdivideRemoveDuplicatesBenchmark, this makes the
implementation slightly slower. I presume this is due to how minmax and
offsets are calculated, which is quite cache-inefficient as it goes over
the same memory block multiple times. Added a TODO for later.