Heh, I forgot to run the full test suite after the changes in
1eb1eec271, and then the CI accidentally
had all rendering tests skipped due to missing plugins (which got fixed
in the previous commit, d1ee0b7f7e), so
that didn't catch it either. Sigh.
This is a long overdue breaking change to the rendering output of this
shader, finally adding support for lights that get darker over
distance. The attenuation equation is basically what's documented in
LightData, and the distinction between directional and point lights is
made using a newly added fourth component of the position (which means
the old three-component setters are all deprecated). This allows the
shader code to be practically branchless, which I find to be nice.
This breaks basically all rendering output, so all existing Phong and
MeshTools::compile() test outputs had to be regenerated.
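To illustrate, a minimal C++ sketch of the branchless trick -- not the
actual shader code, and the falloff curve below is a made-up
inverse-quadratic; the real equation is the one documented in
LightData:

    #include <cmath>

    /* w == 0.0f: directional light, position.xyz is a direction and
       there's no attenuation; w == 1.0f: point light, position.xyz is
       a world position, attenuated over distance */
    float attenuationFactor(const float lightPosition[4],
                            const float surfacePosition[3],
                            float range) {
        /* For w == 0 this is the direction itself, for w == 1 the
           vector from the surface to the light -- no branch needed */
        float toLight[3];
        for(int i = 0; i != 3; ++i)
            toLight[i] = lightPosition[i] -
                surfacePosition[i]*lightPosition[3];
        const float distance = std::sqrt(toLight[0]*toLight[0] +
            toLight[1]*toLight[1] + toLight[2]*toLight[2]);

        /* Hypothetical falloff; blending with 1.0 based on w keeps
           directional lights at full intensity, again without
           branching */
        const float attenuation =
            1.0f/(1.0f + (distance/range)*(distance/range));
        return (1.0f - lightPosition[3]) + lightPosition[3]*attenuation;
    }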
This is a breaking change that changes the signature, sorry -- if you
were using concatenate() for mesh concatenation before, enjoy the new,
less strange signature; if you were using it for making the mesh owned
before (which was a strange and not very well thought out use case),
please use the recently added owned() instead. I thought about adding
an overload for backwards compatibility, but it would need to allocate
to work. This way the breakage ensures you actually switch to the
right API.
This also cleans up a lot of ugly code in the internals and resolves one
XFAIL in removeDuplicates().
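For the migration itself, a hedged sketch -- the names are the ones
mentioned above, but the exact signatures and headers are from memory
and may differ:

    #include <utility>

    #include <Magnum/MeshTools/Concatenate.h>
    #include <Magnum/MeshTools/Reference.h>
    #include <Magnum/Trade/MeshData.h>

    using namespace Magnum;

    /* Mesh concatenation, the primary use case */
    Trade::MeshData merge(const Trade::MeshData& a,
                          const Trade::MeshData& b) {
        return MeshTools::concatenate({a, b});
    }

    /* Making a mesh owned, previously conflated into concatenate() */
    Trade::MeshData takeOwnership(Trade::MeshData&& mesh) {
        return MeshTools::owned(std::move(mesh));
    }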
The array size is always last, defaulting to 0. This makes it consistent
with the offset-only constructor and removes two unnecessary overloads.
It's a breaking change, but I don't think array attributes have many
users yet -- and better to do this now than later. In any case, sorry
about breaking your code.
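A hedged sketch of what this means in practice -- assuming a
MeshAttributeData-like constructor, with the exact parameter list
possibly differing:

    #include <Magnum/Trade/MeshData.h>

    using namespace Magnum;

    void example(
        const Containers::StridedArrayView1D<const void>& positionData,
        const Containers::StridedArrayView1D<const void>& weightData) {
        /* Non-array attribute -- the trailing array size defaults
           to 0 */
        Trade::MeshAttributeData positions{
            Trade::MeshAttribute::Position,
            VertexFormat::Vector3, positionData};

        /* Array attribute -- the size is now always the last
           argument */
        Trade::MeshAttributeData weights{Trade::meshAttributeCustom(0),
            VertexFormat::Float, weightData, 4};
    }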
One less executable to build, and we need to test more variants. The
thing the benchmark originally measured ("when to remove duplicates")
is no longer really relevant.
Interestingly enough, the fuzzy variant isn't that much slower.
Something fishy going on in there, caused by the algorithm overwriting
the key values (and the map relying on them being immutable).
Interestingly enough, the fuzzy variant works on GCC's libstdc++ even
though the key data get changed on every entry; it fails only on
libc++.
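For the record, a self-contained illustration of that hazard -- this
is not the Magnum code, just the same pattern with standard library
types:

    #include <iostream>
    #include <string_view>
    #include <unordered_map>

    int main() {
        char data[]{'a', 'b', 'c'};
        std::unordered_map<std::string_view, int> map;
        map.emplace(std::string_view{data, 3}, 0);

        /* Key contents changed behind the map's back -- the hash the
           map stored no longer matches the bytes the view points to */
        data[0] = 'x';

        /* Whether this finds the entry depends on standard library
           internals, which matches it passing on one implementation
           and failing on another */
        std::cout << map.count(std::string_view{data, 3}) << "\n";
    }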
Basically the same idea as with the discrete version -- having the
second dimension dynamic, and restricting the implementation to just
Float and Double.
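In other words -- and this is a hedged sketch, the exact cast helper
may differ -- the input is a 2D strided view of Floats where the
second dimension (the component count) is a runtime value, so
Vector2/Vector3/... all go through one non-template implementation:

    #include <Corrade/Containers/StridedArrayView.h>
    #include <Magnum/Magnum.h>
    #include <Magnum/Math/Vector3.h>

    using namespace Magnum;

    Containers::StridedArrayView2D<Float> asFloats(
            const Containers::StridedArrayView1D<Vector3>& positions) {
        /* An N x 3 view on the same memory; a Vector2 array would give
           an N x 2 view, with one implementation handling both at
           runtime */
        return Containers::arrayCast<2, Float>(positions);
    }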
According to the SubdivideRemoveDuplicatesBenchmark, this makes the
implementation slightly slower. I presume this is due to how minmax
and offsets are calculated, which is quite cache-inefficient as it
goes over the same memory block multiple times. Added a TODO for
later.
I spotted a potential bug -- and it clearly *is* a bug. The test doesn't
reduce the data size in the last dimension, leaving a duplicate (and
unused) item at the end.
I need a variant of removeDuplicatesFuzzyInPlace() that doesn't
allocate the output array but instead puts the data into a
pre-existing location. The discrete / non-fuzzy variant had this
already, but this one didn't.
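Something like the following, with the parameter lists being my guess,
modeled after the discrete variants:

    #include <cstddef>
    #include <utility>

    #include <Corrade/Containers/Array.h>
    #include <Corrade/Containers/StridedArrayView.h>
    #include <Magnum/Magnum.h>

    using namespace Magnum;

    /* Existing variant, allocates and returns the index mapping
       together with the unique item count */
    std::pair<Containers::Array<UnsignedInt>, std::size_t>
        removeDuplicatesFuzzyInPlace(
            const Containers::StridedArrayView2D<Float>& data,
            Float epsilon);

    /* Desired variant, puts the mapping into a pre-existing location
       and returns just the unique item count */
    std::size_t removeDuplicatesFuzzyInPlaceInto(
        const Containers::StridedArrayView2D<Float>& data,
        const Containers::StridedArrayView1D<UnsignedInt>& indices,
        Float epsilon);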
A breaking change, sorry, but I don't want to add yet another layer
of backwards compatibility on APIs that have been in master for just
a month or so. This is a test of how many people actually use these
APIs -- if nobody complains, great!
Conflating the fuzzy operation with the discrete one wasn't a good idea,
as people could be unintentionally using the (slower) fuzzy variant on
data that could be easily deduplicated using the discrete variant. One
such case is in the icosphere primitive, and I'm going to look at that
right now.
A bit unfortunate that the test needs ES3.2 and geometry shaders, but
I have nothing better right now. Not handling ObjectId yet -- for that
I need to implement instancing first (so yes, GCC/Clang will still
warn about an unhandled switch case).