It gives the same result, but something is not right when it comes to
negatively scaled meshes. Postponing the rest of the investigation for
later.
This is a breaking change that changes the signature, sorry -- if you
were using concatenate() for mesh concatenation before, enjoy the new,
less strange signature. If you were using it to make the mesh owned
(which was a strange and not very well thought out use case), please
use the recently added owned() instead. I thought about adding an
overload for backwards compatibility, but it would need to allocate to
work. This way, the breakage ensures you actually switch to the right
API.
This also cleans up a lot of ugly code in the internals and resolves one
XFAIL in removeDuplicates().
The array size is always last, defaulting to 0. This makes it consistent
with the offset-only constructor and removes two unnecessary overloads.
It's a breaking change, but I don't think array attributes have many
users yet -- and better to do this now than later. In any case, sorry
about breaking your code.
The old one is deprecated, and will be removed in a future release.
Unfortunately, to avoid deprecation warnings, all uses of NoInit in the
Math library temporarily have to be Magnum::NoInit. This will be
cleaned up when the deprecated alias is removed.
Otherwise, in case of SDL and GLFW, where we don't really know the DLL
name, it would create a file named `bin` instead of copying into a
newly created bin/ directory if it doesn't exist yet. That happens with
a static build, where there are no DLLs and thus
CMAKE_RUNTIME_OUTPUT_DIRECTORY never gets created.
Originally, \def_vk was used for enum values (similarly to how \def_gl
is used for "enum" values in GL), but I also need to reference actual
defines such as VK_VERSION_MINOR(), so I renamed it to \val_vk and
reused \def_vk for actual defines.
I also tried storing only a 32-bit index and having the base pointer
stored in ArrayEqual/ArrayHash as well, but that didn't really improve
anything much (probably because the allocated items are 8-byte aligned
anyway) and only made the code a lot less clear.
One less executable to build, and we need to test more variants. The
original measured thing ("when to remove duplicates") is no longer really
relevant.
Interestingly enough the fuzzy variant isn't that much slower.
Something fishy going on in there, caused by the algorithm overwriting
the key values (and the map relying on them being immutable).
Interestingly enough, the fuzzy variant works on GCC's libstdc++ even
though the key data get changed on every entry; it fails only on
libc++.
Basically using the same idea as with the discrete version -- having the
second dimension dynamic, together with restricting the implementation to
just Float and Double.
According to the SubdivideRemoveDuplicatesBenchmark, this makes the
implementation slightly slower. I presume this is due to how minmax and
offsets are calculated, which is quite cache-inefficient as it goes
over the same memory block multiple times. Added a TODO for later.