The perf cost is just too great for these to be always enabled. The only
place where the assertions are always kept is in the batch APIs -- there
it's assumed the function is called on large enough data to offset the
overhead, plus, since those often deal with large blocks of data, memory
safety is more important than the various FP drifts that were the usual
reason other assertions were firing.
The change won't make any difference in Release, but in Debug builds it
avoids calling into set() for every element. This function showed up
pretty high in profiler output for DebugTools::CompareImage; now it
doesn't anymore. In particular, for ShadersMeshVisualizerGLTest it
results in a ~10% reduction in run time, which is quite significant.
The
(size - 1)/8 + 1
expression is the same as
(size - 1)/8 + 8/8
which is the same as the following in all cases except when size == 0,
where the size - 1 underflows:
(size - 1 + 8)/8 == (size + 7)/8
To avoid needless surprises, I'm changing to what's used everywhere
else, including the BitArray, and also reusing the already-calculated
value for the _data array size.
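A quick illustration of the edge case (just a sketch, not code from the
actual change):

    std::size_t size = 0;
    std::size_t a = (size - 1)/8 + 1; /* underflows, giving a gigantic value */
    std::size_t b = (size + 7)/8;     /* 0, as one would expect */

For any non-zero size the two expressions give the same rounded-up byte
count, e.g. 2 for sizes 9 to 16.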
The article this API was originally based on assumes a scenario which
just *doesn't* match usual practices here, giving wrong results. Too
bad I didn't spend more time questioning the proof there and just
blindly assumed it was correct because everyone said so.
Won't be typing all that reasoning again in the commit message, see the
changelog entry and the comment in the test for details.
For generic code, which would otherwise have to invent some SFINAE
"use castInto() if the types are different and Utility::copy()
otherwise" nastiness in every such case. That's just annoying.
Counterparts to the sRGB-converting APIs, for when one doesn't want to
perform sRGB conversion. Or for "wrong sRGB" workflows. Named like this
and not just `fromRgbInt()` to make the calls at least a bit suspicious.
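For illustration, assuming the new names follow the existing
fromSrgbInt() pattern:

    Color3 a = Color3::fromSrgbInt(0x33b27f);      /* sRGB-decoded to linear */
    Color3 b = Color3::fromLinearRgbInt(0x33b27f); /* taken as-is, no conversion */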
I should probably go over all Math tests and include Magnum.h there. No
idea why I did it like this back in 2010, maybe to keep the Math
library "independent" of the rest of Magnum? That can be done even
without having to suffer like this...
The result of e.g. -15.0_degf is not Deg, but rather
Math::Unit<Math::Deg, Float>, and those didn't have a corresponding
TweakableParser defined. Now they do.
Funny how I didn't run into this until now.
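A minimal illustration of the problem, presumably because the arithmetic
operators are defined on the Unit base class:

    using namespace Math::Literals;

    Deg a = 15.0_degf;   /* the literal itself is a Deg */
    auto b = -15.0_degf; /* deduced as Math::Unit<Math::Deg, Float>, which
                            TweakableParser previously knew nothing about */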
I spent a significant amount of time reinventing the wheel, i.e.
figuring out how to use a cross product to calculate the distance of a
point from a line. Only to subsequently have the breakthrough discovery
that Distance::linePoint() already does exactly that.
The distance APIs and the 2D and 3D cross products are now linked
together, the math snippet is clarified, and both the 2D and 3D
distances use the same equation, saving one unnecessary vector
subtraction. The
equivalence of the two equations is listed directly on that Wolfram
link.
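For reference, the relation in question, roughly (the variable names are
made up, and this isn't necessarily the exact factoring the code ended
up with):

    /* Distance of point p from an infinite line through a and b; in 2D,
       cross() gives a scalar, so its absolute value is taken instead of
       the length */
    Float d = Math::cross(b - a, p - a).length()/(b - a).length();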
The line segment direction length is crucial for knowing where the
intersection happens, so normalizing it beforehand will have unintended
consequences. OTOH, in the case of the second (non-segment) variant, the
second parameter can, but doesn't have to, be normalized.
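In other words, with e.g. planeLine(), which follows the same
convention, the returned value is meant to be used like this (a sketch,
not actual test code):

    /* The intersection is at p + t*r, so t being in [0, 1] means it lies
       within the segment -- which only holds if r keeps its original,
       non-normalized length */
    Float t = Math::Intersection::planeLine(plane, p, r);
    if(t >= 0.0f && t <= 1.0f) {
        Vector3 intersection = p + t*r;
    }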
It was implemented only for the Half type and not the others, and I just
felt like using it on a vector now, 12 years after the Vector class was
first added.
Apart from returning const T instead of T, it kinda worked, but in the
case of floating-point vectors it tried to operate with
`std::pair<std::size_t, const T>` internally and failed miserably.
Because it somewhat confusingly may have implied that it's really
composed of 8-bit bools, and not bits. The same reasoning was used to
pick the name for Corrade's Containers::BitArray.
Backwards compatibility aliases are in place as usual; however, the
internal BoolVectorConverter is now BitVectorConverter and there
unfortunately cannot be any backwards compatibility for that. This breaks only
GLM and Eigen integration in the magnum-integration repo, which I'm
fixing immediately. I don't expect any user code to use this internal
helper. For regular vectors maybe, for this one definitely not.
Similar to the change done in Corrade, see the commit for details:
878624ac36
Wow, this is probably the most backwards-compatibility code I've ever
written. Can't wait until I can drop all that.
Because yes, of course, dealing with JSON just isn't possible
without having to make decisions between insufficiently precise
integers and unnecessarily overprecise doubles.
This makes it possible to conveniently do things like
Containers::StridedArrayView1D<Float> array = …;
Vector4 vector{NoInit};
Utility::copy(array, vector); // or the other way around
which is especially useful together with the new JSON classes. In some
cases this means a function is no longer constexpr, but those weren't
constexpr because it was useful for anything, only because it was
possible. So this breakage shouldn't do any harm, I think.
Certain Clang-based IDEs (CLion) "emulate" a compiler by inheriting all
its defines, which means one gets __clang__ defined but also __GNUC__
set to 11 or whatever, breaking all these assumptions.
While branching on a compiler is rather common, checking a particular
compiler version should be needed only rarely. Thus minimize use of such
macros to make them easier to grep for.
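A sketch of the kind of check this is about (the concrete version number
is made up):

    /* Clang has to be excluded explicitly, since a Clang-based IDE may
       define __GNUC__ as well, with the version inherited from the GCC it
       pretends to be */
    #if defined(__GNUC__) && !defined(__clang__) && __GNUC__ >= 11
    /* GCC-specific workaround */
    #endif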
It limits the support to CMake 3.12+, but it's much less verbose and I
don't expect people to use ancient CMake versions with IDEs like Xcode
or VS anyway, so this should be fine.