Such as being able to print the contents as hexadecimal. This is done just
for Vector and not for the other math types, as those are all
floating-point, where hexadecimal printing doesn't make sense. It's not
done for Nanoseconds either, which, while integer-based, represents a time
value where hexadecimal printing makes no sense.
Got a suggestion that lerp() could be optimized to use one arithmetic
operation less. While valid in certain cases, it would break when the
endpoints have wildly different magnitudes. Unfortunately that was only my
personal knowledge, not backed by regression tests. Now it is.
The change in 87c7eea1e2 caused a breakage with old FindMagnum modules,
which use if(MAGNUM_TARGET_GLES3) to decide whether they should link to
libGLES. The variable is now defined only for deprecated builds, so
non-deprecated builds with an old FindMagnum will fail.
Originally (2012? 2013?) I expected that there would eventually be
OpenGL ES 4.0, thus it made sense to differentiate between ES2, ES3 and
some yet-unknown future ES. But as ES4 became increasingly unlikely to
happen, the internal code treated MAGNUM_TARGET_GLES3 as a simple inverse
of MAGNUM_TARGET_GLES2, and even then only in a very few places, which
just added confusion.
Thus it's now deprecated and defined as a simple inverse of
MAGNUM_TARGET_GLES2 on MAGNUM_TARGET_GLES builds, and none of the
internal code uses it anymore.
Without this it would fall back to physical DPI, i.e. a value derived from
the display dimensions. Which sometimes *is* correct, but often isn't what
the users want -- it's common to have a high DPI screen on a laptop
(such as a full HD panel on a 15.6") but still use only 100% scaling even
though it's a bit tiny. And often it's completely incorrect, depending
on how accidentally misconfigured the system is, and the users won't even
know, because almost nothing uses the physical DPI value by default.
This I like: a notification sufficiently in advance that a certain version
of an image is deprecated. Not the whole OS version altogether, not the
platform as a whole.
Like the Deg / Rad classes, these are for a strongly-typed representation
of time. Because the current options, either an untyped and imprecise
Float or the insanely-hard-to-use and bloated std::chrono::nanoseconds,
were just too crappy.
This is just the types alone, the corresponding typedefs in the root
namespace, and conversion from std::chrono. Using these in the Animation
library, in Timeline, in DebugTools::FrameProfiler, GL::TimeQuery etc.
will gradually follow.
Breaking change, but the new behavior makes a lot more sense. Hopefully
the breakage isn't that significant -- I don't assume people regularly
worked with angles this way.