Got a suggestion that lerp() could be optimized to use one arithmetic
operation less. While valid in certain cases, it would break when the
endpoints have wildly different magnitudes. Unfortunately that was only
my personal knowledge, not backed by regression tests. Now it is.
The change in 87c7eea1e2 caused a breakage
with old FindMagnum modules, which use if(MAGNUM_TARGET_GLES3) to decide
if they should link to libGLES.
It's now defined again, but still only for deprecated builds, so
non-deprecated builds with an old FindMagnum will fail.
Originally (2012? 2013?) I expected that there would eventually be
OpenGL ES 4.0, so it made sense to differentiate between ES2, ES3 and
some yet-unknown future ES. But as ES4 became increasingly unlikely to
happen, the internal code treated MAGNUM_TARGET_GLES3 as a simple
inverse of MAGNUM_TARGET_GLES2, and only in a very few places, which
only added confusion.
Thus it's now deprecated and defined as a simple inverse of
MAGNUM_TARGET_GLES2 on MAGNUM_TARGET_GLES builds, and none of the
internal code uses it anymore.
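A sketch of what the deprecated definition could look like, assuming
MAGNUM_BUILD_DEPRECATED is the macro guarding deprecated builds; not the
actual Magnum configuration header:

```cpp
/* Simulating the configure-time defines; in a real build these come from
   the generated configuration header */
#define MAGNUM_TARGET_GLES
#define MAGNUM_BUILD_DEPRECATED
/* MAGNUM_TARGET_GLES2 deliberately not defined, i.e. an ES3+ target */

/* The deprecated macro is then just an inverse of MAGNUM_TARGET_GLES2 on
   ES builds, and only when deprecated features are enabled */
#if defined(MAGNUM_TARGET_GLES) && defined(MAGNUM_BUILD_DEPRECATED) && \
    !defined(MAGNUM_TARGET_GLES2)
#define MAGNUM_TARGET_GLES3
#endif

#ifdef MAGNUM_TARGET_GLES3
constexpr bool targetGles3 = true;
#else
constexpr bool targetGles3 = false;
#endif
```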
Without this it would fall back to physical DPI, i.e. DPI calculated
from the display dimensions. That sometimes *is* correct, but often
isn't what users want -- it's common to have a high DPI screen on a
laptop (such as a Full HD panel on a 15.6") but still use only 100%
scaling even though everything is a bit tiny. And often it's completely
incorrect, depending on how (accidentally) misconfigured the system is,
and users won't even know, because almost nothing uses the physical DPI
value by default.
This I like: a notification, sufficiently in advance, that a certain
version of an image is deprecated. Not the whole OS version altogether,
not the platform as a whole.
Like the Deg / Rad classes, these are for a strongly-typed
representation of time. Because the current options, either an untyped
and imprecise Float or the insanely-hard-to-use and bloated
std::chrono::nanoseconds, were just too crappy.
This is just the types alone, corresponding typedefs in the root
namespace, and conversion from std::chrono. Using these in the Animation
library, in Timeline, in DebugTools::FrameProfiler, GL::TimeQuery etc.
will gradually follow.
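A rough sketch of how such a strongly-typed nanosecond type with a
std::chrono conversion could look; the names and exact API are
assumptions, not Magnum's actual interface:

```cpp
#include <chrono>
#include <cstdint>

/* Strongly-typed time, analogous to Deg / Rad: the representation is a
   single 64-bit nanosecond count, explicit to convert out of */
class Nanoseconds {
    public:
        constexpr explicit Nanoseconds(std::int64_t value): _value{value} {}

        /* Implicit conversion from any std::chrono duration, going through
           a 64-bit nanosecond count */
        template<class Rep, class Period> constexpr /*implicit*/ Nanoseconds(
            std::chrono::duration<Rep, Period> value): _value{
                std::chrono::duration_cast<std::chrono::nanoseconds>(value)
                    .count()} {}

        constexpr explicit operator std::int64_t() const { return _value; }

    private:
        std::int64_t _value;
};

/* A literal, so one can write 100_nsec without going through chrono */
constexpr Nanoseconds operator""_nsec(unsigned long long value) {
    return Nanoseconds{std::int64_t(value)};
}
```

The templated constructor makes coarser durations such as
std::chrono::milliseconds convert in a single step, which a plain
std::chrono::nanoseconds parameter wouldn't allow, as that would need
two user-defined conversions in sequence.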
Breaking change, but the new behavior makes a lot more sense. Hopefully
not a significant breakage -- I don't expect people regularly worked
with angles this way.
Basically what Vector already has, needed for the integer representation
of time, i.e. so that
1.0_sec*1.25
gives back 1.25_sec, where the internal representation is a 64-bit
nanosecond value.
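A sketch of how the mixed multiplication could behave on top of the
64-bit nanosecond representation; names and rounding behavior are
illustrative assumptions:

```cpp
#include <cstdint>

/* The internal representation is a 64-bit nanosecond count */
class Nanoseconds {
    public:
        constexpr explicit Nanoseconds(std::int64_t value): _value{value} {}
        constexpr explicit operator std::int64_t() const { return _value; }

        constexpr bool operator==(Nanoseconds other) const {
            return _value == other._value;
        }

    private:
        std::int64_t _value;
};

/* Multiplication with a floating-point scalar, like Vector already has:
   the integer representation gets scaled and converted back */
constexpr Nanoseconds operator*(Nanoseconds a, double scale) {
    return Nanoseconds{std::int64_t(double(std::int64_t(a))*scale)};
}

/* The _sec literal stores seconds as a 64-bit nanosecond count */
constexpr Nanoseconds operator""_sec(long double value) {
    return Nanoseconds{std::int64_t(value*1000000000.0l)};
}
```

With this, 1.0_sec*1.25 indeed compares equal to 1.25_sec, both being
1250000000 nanoseconds internally.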
Just in case the codegen was a bit different between the two. Also this
makes it more likely that the lines actually tested show up in code
coverage.
To allow people to cherry-pick just a subset of them if other code
defines literals that may conflict. I first did that the same way as the
STL (with both namespaces inline), only to subsequently discover the
horror that all literals are then implicitly available in the enclosing
Math namespace, thus preventing no conflicts at all. So the Literals
namespace isn't inline, only the inner ones are.
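A sketch of the resulting namespace structure, with an illustrative
TimeLiterals inner namespace and a made-up _nsec literal; only the inner
namespace is inline:

```cpp
namespace Math { namespace Literals {

/* Inline, so `using namespace Math::Literals;` pulls this in, but one can
   also cherry-pick just `using namespace Math::Literals::TimeLiterals;` */
inline namespace TimeLiterals {
    constexpr long long operator""_nsec(unsigned long long value) {
        return static_cast<long long>(value);
    }
}

/* Had Literals itself been inline as well (the STL way), _nsec would be
   implicitly visible in the enclosing Math namespace already, conflicting
   with anything else named _nsec there */

}}
```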
This is also in preparation for introducing
Literals::ConstexprColorLiterals, which would provide a constexpr
variant of the _srgbf literals at the expense of having a large LUT in a
header file.