It takes too long in debug mode, causing timeouts on the Windows CI due
to older MSVC having truly amazing virtually unmatched debug build
performance.
I made the binary data use 16-bit integers instead of 32-bit to make
them occupy less space and forgot to update it here. Also they're in a
different path now.
Just the dumbest possible idea I had, and it compares surprisingly well
in both efficiency (~comparable to stb_rect_pack) and speed
(significantly faster than stb_rect_pack with tons of tiny images,
slower with larger ones -- would probably need to SIMD Math::max() and
such, haha). It's the very first implementation without any additional
improvements I have in mind, so it'll likely improve further.
Includes a benchmark with a bunch of "datasets" extracted from both
fonts and large glTF models. The stb_rect_pack file isn't committed as
it's not useful apart from this single benchmark; to run it, put the
file into AtlasTestFiles/ and recompile.
I mean, yes, it was already SIGNIFICANTLY better than the atlas() that
took a vector and returned one as well, but still. One of the usual use
cases is that there's an array of structs containing both sizes and
offsets, and the offsets get written back.
The original versions are deprecated as I really don't see any
convenient use case for these, especially given they return a pair and
not just an array.
It won't contain just font metrics anymore. Also don't require the
struct to be zero-initialized if opening fails -- simply allow the
plugins to return garbage in that case and save the values only if
opening actually succeeded.
Strictly speaking this isn't an ABI change as the return value isn't
part of the function signature and the struct is still the same, so the
plugin interface version isn't bumped for this change.
All std::string arguments are now a StringView, and what returned a
std::pair now returns a Pair. STL compatibility headers are included on
deprecated builds to ease porting, as usual.
The only *really* breaking changes are in the internals, where an
ArrayView<const char32_t> is used instead of std::u32string, which is in
line with the change done in Utility::Unicode::utf32(); and a Triple is
returned instead of a std::tuple. Behaviorally nothing changed except
that fillGlyphCache() now asserts if the input string contains invalid
UTF-8 (which is also in line with the change done in Utility::Unicode).
This one was spectacular -- ALL uses of it had also #include <tuple> in
order to std::tie() the result into separate major & minor variables. So
much compile time overhead for so little.
Despite what the standard tries to say. I bet a large portion of
<type_traits> is impossible to implement without it, which is why all
STL implementations define it there already.
There's Containers::Triple now for this instead. Printing a message to
Error in case of a failure might have made sense back in 2010, but it
absolutely doesn't anymore, so the result is additionally wrapped in an
Optional.
Also looks like the actual use without "convenient" std::tie() is a lot
less verbose. Haha.
Using Containers::Pair allows me to make certain Range APIs constexpr
that weren't possible in C++11 before. Compared to std::pair it's also
trivially copyable, which is a nice property when storing it in various
growable containers.
As usual, the <Corrade/Containers/PairStl.h> include is in place to help
people with porting, although in many cases this change will be
breaking. I had to do it at some point anyway, so the earlier it is the
better.
Instead of defining the same types and risking subtle differences
between them. Typedefs that don't exist in Magnum.h (such as integer
quaternions) and typedefs that differ from Magnum.h (such as
using Vector<4, T> instead of Vector4<T>) stay as typedefs, to make it
clear what *deliberately* differs and what doesn't.
Typedefs that didn't conflict with the template types in Math (such as
Vector3us) are removed entirely, as the typedef from Magnum.h can be
used directly in that case, without any `using`.
I did this back in 2010 because it "felt like the right thing to do",
given that all of Magnum depended on Math and not vice versa. But,
strictly speaking, Math already uses typedefs from Magnum/Types.h, so it
can just as well bring in the Corrade namespace, and the
Debug/Warning/Error names too. Having to type out Corrade:: in all these
places was really just a waste of time, a weird inconsistency in the
docs and an extra roadblock for whoever might want to contribute
anything there.
I.e., when the page is zoomed. Until now it was only triggered when the
actual canvas was resized, which happens with the zoom only if the
canvas fills the whole window.
This makes them consistent with window and framebuffer size queries,
which are also not cached but queried every time. It fixes a case where a
global UI scaling change in the OS triggered a viewport event but the
event didn't actually have the DPI scaling value updated.
It doesn't however handle actual explicit DPI change events yet, that's
another nightmare altogether.