This is going to get called from fillGlyphCache(), which takes a string,
and thus should do better than one virtual call per character. The
single-character variant still stays, as it's a nice convenience API.
Eventually glyphSize() will get a similar treatment as well.
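For illustration, a rough sketch of the batched call (the glyph cache
setup shown here is illustrative, not necessarily the exact
constructor):

    /* one virtual call for the whole string instead of one per
       character */
    Text::GlyphCacheGL cache{PixelFormat::R8Unorm, {256, 256}};
    font->fillGlyphCache(cache, "abcdefghijklmnopqrstuvwxyz0123456789");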
It now delegates to Assimp for ASCII files, like StlImporter, which
does the same but was already marked as having minor caveats. Better
than nothing, and it's not my problem that Assimp leaks and crashes like
mad on broken files.
This all looked obviously correct so I never questioned it, but the spec
itself has the order mixed up for an inexplicable reason, so it doesn't
match between MouseEvent and MouseMoveEvent.
This is what both SDL and GLFW do, so it makes sense to be consistent.
Without it, it's also impossible to handle keyboard shortcuts such as
Ctrl-C when editing text, which is rather silly.
Originally (2012? 2013?) I expected that there would eventually be an
OpenGL ES 4.0, so it made sense to differentiate between ES2, ES3 and
some yet-unknown future ES. But as ES4 became increasingly unlikely to
happen, the internal code ended up treating MAGNUM_TARGET_GLES3 as a
simple inverse of MAGNUM_TARGET_GLES2, and used it in only a very few
places, which only added confusion.
Thus it's now deprecated and defined as a simple inverse of
MAGNUM_TARGET_GLES2 on MAGNUM_TARGET_GLES builds, and none of the
internal code uses it anymore.
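In practice, code that previously checked MAGNUM_TARGET_GLES3 should
now spell out the equivalent condition directly:

    #if defined(MAGNUM_TARGET_GLES) && !defined(MAGNUM_TARGET_GLES2)
    /* OpenGL ES 3+ specific code */
    #endif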
Like the Deg / Rad classes, these are for a strongly-typed
representation of time, because the current options -- either an untyped
and imprecise Float, or the insanely-hard-to-use and bloated
std::chrono::nanoseconds -- were just too crappy.
This is just the types alone, the corresponding typedefs in the root
namespace, and conversion from std::chrono. Using these in the Animation
library, Timeline, DebugTools::FrameProfiler, GL::TimeQuery etc. will
gradually follow.
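A quick usage sketch, mirroring the Deg / Rad literals (the literal
suffix spellings here are an assumption, shown purely for
illustration):

    using namespace Math::Literals;

    /* strongly typed, no unit confusion like with a bare Float */
    Nanoseconds frame = 16.667_msec;
    /* explicit conversion from std::chrono types */
    Nanoseconds wait{std::chrono::milliseconds{100}};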
Breaking change, but the new behavior makes a lot more sense. Hopefully
the breakage isn't that significant -- I don't expect people regularly
worked with angles this way.
To allow people to cherry-pick just a subset of them if other code
defines literals that may conflict. I first did it the same way as the
STL (with both namespaces inline), only to subsequently discover the
horror that all literals are then implicitly available in the enclosing
Math namespace, preventing no conflicts at all. So the Literals
namespace isn't getting inline, only the inner ones.
This is also in preparation for the introduction of
Literals::ConstexprColorLiterals, which would provide a constexpr
variant of the _srgbf literals at the expense of having a large LUT in a
header file.
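The resulting layout is then roughly the following (inner namespace
names are illustrative):

    namespace Math { namespace Literals {
        /* Literals itself deliberately not inline */
        inline namespace AngleLiterals {
            constexpr Deg<Float> operator"" _degf(long double value);
            /* ... */
        }
        inline namespace ColorLiterals {
            /* ... */
        }
    }}

    using namespace Math::Literals;                /* everything */
    using namespace Math::Literals::AngleLiterals; /* just the angles */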
Whoops, this got silently omitted during the massive refactor in
f7a6d79aa0 (Nov 23). Buffers *do* get
destroyed, VAOs do not. If you saw sudden GPU memory usage issues after
a recent Magnum update, this was why.
To be used with the recently added MaterialTools::removeDuplicates() for
example, or internally by other upcoming scene tools such as importer
filtering.
This "type erased std::vector member" was done in the times before
growable arrays were a thing, and kind of made sense to go the extra way
to avoid a <vector> include in the header. Except that it made rather
unportable assumptions about std::vector size, which weren't correct for
example with _GLIBXX_ASSERTIONS set.
But what was *completely* unacceptable was that the vector was of one or
the other type depending on the GL feature set of the current context.
Apart from adding a lot of extra *nasty* logic to construction, moves
and destruction, this approach led to the mesh instance asking the
current context on destruction just to know whether to call the
destructor of std::vector<Buffer> or std::vector<AttributeLayout>.
Ugh.
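Roughly, the old logic on destruction was something like this (a
simplified sketch of the internals, not the actual code):

    Mesh::~Mesh() {
        /* have to ask the current context just to know what type lives
           in the type-erased storage */
        if(Context::current().isExtensionSupported<Extensions::ARB::vertex_array_object>())
            reinterpret_cast<std::vector<Buffer>*>(_storage)->~vector();
        else
            reinterpret_cast<std::vector<AttributeLayout>*>(_storage)->~vector();
    }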
Now it's a regular Array member (which isn't *that* heavy as to need
such type-erased treatment, although it eventually could be), and thanks
to the AttributeLayout packing improvements in previous commits it's no
longer prohibitively wasteful to abuse AttributeLayout instances for
storing just the owning Buffer instances alone -- doing so now wastes
only 16 bytes per buffer, compared to 36 before. Given there's usually
just one or two vertex buffers per mesh (compared to attributes, of
which there are usually 4 or more), it should be fine.
The MeshGLTest::destructMovedOutInstance() test added a few commits back
also no longer asserts when no GL context is present.
Want to construct them without a GL context present, and Optional is too
wasteful. Also adding it to the AbstractGlyphCache base, where it
additionally skips allocating the internal PIMPL state, because that's
not going to get used for anything in a NoCreate'd instance anyway.
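The usual NoCreate pattern thus now works for these as well (the
concrete constructor shown afterwards is illustrative):

    Text::GlyphCacheGL cache{NoCreate};

    /* ... later, once a GL context exists ... */
    cache = Text::GlyphCacheGL{PixelFormat::R8Unorm, {256, 256}};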
The test added in the previous commit now passes. Besides the behavior
being different for single- and multi-bind calls, an additional
"interesting" behavior is that glBindBufferBase() / glBindBufferRange()
apparently also create the GL object if it doesn't exist yet (in
contrast to the multi-bind APIs, which *require* the GL objects to be
created beforehand).
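In raw GL terms, the difference is the following:

    GLuint a, b;
    glGenBuffers(1, &a); /* names reserved, objects not created yet */
    glGenBuffers(1, &b);

    /* multi-bind requires the objects to exist already, so this fails
       with GL_INVALID_OPERATION */
    glBindBuffersBase(GL_UNIFORM_BUFFER, 0, 1, &a);

    /* the single-bind API however creates the object as a side effect,
       same as a glBindBuffer() would */
    glBindBufferBase(GL_UNIFORM_BUFFER, 1, b);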
Especially given that nullptr causes an assert. All call sites basically
ended up passing &font anyway, and all that extra annoyance just doesn't
make sense.
Given this API is still relatively recent, I'm not bothering with
backwards compatibility.