Based on the actual text direction (either explicitly set or detected),
these resolve to either *Left or *Right. For the Text::Renderer this
happens automatically inside (and there's no way to actually set the
direction from outside due to the API being ancient and limited); for
the align*() utils the alignment has to be explicitly resolved using a
new alignmentForDirection() utility.
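A sketch of the explicit resolution on the caller side -- the exact
argument list of alignmentForDirection() and the *Begin value name are
assumptions here:

    /* Resolve a direction-dependent alignment value to a concrete
       *Left / *Right one based on the shape direction */
    Text::Alignment resolved = Text::alignmentForDirection(
        Text::Alignment::MiddleBegin, /* hypothetical unresolved value */
        Text::LayoutDirection::HorizontalTopToBottom,
        Text::ShapeDirection::RightToLeft); /* -> MiddleRight */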
This all looked obviously correct so I never questioned it, but the spec
itself has the order mixed up for an inexplicable reason, so it doesn't
match between MouseEvent and MouseMoveEvent.
This is what both SDL and GLFW do, so it makes sense to be consistent.
Without it, it's also impossible to handle keyboard shortcuts such as
Ctrl-C when editing text, which is rather silly.
Originally (2012? 2013?) I expected that there would eventually be
OpenGL ES 4.0, thus it made sense to differentiate between ES2, ES3 and
something else ES yet unknown. But as ES4 was increasingly unlikely to
happen, the internal code treated MAGNUM_TARGET_GLES3 as a simple
inverse of MAGNUM_TARGET_GLES2, and did so in only a very few places,
which just added confusion.
Thus it's now deprecated and defined as a simple inverse of
MAGNUM_TARGET_GLES2 on MAGNUM_TARGET_GLES builds, and none of the
internal code uses it anymore.
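In other words, the deprecated define now effectively boils down to this
(a sketch, not the literal header code):

    #if defined(MAGNUM_TARGET_GLES) && !defined(MAGNUM_TARGET_GLES2)
    /* Deprecated, use !defined(MAGNUM_TARGET_GLES2) instead */
    #define MAGNUM_TARGET_GLES3
    #endif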
Like the Deg / Rad classes, these are for strongly-typed representation
of time. Because the current way, either with untyped and imprecise
Float, or the insanely-hard-to-use and bloated std::chrono::nanoseconds,
was just too crappy.
This is just the types alone, corresponding typedefs in the root
namespace, and conversion from std::chrono. Using these in the Animation
library, in Timeline, in DebugTools::FrameProfiler, GL::TimeQuery etc.,
will eventually and gradually follow.
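A sketch of the intended use; the typedef and literal names are my
assumption of the final spelling:

    #include <chrono>

    using namespace Magnum;
    using namespace Magnum::Math::Literals;

    /* Strongly typed, no accidental second/millisecond mixups */
    Nanoseconds frame = 16.667_msec;

    /* Conversion from std::chrono */
    Nanoseconds delay{std::chrono::milliseconds{250}};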
Breaking change, but the new behavior makes a lot more sense. Hopefully
not that significant breakage -- I don't assume people regularly worked
with angles this way.
Whoops, this got silently omitted during the massive refactor in
f7a6d79aa0 (Nov 23). Buffers *do* get
destroyed, VAOs don't. If you got sudden GPU memory usage issues after a
recent Magnum update, this was why.
This "type erased std::vector member" was done in the times before
growable arrays were a thing, and kind of made sense to go the extra way
to avoid a <vector> include in the header. Except that it made rather
unportable assumptions about std::vector size, which weren't correct for
example with _GLIBXX_ASSERTIONS set.
But what was *completely* unacceptable was that the vector was of one or
another type depending on the GL feature set present in the current
context. Apart from adding a lot of extra *nasty* logic to construction,
moves and destruction, this approach led to the mesh instance asking the
current context on destruction in order to know whether a destructor
should be called on std::vector<Buffer> or std::vector<AttributeLayout>.
Ugh.
Now it's a regular Array member (which isn't *that* heavy as to need
such type-erased treatment, although it eventually could be), and thanks
to the AttributeLayout packing improvements in previous commits it's no
longer prohibitively wasteful to abuse AttributeLayout instances to
store owning Buffer instances alone -- doing so now wastes only 16
bytes per buffer, compared to 36 before. Given there's usually just one
or two vertex buffers per mesh (compared to attributes, of which there
are usually 4 or more), it should be fine.
The MeshGLTest::destructMovedOutInstance() test added a few commits back
also no longer asserts when no GL context is present.
Want to construct them without a GL context present, and Optional is too
wasteful. Also adding it to the AbstractGlyphCache base, where it also
skips allocating the internal PIMPL state, because that's not going to
get used for anything in a NoCreate'd instance anyway.
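A sketch of the pattern this enables; the concrete cache type and
constructor arguments are assumptions:

    /* Can be a member constructed before any GL context exists */
    Text::GlyphCache _cache{NoCreate};

    /* ... later, once the context is ready, replaced by a live one */
    _cache = Text::GlyphCache{Vector2i{1024}};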
The test added in the previous commit now passes. Besides the behavior
being different for single- and multi-bind calls, an additional
"interesting" behavior is that glBindBufferBase() / glBindBufferRange()
apparently also create the GL object if it doesn't exist yet (in
contrast to the multi-bind APIs, which *require* the GL objects to be
created beforehand).
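For reference, the difference in raw GL terms:

    GLuint id;
    glGenBuffers(1, &id); /* name reserved, object not created yet */

    /* Creates the buffer object as a side effect, same as glBindBuffer() */
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, id);

    /* The multi-bind variant however requires the object to exist
       already, and generates GL_INVALID_OPERATION otherwise */
    glBindBuffersBase(GL_UNIFORM_BUFFER, 0, 1, &id);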
So it's possible to shape the text even before having all glyphs ready.
That's one reason; the second is that this is the more common behavior
-- it usually doesn't make sense to make the text jump around based on
whether it's "zaxaca", "KEKEKE" or "yqpyq".
The original alignment based on glyph bounds is now moved into dedicated
`*GlyphBounds` variants. Additionally the `*LeftGlyphBounds` were
changed to subtract the initial glyph offset as well, `*Integer` now
rounds only in the direction where it's needed because a division by 2
happened, and there's a set of `*Bottom*` values that somehow weren't
there before.
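Schematically, with the enum value names extrapolated from the naming
scheme above:

    /* Vertical position derived from font metrics, the same no matter
       whether the text has ascenders or descenders */
    Text::Alignment a = Text::Alignment::MiddleCenter;

    /* Vertical position derived from actual rasterized glyph rectangles,
       thus different for "zaxaca", "KEKEKE" and "yqpyq" */
    Text::Alignment b = Text::Alignment::MiddleCenterGlyphBounds;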
The original implementation wrongly assumed that the input and output
pixel centers align, which would only be the case if the ratio of the
input and output sizes was odd. In practice it isn't -- usually it's a
1024x1024 texture scaled down to 128x128 or something like that.
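Concretely, with ratio = inputSize/outputSize, the center of output
pixel o lands at the following input coordinate (a sketch, the variable
names are mine):

    Float center = (o + 0.5f)*ratio - 0.5f; /* == o*ratio + (ratio - 1)*0.5f */
    /* That's an integer -- i.e., an input pixel center -- only when the
       ratio is odd. For the usual 1024 -> 128 case the ratio is 8, so
       the output centers land exactly between input pixels. */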
The flipped test cases added in the previous commit now pass.
According to the benchmark, the new code is very slightly slower (~815
µs vs ~805 before). The new code isn't really more complex than the old
one, it just does slightly different work -- there are new corner cases
in the initial logic for marking a pixel as inside or outside, on the
other hand some corner cases that had to be handled in the previous
version are no longer a thing.
If it's not, it's a programmer error (i.e., don't use Luminance or
packed formats, won't work), and since there's no way for the API to
report a failure in a programmatic way, this was causing hard-to-track
errors.
Replaces the previous, grossly inefficient AbstractLayouter which was
performing one virtual call per glyph (!). It's now also reusable,
meaning it doesn't need to be allocated anew for every new shaped text,
and it no longer requires each and every font plugin to implement the
same redundant glyph data fetching from the glyph cache, scaling etc. --
all that is meant to be done by the users of AbstractShaper, i.e.
Renderer. The independence from a glyph cache theoretically also means
it can be used for a completely different, non-texture-based way of
rendering text (such as drawing paths directly on the GPU), although I
won't be exploring that now.
It also exposes an interface for specifying script, language,
direction and typographic features. This interface will currently be
implemented only by HarfBuzz, but that's the intent -- to provide an
interface flexible enough to support all possible use cases that a font
or a font plugin may support, instead of exposing a least common
denominator and then having no easy way to shape text in a non-Latin
script or use a fancy OpenType feature the chosen font has.
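Roughly how using it is meant to look, given a font instance; the setter
names are assumptions, and a plugin is free to ignore hints it can't
support:

    Containers::Pointer<Text::AbstractShaper> shaper = font->createShaper();

    /* Optional hints, here picking what HarfBuzz could make use of */
    shaper->setScript(Text::Script::Latin);
    shaper->setLanguage("en");
    shaper->setDirection(Text::ShapeDirection::LeftToRight);

    /* The instance can then be reused for any number of texts */
    UnsignedInt glyphCount = shaper->shape("Hello, world!");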
The old public interface is preserved for backwards compatibility,
marked as deprecated, however the virtual APIs are not, as supporting
that would be too nasty. I don't think any user code ever implemented a
font plugin so this should be okay.
To ensure a smooth transition with no regressions, the Renderer class
and MagnumFont tests still use the old API in this commit, and their
tests pass the same way as they did before (except for two removed MagnumFont
test cases which tested errors that are now an assertion in the
deprecated layout() API and thus cannot be tested from the plugin
anymore). Porting them away from the deprecated API will be done in
separate commits.
The class now supports incremental filling, multiple fonts and texture
arrays, no longer relies on STL containers, and is finally properly
documented.
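A sketch of the incremental workflow, operating on an existing cache and
font; the function names and signatures are my assumption of the new
API:

    /* Each font gets its own glyph ID range in the cache */
    UnsignedInt fontId = cache.addFont(font.glyphCount(), &font);

    /* After rasterizing a glyph and copying it into cache.image() at a
       free spot, record its placement and upload just the modified area */
    Vector2i offset{1, -2};               /* offset relative to the cursor */
    Int layer = 0;                        /* texture array layer */
    Range2Di rectangle{{0, 0}, {16, 16}}; /* area occupied in the cache */
    cache.addGlyph(fontId, glyphId, offset, layer, rectangle);
    cache.flushImage(rectangle);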
To avoid complete breakage of every use, as much as possible was kept as
deprecated APIs -- in particular the reserve() with the nasty
std::vectors, the insert() that assumes a 2D cache and a single font,
and textureSize() that returns a 2D vector. Those behave the same as
before, but will assert if the cache is an array or contains more than
one font.
On the other hand, begin() / end() access with std::unordered_map iterators
(ew!) was removed as the internals simply aren't a hashmap anymore. The
image() that returned an Image2D is now used to fill the glyph cache
instead of querying its potentially processed contents, and returns a
MutableImageView3D. I considered keeping it and adding sourceImage()
instead, but such naming turned out to be too inconsistent. For querying
processed image data (such as with the distance field cache) there's a
new processedImage() query, guarded by new GlyphCacheFeature bits -- if
both ImageProcessing and ProcessedImageDownload are set, it can be used
to retrieve the processed image (so, similar to what ImageDownload was
before), and if neither is set, the cache contents are queryable
directly through image(), without needing any special support from
the GPU API.
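In code, the intended logic is roughly this (a sketch using the names
above):

    if(cache.features() >= (Text::GlyphCacheFeature::ImageProcessing|
                            Text::GlyphCacheFeature::ProcessedImageDownload)) {
        /* E.g. the distance field output, downloaded from the GPU */
        Image3D processed = cache.processedImage();
    } else if(!(cache.features() & Text::GlyphCacheFeature::ImageProcessing)) {
        /* No processing happens, the input data *are* the contents */
        ImageView3D contents = cache.image();
    }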
Existing code is updated only in the minimal way possible to ensure that
no serious breakage was introduced by reimplementing the deprecated APIs
on top of the new backend. Porting away from the deprecated APIs will be
done in the next commits. The GlyphCache and DistanceFieldGlyphCache have
their public API kept intact for now, as a similar rework will be needed
for them as well.
Additionally, the MagnumFont and MagnumFontConverter plugins aren't
compiling yet as they require substantial changes to deal with the new
glyph cache features. That is not the case with other plugins in the
magnum-plugins repository tho, for those the backwards compatibility
"just works". On the other hand, since layout of the AbstractGlyphChange
changed, I'm bumping the AbstractFont plugin interface version to
force-trigger a rebuild of dependent projects. Because I ran a stale
magnum-player binary, it worked without crashing or GL errors but just
didn't show ANY text whatsoever due to ABI differences, and I wasted
some precious minutes before realizing that a simple rebuild would fix
it.
Ugh. Was using this to verify that the glyph cache was correctly
populated, only to end up with a GL error that I thought was coming from
the glyph cache itself and not here. Wasted too much time on that.
An ad-hoc solution was already done in DebugTools::screenshot(); now I
need it in another place. While not as fast as the O(1) mapping from
the generic format to the API-specific ones due to the potentially
linear lookup, it definitely could be useful in general.
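A sketch of what such an inverse query could look like; the actual
function name and signature are an assumption:

    /* Linear lookup over the format mapping table, so O(n), with the
       result empty if there's no corresponding generic format */
    Containers::Optional<PixelFormat> generic = GL::genericPixelFormat(
        GL::PixelFormat::RGBA, GL::PixelType::UnsignedByte);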
Only noticed this now when adding the inverse mapping. Sigh. OTOH, with
the inverse mapping in place this can no longer happen, as it would
cause a compile error due to a duplicate switch case.
The overhead of maintaining two classes with only very slight
differences in the API and the internals being basically identical is
not worth it. Too much potential for inconsistencies and doc errors.
Additionally, when I attempted to use it for the reworked Text glyph
cache, I realized I'd need to wrap them both under a common interface,
allowing easy use for both 2D and 2D array textures. And then it's
easier to just have the Atlas class done that way directly instead of
papering over that in a downstream API.
All std::string arguments are now a StringView, what returned a
std::pair is now a Pair. STL compatibility headers are included on
deprecated builds to ease porting, as usual.
The only *really* breaking changes are in the internals, where an
ArrayView<const char32_t> is used instead of std::u32string, which is in
line with the change done in Utility::Unicode::utf32(); and a Triple is
returned instead of a std::tuple. Behaviorally nothing changed except
that fillGlyphCache() now asserts if the input string contains invalid
UTF-8 (which is also in line with the change done in Utility::Unicode).