There's really no need to allocate 56 MB of image *and* texture data
just to verify the constructor is called, especially since doing so
makes the Emscripten test OOM.
Was postponed in 8168a06bab because the
needed DebugTools::textureSubImage() was not there yet. Now I have (a
simplified version of) it done, so use it.
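With it, the test can fetch just a tiny region instead of reading back
everything. A sketch, assuming a GL::Texture2D named texture and the usual
implicit Magnum namespaces:

    #include <Magnum/Image.h>
    #include <Magnum/PixelFormat.h>
    #include <Magnum/Math/Range.h>
    #include <Magnum/DebugTools/TextureImage.h>

    /* Download just an 8x8 corner to verify the data got uploaded, instead
       of reading back (and thus allocating) the whole level */
    Image2D actual = DebugTools::textureSubImage(texture, 0,
        Range2Di::fromSize({}, {8, 8}), Image2D{PixelFormat::RGBA8Unorm});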
Same approach as done e.g. in the Ui library -- taking advantage of the
base class already allocating an internal state struct, deriving from it
and putting the state there instead of either having it as a class
member (at the cost of extra header dependency) or as a separate state
struct (at the cost of extra allocation). The only downside is a virtual
destructor in the state struct, but compared to the alternatives that's
completely fine.
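Roughly this, as a standalone sketch with made-up names (the real classes
use Corrade containers rather than std::unique_ptr):

    #include <memory>

    class AbstractThing {
        protected:
            struct State {
                virtual ~State() = default; /* the one downside */
                int baseField{};
            };

            explicit AbstractThing(std::unique_ptr<State> state):
                _state{std::move(state)} {}

            std::unique_ptr<State> _state;
    };

    class DerivedThing: public AbstractThing {
        public:
            DerivedThing();

        private:
            /* Derived state lives in the allocation the base class makes
               anyway, instead of being a class member (extra header
               dependency) or a separate struct (extra allocation) */
            struct State: AbstractThing::State {
                float derivedField{};
            };

            State& state() { return static_cast<State&>(*_state); }
    };

    DerivedThing::DerivedThing():
        AbstractThing{std::make_unique<State>()} {}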
One obvious use case is rendering a text that's just a bunch of \n
characters. With the current code, that would result in a completely
empty rectangle no matter how many newlines there were, which isn't right
because -- even if nothing is actually visible -- it would break
anything that relies on the bounding rectangle for alignment and
positioning of other content.
Another case is for example the Ui library, which -- once ported to
use the new Renderer instead of the lower-level APIs -- wants to use the
bounding rectangle for cursor and selection positioning. And if it were
empty for empty text, empty input boxes would have an invisible cursor.
Not good.
The old Renderer implementation seems to have handled this correctly --
but in f3d6ab4916, when porting to the new
APIs, I discarded that from the test, thinking it was some unnecessary
behavior. It wasn't; it was just never tested directly, so I forgot it
was needed.
The tests all pass exactly as they did before, apart from slightly
different assertion messages. I moved the guts from Renderer.cpp to
RendererGL.cpp as the code relies on both GL and STL, to not have it
spread over too many places.
Apart from the fact that it blew up when the text was indeed empty,
which is fixed now (ugh), the behavior is kinda obvious with the current
implementation but won't be once the guts get replaced with something
more reasonable. So this ensures we don't break existing use cases.
This is a replacement for the existing AbstractRenderer and Renderer2D /
Renderer3D classes, with no STL or other shittiness like excessive
allocations, and with a much better feature set. Deprecation of the old
APIs is going to happen next.
Compared to the old implementation it doesn't make use of the
complicated buffer mapping because -- unlike in 2012 -- such behavior
was since deemed questionable as the driver (or whichever translation
layer like ANGLE or Zink or Apple's GL-over-Metal) may still make a copy
anyway, and mapping additionally prevents buffer orphaning. So right now
it's just plain setData() / setSubData() calls. *If* it becomes some sort of a
bottleneck (which I doubt), I may reconsider, or add something else like
double buffering.
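In code the per-frame update is then just along these lines, assuming
vertexData is an ArrayView with the freshly generated contents and
changedCount being however much of it actually changed:

    #include <Magnum/GL/Buffer.h>

    /* Re-specifying the whole data store lets the driver orphan the
       previous one if the GPU is still reading from it, which an explicit
       mapping would prevent */
    buffer.setData(vertexData, GL::BufferUsage::DynamicDraw);

    /* Or, when only a prefix changed and the buffer is large enough
       already */
    buffer.setSubData(0, vertexData.prefix(changedCount));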
Builds upon RendererCore and generates index and vertex data from the
glyph positions and IDs. Documentation is again coming later once
everything is in; next is a RendererGL that populates a GL::Mesh with
these.
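For illustration, each glyph quad expands to four vertices and six indices
forming two triangles, roughly like this (a sketch, not the actual
internals; glyphCount is assumed to come from the shaping step, and the
usual Magnum namespace setup is implied):

    #include <Corrade/Containers/Array.h>
    #include <Magnum/Types.h>

    Containers::Array<UnsignedInt> indices{NoInit,
        std::size_t(glyphCount)*6};
    for(UnsignedInt i = 0; i != glyphCount; ++i) {
        const UnsignedInt base = i*4;
        UnsignedInt* const quad = indices.data() + std::size_t(i)*6;
        /* Two triangles, 012 and 213, out of the quad vertices 0--3 */
        quad[0] = base;
        quad[1] = base + 1;
        quad[2] = base + 2;
        quad[3] = base + 2;
        quad[4] = base + 1;
        quad[5] = base + 3;
    }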
A higher-level stateful wrapper around the low-level utilities, capable
of rendering text formed from multiple lines and multiple fonts. Will be
used as a backend for a new Renderer / RendererGL implementation.
No usage docs yet; those will come once the other two classes are done
as well.
So far the Renderer doesn't work with that, and neither are the builtin
Vector shaders able to use it, but gotta start somewhere.
I wanted to have the texture contents tested on ES, but it turns out
that implementing DebugTools::textureSubImage() for arrays is blocked on
another feature I badly need to finish first. Sigh.
One was a Doxygen workaround that shouldn't be needed anymore together
with an API reference missing since e1c9c4d007; the
other was an inconsistency in a test that was just weird but shouldn't
affect anything.
The naming was adopted from CSS direction-aware alignment and it took me
quite a while to realize it feels alien, since everything else is using
"begin" and "end", not "start" and "end". So I'm renaming for
consistency.
These enums were originally introduced about 8 months ago, but I don't
think anyone is using them yet, since they're not really advertised
anywhere and the existing to-be-deprecated high-level text renderer API
is so shitty that I don't expect it to be used for BIDI text and such at
all. Sorry if you do, heh.
Neither a driver bug nor something wrong with the code. It's just
that with a tight flush rectangle the target texture may have some random
garbage left around the edges, causing the comparison to fail, so it's
now explicitly cleared upfront.
Doesn't happen when running the offending test alone (because I suppose
the memory is coming fresh from the driver, being zeroed out for security
purposes), only when running after the others (where I suspect it's now
reusing previous partially-filled memory which it didn't before). Doesn't
happen on NVidia either.
Compared to Corrade, the improvement in compile time is about a minute
cumulative across all cores, or about 8 seconds on an 8-core system (~2
minutes before, ~1:52 after). Not bad at all. And this is with a
deprecated build, the non-deprecated build is 1:48 -> 1:41.
So the users don't accidentally create an instance with the processed
format / size being different only to get greeted by a nasty bounds or
format assertion inside flushImage() / doSetImage().
Now it's only allowed to be done by a subclass (such as
DistanceFieldGlyphCacheGL, or, ahem, MagnumFont), and doSetImage()
additionally has an assert to notify the implementer that a subclass
needs to provide its own override.
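Schematically, with made-up names (the real thing is AbstractGlyphCache,
of course):

    #include <cassert>

    class AbstractCache {
        public:
            void setProcessedImage() { doSetImage(); }

        protected:
            /* Only subclasses that actually process the image can pick a
               processed format / size different from the input one */
            AbstractCache() = default;

        private:
            /* ...and those then have to override this, which the assert
               says loudly */
            virtual void doSetImage() {
                assert(!"setProcessedImage(): processing not implemented");
            }
    };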
This also feels kind of messy, honestly. It's as if the
DistanceFieldGlyphCache should never have been a subclass, maybe?
They don't have anything specific to DistanceFieldGlyphCacheGL and on
the other hand contain special-casing for Luminance vs Red on ES2 that's
specific to GlyphCacheGL internals.
Additionally, in DistanceFieldGlyphCacheGL the code could assume that
the pixel format is either R8 with EXT_texture_rg present or RGBA8
without, here it has to check for EXT_texture_rg presence as
GlyphCacheGL can use R8 even without EXT_texture_rg.
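I.e., on ES2 it now has to do roughly the equivalent of this check itself:

    #ifdef MAGNUM_TARGET_GLES2
    #include <Magnum/GL/Context.h>
    #include <Magnum/GL/Extensions.h>

    /* With EXT_texture_rg the single-channel cache format is Red / R8,
       without it Luminance, and R8 alone no longer implies the extension
       is present */
    const bool hasTextureRg = GL::Context::current()
        .isExtensionSupported<GL::Extensions::EXT::texture_rg>();
    #endif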
Like with TextureTools::DistanceField, to make room for non-GL
implementations. Old names are deprecated aliases, same for headers. The
only remaining bit in the Text library is the Renderer class, but for
that one I first need to invent a non-shitty API so I can just deprecate
the old thing as a whole and create something reasonable from scratch in
RendererGL.
Along with the bits in the Text library, this is one of the last things
that still assume OpenGL present by default.
As usual, the old name is now a deprecated typedef, same for the header.
The internal GL texture format (especially the R8 vs Luminance mess on
ES2) is now considered an implementation detail and shouldn't affect
common use in any way.
The format is now required always, in order to prepare for use cases
where colored glyphs are a thing as well. Additionally, to match the
recent change in AbstractGlyphCache, the processed format is specified
separately, allowing the input and processed formats to be decoupled.
Which ultimately fixes the regression on ES2 and WebGL 1 where it was no
longer possible to call font.fillGlyphCache() on a
DistanceFieldGlyphCache.
Also, as there's now a generic format on input, another ES2-specific
issue is fixed as well, in particular a case where a GL error would be
emitted on drivers with EXT_texture_storage because an unsized format
was passed to setStorage(). This had been a problem for a long time, but
I ignored it because it didn't affect WebGL 1, and all drivers that
exposed EXT_texture_storage exposed EXT_texture_rg as well, effectively
circumventing the issue. Or so I think, at least.
The constructors taking either a GL::TextureFormat or no format at all
are now deprecated aliases to the new functionality.
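Usage-wise it's then roughly the following (a sketch; exact signatures and
defaults may differ):

    #include <Magnum/PixelFormat.h>
    #include <Magnum/Math/Vector2.h>
    #include <Magnum/Text/GlyphCacheGL.h>
    #include <Magnum/Text/DistanceFieldGlyphCacheGL.h>

    /* The input format is a generic PixelFormat and always explicit */
    Text::GlyphCacheGL cache{PixelFormat::R8Unorm, Vector2i{1024}};

    /* The distance field variant then has the processed size (and,
       internally, the processed format) decoupled from the input */
    Text::DistanceFieldGlyphCacheGL dfCache{Vector2i{1024}, Vector2i{256},
        20};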
RGB was a hopeful intention that never really worked in practice. Or, if
it worked, it couldn't be queried back, and the driver used RGBA
internally anyway.
The refactor in 6707534ce6 somehow didn't
account for the need to have a different pixel format for the input and
the processed texture, which is needed on ES2 / WebGL 1 because there's
no renderable single-channel format. Which means distance field text
rendering has been broken since then.
This commit is the first part of fixing that regression. It makes the
AbstractGlyphCache aware of the processing being done, decoupling the
input and processed format and size. Which also allows the base
implementation to provide interfaces that so far were only limited to
DistanceFieldGlyphCache, thus eventually making it possible for font
plugins to supply a pregenerated distance field image through
fillGlyphCache() and not just through createGlyphCache().
The two DistanceFieldGlyphCache APIs that moved directly onto the
AbstractGlyphCache are now deprecated aliases to the new functionality.
The actual fix for the ES2 / WebGL 1 regression will come in the next
commit, as it's pretty much a separate step that involves updating the
GlyphCache as well.
Essential for text selection and editing in cases where mapping from the
input text to the actual shaped glyphs is nontrivial, i.e. in the case
of HarfBuzzFont; both FreeTypeFont and StbTrueTypeFont perform a 1:1
translation from input (UTF-8) characters to glyphs, so for those it
isn't as important.
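For illustration, the mapping is exposed as cluster IDs after shaping,
something along these lines (assuming a shaper instance; signatures from
memory and may differ in detail):

    #include <Corrade/Containers/Array.h>
    #include <Magnum/Text/AbstractShaper.h>

    /* Shape the text, then get cluster IDs mapping each glyph back to a
       byte offset in the input UTF-8 string */
    const UnsignedInt glyphCount = shaper.shape("Привет");
    Containers::Array<UnsignedInt> clusters{glyphCount};
    shaper.glyphClustersInto(clusters);
    /* clusters[i] is then the input byte the i-th glyph originated from,
       which is what cursor placement and selection need */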
Having it next to AbstractShaper didn't make sense, as there are various
use cases where font features are specified but the caller code doesn't
even know there's any shaper involved. Such as in the Ui library.
Based on the actual text direction (either explicitly set or detected),
these resolve to either *Left or *Right. For the Text::Renderer it's
done automatically inside (and there's no way to actually set the
direction from outside due to the API being ancient and limited); for
the align*() utils the alignment has to be explicitly resolved using a
new alignmentForDirection() utility.
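Usage of the utility then looks something like this (a sketch):

    #include <Magnum/Text/Alignment.h>
    #include <Magnum/Text/Direction.h>

    /* With RTL text, LineBegin resolves to LineRight before being passed
       to the align*() utilities */
    const Text::Alignment resolved = Text::alignmentForDirection(
        Text::Alignment::LineBegin,
        Text::LayoutDirection::HorizontalTopToBottom,
        Text::ShapeDirection::RightToLeft);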
This is going to get called from the fillGlyphCache() variant that takes
a string, and thus should be better than one virtual call per character. The
single-character variant still stays, as it's a nice convenience API.
Eventually glyphSize() will get a similar treatment as well.
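The batch variant is then roughly this (assuming a font instance;
signature from memory and may differ):

    #include <Magnum/Text/AbstractFont.h>

    /* A single virtual call for the whole string instead of one per
       character */
    const char32_t characters[]{U'h', U'e', U'l', U'l', U'o'};
    UnsignedInt ids[5];
    font.glyphIdsInto(characters, ids);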
Took me quite a while to realize what was going on, but in retrospect
it's obvious -- the rasterizer just *rounds* the sub-pixel-positioned
glyph quads to the nearest pixels. That can then cause the neighboring
glyph data to leak in (because the texture gets sampled not directly on
the edge pixel but slightly outside of it), or it can cut away the edge
when the quad gets rounded in the other direction.
This was a problem with the original -- laughably inefficient -- atlas
packer as well, but because that packer had excessive padding around
everything, only the second edge-cutting artifact manifested, and that
one is rather subtle unless you know what to look for.
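The mitigation is conceptually just a pixel of padding reserved around
each glyph when packing, along these lines (assuming the AtlasLandfill
interface):

    #include <Magnum/TextureTools/Atlas.h>

    /* A pixel on each side keeps a quad that got rounded outward from
       sampling the neighboring glyph */
    TextureTools::AtlasLandfill packer{{1024, 1024, 1}};
    packer.setPadding({1, 1});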
This means the packing is now slightly worse than before, and sizes that
previously worked may no longer fit. But since the new atlas packer is
relatively new (well, from September, time sure flies different here)
and the improvement compared to the original packer is still quite
massive, I don't think this is a problem.