In short, this means users no longer need to care about FindSDL2 etc.,
it just gets done automatically. They don't need to drag along
FindMagnum / FindCorrade anymore either, but having them gives a much
more reasonable error message if the library cannot be located at all, so
I still recommend keeping them.
Before, if e.g. find_package(Magnum REQUIRED Vk) was called and the
installed Magnum didn't have Magnum::Vk enabled at all, it would still
try to call find_package(Vulkan) and do all the other setup, which could
then fail as well, resulting in way too much noise being printed.
Now the decision about the _FOUND variable is moved all the way to the
top, and the actual target setup is only done afterwards.
Sometimes it's desirable to just make a mesh indexed unconditionally,
which wasn't really possible with this function before, and having to
special-case that at the call site is very annoying. Now it makes a mesh
trivially indexed if it has any primitive other than the ones listed,
and if it's already indexed, it's passed through unchanged.
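A minimal sketch of the resulting behavior, assuming this refers to
MeshTools::generateIndices() (the function isn't named above, so treat
that as an assumption):

    #include <utility>

    #include <Magnum/MeshTools/GenerateIndices.h>
    #include <Magnum/Trade/MeshData.h>

    using namespace Magnum;

    Trade::MeshData ensureIndexed(Trade::MeshData&& mesh) {
        /* Strips, loops and fans get actual indices generated; any other
           primitive gets a trivial 0, 1, 2, ... index buffer; an
           already-indexed mesh is passed through unchanged */
        return MeshTools::generateIndices(std::move(mesh));
    }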
Following the spirit of extension-based functionality, the entrypoints
are available always but do something (i.e., call the actual WebGL API)
only if the extension is advertised. That's only the case on Emscripten
3.1.66+, as older versions don't have the corresponding entrypoints, so
there the extension is marked as disabled.
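Roughly this pattern, where every name in the sketch is hypothetical
rather than the actual Magnum internals:

    /* Sketch only: the entrypoint is compiled in unconditionally, but
       forwards to the WebGL API just when the extension was advertised
       by the context. Both names below are hypothetical. */
    extern bool polygonModeAdvertised;          /* set at context creation */
    extern void(*glPolygonModeWEBGL_)(unsigned, unsigned); /* loaded from JS */

    void setPolygonMode(const unsigned face, const unsigned mode) {
        if(!polygonModeAdvertised)
            return;                      /* extension absent: silent no-op */
        glPolygonModeWEBGL_(face, mode); /* extension present: real call */
    }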
Additionally, EXT_polygon_offset_clamp is now also working on 3.1.66+,
but there's no wrapper for it yet.
This is also the first time they just stopped bothering and introduced a
name that is neither in the official gl.xml nor in the
google-private ANGLE-specific gl.xml. So yeah, gotta make my own gl.xml.
I wonder how other people do this. Do they just type out the headers
themselves? Or what's the process for this fucking trash fire these
days??
So that users don't accidentally create an instance with the processed
format / size being different, only to get greeted by a nasty bounds or
format assertion inside flushImage() / doSetImage().
Now this is only allowed from a subclass (such as
DistanceFieldGlyphCacheGL, or, ahem, MagnumFont), and doSetImage()
additionally has an assertion to notify the implementer that a subclass
needs to provide its own override.
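For an implementer the shape is roughly the following; a sketch only,
the exact names and signatures may differ from the actual interface:

    #include <Magnum/ImageView.h>
    #include <Magnum/Text/AbstractGlyphCache.h>

    using namespace Magnum;

    class ProcessingGlyphCache: public Text::AbstractGlyphCache {
        public:
            using Text::AbstractGlyphCache::AbstractGlyphCache;

        private:
            Text::GlyphCacheFeatures doFeatures() const override {
                /* Advertises that the processed image format / size may
                   differ from the input */
                return Text::GlyphCacheFeature::ImageProcessing;
            }

            void doSetImage(const Vector2i& offset, const ImageView2D& image) override {
                /* Turn the input into the processed form and upload it;
                   without this override the base class assertion fires */
            }
    };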
This also feels kind of messy, honestly. It's as if the
DistanceFieldGlyphCache should never have been a subclass, maybe?
Now, instead of accessing the texture directly, it uses the public
setProcessedImage() API. For which it needs to subclass the cache in
order to advertise ImageProcessing, which means the class can no longer
have the implementations private.
I don't really like this; ideally none of this would be needed and
MagnumFont would implement fillGlyphCache() instead. Not sure how to do
that yet though, as there's still the issue with DistanceFieldGlyphCacheGL
being RGBA, so this is a half-assed solution until a better one is
found... or the plugin gets ditched completely.
They don't have anything specific to DistanceFieldGlyphCacheGL and on
the other hand contain special-casing for Luminance vs Red on ES2 that's
specific to GlyphCacheGL internals.
Additionally, in DistanceFieldGlyphCacheGL the code could assume that
the pixel format is either R8 with EXT_texture_rg present or RGBA8
without; here it has to check for EXT_texture_rg presence, as
GlyphCacheGL can use R8 even without EXT_texture_rg.
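So the pick ends up being something like this (a sketch against an ES2
target, assuming Magnum's usual extension query; the actual internals
may differ):

    #include <Magnum/GL/Context.h>
    #include <Magnum/GL/Extensions.h>
    #include <Magnum/GL/TextureFormat.h>

    using namespace Magnum;

    GL::TextureFormat singleChannelTextureFormat() {
        /* With EXT_texture_rg a single-channel cache can use R8; without
           it, the only single-channel option on ES2 is Luminance */
        return GL::Context::current()
            .isExtensionSupported<GL::Extensions::EXT::texture_rg>() ?
                GL::TextureFormat::R8 :
                GL::TextureFormat::Luminance;
    }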
Like with TextureTools::DistanceField, to make room for non-GL
implementations. Old names are deprecated aliases, same for headers. The
only remaining bit in the Text library is the Renderer class, but for
that one I first need to invent a non-shitty API so I can just deprecate
the old thing as a whole and create something reasonable from scratch in
RendererGL.
Along with the bits in the Text library, this is one of the last things
that still assume OpenGL to be present by default.
As usual, the old name is now a deprecated typedef, same for the header.
The internal GL texture format (especially the R8 vs Luminance mess on
ES2) is now considered an implementation detail and shouldn't affect
common use in any way.
The format is now required always, in order to prepare for use cases
where colored glyphs are a thing as well. Additionally, to match the
recent change in AbstractGlyphCache, the processed format is specified
separately, allowing the input and processed formats to be decoupled.
Which ultimately fixes the regression on ES2 and WebGL 1 where it was no
longer possible to call font.fillGlyphCache() on a
DistanceFieldGlyphCache.
Also, as there's now a generic format on input, another ES2-specific
issue is fixed as well, in particular a case where a GL error would be
emitted on drivers with EXT_texture_storage because an unsized format
was passed to setStorage(). This had been a problem for a long time, but
I ignored it because it didn't affect WebGL 1, and all drivers that
exposed EXT_texture_storage also exposed EXT_texture_rg, effectively
circumventing the issue. Or so I think, at least.
The constructors taking either a GL::TextureFormat or no format at all
are now deprecated aliases to the new functionality.
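Construction then looks something like this; the exact signatures are
inferred from the description above, so treat them as assumptions:

    #include <Magnum/PixelFormat.h>
    #include <Magnum/Text/GlyphCacheGL.h>

    using namespace Magnum;

    /* The input format is a generic PixelFormat and always required */
    Text::GlyphCacheGL cache{PixelFormat::R8Unorm, {512, 512}};

    /* The processed format / size can be specified separately, decoupled
       from the input */
    Text::GlyphCacheGL processed{PixelFormat::R8Unorm, {512, 512},
                                 PixelFormat::RGBA8Unorm, {256, 256}};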
RGB was a hopeful intention that never really worked in practice. Or, if
it worked, it couldn't be queried back, and the driver used RGBA
internally anyway.
The refactor in 6707534ce6 somehow didn't
account for the need to have a different pixel format for the input and
the processed texture, which is needed on ES2 / WebGL 1 because there's
no renderable single-channel format. Which means distance field text
rendering has been broken since then.
This commit is the first part of fixing that regression. It makes the
AbstractGlyphCache aware of the processing being done, decoupling the
input and processed format and size. Which also allows the base
implementation to provide interfaces that so far were only limited to
DistanceFieldGlyphCache, thus eventually making it possible for font
plugins to supply a pregenerated distance field image through
fillGlyphCache() and not just through createGlyphCache().
The two DistanceFieldGlyphCache APIs that moved directly onto the
AbstractGlyphCache are now deprecated aliases to the new functionality.
The actual fix for the ES2 / WebGL 1 regression will come in the next
commit, as it's pretty much a separate step that involves updating the
GlyphCache as well.
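As a sketch of what the decoupling enables for a font plugin
(setProcessedImage() is mentioned above; the features() query and the
exact signatures are assumptions):

    #include <Corrade/Utility/Assert.h>
    #include <Magnum/ImageView.h>
    #include <Magnum/Text/AbstractGlyphCache.h>

    using namespace Magnum;

    void uploadPregeneratedDistanceField(Text::AbstractGlyphCache& cache,
                                         const ImageView2D& image) {
        /* Valid only if the cache does any processing; the processed
           format and size may differ from the input ones */
        CORRADE_INTERNAL_ASSERT(cache.features() &
            Text::GlyphCacheFeature::ImageProcessing);
        cache.setProcessedImage({}, image);
    }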
Originally, years ago, the test supplied the same input value on all
targets and just special-cased the expected output on ES2 to account for
the lower bit depth. But then SwiftShader arrived, which doesn't actually
care that there's a RGBA4 framebuffer, and performs all calculations at
the full precision. Which, well, makes sense from a perf PoV, kind of, so
I also adapted the input to be quantized to 4 bits instead of 8.
BUT THEN some Mesa update happened, or maybe it was like this always and
I just didn't realize because I was on NV cards for so long, and the
input that made SwiftShader produce the expected result got *further
quantized*, deviating even more from what was expected.
So I now ditched all that and I'm just comparing with a
sufficiently large delta. If some implementation returns exactly what was
expected for an 8-bit framebuffer even though the framebuffer is 4-bit,
I don't care.
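With Magnum's CompareImage that's a one-liner; the concrete thresholds
below are illustrative, not the actual values used:

    #include <Magnum/DebugTools/CompareImage.h>

    using namespace Magnum;

    /* Inside a TestSuite test case: max and mean delta thresholds chosen
       large enough to cover both a true 4-bit result and a full 8-bit
       result in a nominally 4-bit framebuffer (values illustrative) */
    CORRADE_COMPARE_WITH(actual, expected,
        (DebugTools::CompareImage{17.0f, 4.0f}));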
So they can be used in StridedArrayView member function slicing the same
way as the mutable overloads. Pointer, Triple and other low-level
containers do the same thing, and this is one of the prime use cases for
the slicing functionality, so it's silly that it didn't actually work.
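For illustration, with Containers::Pair, which has such const overloads
already (whether that's the container this change is about is an
assumption):

    #include <Corrade/Containers/Pair.h>
    #include <Corrade/Containers/StridedArrayView.h>

    using namespace Corrade;

    void firsts(Containers::StridedArrayView1D<const Containers::Pair<int, float>> view) {
        /* Slicing via a member function on a view of const elements picks
           the const overload of first(); without that overload this
           wouldn't compile */
        Containers::StridedArrayView1D<const int> result =
            view.slice(&Containers::Pair<int, float>::first);
        static_cast<void>(result);
    }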
Some of them were marked as constexpr even though they were calling into
a deinlined function internally. Now at least the default construction is
made properly constexpr, and there's a test verifying that all constexpr
functions actually behave like that.
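The problem in a nutshell, as a generic sketch unrelated to the actual
code:

    /* Defined out-of-line in some .cpp file, thus not constexpr */
    int deinlined(int a);

    constexpr int broken(int a) {
        /* Compiles fine as long as it's never evaluated in a constant
           expression, but the first compile-time use is a hard error */
        return deinlined(a);
    }

    int atRuntime = broken(3);   /* OK, evaluated at runtime */
    /* constexpr int atCompileTime = broken(3);   error: call to the
       non-constexpr deinlined(3) isn't a constant expression */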