The tests all pass exactly as they did before, apart from slightly
different assertion messages. I moved the guts from Renderer.cpp to
RendererGL.cpp since the code relies on both GL and the STL, to not have
it spread over too many places.
So far the Renderer doesn't work with that, and neither are the builtin
Vector shaders able to use it, but gotta start somewhere.
I wanted to have the texture contents tested on ES, but it turns out
that implementing DebugTools::textureSubImage() for arrays is blocked on
another feature I badly need to finish first. Sigh.
I just don't see a point in those. PBOs are for when a roundtrip through
a CPU memory would be wasteful, but these utils are mainly for use in
tests. Definitely not for being called several times per frame, because
the temporary framebuffer creation just doesn't make sense. Not sure
what I was thinking in 2016 when I added those, apart from "feature
parity for no practical reason".
I'm going to make a new Renderer class that's unlike the old one, and
isn't templated anymore either, so gotta make room for it first. I assume
that all existing code used the Renderer2D / Renderer3D typedefs so this
should be pretty much painless.
Ultimately this class will be gone, with only (then deprecated)
AbstractRenderer left, and Renderer2D / Renderer3D being aliases to it
-- internally it doesn't actually do anything different, even the
positions are two-component in both cases.
The views made it look like one cannot use Trade::ImageData etc. as
inputs, and the reason for using them was because there was a
load-bearing const in the view type that allowed slice() to work. The
slice() API is fixed now so this doesn't need to be silly anymore.
Now it's less code, and it also no longer results in random edge
artifacts because the padding area wasn't correctly uploaded. I'm going
to do the same change in the FreeTypeFont and StbTrueTypeFont next.
Because otherwise the users likely have to do something similar on their
end to perform a texture upload etc., which means they'll either take
the easy path and upload everything including the unused area, or they
introduce various bugs in the process, leading to random artifacts,
especially when it comes to padding.
Which is exactly what I think is causing random test failures in the Ui
library text rendering, because the glyph cache filling process in
plugins is calculating the rectangle too tight, without considering
padding. Gonna fix that now.
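The gist of the fix can be sketched like this -- a minimal stand-in for a min/max integer rectangle, not the actual Magnum Range2Di class, showing the rectangle being expanded by the cache padding on all sides before the copy, so the padding area doesn't keep stale data that filtering at glyph edges would then pick up:

```cpp
#include <cassert>

// Illustrative stand-in for a min/max integer rectangle. The actual type and
// the glyph cache filling code look different, this just shows the idea of
// growing the copied rectangle by the padding instead of keeping it tight.
struct Vector2i { int x, y; };

struct Range2Di {
    Vector2i min, max;

    // Grow the rectangle by `padding` on all four sides
    Range2Di padded(const Vector2i& padding) const {
        return {{min.x - padding.x, min.y - padding.y},
                {max.x + padding.x, max.y + padding.y}};
    }
};
```
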
Back in 2020 when I wrote this I didn't really expect the MeshData to be
directly used for much more than putting them on a GPU, mostly because
that used to be the primary use case with the old MeshData2D /
MeshData3D. So the documentation was focusing mainly on populating a GPU
mesh, and any docs for CPU-side access were added rather hastily.
Now that the asset processing use case is much larger, the original docs
no longer made sense. Let's hope this is better.
Not too great yet, but at least the most common operations have an
example snippet that shows real use, instead of jumping off a cliff
right into the most detailed description.
I attempted to make it private only to discover it was used by Magnum
Player to make the workflow with opt-in tweakable constants more
efficient. So let's document that.
Because this overrides the base pointer*Event() implementations, it
additionally has to call into the parent implementation in order to fall
back to the deprecated mouse event if the new pointer event isn't
handled.
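The shape of that fallback, reduced to a minimal sketch with hypothetical names rather than the actual application API: the base pointerPressEvent() is what routes to the deprecated mouse event, so an override that doesn't handle the event itself has to explicitly delegate to it.

```cpp
#include <cassert>

// Hypothetical sketch, not the real application classes: the base
// implementation of pointerPressEvent() falls back to the deprecated mouse
// event, so a subclass that overrides it has to call into the parent
// implementation for events it doesn't handle itself.
struct PointerEvent {
    bool accepted = false;
    void setAccepted() { accepted = true; }
};

struct ApplicationBase {
    virtual ~ApplicationBase() = default;

    // Base implementation: route to the deprecated mouse event path
    virtual void pointerPressEvent(PointerEvent& event) {
        mousePressEventCalled = true;
        event.setAccepted();
    }

    bool mousePressEventCalled = false;
};

struct MyApplication: ApplicationBase {
    void pointerPressEvent(PointerEvent& event) override {
        if(handlesPointerEvents) {
            event.setAccepted();
            return;
        }
        // Not handled here -- delegate to the base so the deprecated
        // mouse event still fires
        ApplicationBase::pointerPressEvent(event);
    }

    bool handlesPointerEvents = false;
};
```
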
Pointer events are a unified abstraction over mouse, touch, pen and
potential other yet-to-be-invented pointer-like input methods. Their goal
is to expose all such input methods under a single interface so the
application side doesn't need to explicitly make sure that it's
touch-aware or pen-aware. This abstraction is already present in HTML5,
in Qt6 and in WINAPI as well, and is also what I adopted for the new UI
library because it *just makes sense*.
Unfortunately not even SDL3 took the opportunity to introduce that and
instead added a *third* separate event type for pen input. At
first I thought that I wouldn't introduce any extra abstractions in the
Application classes (because that's what they are designed to be, as
lightweight as possible), but midway through introducing TouchEvent
classes and fighting SDL's touch->mouse and mouse->touch compatibility
translation (yes, it's both ways, depending on the platform) I realized
that a much simpler solution that doesn't require any event translation
or the users duplicating their event handling logic for several possible
input types is to introduce a single new event type that covers all.
Which is what this commit does -- it doesn't introduce anything
touch-related so far, just creates a new PointerEvent and
PointerMoveEvent class and corresponding virtual functions. Additionally,
I took this as an opportunity to make the position floating-point, since
that's what SDL3 does now as well, and GLFW did so since ever.
Plus, the Pointer and Pointers enums are directly on the Sdl2Application
class, to allow me to *finally* introduce pointer state queries. Which
weren't possible until now, because there were mutually incompatible
MouseEvent::Button and MouseMoveEvent::Button enums and putting them on
the base class would mean one would have to be translated and the other
not. With Pointer it's always translated, because there isn't any similar
enumeration in SDL that would cover mouse, touch and pen at the same
time.
Like with TextureTools::DistanceField, to make room for non-GL
implementations. Old names are deprecated aliases, same for headers. The
only remaining bit in the Text library is the Renderer class, but for
that one I first need to invent a non-shitty API so I can just deprecate
the old thing as a whole and create something reasonable from scratch in
RendererGL.
The internal GL texture format (especially the R8 vs Luminance mess on
ES2) is now considered an implementation detail and shouldn't affect
common use in any way.
The format is now required always, in order to prepare for use cases
where colored glyphs are a thing as well. Additionally, to match the
recent change in AbstractGlyphCache, the processed format is specified
separately, allowing the input and processed formats to be decoupled.
Which ultimately fixes the regression on ES2 and WebGL 1 where it was no
longer possible to call font.fillGlyphCache() on a
DistanceFieldGlyphCache.
Also, as there's now a generic format on input, another ES2-specific
issue is now fixed as well, in particular a case where a GL error
would be emitted on drivers with EXT_texture_storage because an unsized
format is passed to setStorage(). This had been a problem for a long
time, but I ignored it because it didn't affect WebGL 1 and all drivers
that exposed EXT_texture_storage exposed EXT_texture_rg, effectively
circumventing this issue. Or so I think, at least.
The constructors taking either a GL::TextureFormat or no format at all
are now deprecated aliases to the new functionality.
Essential for text selection and editing in cases where mapping from the
input text to the actual shaped glyphs is nontrivial. I.e., in case of
HarfBuzzFont; both FreeTypeFont and StbTrueTypeFont perform a 1:1
translation from input (UTF-8) characters to glyphs so there it isn't
as important.
This is going to get called from fillGlyphCache() that takes a string,
and thus should be better than one virtual call per character. The
single-character variant still stays, as it's a nice convenience API.
Eventually glyphSize() will get a similar treatment as well.
Like the Deg / Rad classes, these are for strongly-typed representation
of time. Because the current way, either with untyped and imprecise
Float, or the insanely-hard-to-use and bloated std::chrono::nanoseconds,
was just too crappy.
This is just the types alone, corresponding typedefs in the root
namespace, and conversion from std::chrono. Using these in the Animation
library, in Timeline, in DebugTools::FrameProfiler, GL::TimeQuery etc.,
will eventually and gradually follow.
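The core of the idea, sketched with an illustrative Nanoseconds type rather than the actual implementation: a strongly-typed wrapper over a 64-bit count, with a user-defined literal and an implicit conversion from any std::chrono duration, in the same spirit as the Deg / Rad classes.

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>

// Illustrative sketch of the concept, not the actual class: a strongly-typed
// nanosecond count. Unlike a raw Float it's exact, and unlike
// std::chrono::nanoseconds it stays a single lightweight type.
class Nanoseconds {
    public:
        constexpr explicit Nanoseconds(std::int64_t value): _value{value} {}

        // Implicit conversion from any std::chrono duration
        template<class Rep, class Period> constexpr Nanoseconds(std::chrono::duration<Rep, Period> value):
            _value{std::chrono::duration_cast<std::chrono::nanoseconds>(value).count()} {}

        constexpr std::int64_t count() const { return _value; }

        constexpr Nanoseconds operator+(Nanoseconds other) const {
            return Nanoseconds{_value + other._value};
        }

    private:
        std::int64_t _value;
};

// Literal for convenient construction, mirroring how _degf / _radf work
constexpr Nanoseconds operator""_nsec(unsigned long long value) {
    return Nanoseconds{std::int64_t(value)};
}
```
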
To allow people to cherry-pick just a subset of them if other code
defines literals that may conflict. I first did that the same way as the
STL (so both namespaces inline), only to subsequently discover the
horror that inline-namespace literals are implicitly available in the
enclosing Math namespace as well, thus preventing no conflicts
whatsoever. So the Literals namespace isn't getting inline, only the
inner ones.
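A condensed sketch of the resulting layout, with illustrative literal names and trivial return types standing in for the real ones: only the inner per-category namespaces are inline, so `using namespace Math::Literals` pulls in everything while `using namespace Math::Literals::TimeLiterals` cherry-picks a subset, and nothing leaks into Math itself.

```cpp
#include <cassert>

// Sketch of the namespace layout. If the outer Literals namespace were also
// inline, all the literals below would be implicitly visible directly in
// Math, and code doing `using namespace Math;` couldn't avoid conflicts.
namespace Math { namespace Literals {
    inline namespace TimeLiterals {
        constexpr long long operator""_sec(unsigned long long value) {
            return (long long)(value);
        }
    }
    inline namespace ColorLiterals {
        constexpr unsigned operator""_rgb(unsigned long long value) {
            return (unsigned)(value);
        }
    }
}}

// Bringing in everything at once
inline long long allLiterals() {
    using namespace Math::Literals;
    return 5_sec;
}

// Cherry-picking just the time subset, avoiding potential conflicts with
// other code defining a _rgb literal
inline long long justTime() {
    using namespace Math::Literals::TimeLiterals;
    return 7_sec;
}
```
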
This is also in preparation for introduction of
Literals::ConstexprColorLiterals that would provide a constexpr variant
of the _srgbf literals at the expense of having a large LUT in a header
file.
To be used with the recently added MaterialTools::removeDuplicates() for
example, or internally by other upcoming scene tools such as importer
filtering.
Especially given that nullptr causes an assert. All call sites basically
ended up passing a &font and all that extra annoyance just doesn't make
sense.
Given this API is still relatively recent, I'm not bothering with
backwards compatibility.
In basically all cases it's two independent operations so forcing them
to be done together doesn't really bring any potential efficiency
advantages. On the other hand, splitting them allows the caller
to better make use of available memory, as the new
renderGlyphQuadsInto() allows the input and output arrays to be aliased.
Bumping AbstractFont plugin interface versions as this is a breaking
change.
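Why aliasing the input and output is safe can be shown with a minimal sketch -- names and signatures here are illustrative, not the actual Text API. As long as element i of the output depends only on element i of the input and elements are processed in order, the caller can pass the same array for both and save an allocation:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-in for a 2D float vector
struct Vector2 { float x, y; };

// Hypothetical Into-style function in the spirit of renderGlyphQuadsInto():
// derives per-glyph quad data from glyph positions. Because positions[i] is
// fully consumed before quads[i] is written, `positions` and `quads` may
// point to the same memory.
void renderGlyphQuadsInto(const Vector2* positions, Vector2* quads,
                          std::size_t count, Vector2 offset) {
    for(std::size_t i = 0; i != count; ++i)
        quads[i] = {positions[i].x + offset.x, positions[i].y + offset.y};
}
```
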