Whoops, this got silently omitted during the massive refactor in
f7a6d79aa0 (Nov 23). Buffers *do* get
destroyed, VAOs don't. If you got sudden GPU memory usage issues after a
recent Magnum update, this was why.
To be used with the recently added MaterialTools::removeDuplicates() for
example, or internally by other upcoming scene tools such as importer
filtering.
This "type erased std::vector member" was done in the times before
growable arrays were a thing, and kind of made sense to go the extra way
to avoid a <vector> include in the header. Except that it made rather
unportable assumptions about std::vector size, which weren't correct for
example with _GLIBCXX_ASSERTIONS set.
But what was *completely* unacceptable was that the vector was of one or
another type depending on the GL feature set present in the current
context. Apart from adding a lot of extra *nasty* logic to construction,
moves and destruction, this approach led to the mesh instance asking the
current context on destruction in order to know whether a destructor
should be called on std::vector<Buffer> or std::vector<AttributeLayout>.
Ugh.
Now it's a regular Array member (which isn't *that* heavy to need such
type-erased treatment, although it eventually could be), and thanks to
the AttributeLayout packing improvements in previous commits it's no
longer prohibitively wasteful to just abuse AttributeLayout instances to
store just owning Buffer instances alone -- doing so now wastes only 16
bytes per buffer, compared to 36 before. Given there's usually just one
or two vertex buffers per mesh (compared to attributes, which are
usually 4 or more), it should be fine.
The MeshGLTest::destructMovedOutInstance() test added a few commits back
also no longer asserts when no GL context is present.
Want to construct them without a GL context present, and Optional is too
wasteful. Also adding it to the AbstractGlyphCache base, where it also
skips allocating the internal PIMPL state, because it's not going to get
used for anything in a NoCreate'd instance anyway.
The test added in previous commit now passes. Besides the behavior being
different for single- and multi-bind calls, an additional "interesting"
behavior is that glBindBufferBase() / glBindBufferRange() apparently
also create the GL object if it isn't created already (in contrast to
the multi-bind APIs, which *require* the GL objects to be created
beforehand).
Especially given that nullptr causes an assert. All call sites basically
ended up passing a &font and all that extra annoyance just doesn't make
sense.
Given this API is still relatively recent, I'm not bothering with
backwards compatibility.
In basically all cases it's two independent operations so forcing them
to be done together doesn't really bring any potential efficiency
advantages. On the other hand, splitting them allows the caller
to better make use of available memory, as the new
renderGlyphQuadsInto() allows the input and output arrays to be aliased.
Bumping AbstractFont plugin interface versions as this is a breaking
change.
So it's possible to shape the text even before having all glyphs ready.
That's one reason; the second is that this is the more common behavior
-- it usually doesn't make sense to make the text jump based on whether
it's "zaxaca", "KEKEKE" or "yqpyq".
The original alignment based on glyph bounds is now moved into dedicated
`*GlyphBounds` variants. Additionally the `*LeftGlyphBounds` were
changed to subtract the initial glyph offset as well, `*Integer` now
rounds only in the direction where it's needed because a division by 2
happened, and there's a set of `*Bottom*` values that somehow weren't
there before.
The original implementation wrongly assumed that the input and output
pixel centers align, which would only be the case if the ratio of the
input and output sizes was odd. In practice it isn't -- usually it's a
1024x1024 texture scaled down to 128x128 or something like that.
The flipped test cases added in the previous commit now pass.
According to the benchmark, the new code is very slightly slower (~815
µs vs ~805 before). The new code isn't really more complex than the old
one, it just does slightly different work -- there are new corner cases
in the initial logic for marking a pixel as inside or outside; on the
other hand, some corner cases that the previous implementation had to
handle are no longer a thing.
If it's not, it's a programmer error (i.e., don't use Luminance or
packed formats, won't work), and since there's no way for the API to
report a failure in a programmatic way, this was causing hard-to-track
errors.
Replaces the previous, grossly inefficient AbstractLayouter which was
performing one virtual call per glyph (!). It's now also reusable,
meaning it doesn't need to be allocated anew for every new shaped text,
and it no longer requires each and every font plugin to implement the
same redundant glyph data fetching from the glyph cache, scaling etc. --
all that is meant to be done by the users of AbstractShaper, i.e.
Renderer. The independence from a glyph cache theoretically also means
it can be used for a completely different, non-texture-based way to
render text (such as path drawing directly on the GPU), although I won't
be exploring that now.
It also exposes an interface for specifying script, language,
direction and typographic features. That interface is currently
implemented only by the HarfBuzz plugin, but the intent is to provide a
flexible enough interface to support all possible use cases that a font
or a font plugin may support, instead of exposing a least common
denominator and then having no easy way to shape a text in a non-Latin
script or use a fancy OpenType feature the chosen font has.
The old public interface is preserved for backwards compatibility,
marked as deprecated, however the virtual APIs are not, as supporting
that would be too nasty. I don't think any user code ever implemented a
font plugin so this should be okay.
To ensure smooth transition with no regressions, the Renderer class and
MagnumFont tests still use the old API in this commit, and their tests
pass the same way as they did before (except for two removed MagnumFont
test cases which tested errors that are now an assertion in the
deprecated layout() API and thus cannot be tested from the plugin
anymore). Porting them away from the deprecated API will be done in
separate commits.
The class now supports incremental filling, multiple fonts and texture
arrays, removes all reliance on STL containers, and is finally properly
documented.
To avoid complete breakage of every use, as much as possible was kept as
deprecated APIs -- in particular the reserve() with the nasty
std::vectors, the insert() that assumes a 2D cache and a single font
and textureSize() that returns a 2D vector. Those behave the same as
before, but will assert if the cache is an array or contains more than
one font.
On the other hand, begin() / end() access with std::unordered_map iterators
(ew!) was removed as the internals simply aren't a hashmap anymore. The
image() that returned an Image2D is now used to fill the glyph cache
instead of querying its potentially processed contents, and returns a
MutableImageView3D. I considered keeping it and adding sourceImage()
instead, but such naming turned out to be too inconsistent. For querying
processed image data (such as with the distance field cache) there's a
new processedImage() query, guarded by new GlyphCacheFeature bits -- if
both ImageProcessing and ProcessedImageDownload are set, it can be used
to retrieve the processed image (so, similar to what ImageDownload was
before), and if neither is set, the cache contents are queryable
directly through image(), without needing any special support from
the GPU API.
Existing code is updated only in the minimal way possible to ensure that
no serious breakage was introduced by reimplementing the deprecated APIs
on top of the new backend. Porting away from deprecated APIs will be
done in next commits. The GlyphCache and DistanceFieldGlyphCache have
their public API kept intact for now, as a similar rework will be needed
for them as well.
Additionally, the MagnumFont and MagnumFontConverter plugins aren't
compiling yet as they require substantial changes to deal with the new
glyph cache features. That is not the case with other plugins in the
magnum-plugins repository though; for those the backwards compatibility
"just works". On the other hand, since the layout of AbstractGlyphCache
changed, I'm bumping the AbstractFont plugin interface version to
force-trigger a rebuild of dependent projects. Because I ran a stale
magnum-player binary, it worked without crashing or GL errors but just
didn't show ANY text whatsoever due to ABI differences, and I wasted
some precious minutes before realizing that a simple rebuild would fix
it.
Ugh. Was using this to verify that the glyph cache was correctly
populated, only to end up with a GL error that I thought was coming from
the glyph cache itself and not here. Wasted too much time on that.
An ad-hoc solution was already done in DebugTools::screenshot(), now I
need it in another place. While not as fast as the O(1) mapping from
the generic format to the API-specific ones due to the potentially
linear lookup, it definitely could be useful in general.
Only noticed this now when adding the inverse mapping. Sigh. OTOH, with
the inverse mapping in place this can no longer happen, as it would
cause a compile error due to a duplicate switch case.
The overhead of maintaining two classes with only very slight
differences in the API and the internals being basically identical is
not worth it. Too much potential for inconsistencies and doc errors.
Additionally, when I attempted to use it for the reworked Text glyph
cache, I realized I'd need to wrap them both under a common interface,
allowing easy use for both 2D and 2D array textures. And then it's
easier to just have the Atlas class done that way directly instead of
papering over that in a downstream API.
Instead of piling up a mountain when the other end is a ditch. Results
in better packing in most cases, but in one case it doesn't, so also
adding an option to disable this.
The docs image is now slightly more leveled, one pixel lower.
I made the binary data use 16-bit integers instead of 32-bit to make
them occupy less space and forgot to update it here. Also they're in a
different path now.
Just the dumbest possible idea I had, and it compares surprisingly well
in both efficiency (~comparable to stb_rect_pack) and speed
(significantly faster than stb_rect_pack with tons of tiny images,
slower with larger ones -- would probably need to SIMD Math::max() and
such, haha). It's the very first implementation without any additional
improvements I have in mind, so it'll likely improve further.
Includes a benchmark with a bunch of "datasets" extracted from both
fonts and large glTF models. The stb_rect_pack file isn't committed as
it's not useful apart from this single benchmark; put it into
AtlasTestFiles/ and recompile.
All std::string arguments are now StringViews, and what previously
returned a std::pair now returns a Pair. STL compatibility headers are
included on deprecated builds to ease porting, as usual.
The only *really* breaking changes are in the internals, where an
ArrayView<const char32_t> is used instead of std::u32string, which is in
line with the change done in Utility::Unicode::utf32(); and a Triple is
returned instead of a std::tuple. Behaviorally nothing changed except
that fillGlyphCache() now asserts if the input string contains invalid
UTF-8 (which is also in line with the change done in Utility::Unicode).