This is going to get called from fillGlyphCache(), which takes a
string, and thus should be faster than one virtual call per character.
The single-character variant stays, as it's a nice convenience API.
Eventually glyphSize() will get a similar treatment as well.
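The shape of the change can be sketched like this (hypothetical names,
not the actual Magnum API): the batched variant does a single virtual
dispatch for the whole string, and the single-character convenience
overload just forwards to it.

```cpp
#include <vector>

class AbstractFontSketch {
    public:
        // Batched variant: a single virtual dispatch for a whole string
        void fillGlyphCache(const std::vector<char32_t>& characters) {
            doFillGlyphCache(characters);
        }

        // Single-character convenience variant, forwarding to the
        // batched API, so it's still just one virtual call
        void fillGlyphCache(char32_t character) {
            doFillGlyphCache(std::vector<char32_t>{character});
        }

        virtual ~AbstractFontSketch() = default;

    private:
        virtual void doFillGlyphCache(
            const std::vector<char32_t>& characters) = 0;
};

// Instrumented subclass showing that a five-character string results
// in a single virtual call
class CountingFont: public AbstractFontSketch {
    public:
        int virtualCalls = 0;

    private:
        void doFillGlyphCache(const std::vector<char32_t>&) override {
            ++virtualCalls;
        }
};
```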
Took me quite a while to realize what was going on, but in retrospect
it's obvious -- the rasterizer just *rounds* the sub-pixel-positioned
glyph quads to the nearest pixels. That can then cause the neighboring
glyph data to leak in (because the texture is sampled not directly on
the edge pixel, but slightly outside of it), or it can cut away the
edge, when the rounding goes in the other direction.
This was a problem with the original -- laughably inefficient -- atlas
packer as well, but because that packer had excessive padding around
everything, only the second edge-cutting artifact manifested, and that
one is rather subtle unless you know what to look for.
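The effect can be illustrated with a tiny numeric sketch (not Magnum
code): rounding the quad position shifts where the texture gets sampled
relative to the glyph rectangle, by up to half a pixel in either
direction.

```cpp
#include <cmath>

// How far the rasterizer moves a quad when rounding its sub-pixel
// position to the nearest pixel
float roundingDelta(float subpixelPosition) {
    return std::round(subpixelPosition) - subpixelPosition;
}

// First texel coordinate effectively sampled at the quad's left edge,
// for a glyph whose texture rectangle starts at texMin -- the quad
// moved but its texture coordinates didn't, so sampling is shifted by
// the same delta
float sampledTexMin(float texMin, float subpixelPosition) {
    return texMin + roundingDelta(subpixelPosition);
}
```

With the quad placed at 3.4 the sampling starts below texMin, leaking
the neighboring glyph in; at 3.6 it starts above texMin, cutting the
edge off.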
This means the packing is now slightly worse than before, and sizes
that previously worked may no longer fit. But since the new atlas
packer is relatively new (well, from September, time sure flies
differently here), and the improvement compared to the original packer
is still quite massive, I don't think this is a problem.
Especially given that nullptr causes an assert. All call sites
basically ended up passing a &font anyway, and all that extra annoyance
just doesn't make sense.
Given this API is still relatively recent, I'm not bothering with
backwards compatibility.
Replaces the previous, grossly inefficient AbstractLayouter, which was
performing one virtual call per glyph (!). It's now also reusable,
meaning it doesn't need to be allocated anew for every newly shaped
text, and it no longer requires each and every font plugin to implement
the same redundant glyph data fetching from the glyph cache, scaling
etc. -- all that is meant to be done by the users of AbstractShaper,
i.e. Renderer. The independence from a glyph cache theoretically also
means it can be used for a completely different, non-texture-based way
to render text (such as drawing paths directly on the GPU), although I
won't be exploring that path now.
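A rough sketch of the concept (names are made up, not the actual
AbstractShaper interface): one virtual call shapes the whole text, and
the instance can be queried and reused for the next text afterwards.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

class ShaperSketch {
    public:
        // One virtual call per shaped text instead of one per glyph;
        // the instance is reusable, its internal storage gets recycled
        std::size_t shape(const std::string& text) {
            _glyphs.clear();
            doShape(text, _glyphs);
            return _glyphs.size();
        }

        const std::vector<std::uint32_t>& glyphIds() const {
            return _glyphs;
        }

        virtual ~ShaperSketch() = default;

    private:
        virtual void doShape(const std::string& text,
            std::vector<std::uint32_t>& glyphs) = 0;

        std::vector<std::uint32_t> _glyphs;
};

// Trivial byte-to-glyph-ID mapping, for illustration only
class IdentityShaper: public ShaperSketch {
    private:
        void doShape(const std::string& text,
            std::vector<std::uint32_t>& glyphs) override {
            for(unsigned char c: text) glyphs.push_back(c);
        }
};
```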
It also exposes an interface for specifying script, language, direction
and typographic features. This interface is currently implemented only
in HarfBuzz, but that's the intent -- to provide a flexible enough
interface to support all possible use cases that a font or a font
plugin may support, instead of exposing a least common denominator and
then having no easy way to shape text in a non-Latin script or use a
fancy OpenType feature the chosen font has.
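One way such an interface can look (a hypothetical sketch, not the
actual API): setters return whether the plugin actually used the value,
so a plugin without shaping capabilities simply reports everything as
unsupported instead of being forced into a least-common-denominator
signature.

```cpp
#include <string>

class ShaperPropertiesSketch {
    public:
        // The base implementations ignore everything and report the
        // property as unsupported; a HarfBuzz-style plugin overrides
        // them
        virtual bool setScript(const std::string&) { return false; }
        virtual bool setLanguage(const std::string&) { return false; }
        virtual bool setDirection(const std::string&) { return false; }

        virtual ~ShaperPropertiesSketch() = default;
};

class HarfBuzzLikeShaper: public ShaperPropertiesSketch {
    public:
        std::string script;

        bool setScript(const std::string& s) override {
            script = s;
            return true;
        }
};
```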
The old public interface is preserved for backwards compatibility and
marked as deprecated; the virtual APIs however are not, as supporting
that would be too nasty. I don't think any user code ever implemented a
font plugin, so this should be okay.
To ensure a smooth transition with no regressions, the Renderer class
and MagnumFont tests still use the old API in this commit, and their
tests pass the same way as they did before (except for two removed
MagnumFont test cases which tested errors that are now an assertion in
the deprecated layout() API and thus cannot be tested from the plugin
anymore). Porting them away from the deprecated API will be done in
separate commits.
The internals are still rather ew, that's for another time, but the goal
here was to make it compile and correctly handle the new variability in
created and passed glyph cache instances. In particular:
- The MagnumFont createGlyphCache() now uploads the texture image
directly, skipping a copy through the CPU-side image which may not
have a size matching the input image. That's kind of nasty (and too
tied to OpenGL), but will be cleaned up once the GlyphCache APIs are
fixed to allow this in a nicer way.
- The MagnumFontConverter will now correctly select a font if the cache
contains more than one, properly deal with glyph caches that do or
don't do image processing, and will fail for array glyph caches as
that's not supported by the current format. It also now has an early
special-case handling for glyph caches with processed images, where the
actual image may have a different size (and possibly format), to match
what MagnumFont expects.
Need some regression tests before the upcoming glyph cache rework, as
otherwise it'll be too miserable. The test now fills the glyph cache
image with some non-trivial data and compares against it, and checks
the filled glyphs of the created glyph cache too.
This also fixes a regression introduced when porting away from STL in
47a1295ab8, where UTF-8 layouting was
reporting extra glyphs at the end. Now UTF-8 support is properly tested
both in the MagnumFontConverter plugin and MagnumFont.
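For context, the class of bug being tested here is treating raw UTF-8
bytes as glyphs. A minimal sketch of the invariant (illustrative only,
not the actual layouting code): continuation bytes must not count as
separate characters, otherwise multi-byte characters report extra
glyphs.

```cpp
#include <cstddef>
#include <string>

// Continuation bytes (10xxxxxx) are not codepoint starts, so a
// multi-byte UTF-8 character must still count as a single glyph
std::size_t codepointCount(const std::string& utf8) {
    std::size_t count = 0;
    for(unsigned char c: utf8)
        if((c & 0xc0) != 0x80) ++count;
    return count;
}
```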
It won't contain just font metrics anymore. Also don't require the
struct to be zero-initialized if opening fails -- simply allow the
plugins to return garbage in that case and save the values only if
opening actually succeeded.
Strictly speaking this isn't an ABI change as the return value isn't
part of the function signature and the struct is still the same, so the
plugin interface version isn't bumped for this change.
All std::string arguments are now a StringView, and what previously
returned a std::pair now returns a Pair. STL compatibility headers are
included on deprecated builds to ease porting, as usual.
The only *really* breaking changes are in the internals, where an
ArrayView<const char32_t> is used instead of std::u32string, which is
in line with the change done in Utility::Unicode::utf32(); and a Triple
is returned instead of a std::tuple. Behaviorally nothing changed
except that fillGlyphCache() now asserts if the input string contains
invalid UTF-8 (which is also in line with the change done in
Utility::Unicode).
Partially needed to avoid build breakages because Corrade itself
switched as well, partially because a cleanup is always good. Done
except for (STL-heavy) code that's deprecated or SceneGraph-related APIs
that are still quite full of STL as well.
So people new to the plugin stuff can quickly get to usage introduction
and code snippets. The plugins alone don't list anything like that and
it may be *very* confusing otherwise.
When for example only CMAKE_RUNTIME_OUTPUT_DIRECTORY is set, but not the
others, the original code skipped overriding the locations altogether.
This is a valid use case, as e.g. ARCHIVE and LIBRARY_OUTPUT_DIRECTORY
tend to mess with the way Visual Studio produces and consumes *.lib
files.
Furthermore, this now also handles CMAKE_*_OUTPUT_DIRECTORY_<CONFIG> in
a similar way, which is what Conan uses for example.
Similar to the change done in Corrade, see the commit for details:
878624ac36
Wow, this is probably the most backwards-compatibility code I've ever
written. Can't wait until I can drop all that.
Consistently with changes done to Utility::Path, this enforces proper
error handling on the user side. Originally I didn't want to do this
and instead wanted to have a special Array instance devoted to an error
state, but that would still allow the error state to be erroneously
treated as a successful but empty array.
It limits support to CMake 3.12+, but it's much less verbose, and I
don't expect people to use ancient CMake versions with IDEs like Xcode
or VS anyway, so this should be fine.
Like in Trade, the non-atomic exists() + read() pair (and the silent
failures if the file exists but can't be read) was replaced with just
Path::read(), which now returns an Optional. Besides that, not much
worth mentioning.
First and foremost I need to expand the interface to support 3D
image conversion. But the interface was not great to begin with, so this
takes the opportunity of an API break and does several things:
* The `export*()` names were rather strange and I don't even remember
why I chose them (maybe because at first I wanted to have an
"exporter" API as a counterpart to importers?)
* In addition, there was no way to convert a compressed image to a
compressed image (or to an uncompressed image) and adding the two
missing variants would be a lot of combinations. So instead the new
convert() returns an ImageData, which can be both, and thus also
allows the converters to produce compressed or uncompressed output
based on some runtime setting, without having to implement two
(four?) separate functions for that and requiring users to know
beforehand what type of an image will be created.
* The ImageConverterFeature enum was named in a really strange way as
well, with ConvertCompressedImage meaning "convert to a compressed
image" while ConvertCompressedData instead meant "convert a compressed
image to data". Utter chaos. It also implied 2D everywhere and on the
other hand had a redundant `Image` in the name, so I went and remade
the whole thing. As mentioned above, two of the enums now mean the same
thing, and are both replaced with Convert2D.
* Finally, similarly as changes elsewhere, I took this opportunity to
get rid of std::string in the convertToFile() APIs.
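The core of the convert() design can be sketched with std::variant
standing in for Magnum's ImageData (all names here are hypothetical):
one return type that can hold either kind of image lets a single
convert() cover all input/output combinations, with the produced
variant being a runtime decision of the converter.

```cpp
#include <cstdint>
#include <optional>
#include <variant>
#include <vector>

struct UncompressedImage { std::vector<std::uint8_t> pixels; };
struct CompressedImage { std::vector<std::uint8_t> blocks; };

// Stand-in for ImageData, which can hold either image kind
using ImageDataSketch = std::variant<UncompressedImage, CompressedImage>;

// A single convert() instead of four separate functions; failure is an
// empty optional, the output kind is decided at runtime
std::optional<ImageDataSketch> convert(const UncompressedImage& input,
    bool compressOutput) {
    if(compressOutput)
        return ImageDataSketch{CompressedImage{input.pixels}};
    return ImageDataSketch{UncompressedImage{input.pixels}};
}
```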
It doesn't really work for tests that depend on more than one plugin
(because there I would need to handle all combinations, somehow), but
it does the job when the end user has such a use case.
Moreover, doing them here would mean the options might get ignored with
no clear reason why. Aaand yes, of course this caused
MagnumFontConverter to be skipped for no clear reason on embedded
platforms, and uncovered a setup bug in the test.
This makes it possible to:
- finally use Magnum as a CMake subproject on Windows and have your
executables not fail to run with a "DLL missing" error (and the
setting is put to cache so superprojects just implicitly make use of
that)
- run tests on Windows without having to install first
- use dynamic plugins from a CMake subproject on any platform without
having to install first or load them by filename -- and the plugin
directory is now easily discovered as relative to libraryLocation()
of the library implementing the given plugin interface
No matter how broken iOS is in CMake 3.6, $<CONFIG> seems to work
there, so this reduces the amount of code and puts the configure step
into a single place, independently of what generator or system/build is
used. Compared to the current state it always adds Debug/configure.h
instead of putting it directly into ${CMAKE_CURRENT_BINARY_DIR}, but
the alternative would be some CMake branching again, and I just removed
that, so no.
This also prepares everything for plugin libraries being put into a
central place -- the config files don't depend on their location
anymore.