Should make new things more discoverable, avoid confusion when a
documented API isn't there, and reduce the need to maintain multiple
separate versions of the docs.
No matter how broken iOS support is in CMake 3.6, $<CONFIG> seems to
work there, so this reduces the amount of code and puts the configure
step into a single place, independent of which generator or
system/build is used.
Compared to the current state it always puts the file into
Debug/configure.h instead of directly into
${CMAKE_CURRENT_BINARY_DIR}, but the alternative would be more CMake
branching again, which I just removed, so no.
This also prepares everything for plugin libraries being put into a
central place -- the config files don't depend on their location
anymore.
What's left is *a lot* of places taking monstrous
std::vector<std::reference_wrapper> arguments, and those can't be
changed to std::vector<Containers::Reference> in a source-compatible
way. Even that would be only a temporary change, since the goal is to
fully avoid depending on the STL in those cases.
The final version of these APIs should take
Containers::ArrayView<Containers::Reference> and be implicitly
convertible from e.g. std::vector<Containers::Reference>. That's
definitely possible, but not in time for 2019.01, so instead of forcing
users to temporarily pass `{vec.begin(), vec.size()}` everywhere
instead of just `vec`, I'm rather keeping these APIs intact.
*Finally* having consistent output on desktop, ES1, ES2, WebGL 1 and
WebGL 2, while also cutting 40% off the processing time. For the record,
the benchmark took 2.3 ms before, now it's 1.4 ms.
The nested for loop is a big problem. Worked around it with a fixed
upper bound and some `break`s. This might make the code slower on
desktop drivers; it needs to be redone from scratch later by generating
the loop code directly.
Even this minor change caused Mesa drivers to output a slightly
different file. Test output is verbatim below:
============================================================================
FAIL [1] test() at
../src/Magnum/TextureTools/Test/DistanceFieldGLTest.cpp on line 107
Images actualOutputImage and
Utility::Directory::join(DISTANCEFIELDGLTEST_FILES_DIR, "output.tga")
have both max and mean delta above threshold, actual 1/0.000488281 but
at most 0/0 expected. Delta image:
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| M |
| |
| |
| M |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
Pixels above max/mean threshold:
[16,41] Vector(175), expected Vector(174) (Δ = 1)
[46,35] Vector(175), expected Vector(174) (Δ = 1)
Fully passes only on desktop and ES3 (Mesa); expecting minor differences
on other GPUs. ES2 is slightly broken and needs fixing; the code doesn't
even compile on WebGL 1 and causes a serious GPU stall on WebGL 2 -- in
both cases caused by the unbounded nested loops. Rendering doesn't work
on WebGL 1 at the moment, since luminance formats are not renderable.
And for an RGBA output format I would need some utility to get rid of
the extra channels in order to pass the comparison.
Lots of work to do here.