Exposes MaterialTools::phongToPbrMetallicRoughness() that got added some
time ago. Most of the code and tests are scaffolding needed for direct
material import and processing outside of the addImporterContents()
automagic, similar to what was already done for meshes and images.
In particular, adding more material conversion options such as
canonicalization or deduplication will be significantly easier now that
the basics are done and tested.
Because I got tired of having to manually postprocess ground truth files
for shader tests.
A bit strange / funny that the first ever version of my code manages to
produce smaller files than both stb_image and ImageMagick (with
-compress RunLengthEncoded); for some reason both pick strange,
inefficient run combinations in various scenarios, such as "copy 2" +
"repeat 3" where I pick "repeat 5". And the difference is not
insignificant -- when testing with some shader test files, it resulted
in a ~17% smaller size!
The plugin follows the stricter variant of the spec by default (i.e.,
splitting runs across scanline boundaries) -- so that's *not* the
reason for the weird differences between my code and theirs -- but
provides an option to not do that for even smaller files. Which I'm
going to use for shader ground truth test files, because there every
byte counts. This option together with the above difference causes files
to be ~25% smaller, which is quite a lot.
Given that I made a breaking change by returning Containers::String
instead of a std::string, I can take it further and also replace
std::pair with Containers::Pair -- it won't bring any more pain to the
users, as they have to change their code anyway.
Co-authored-by: Hugo Amiard <hugo.amiard@wonderlandengine.com>
Same as in the previous commit, most cases are inputs so a StringStl.h
compatibility include will do; the only breaking change is
GL::Shader::sources() which now returns a StringIterable instead of a
std::vector<std::string> (ew).
The awesome thing about all this is that the Shader API now allows
creating a shader from sources coming either from string view literals
or Utility::Resource completely without having to allocate any strings
internally, because all those can be just non-owning references wrapped
with String::nullTerminatedGlobalView(). The only parts which aren't
references are the #line markers, but (especially on 64bit) those can
easily fit into the 22-byte (or 10-byte on 32bit) SSO storage.
Also, various Shader constructors and assignment operators had to be
deinlined in order to avoid having to include the String header, which
would be needed for Array destruction during a move.
Co-authored-by: Hugo Amiard <hugo.amiard@wonderlandengine.com>
Most of these are just inputs, so a compatibility StringStl.h include
will do; the only exception is the callback for which there needs to
stay a deprecated overload (which is internally delegated from the
StringView one).
Also explicitly testing with non-null-terminated strings -- the APIs
take an explicit size so it shouldn't be a problem, but it's always good
to have this verified independently. Drivers are crap, you know.
One consequence of no longer using an impossible-to-forward-declare
std::string is that I had to deinline the DebugGroup constructor because
it no longer worked with just a forward-declared StringView.
I'm not sure why this restriction was there, as nothing was preventing
them from being used. The attribute is only accessible through the
typeless attribute(), which gives back
`Containers::StridedArrayView2D<const char>` with second dimension size
being set to the full stride. And there it doesn't matter if the format
is an array or not.
This will be useful for joint IDs and weights, for example doing crazy
things like packing the IDs into an array of 8 4-bit numbers, saving
half the memory compared to the smallest builtin representation using
UnsignedByte[8].
Because it was no longer bearable with three UnsignedInt arguments in a
row, especially when some of them are only available on a subset of
platforms. And it would get even worse with the introduction of planned
features such as multiview or skinning.
Backwards compatibility is in place, as always. To ensure nothing
breaks, this commit still has all tests and snippets using the old API.
A bunch of new GLES- and WebGL-specific draw() variants got added in
b30d313ecd over a year ago, but so far
they weren't exposed in any of the Shaders because it'd mean a lot of
nasty copypasting and I just didn't like that.
So instead, there's a new macro to handle this messy work, which is
also tested to ensure everything is still as it should be and that it
works even outside the Magnum namespace. This makes it much easier to
add new draw() variants (such as indirect draws, *finally*), without
having to update every shader implementation under the sun.
One difference is that I'm now allowing drawTransformFeedback() always
-- because that makes sense. It *is* possible to use a regular shader to
draw a result of an XFB, so it doesn't make sense to attempt to block
that.
The change to Shaders will be done in the next commit.
It's four pointers, twice as much as what would be acceptable. Not sure
why this happened, maybe because all those cases used an ArrayView
before and so I just changed the type without considering the difference
in its size?
Unfortunately this change also means a bump in the plugin interface
string, thus all scene converter plugins have to be updated as well.
Currently contains just one very silly Phong->PBR conversion utility,
but eventually it'll provide tools for simplifying, merging and
deduplicating materials.
The result of e.g. -15.0_degf is not Deg, but rather
Math::Unit<Math::Deg, Float>, and those didn't have a corresponding
TweakableParser defined. Now they do.
Funny how I didn't run into this until now.
The #line directive behaves differently on GLSL < 330. Who would have
thought. Another "fun" implication is that I didn't notice this until
now -- seems like I haven't really written any shader that needed
compatibility with old GL since 2012?? Heh.
I'm getting kinda pissed at the strange defaults. Should I be using a
different toolkit altogether because SDL IS FOR GAMES ONLY? Given how
many bugreports and complaints there are about "Dosbox blocking
screensaver" I'm beginning to think it's SDL's fault, not mine.
Let the users control what they want, ffs, don't enable problematic
features like blocking powersave or disabling compositor by default.
Those should be a runtime option anyway, similarly to how video players
block powersaving only when *an actual video is playing*.
Lists features and aliases as well as the documented contents of the
whole configuration file. Useful to not have to look up the online docs
when working on the command line.
It was implemented only for the Half type and not the others, and I just
felt like using it on a vector now, 12 years after the Vector class got
first added.
Funny/sad that this possibility took me so long to realize. Until now it
was "you can have command-line utilities or static plugins but never
both", and only with CMake 3.13+ was it possible to link static plugins
to these executables from outside. Now it's a builtin and supported
option.
A large portion of the needed changes was in the previous commit
already, this does just the remaining part, in particular ensuring EGL
is linked and SDL is told to use EGL as well -- GLFW was told so in the
previous cleanup commit already.
These two options were mutually exclusive, and both were doing the same
thing -- switching to EGL on desktop GL, or switching away from EGL on
GLES. That made all the logic vastly more complicated than it should
have been, and
unfortunately it took me half a decade to realize that. The new logic is
significantly simpler everywhere.
As usual, the old options are still recognized by CMake on a deprecated
build (with a warning), and are still exposed both as CMake variables
and a preprocessor define. But the logic for them was quite complicated,
so I don't guarantee all cases are covered.
I also tried to clean up the dependent CMake options to allow building
GLX and WGL apps on GLES independently of whether EGL is used, but it's
quite a mess due to the limitations of CMake < 3.22. Build directories
that have the options switched randomly over a long time might start
misbehaving, but the initial build should work well.
The whole class was a bad idea, why create something that's 99% similar
to another application and has just one platform-specific workaround? Of
course it resulted in this code being completely untested and not even
built anywhere, because it served a tiny insignificant use case.
To avoid losing all the code, I did my best in attempting to merge this
into the WindowlessEglApplication. But since, again, EGL isn't
really used on any Windows platform, I can't even say it builds
properly. Maybe not even the original code built.
The whole time I thought this class didn't need such APIs due to being
rather short-lived. But now with async shader compilation it's no longer
so short-lived.
There's no reason for those to exist anymore -- originally they were
added in a hopeful attempt to make use of parallel shader compilation,
but in practice that meant compiling at most two or three shaders at
once and still stalling until that was done, so not that great at all.
The new APIs provide much better opportunities for parallelism.
Fun fact:

    CORRADE_INTERNAL_ASSERT_OUTPUT(vert.compile() && frag.compile());

is actually one character shorter than

    CORRADE_INTERNAL_ASSERT_OUTPUT(GL::Shader::compile({vert, frag}));

so not even typing convenience would be a reason to keep these.