Before, the parent link was gone before the children got destructed, and
that just didn't make sense. The test added in the previous commit now
passes as expected.
This is how it should be but isn't -- the Object inheritance makes it so
that the parent pointer is cleared first and only then the children get
destructed. That doesn't really make sense (and I doubt any code relied
on it), so I'll flip the order.
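A tiny self-contained sketch of the flipped order, with hypothetical names (the real Object hierarchy differs) and an event log to make the ordering visible:

```cpp
#include <algorithm>
#include <string>
#include <vector>

std::vector<std::string> events;

struct Node {
    Node* parent = nullptr;
    std::vector<Node*> children;
    std::string name;

    explicit Node(std::string n, Node* p = nullptr): parent{p}, name{std::move(n)} {
        if(parent) parent->children.push_back(this);
    }

    ~Node() {
        events.push_back("destructing " + name + ", parent " +
            (parent ? parent->name : std::string{"(none)"}));
        // The fixed order: children get destructed first, while their parent
        // pointer (this object) is still valid; each child unlinks itself
        while(!children.empty()) delete children.back();
        // ...and only then the own parent link goes away
        if(parent) {
            auto& siblings = parent->children;
            siblings.erase(std::find(siblings.begin(), siblings.end(), this));
        }
    }
};
```

With the old order the child would have seen `(none)` as its parent during destruction, since the link was already gone by then.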
There will be Flag::FlipY for images at some point, enabled by default
for compatibility with existing GL code, and so it makes sense to start
discouraging setFlags() as early as possible to avoid people resetting
the default by accident.
Also update the imageconverter, sceneconverter and shaderconverter utils
to use these instead of setFlags().
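To illustrate the pitfall with hypothetical names (the actual converter classes and flag enums differ -- only the additive-vs-replacing distinction matters here):

```cpp
#include <cstdint>

// Hypothetical flags mimicking the scenario above; the enum values and the
// addFlags()/clearFlags() API shape are assumptions for this sketch.
enum class Flag: std::uint32_t {
    FlipY = 1 << 0,   // the future default, for GL compatibility
    Verbose = 1 << 1
};

struct Converter {
    std::uint32_t flags = std::uint32_t(Flag::FlipY); // enabled by default

    // setFlags() replaces *everything*, silently dropping the default:
    void setFlags(std::uint32_t f) { flags = f; }
    // The additive variants keep the default intact:
    void addFlags(std::uint32_t f) { flags |= f; }
    void clearFlags(std::uint32_t f) { flags &= ~f; }
};
```

Code calling `setFlags()` to enable verbose output would silently turn the default off as well; the additive variant doesn't have that problem.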
Ah, the good old times when my brain was not yet overflowing with
unnecessarily deep knowledge about how GPUs (and OpenGL drivers in
particular) work. Nope, switching a blend state and calling glFinish()
won't generate any GPU work -- it will all be deferred until there's
something to actually draw, clear or copy.
'cuz I was writing some raw GL for Emscripten tests and arrived at
glClear(GL_COLOR);
without having any suspicion that this is just TOTALLY WRONG. The same
would go for Vulkan a few years down the road, when I'm so used to the
Magnum wrappers that I forget what the common pitfalls are.
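For the record, why it's wrong: glClear() takes a bitmask, while GL_COLOR is a plain enum meant for glClearBuffer*(). The constants below are copied from the GL headers; a sketch of the check the driver effectively performs:

```cpp
#include <cstdint>

// Values straight from the OpenGL headers:
constexpr std::uint32_t GL_COLOR = 0x1800;            // a glClearBuffer*() enum
constexpr std::uint32_t GL_COLOR_BUFFER_BIT = 0x4000; // a glClear() bitmask
constexpr std::uint32_t GL_DEPTH_BUFFER_BIT = 0x0100;
constexpr std::uint32_t GL_STENCIL_BUFFER_BIT = 0x0400;

// glClear() accepts only a combination of the *_BUFFER_BIT masks; any other
// bit set in the mask generates GL_INVALID_VALUE and nothing gets cleared
constexpr bool isValidClearMask(std::uint32_t mask) {
    return (mask & ~(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT|
                     GL_STENCIL_BUFFER_BIT)) == 0;
}
```

So glClear(GL_COLOR) just sets GL_INVALID_VALUE instead of clearing the color buffer.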
There are no SSO / DSA extensions on that platform, so it doesn't make
sense to do this extra indirection. Saves another kilobyte (237 -> 236
kB) in WebGL 2 magnum-gl-info.wasm.
Instead of wrapping glProgramUniform*() inside a member function, we now
call that function directly, which allows us to remove 2/3rds of
AbstractShaderProgram members. The non-DSA code path is still
implemented through member functions, though they are now static,
mimicking the signature of the DSA APIs and doing a use() + glUniform*()
internally.
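Roughly this shape -- stubbed GL entrypoints instead of the real thing, and names that only approximate the actual implementation:

```cpp
#include <string>
#include <vector>

// Stubbed GL entrypoints recording what gets called
std::vector<std::string> calls;
void glUseProgram(unsigned id) { calls.push_back("glUseProgram(" + std::to_string(id) + ")"); }
void glUniform1f(int, float) { calls.push_back("glUniform1f"); }
void glProgramUniform1f(unsigned, int, float) { calls.push_back("glProgramUniform1f"); }

struct AbstractShaderProgram {
    unsigned id;
    bool dsa; // whether the DSA / SSO entrypoints are available

    void use() { glUseProgram(id); }

    // Non-DSA fallback: a static function mimicking the glProgramUniform*()
    // signature, doing a use() + glUniform*() internally
    static void uniform1fImplementationDefault(AbstractShaderProgram& self,
        int location, float value)
    {
        self.use();
        glUniform1f(location, value);
    }

    void setUniform(int location, float value) {
        // On the DSA path the GL function is called directly, no wrapper
        if(dsa) glProgramUniform1f(id, location, value);
        else uniform1fImplementationDefault(*this, location, value);
    }
};
```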
Except CGL, iOS, AndroidApplication and AbstractXApplication which are
either too crappy or don't have the needed scaffolding for specifying
context flags yet.
We no longer have to use sizeof("...") to avoid useless strlen() calls
or check for nulls, because the StringView APIs are ACTUALLY SANE and
not full of nasty surprises and performance / security pitfalls like
both the C and C++ standard library functions. Good riddance.
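The core of why that's possible, sketched with a minimal standalone type rather than Corrade's actual StringView -- the literal operator gets the size from the compiler, so no strlen() at runtime and no null-termination assumptions:

```cpp
#include <cstddef>

// A minimal sketch, not the real StringView
struct StringView {
    const char* data;
    std::size_t size;
};

// The compiler passes the literal size directly; embedded nulls survive
constexpr StringView operator""_s(const char* data, std::size_t size) {
    return {data, size};
}
```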
The 64-bit flags are now always non-empty in WindowlessEglApplication
(containing the Windowless flag), so we should slice to 32-bit and
check that instead.
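In other words (with a hypothetical bit layout -- where exactly the Windowless flag lives is an assumption of this sketch):

```cpp
#include <cstdint>

// Assume the always-present Windowless flag sits in the upper 32 bits, so a
// plain truthiness check on the full 64-bit value would now always pass
constexpr std::uint64_t Windowless = std::uint64_t{1} << 32;

constexpr bool hasUserFlags(std::uint64_t flags) {
    // Slicing to 32 bits drops the implicit flag and checks just the rest
    return std::uint32_t(flags) != 0;
}
```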
With this flag set (which is done implicitly for all windowless apps
and, conversely, not done for windowed apps), the default framebuffer
state isn't touched in any way, which should avoid potential race
conditions with the default framebuffer being used on another thread.
This means that instead of 12 separate allocations we have just one,
allocating everything together in a contiguous piece of memory. That
should also be a bit more cache-friendly when accessing the state, as
it's not scattered around memory like crazy.
Because there are no Pointer indirections needed anymore, the State
members are just references now. That resulted in a lot of sweeping
changes around the whole GL library, but they're all trivial, changing
`->` to `.`, mostly.
There are two more nested allocations in the TextureState struct; I'll
take care of them in a separate commit.
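The layout boils down to something like this, with just two of the sub-states sketched in and none of the real members:

```cpp
// A sketch of the layout, not Magnum's actual State
struct BufferState { int boundBuffer = -1; };
struct TextureState { int boundTexture = -1; };

// Everything lives in one struct, so a single allocation covers it all
struct State {
    BufferState buffer;
    TextureState texture;
    // ...the remaining sub-states would follow here
};
```

The sub-states then sit contiguously inside the single State allocation and get accessed as plain references, `state.buffer` instead of `state->buffer`.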
This removes one unnecessary allocation from each application startup.
In some of the windowless apps the Platform::GLContext could be
put directly into the class, in other cases it had to be wrapped in an
Optional because we need delayed construction and/or earlier
destruction.
So if it touches the GL state in some way, it doesn't do so on an
already destroyed context. The windowless apps do this all implicitly
due to the WindowlessGLContext encapsulation.
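The pattern, sketched with std::optional standing in for Containers::Optional and a dummy context type in place of Platform::GLContext:

```cpp
#include <optional>

struct GLContext {
    bool created = true;
};

struct Application {
    // No separate heap allocation -- the context storage lives inside the
    // class, but construction can be delayed until the GL context is current
    std::optional<GLContext> context;

    void create() { context.emplace(); }
    // ...and destruction can happen early, before the rest of the app tears
    // down, so nothing touches an already destroyed context
    void destroy() { context.reset(); }
};
```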
The app does its own EGL-specific verbose printing and thus should
recognize this option the same way GL::Context does. Until now it was
only taking the command-line parameters into account and not the new
Configuration.
Until now, only the command-line variant of it was checked. Since on
some platforms this requires the app to explicitly request a debug
context, the app needs to handle the case when it's passed via a
Configuration as well.
We're not including windows.h there (fortunately!), so there's no point
in defining such a macro. Also, the proper way would be to define it
only if it's not already defined, to avoid macro redefinition warnings.
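I.e., the usual guard -- the macro name below is a placeholder, the actual one isn't named above:

```cpp
// Defining a macro only when it isn't defined already avoids redefinition
// warnings when the build system or another header got to it first
#ifndef SOME_CONFIG_MACRO
#define SOME_CONFIG_MACRO 1
#endif

// A second guarded definition (e.g. from another header) is now harmless
// and the first value wins:
#ifndef SOME_CONFIG_MACRO
#define SOME_CONFIG_MACRO 2
#endif

constexpr int configMacroValue = SOME_CONFIG_MACRO;
```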
I don't see a real use case for this API (and don't remember ever using
it) and it only causes extra overhead during context creation (and then
a ton of useless allocations at runtime).
Broken since 0ceb54ed7d, but the two code
paths actually differ only by an enum name so it didn't cause any
crashes. (I wonder why I need two different code paths at all.)
Disabling engine startup log or modifying enabled extensions /
workarounds from the application side was one of the common pain
points and this should *finally* solve the problem. This Configuration
is now inherited by the usual Platform::*Application::GLConfiguration /
Platform::Windowless*Application::Configuration classes people are used
to, so for the end user it's just as if these classes got a bunch of new
options.
Having this, I also extended the ContextGLTest to verify that the
Configuration and command-line options do what's expected, because that
had no automated tests until now. The test is mostly a copy of what I
did for Vulkan already, nothing special. Additionally all
Platform*ApplicationTest executables gained a new --quiet option to
verify that the GL::Context::Configuration subset gets correctly passed
from the Application code, because that's something we can't really
verify in an automated way.