I'm not sure why this restriction was there as nothing was preventing
them from being used. The attribute is only accessible through the
typeless attribute(), which gives back
`Containers::StridedArrayView2D<const char>` with the second dimension
size set to the full stride, and there it doesn't matter whether the
format is an array or not.
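To illustrate, a minimal sketch of the typeless access, taking the
Trade::MeshData case as an example (the mesh instance and the attribute
index are just placeholders):

    /* The second dimension spans the raw bytes of the attribute, so its
       array-ness makes no difference to the caller */
    Containers::StridedArrayView2D<const char> raw = mesh.attribute(0);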
This will be useful for joint IDs and weights, for example doing crazy
things like packing the IDs into an array of eight 4-bit numbers, saving
half the memory compared to the smallest builtin representation using
UnsignedByte[8].
Because it was no longer bearable with three UnsignedInt arguments in a
row, especially when some of them are only available on a subset of
platforms. And it would get even worse with the introduction of planned
features such as multiview or skinning.
Backwards compatibility is in place, as always. To ensure nothing
breaks, this commit still has all tests and snippets using the old API.
A bunch of new GLES- and WebGL-specific draw() variants got added in
b30d313ecd over a year ago, but so far
they weren't exposed in any of the Shaders because it'd mean a lot of
nasty copypasting and I just didn't like that.
So instead, there's a new macro to handle this messy work, and it's also
tested to ensure everything is still as it should be and that it works
even outside the Magnum namespace. This makes it much easier to
add new draw() variants (such as indirect draws, *finally*), without
having to update every shader implementation under the sun.
One difference is that I'm now always allowing drawTransformFeedback()
-- because that makes sense. It *is* possible to use a regular shader to
draw the result of an XFB, so it doesn't make sense to attempt to block
that.
The change to Shaders will be done in the next commit.
It's four pointers, twice as much as would be acceptable. Not sure why
this happened, maybe because all those cases used an ArrayView before
and I just changed the type without considering the difference in its
size?
Unfortunately this change also means a bump in the plugin interface
string, thus all scene converter plugins have to be updated as well.
Currently contains just one very silly Phong->PBR conversion utility,
but eventually it'll provide tools for simplifying, merging and
deduplicating materials.
The result of e.g. -15.0_degf is not Deg, but rather
Math::Unit<Math::Deg, Float>, and those didn't have a corresponding
TweakableParser defined. Now they do.
Funny how I didn't run into this until now.
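A minimal sketch of the deduction, just for illustration (the variable
name is arbitrary):

    using namespace Math::Literals;

    auto angle = -15.0_degf; /* deduces to Math::Unit<Math::Deg, Float>,
                                not Deg, which is the type the tweakable
                                parser then gets looked up for */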
The #line directive behaves differently on GLSL < 330. Who would have
thought. Another "fun" implication is that I didn't notice this until
now -- seems like I haven't really written any shader that needed
compatibility with old GL since 2012?? Heh.
I'm getting kinda pissed at the strange defaults. Should I be using a
different toolkit altogether because SDL IS FOR GAMES ONLY? Given how
many bug reports and complaints there are about "Dosbox blocking
screensaver", I'm beginning to think it's SDL's fault, not mine.
Let the users control what they want, ffs, don't enable problematic
features like blocking powersave or disabling compositor by default.
Those should be a runtime option anyway, similarly to how video players
block powersaving only when *an actual video is playing*.
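For reference, a minimal sketch of the SDL-side knob involved, assuming
plain SDL2 and that the hint is set early enough (before video init) to
take effect:

    /* Keep the screensaver working; SDL disables it by default */
    SDL_SetHint(SDL_HINT_VIDEO_ALLOW_SCREENSAVER, "1");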
Lists features and aliases as well as the documented contents of the
whole configuration file. Useful to avoid having to look up the online
docs when working on the command line.
It was implemented only for the Half type and not the others, and I just
felt like using it on a vector now, 12 years after the Vector class got
first added.
Funny/sad that this possibility took me so long to realize. Until now it
was "you can have command-line utilities or static plugins but never
both", and only with CMake 3.13+ it was possible to link static plugins
to these executables from outside. Now it's a builtin and supported
option.
A large portion of the needed changes was in the previous commit
already; this does just the remaining part, in particular ensuring EGL
is linked and SDL is told to use EGL as well -- GLFW was told so in the
previous cleanup commit already.
These two options were mutually exclusive, and both were doing the same
thing -- switching to EGL on desktop GL, or switching away from EGL on
GLES. That made all logic vastly more complicated than it should be, and
unfortunately it took me half a decade to realize that. The new logic is
significantly simpler everywhere.
As usual, the old options are still recognized by CMake on a deprecated
build (with a warning), and are still exposed both as CMake variables
and a preprocessor define. But the logic for them was quite complicated,
so I don't guarantee all cases are covered.
I also tried to clean up the dependent CMake options to allow building
GLX and WGL apps on GLES independently of whether EGL is used, but it's
quite a mess due to the limitations of CMake < 3.22. Build directories
that have the options switched randomly over a long time might start
misbehaving, but the initial build should work well.
The whole class was a bad idea, why create something that's 99% similar
to another application and has just one platform-specific workaround? Of
course it resulted in this code being completely untested and not even
built anywhere, because it served a tiny insignificant use case.
To avoid losing all the code, I did my best to merge this
into the WindowlessEglApplication. But since, again, EGL isn't
really used on any Windows platform, I can't even say it builds
properly. Maybe not even the original code built.
The whole time I thought this class didn't need such APIs due to being
rather short-lived. But now with async shader compilation it's no longer
so short-lived.
There's no reason for those to exist anymore -- originally they were
added in a hopeful attempt to make use of parallel shader compilation,
but in practice that meant compiling at most two or three shaders at
once and still stalling until that was done, so not that great at all.
The new APIs provide much better opportunities for parallelism.
Fun fact:
    CORRADE_INTERNAL_ASSERT_OUTPUT(vert.compile() && frag.compile());
is actually one character shorter than
    CORRADE_INTERNAL_ASSERT_OUTPUT(GL::Shader::compile({vert, frag}));
so not even typing convenience would be a reason to keep these.
Wasn't really possible to split this into multiple commits, so here's
the whole thing including delegation from and to the single-mesh APIs.
What's not done here and postponed for later is:
- an ability to feed the whole importer to it, filtering away data that
aren't supported by the converter
- corresponding changes in AbstractImageConverter, where it would now
primarily accept ImageData to future-proof for arbitrary extra
key/value data
This allows people to directly pass Containers::Array<Trade::MeshData>
there, without having to put the items into an annoying temporary
Containers::Array<Containers::Reference<const Trade::MeshData>>.
The Iterable header is included for backwards compatibility; apart from
that there should be no breaking change.
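A minimal sketch of the pattern this enables, with doSomethingWith()
being just a hypothetical stand-in for the actual Iterable-taking API:

    void doSomethingWith(
        const Containers::Iterable<const Trade::MeshData>& meshes);

    Containers::Array<Trade::MeshData> meshes; /* filled from an importer */
    doSomethingWith(meshes); /* no Reference temporary needed anymore */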
Because it somewhat confusingly may have implied that it's really
composed of 8-bit bools, and not bits. The same reasoning was used to
pick the name for Corrade's Containers::BitArray.
Backwards compatibility aliases are in place as usual, however the
internal BoolVectorConverter is now BitVectorConverter and there
unfortunately cannot be any backwards compatibility. This breaks only
GLM and Eigen integration in the magnum-integration repo, which I'm
fixing immediately. I don't expect any user code to use this internal
helper. For regular vectors maybe, for this one definitely not.
For quite a while, setSwapInterval() was reporting that "swap interval
was ignored by the driver". Since I used to have that behavior ages ago
on an NVidia Optimus machine (where it was just *impossible* to have
VSync, imagine that!!), I assumed it was a similar wart in Mesa and
didn't bother looking into it.
It turns out, however, that calling setSwapInterval(1) may result in
SDL_GL_GetSwapInterval() returning -1 instead of 1, thus helpfully
enabling late-swap behavior for me. Since -1 != 1, the code treated
that the same as if SDL_GL_GetSwapInterval() returned 0 (which was the
case with NV Optimus having broken VSync), even though it's not actually
an error.
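Roughly the logic after the fix, as a sketch rather than the verbatim
code, inside setSwapInterval(interval):

    if(SDL_GL_SetSwapInterval(interval) == -1) return false;
    const Int actual = SDL_GL_GetSwapInterval();
    /* -1 means late swaps got enabled, which is fine if vsync was wanted */
    return actual == interval || (interval == 1 && actual == -1);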
With the intention that those will eventually also contain things like
YUp / YDown, PremultipliedAlpha and such.
This commit is mostly just busywork, wiring this into [Compressed]Image,
[Compressed]ImageView and Trade::ImageData and ensuring the flags get
correctly propagated during moves and conversions. Unfortunately in case
of Trade::ImageData it meant deprecating the current set of constructors
in order to insert an ImageFlags parameter before the importer state
pointer.
The only non-trivial piece of logic is when a 2D [Compressed]ImageView
gets converted to a 3D one; there the Array bit is implicitly dropped,
as 2D arrays of 1D images are not really a thing.
Instead, it's now possible to add new flags when doing the conversion --
for example to turn a 2D image to a (single-layer) 2D array image.
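A sketch of that conversion, assuming the extra flags get passed to the
conversion constructor as described above, with data being some
appropriately sized memory:

    ImageView2D layer{PixelFormat::RGBA8Unorm, {256, 256}, data};
    /* Becomes a single-layer 2D array image with size {256, 256, 1} */
    ImageView3D array{layer, ImageFlag3D::Array};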
Ahem, missed those in 36ee7835d6. Yes,
putting compressed images into atlases is a desirable thing to do as
well, although it's now rather shitty due to lack of slicing.
Similar to what VertexFormat already has -- getting channel count,
channel format, sRGB and color vs depth/stencil properties out of a
PixelFormat value. Very useful for various image conversion plugins.
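A sketch of the kind of queries this enables; the exact helper names
here are assumed to follow the pixelFormat*() naming used elsewhere:

    PixelFormat format = PixelFormat::RGBA8Srgb;
    UnsignedInt channels = pixelFormatChannelCount(format); /* 4 */
    bool srgb = isPixelFormatSrgb(format);                   /* true */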
For consistency with how VertexFormat and other enum helpers are named.
The compressedBlockSize() and compressedBlockDataSize() are also renamed
to compressedPixelFormatBlockSize() and
compressedPixelFormatBlockDataSize().
While backwards compatibility aliases are in place, a breaking change
is that Image classes now look for pixelFormatSize() instead of
pixelSize(). This is used e.g. when passing GL::PixelFormat /
GL::PixelType to the image classes, instead of the generic PixelFormat.
While the extension point is useful, it's unlikely that any project was
defining its own pixel format enum and pixelSize() for a D3D or Metal
renderer or whatnot, so the breakage should have no practical impact.
Allows creating a StridedArrayView slice on the sizes to feed them to
various algorithms, instead of first having to copy the sizes out to a
freshly allocated array.
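For context, a sketch of the slicing pattern this enables; the Item
struct and its size member are hypothetical stand-ins for the actual
structure:

    struct Item { Vector2i size; /* other fields */ };

    /* items is a Containers::ArrayView<const Item> coming from elsewhere */
    Containers::StridedArrayView1D<const Vector2i> sizes =
        Containers::stridedArrayView(items).slice(&Item::size);
    /* the sizes can be fed to an algorithm directly, no copying needed */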