This might eventually be a supported case (an object referencing three
meshes with different primitives), but let's just cover the existing
code for now.
Basically just making use of all the APIs that got invented over the last
10 years, such as instanced test cases (sketched below) instead of
repeating the same test code with just different strings, or accessing
meshes directly by name instead of going through meshForName().
Additionally, the test files were renamed to group them better visually,
with invalid cases separated from the valid ones so it's possible to have
instanced tests for those.
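For illustration, a rough sketch of what such an instanced test case
looks like with TestSuite -- the data, file names and messages here are
made up, not taken from the actual tests:

#include <Corrade/Containers/ArrayView.h>
#include <Corrade/TestSuite/Tester.h>

using namespace Corrade;

const struct {
    const char* name;
    const char* file;
    const char* message;
} InvalidData[]{
    {"no version", "invalid-no-version.gltf", "missing version"},
    {"no scene", "invalid-no-scene.gltf", "no scene to import"}
};

struct MyTest: TestSuite::Tester {
    explicit MyTest() {
        /* One test function, instanced over all entries in InvalidData */
        addInstancedTests({&MyTest::invalid},
            Containers::arraySize(InvalidData));
    }

    void invalid() {
        auto&& data = InvalidData[testCaseInstanceId()];
        setTestCaseDescription(data.name);

        /* ... open data.file and verify it fails with data.message ... */
    }
};

CORRADE_TEST_MAIN(MyTest)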
FUCKING stupid defaults. CMake's default is no less stupid. I'm MAD, why
would anybody even want to have the build run sequentially on today's
machines?! If your build is crap and can't run in parallel, FIX YOUR
CODE, but don't make the other 99% of users suffer!!
Because the name somewhat confusingly may have implied that it's composed
of 8-bit bools and not bits. The same reasoning was used to
pick the name for Corrade's Containers::BitArray.
Backwards compatibility aliases are in place as usual; however, the
internal BoolVectorConverter is now BitVectorConverter and there
unfortunately can't be any backwards compatibility for that one. This
breaks only the GLM and Eigen integration in the magnum-integration repo,
which I'm fixing immediately. I don't expect any user code to use this
internal helper. For regular vectors maybe, for this one definitely not.
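For context, a quick sketch of where the type typically shows up in user
code -- component-wise vector comparison (the old BoolVector name keeps
working through the deprecated alias):

#include <Magnum/Magnum.h>
#include <Magnum/Math/BitVector.h>
#include <Magnum/Math/Vector3.h>

using namespace Magnum;

/* Component-wise comparison gives a BitVector (formerly a BoolVector) */
const Math::BitVector<3> below =
    Vector3{0.5f, 2.0f, 0.25f} < Vector3{1.0f};
const bool anyBelow = below.any(); /* true -- components 0 and 2 are set */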
It's just not worth the pain. I have to reduce the parallelism from 24
to 6, and even then it still sometimes gets stuck. At that point it's
about twice as slow, so the fact that this costs half the credits isn't
really a win.
This reverts commit 9d61a63553.
This reverts commit 9c4f2ceea2.
This reverts commit 80b7694468.
For quite a while, setSwapInterval() was reporting that "swap interval
was ignored by the driver". Since I used to have that behavior ages ago
on an NVidia Optimus machine (where it was just *impossible* to have
VSync, imagine that!!), I assumed it was a similar wart in Mesa and
didn't bother looking into it.
It turns out, however, that calling setSwapInterval(1) may result in
SDL_GL_GetSwapInterval() returning -1 instead of 1, thus helpfully
enabling late-swap behavior for me. Since -1 != 1, the code treated that
the same as if SDL_GL_GetSwapInterval() returned 0 (which was the case
with NV Optimus having broken VSync), but it's not actually an error.
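Roughly, the fixed check just needs to treat -1 as a success when 1 was
requested. A simplified sketch of that logic (not the actual
Sdl2Application code):

#include <SDL.h>

/* Simplified sketch, not the actual Sdl2Application implementation */
bool trySetSwapInterval(const int interval) {
    if(SDL_GL_SetSwapInterval(interval) == -1)
        return false; /* setting the interval failed outright */

    const int actual = SDL_GL_GetSwapInterval();
    /* -1 means the driver gave us late-swap / adaptive vsync, which is a
       perfectly acceptable answer to a request for 1, not an "ignored by
       the driver" case */
    if(actual != interval && !(interval == 1 && actual == -1))
        return false;

    return true;
}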
They are required to have the same format already, so requiring the same
flags makes sense as well -- what's the point of saving a multi-level
cube map with one of the levels being just a 2D array, anyway?
Array texture and cubemap (array) subimage queries get
ImageFlag*D::Array, cubemap whole-image queries get ImageFlag3D::CubeMap,
and cubemap array whole-image queries get both ImageFlag3D::CubeMap and
ImageFlag3D::Array. No flags are checked when uploading images or when
downloading images to views -- using an array texture as a cube map and
vice versa is a valid use case, and others probably are as well, so
there's no point.
The flag restrictions will come into play once ImageFlag*D::YUp / YDown
etc. are a thing.
With the intention that those will eventually also contain things like
YUp / YDown, PremultipliedAlpha and such.
This commit is mostly just busywork, wiring this into [Compressed]Image,
[Compressed]ImageView and Trade::ImageData and ensuring the flags get
correctly propagated during moves and conversions. Unfortunately, in the
case of Trade::ImageData it meant deprecating the current set of
constructors in order to insert an ImageFlags parameter before the
importer state pointer.
The only non-trivial piece of logic is that when a 2D
[Compressed]ImageView gets converted to a 3D one, the Array bit is
implicitly dropped, as 2D arrays of 1D images are not really a thing.
Instead, it's now possible to add new flags when doing the conversion --
for example to turn a 2D image into a (single-layer) 2D array image.
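A sketch of that last case, assuming the extra flags can be passed
directly to the 2D-to-3D conversion constructor (placeholder data, names
not taken from actual code):

#include <Corrade/Containers/ArrayView.h>
#include <Magnum/ImageView.h>
#include <Magnum/PixelFormat.h>
#include <Magnum/Math/Vector2.h>

using namespace Magnum;

Containers::ArrayView<const char> pixels = ...;

/* A plain 2D image */
ImageView2D a{PixelFormat::RGBA8Unorm, {256, 256}, pixels};

/* The resulting 3D view is a single-layer 2D array image --
   ImageFlag3D::Array gets added during the conversion */
ImageView3D b{a, ImageFlag3D::Array};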
Ahem, missed those in 36ee7835d6. Yes,
putting compressed images into atlases is a desirable thing to do as
well, although it's now rather shitty due to lack of slicing.
I'm running out of the 400k free monthly credits and since macOS takes
50 credits per minute vs 5 for the Linux Docker, it's the obvious
candidate. ARM and Android take 10 instead of 5, and since we don't have
any ARM-specific code and the Android ES3 build doesn't run GL tests,
those are also non-essential.
Will revert this at the start of July.
The main part of the build time is fetching packages (where I doubt
multithreading can help anything) and running tests, which is serial to
have the output reasonably ordered. The actual build takes less than half
of the total time, so going with the smallest resource class instead of
the medium one shouldn't make that much of a difference. But it
uses half the credits, and that's what matters.
Similar to what VertexFormat already has -- getting channel count,
channel format, sRGB and color vs depth/stencil properties out of a
PixelFormat value. Very useful for various image conversion plugins.
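Something along these lines, sketched -- the exact helper names are
assumed to follow the VertexFormat naming pattern:

#include <Magnum/PixelFormat.h>

using namespace Magnum;

const PixelFormat format = PixelFormat::RGBA8Unorm;
const UnsignedInt channels = pixelFormatChannelCount(format); /* 4 */
const PixelFormat channel = pixelFormatChannelFormat(format); /* R8Unorm */
const bool srgb = isPixelFormatSrgb(format);                  /* false */
const bool ds = isPixelFormatDepthOrStencil(format);          /* false */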
For consistency with how VertexFormat and other enum helpers are named.
The compressedBlockSize() and compressedBlockDataSize() helpers are also
renamed to compressedPixelFormatBlockSize() and
compressedPixelFormatBlockDataSize().
While backwards compatibility aliases are in place, a breaking change
is that Image classes now look for pixelFormatSize() instead of
pixelSize(). This is used e.g. when passing GL::PixelFormat /
GL::PixelType to the image classes, instead of the generic PixelFormat.
While useful, it's unlikely that any project was defining its own pixel
format enum and pixelSize() for a D3D or Metal renderer or whatnot, so
the breakage should have no practical impact.
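For the record, the pattern that would have been affected -- a
hypothetical project-specific format enum with its own size query. All
names below are made up, and the exact way the image classes look the
function up is an assumption:

#include <Magnum/Types.h>

namespace MyRenderer {

enum class PixelFormat { RGBA8, RG16F };

/* Previously this would have been named pixelSize(); the image classes
   now look for pixelFormatSize() instead */
Magnum::UnsignedInt pixelFormatSize(const PixelFormat format) {
    switch(format) {
        case PixelFormat::RGBA8: return 4;
        case PixelFormat::RG16F: return 4;
    }
    return 0;
}

}

The image classes would then pick this up when such an enum is passed to
them in place of the generic PixelFormat.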
Together with the parent commit, it's now possible to do THE
UNTHINKABLE:
Containers::Array<Trade::ImageData2D> inputImages = ...;
auto out = TextureTools::atlasArrayPowerOfTwo({2048, 2048},
    stridedArrayView(inputImages).slice(&Trade::ImageData2D::size));
An actual use case, in fact. Because why not.