Do it always when Flag::TextureArrays is set, not inside the handling of
some particular texture, as doing it there won't work when other
textures are added/tested.
Before, the object ID was always enabled and tested, which could make
some errors undetectable. Plus this makes the test more flexible for
further additions.
The proper practice is to have GLES and WebGL requirements separated,
as the two editions diverge more and more and treating one as a subset
of another no longer works.
This ended up being far more complicated than anticipated -- the utils
now make use of filterExceptAttributes() in order to properly preserve
the index type when passing a MeshData through. Before an
implementation-specific index type was lost in the process due to
passing just the view to MeshIndexData constructor and not also the
type, and because keeping the metadata is a somewhat involved process, this
had to wait until there's an alternative to reference() that allows me
to drop all attribute information. And that's what
filterExceptAttributes() does.
Then, interleave() implicitly doesn't preserve strided index buffers
because GPUs don't like them, and since implementation-specific index
types always result in strided index buffers, this was then immediately
dying inside interleave() as it's not possible to tightly-pack an index
buffer of an unknown type. Thus, the function now has to expose
InterleaveFlags and propagate them to interleave() in order to allow the
user to preserve the index buffer without repacking it.
Besides being useful for end users as a more convenient alternative to
recreating the MeshData by hand, I need to use this inside the
transform() utilities to preserve all index buffer properties without
copypasting nasty code everywhere.
As implementation-specific index types cause the index buffer to be
strided, which is not allowed by default, these work only if
InterleaveFlag::PreserveStridedIndices is set. Originally I thought
about straight-out disallowing implementation-specific types here,
unfortunately that broke some valid use cases so I backtracked on that
decision.
In the future I might change the default to preserve
implementation-specific indices with sane strides (i.e., positive powers
of 2), but that will only be done once similar treatment is finished for
attributes (currently zero and negative strides are just disallowed
completely).
The previous commits changed *nothing* related to this code, yet somehow
a Release build started warning about this variable maybe being used
uninitialized in the attribute checking loop even though it's *clearly*
not the case.
Tools are getting less useful and more annoying DAY BY DAY.
This mirrors what's done already for implementation-specific vertex
formats, thus:
* Ability to construct the classes without tripping up when trying to
check for type size in various asserts
* Providing zero-size type-erased access in indices() and
mutableIndices()
* Disallowing typed and convenience access
The type is now extended to 32 bits. In the GL and Vk libraries it means
one can now do things like
MeshIndexType type = meshIndexTypeWrap(GL_UNSIGNED_BYTE);
and passing that to GL::Mesh or Vk::Mesh will cause it to use the value
directly, instead of doing a mapping from a generic type. The *real* use
case for this is however to allow custom index buffer representations in
Trade::MeshData. Support for that will be hooked up in the following
commit.
Otherwise it makes a tightly-packed copy. This also means that arbitrary
padding before/after the index buffer is preserved only if the data can
be transferred without a copy -- otherwise it's faster to just drop the
padding and copy only the used part.
The reference() and mutableReference() utils preserve that implicitly as
MeshData::indices() now returns a strided array view containing all
needed information.
Also not something the classic GPU vertex pipeline can handle, but
useful for other scenarios. Subsequently, support for array indices will
be added, making it possible to directly represent for example OBJ
files, where each attribute has its own index buffer.
I tried, I really did, but the downsides eventually just outweighed the
potential of this feature and I gave up. May try to tackle this again in
the future, but not now.
This is not something the classic GPU vertex pipeline can handle
(except maybe Vulkan, which can handle zero strides for instanced
attributes?), but useful for other scenarios. This means existing code
needs to be aware of and handle the new corner case.
Since the main speed advantage of the function is that it hashes a
*whole* vertex together instead of going through the attribute arrays
one by one, it can't really operate on whatever funny interleaved layout
it was given as there may be padding bytes with random content, breaking
the deduplication.
Since checking that a layout is really padding-less is rather complex,
the repacking is now always performed. This also means the && overload
makes no sense anymore and thus it was dropped.
This makes the test added in the previous commit not assert anymore, and
behave the same as the padding-less case.
Leads to an assertion inside StridedArrayView constructor, because I
just pass through the original attribute offsets/strides even though
interleavedMutableData() gives me a much smaller stride.
Moreover, because I just perform the deduplication on the original
vertex data *including* all random padding, the duplicate removal won't
always be able to find all duplicates. So test that case here as well --
turns out we just have to repack the data completely, after all.