Neither of those is really critically important. A failure not being
reported as a failure is fine (just don't make the shader broken in the
first place), and multiple color output bindings should be in the
shader source anyway.
This was quite a mess, with the whole array name copy-pasted into each
and every access. I mean, yeah, originally I thought this would be
*the* usage pattern, but oh god, this resulted in SO MANY copy-paste
errors.
Which ultimately means that the two annoying NVidia failures in the
TextureGLTest related to pixel storage in compressed 3D images are no
longer failing, as a result of the cleanup.
Following the spirit of extension-based functionality, the entrypoints
are always available but do something (i.e., call the actual WebGL API)
only if the extension is advertised. Which it is only on Emscripten
3.1.66+, because older versions don't have the corresponding
entrypoints, so there the extension is marked as disabled.
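Roughly this pattern, in a minimal sketch with hypothetical names (not
the actual Magnum internals) -- the slot is filled once at context
creation, so calling code needs no version checks or #ifdefs:

    using GLenum = unsigned int;        /* as in the GL headers */
    using EntrypointFn = void(*)(GLenum);

    extern bool extensionAdvertised;    /* hypothetical */
    void realWebGLEntrypoint(GLenum);   /* hypothetical */

    void entrypointNoOp(GLenum) {
        /* Extension not advertised, or Emscripten < 3.1.66 where the
           real entrypoint doesn't even exist -- silently do nothing */
    }

    /* Picked once at context creation */
    EntrypointFn entrypoint =
        extensionAdvertised ? realWebGLEntrypoint : entrypointNoOp;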
Additionally, EXT_polygon_offset_clamp is now also working on 3.1.66+,
but there's no wrapper for it yet.
Originally, years ago, it was supplying the same input value on all
targets and just special-cased the output on ES2 to account for the
lower bit depth. But then SwiftShader arrived, which doesn't actually
care that there's a RGBA4 framebuffer, and performs all calculations at
full precision. Which, well, kind of makes sense from a perf PoV, so I
adapted the input as well, quantizing it to 4 bits instead of 8.
BUT THEN some Mesa update happened -- or maybe it was like this always
and I just didn't realize because I was on NV cards for so long -- and
that input, which made SwiftShader produce the expected result, was
*further quantized*, deviating even more from what was expected.
So I now ditched all of that and am just comparing with a sufficiently
large delta. If some implementation returns exactly what would be
expected for an 8-bit framebuffer even though the framebuffer is 4-bit,
I don't care.
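For the record, the now-removed input quantization was roughly the
following (a sketch, not the actual test code):

    #include <cstdint>

    /* Round-trip an 8-bit channel value through 4 bits, matching what
       a RGBA4 framebuffer would store */
    std::uint8_t quantizeTo4Bits(std::uint8_t value) {
        const std::uint8_t nibble = (value*15 + 127)/255; /* 0..15 */
        return nibble*255/15;   /* expanded back to the 0..255 range */
    }

With the sufficiently-large-delta comparison, none of this is needed
anymore.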
Some of them were marked as constexpr even though they were calling
into a deinlined function internally. Now making sure that at least the
default construction is constexpr, and testing that all constexpr
functions actually behave like that.
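A sketch of what was wrong (hypothetical names) -- such code compiles,
but the constexpr on value() is a lie, since the function can never
appear in a constant expression (which is, strictly speaking,
ill-formed, no diagnostic required):

    int deinlinedHelper();    /* defined in some *.cpp, not constexpr */

    struct Foo {
        constexpr Foo() = default;    /* genuinely constexpr */
        constexpr int value() const {
            return deinlinedHelper(); /* can't be constant-evaluated */
        }
    };

    constexpr Foo foo;                /* OK, and now tested to stay so */
    /* constexpr int v = foo.value();   would fail to compile */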
Expands the test added in 789c52fd8a, for which a fix was done in
8f6f4053fc, but the fix forgot to handle the case where a buffer is
unbound. The test now fails with an invalid error.
The test added in the previous commit now passes. Besides the behavior
being different for single- and multi-bind calls, an additional
"interesting" behavior is that glBindBufferBase() / glBindBufferRange()
apparently also create the GL object if it doesn't exist yet (in
contrast to the multi-bind APIs, which *require* the GL objects to be
created beforehand).
Gotta admit I wasn't aware of this side effect at all. Once I realized
that, a fix seemed simple at first, only for me to make a second
(re)discovery: glBindBuffersRange() / glBindBuffersBase() do *not* have
this side effect.
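In other words (a sketch of the observed behavior assuming the usual
GL headers, not an actual test excerpt):

    GLuint buffers[2];
    glGenBuffers(2, buffers);  /* just names, no objects created yet */

    /* Creates the buffer object as a side effect, same as what
       glBindBuffer() would do */
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffers[0]);

    /* GL_INVALID_OPERATION, because buffers[1] isn't an existing
       buffer object and the multi-bind API refuses to create it */
    glBindBuffersBase(GL_UNIFORM_BUFFER, 1, 1, &buffers[1]);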
I get that it's all a shaky pile built on top of a long legacy, but
still, it never ceases to amaze.
Originally GL::hasTextureFormat() returned false on ES2 for
PixelFormat::R8Unorm, RG8Unorm, RGB8Unorm and RGBA8Unorm, because
glTexStorage() didn't work with the matching Luminance, LuminanceAlpha,
RGB and RGBA formats. But since the only ES2 platform nowadays is
basically just WebGL 1, which has neither EXT_texture_rg nor
EXT_texture_storage, this implicit failure made no sense and just made
textureFormat() (and the new genericPixelFormat() API) useless there.
Now it maps to them, and it's up to the caller to make sure
glTexStorage() doesn't get called with those, only glTexImage().
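I.e., something like this (a sketch, with `image` being some matching
ImageView2D):

    GL::Texture2D texture;
    /* Fine on ES2 / WebGL 1, the unsized Luminance / RGBA formats are
       accepted by glTexImage2D() underneath */
    texture.setImage(0, GL::textureFormat(PixelFormat::RGBA8Unorm),
        image);
    /* But not texture.setStorage() with the same format, because
       glTexStorage() doesn't accept the unsized formats */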
Furthermore, if formats from EXT_texture_rg are used,
genericPixelFormat() now also provides an inverse mapping of them back
to the generic PixelFormat. Before, basically *no* ES2 TextureFormat
would work with either of these; now all that have a (vaguely)
corresponding PixelFormat do.
An ad-hoc solution was already done in DebugTools::screenshot(), and
now I need it in another place. While not as fast as the O(1) mapping
from the generic format to the API-specific ones, due to the
potentially linear lookup, it could definitely be useful in general.
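A rough sketch of how such an inverse mapping can look (hypothetical,
abbreviated table and names) -- walking the same table that's used for
the O(1) forward mapping, just linearly:

    /* GL formats indexed by the generic PixelFormat values */
    constexpr GLenum FormatMapping[]{
        GL_RED, /* PixelFormat::R8Unorm */
        GL_RG,  /* PixelFormat::RG8Unorm */
        /* ... */
    };

    Containers::Optional<PixelFormat> genericPixelFormat(GLenum format) {
        for(std::size_t i = 0; i != Containers::arraySize(FormatMapping); ++i)
            if(FormatMapping[i] == format)
                return PixelFormat(i + 1); /* generic values start at 1 */
        return {};
    }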
This one was spectacular -- ALL uses of it also had to #include
<tuple> in order to std::tie() the result into separate major and minor
variables. So much compile time overhead for so little.
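Concretely, every caller ended up with some variation of this (a
sketch, with a hypothetical versionNumbers() standing in for the actual
API):

    #include <tuple> /* pulled in by every caller, just for std::tie() */

    int major, minor;
    std::tie(major, minor) = versionNumbers(); /* a std::pair<int, int> */

With Containers::Pair the result can be taken apart with .first() and
.second() instead, no <tuple> anywhere.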
Not that the C++ STL and exceptions would be anything to take
inspiration from, but there's std::out_of_range. Python's IndexError is
also specified as "index out of range", not "bounds".
Partially needed to avoid build breakages because Corrade itself
switched as well, partially because a cleanup is always good. Done
everywhere except for (STL-heavy) code that's deprecated, or for
SceneGraph-related APIs that are still quite full of STL as well.
So far this was only possible by creating a temporary MeshView, while
everything else (index/vertex count, base vertex, base instance, ...)
was changeable directly on the Mesh.
As this is now documented, 3rd party code can directly make use of
these without having to reinvent the same logic or, worse, rediscover
the same driver bugs.
The compatibility.glsl file however stays private -- I don't expect
real-world projects to need *that much* diversity in their supported
GLSL versions. Often the baseline is GLES 3.0, which makes a large part
of the file unnecessary, and projects might choose to, for example,
always rely on implicitly queried uniform locations to avoid having to
maintain two code paths.
"Funny" how this is the only API where I can't use glCreateTextures().
Like, it would have been so easy to just stop teaching glGen*() and
all their quirks and "this ID exists but it's not an object until you
bind it somewhere actually" to people altogether, BUT NO! FFS.
Given that I made a breaking change by returning Containers::String
instead of a std::string, I can take it further and replace std::pair
with Containers::Pair as well -- it won't bring any more pain to the
users, as they have to change their code anyway.
Co-authored-by: Hugo Amiard <hugo.amiard@wonderlandengine.com>
Same as in the previous commit, most cases are inputs, so a
StringStl.h compatibility include will do; the only breaking change is
GL::Shader::sources(), which now returns a StringIterable instead of a
std::vector<std::string> (ew).
The awesome thing about this whole change is that the Shader API now
allows creating a shader from sources coming either from string view
literals or from Utility::Resource completely without having to
allocate any strings internally, because all of those can be just
non-owning references wrapped with String::nullTerminatedGlobalView().
The only parts that aren't references are the #line markers, but
(especially on 64-bit) those easily fit into the 22-byte (or 10-byte on
32-bit) SSO storage.
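For example (a sketch, with the _s literal from Containers::Literals):

    using namespace Containers::Literals;

    /* The literal is global and null-terminated, so this creates a
       non-owning String with no allocation or copy whatsoever */
    Containers::String source =
        Containers::String::nullTerminatedGlobalView("void main() {}"_s);

Data coming from Utility::Resource is a global view as well, so it goes
through the same path.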
Also, various Shader constructors and assignment operators had to be
deinlined in order to avoid having to include the String header, which
would be needed for destroying the Array of sources during a move.
Co-authored-by: Hugo Amiard <hugo.amiard@wonderlandengine.com>
Most of these are just inputs, so a compatibility StringStl.h include
will do; the only exception is the callback, for which a deprecated
overload needs to stay (which the StringView one internally delegates
to).
Also explicitly testing with non-null-terminated strings -- the APIs
take an explicit size, so it shouldn't be a problem, but it's always
good to have this verified independently. Drivers are crap, you know.
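E.g., something along these lines (a sketch, not the actual test
code):

    using namespace Containers::Literals;

    /* A slice of a larger literal, deliberately not null-terminated */
    Containers::StringView label = "!!important!!"_s.slice(2, 11);
    CORRADE_VERIFY(!(label.flags() &
        Containers::StringViewFlag::NullTerminated));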
One consequence of no longer using an impossible-to-forward-declare
std::string is that I had to deinline the DebugGroup constructor,
because it no longer worked with just a forward-declared StringView.