Since 98a676ef65, on ES2 and WebGL1 the Texture::setStorage() emulation
passes the pixel format to both <format> and <internalFormat> of the
glTexImage() APIs. On desktop the go-to way to create an sRGB texture is
passing GL_SRGB to <internalFormat> and GL_RGB to <format>, but here
GL_RGB was passed to both, and thus the information about sRGB was lost.
With the new PixelFormat::SRGB and PixelFormat::SRGBAlpha enums, present
only on ES2/WebGL1, this case is fixed -- an sRGB texture format gets
translated to an sRGB pixel format, which is then used for both <format>
and <internalFormat>.
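Roughly, the difference between the two conventions looks like this
(just a sketch, the actual emulation code differs):

/* desktop / ES3: sized sRGB internal format, generic pixel format */
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8, width, height, 0,
    GL_RGB, GL_UNSIGNED_BYTE, nullptr);

/* ES2 / WebGL1 with EXT_sRGB: <internalFormat> has to match <format>,
   so the sRGB pixel format has to be used for both, otherwise the
   sRGB-ness is lost */
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_EXT, width, height, 0,
    GL_SRGB_EXT, GL_UNSIGNED_BYTE, nullptr);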
Another case is when EXT_texture_storage is available -- passing the
unsized GL_SRGB_EXT or GL_SRGB_ALPHA_EXT to glTexStorageEXT() is an
error, and there's apparently no mention of this case in any extension
spec, making it impossible to create sRGB textures using
EXT_texture_storage. I bit the bullet and tried passing the (numerical
values of) GL_SRGB8 and GL_SRGB8_ALPHA8 to it instead. At least on my NV
it worked, so I enabled these two in TextureFormat for ES2.
EXT_texture_storage is not available on WebGL1, so they are ES2-only.
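In other words, something along these lines (a sketch only -- no spec
explicitly allows the sized formats here, it just happens to work):

/* sized sRGB format passed to the EXT_texture_storage entry point */
glTexStorage2DEXT(GL_TEXTURE_2D, levels, GL_SRGB8_ALPHA8, width, height);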
These two sized TextureFormat::SRGB8 and TextureFormat::SRGB8Alpha8
formats are translated to GL_SRGB and GL_SRGB_ALPHA and so using them
unconditionally for all platforms (except WebGL1) "just works".
Used `#pragma warning(suppress: 4996)` before, which was apparently
doing nothing at all. That started to become problematic on the latest
MSVC 2017 update (19.11) -- the UWP builds are failing because warnings
are implicitly treated as errors -- so it's switched to
`#pragma warning(disable: 4996)` now.
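The pattern now used is roughly the following (a sketch -- the
surrounding push/pop scoping and the offending call are illustrative):

#pragma warning(push)
#pragma warning(disable: 4996)
    someDeprecatedCall(); /* hypothetical call that triggers C4996 */
#pragma warning(pop)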
There's a new DynamicAttribute class that is very similar to Attribute,
but it has the location and base type as runtime properties instead of
template parameters. This allows for more flexibility, but OTOH also
means more typing and more responsibility on the user's side. See
MeshGLTest for details and a usage comparison with the Attribute API.
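A rough sketch of the difference (enum and parameter names are
approximate, see MeshGLTest for the real thing):

/* compile-time Attribute: location and type are template parameters,
   the stride is calculated implicitly */
typedef Attribute<0, Vector2> Position;
mesh.addVertexBuffer(buffer, 0, Position{});

/* DynamicAttribute: location, component count and base type are
   constructor arguments, the stride has to be specified explicitly */
mesh.addVertexBuffer(buffer, 0, 2*4,
    DynamicAttribute{DynamicAttribute::Kind::Generic, 0,
        DynamicAttribute::Components::Two,
        DynamicAttribute::DataType::Float});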
They were utterly confusing, as it was completely unclear what the units
of the offset/size parameters are -- byte sizes or element counts (and
moreover, some of these APIs had the offset in bytes and the size in
element count, and some not). All of them are deprecated now, hinting
the user to convert to the non-templated APIs in combination with
Containers::arrayCast(). Moreover, the non-templated range map()
function doesn't return just a void* anymore, but a properly sized
ArrayView<char>. The old map() (which doesn't take a range) still
returns just a pointer (but a char* instead of a void* for consistency),
as getting the size there is non-trivial (and impossible on old
ES/WebGL). The switch to ArrayView might be a source-breaking change,
but I silently hope that everyone was just using the templated functions
(that are deprecated now) anyway. So, in short, this was before:
T* a = buf.map<T>(0, size_in_what_i_have_no_idea);
And this is now, with proper size safety and clear API:
ArrayView<T> a = Containers::arrayCast<T>(buf.map(0, size_in_bytes));
The deprecated APIs will be removed at some point in the future, as
usual.
Emscripten AL does not support specifying attributes and does not set an
ALC error when alcCreateContext() fails.
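For reference, the usual pattern this breaks (a sketch with plain ALC
calls, the attribute list is just an example):

/* the attribute list gets ignored by Emscripten's implementation */
const ALCint attributes[]{ALC_FREQUENCY, 44100, 0};
ALCcontext* const context = alcCreateContext(device, attributes);

/* and if the creation fails, this still reports ALC_NO_ERROR */
if(!context) {
    const ALCenum error = alcGetError(device);
    /* ... */
}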
Signed-off-by: Squareys <squareys@googlemail.com>
Fixes a case where passing TextureFormat::RGBA8 to Texture::setStorage()
on a platform w/o EXT_texture_storage would emit an error. Now GL_RGBA
is passed to both the format and internalFormat parameters.
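The emulated setStorage() then roughly boils down to the following, once
per mip level (a sketch, variable names illustrative):

/* unsized GL_RGBA passed to both <internalFormat> and <format> */
glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, levelWidth, levelHeight, 0,
    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);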
Fails. The problem was spotted on WebGL 2 and is an unfortunate
consequence of these events (a raw-GL sketch of the sequence follows
the list):
1. Neither EXT_DSA nor ARB_DSA is available, but VAOs are available,
which means virtually all ES 3 and WebGL 2 implementations and also
ES 2 / WebGL 1 with OES_VAO available.
2. Index buffer gets created with TargetHint::ElementArray, bound to
GL_ELEMENT_ARRAY_BUFFER and filled with data.
3. Mesh object with VAO inside is created and the index buffer from
above gets bound to GL_ELEMENT_ARRAY_BUFFER again to attach it to the
VAO.
4. Another index buffer, possibly for another mesh, gets created with
TargetHint::ElementArray and bound to GL_ELEMENT_ARRAY_BUFFER in
order to be filled with data. But because the VAO from above is still
bound, the index buffer attachment is then stomped on.
5. Rendering such a mesh will use a different index buffer, most
probably causing an out-of-range GL error and nothing being rendered.
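Expressed in raw GL calls, the sequence is roughly this (a sketch,
buffer and variable names illustrative):

/* (2) index buffer A created and filled */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferA);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeA, dataA, GL_STATIC_DRAW);

/* (3) mesh VAO created, index buffer A attached to it */
glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferA);

/* (4) another index buffer created and filled -- but the VAO is still
   bound, so this rebind replaces its index buffer attachment */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferB);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeB, dataB, GL_STATIC_DRAW);

/* (5) drawing the first mesh now uses index buffer B instead of A */
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, nullptr);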
SVGA3D has broken handling of glTex[ture][Sub]Image*D() for 1D arrays,
2D arrays, 3D textures and cube map textures where it uploads just the
first slice in the last dimension. This is only with copies from host
memory, not with buffer images. Seems to be fixed in Mesa 13, but I have
no such system to verify that on. Relevant commit in the Mesa sources:
2aa9ff0cda
This is one of the uglier workarounds -- I had to reintroduce multiple
code paths for glTexImage() which were removed when implementing
ARB_DSA support, and the workaround basically consists of a bunch of
functions that slice the image and call the original implementations
with each slice.
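The slicing essentially amounts to something like this (a sketch for the
3D case, variable names illustrative):

/* upload the image layer by layer instead of in one call, so the
   broken path that uploads only the first slice is never hit */
for(GLint z = 0; z != depth; ++z)
    glTexSubImage3D(GL_TEXTURE_3D, level, 0, 0, z, width, height, 1,
        format, type, data + z*sliceSize);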
For some reason a buffer *allocated* with this function caused
Mesh::draw() to draw nothing. I gave up on investigating the root cause
-- using the non-DSA glBufferData() "just works", so on this driver the
"svga3d-broken-dsa-bufferdata" workaround uses the non-DSA code path.
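For reference, the difference between the two code paths (a sketch, the
binding target picked arbitrarily):

/* the DSA path that results in nothing being drawn on this driver */
glNamedBufferData(buffer, size, data, GL_STATIC_DRAW);

/* the non-DSA fallback the workaround uses instead */
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);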