Was Magnum::GL::Extensions::GL before and the redundancy was completely
unnecessary. Potential future extensions coming from GLX, EGL or whatnot
will most probably be in the Platform namespace in a completely separate
file, so this is not a problem.
All code internal to the GL library is affected; outside code mostly
isn't, as that is handled by the compatibility alias.
The Sampler class was split into GL::Sampler (which is now mostly just a
placeholder for implementing OpenGL sampler objects), pairs of generic /
GL-specific SamplerFormat / GL::SamplerFormat, SamplerMipmap /
GL::SamplerMipmap, SamplerWrapping / GL::SamplerWrapping enums and the
GL-specific GL::SamplerCompareMode, GL::SamplerCompareFunction,
GL::SamplerDepthStencilMode enums.
The old Sampler class is marked as deprecated and aliases its enums to
the generic ones (or to the GL-specific ones in case a generic version
is not available).
This is quite big, so:
* There are new Magnum::PixelFormat and Magnum::CompressedPixelFormat
enums, which contain generic API-independent formats. In particular,
PixelFormat replaces GL::PixelFormat and GL::PixelType with a single
value.
* There's GL::pixelFormat(), GL::pixelType(),
GL::compressedPixelFormat() to convert the generic enums to
GL-specific. The mapping is only in one direction, done with a lookup
table (generic enums are indices to that table).
* GL classes taking the formats directly (such as GL::BufferImage) have
overloads that take both the GL-specific and generic format.
* The generic Image, CompressedImage, ImageView, CompressedImageView,
and Trade::ImageData classes now accept the generic formats
first-class. However, it's also possible to store an
implementation-specific value to cover cases where a generic format
enum doesn't have support for a particular format. This is done by
wrapping the value using pixelFormatWrap() or
compressedPixelFormatWrap(). Particular GPU APIs then assume it's
their implementation-specific value and extract the value back using
pixelFormatUnwrap() or compressedPixelFormatUnwrap(). There are also
isPixelFormatImplementationSpecific() and
isCompressedPixelFormatImplementationSpecific() functions that
distinguish these values.
* Many operations need the pixel size, and in order to have it even for
implementation-specific formats, a corresponding pixelSize()
overload is found via ADL on construction and the calculated size is
stored alongside the format. Previously the pixel size was only
calculated on demand, but that's not possible now. In case such an
overload is not available, it's possible to pass the pixel size
manually as well.
* In order to support the GL format+type pair in Image, ImageView and
Trade::ImageData, there's now an additional untyped formatExtra()
field that holds the second value.
* The CompressedPixelStorage class is now unconditionally available on
all targets, including OpenGL ES and WebGL. However, on OpenGL ES the
GL APIs expect all of it to be at default values.
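The conversion and wrapping machinery described above can be sketched
roughly like this (the enum values, the reserved bit and the table
contents are illustrative assumptions for this sketch, not the actual
Magnum definitions):

```cpp
#include <cassert>
#include <cstdint>

// Generic, API-independent format; values double as indices into the
// lookup table below (illustrative subset, not the real enum)
enum class PixelFormat: std::uint32_t {
    R8Unorm = 0,
    RGB8Unorm,
    RGBA8Unorm
};

// A bit reserved for wrapped implementation-specific values (which
// exact bit is used is an assumption of this sketch)
constexpr std::uint32_t ImplSpecificBit = 0x80000000u;

constexpr PixelFormat pixelFormatWrap(std::uint32_t implementationSpecific) {
    return PixelFormat(ImplSpecificBit|implementationSpecific);
}

constexpr bool isPixelFormatImplementationSpecific(PixelFormat format) {
    return std::uint32_t(format) & ImplSpecificBit;
}

constexpr std::uint32_t pixelFormatUnwrap(PixelFormat format) {
    return std::uint32_t(format) & ~ImplSpecificBit;
}

namespace GL {

// GL-specific format (numeric values taken from the GL headers)
enum class PixelFormat: std::uint32_t {
    Red = 0x1903, RGB = 0x1907, RGBA = 0x1908
};

// One-directional mapping -- the generic enum value is an index
constexpr PixelFormat FormatMapping[]{
    PixelFormat::Red, PixelFormat::RGB, PixelFormat::RGBA};

PixelFormat pixelFormat(::PixelFormat format) {
    // Wrapped values are assumed to be GL-specific already
    if(isPixelFormatImplementationSpecific(format))
        return PixelFormat(pixelFormatUnwrap(format));
    return FormatMapping[std::uint32_t(format)];
}

}
```

A GPU backend that receives a wrapped value just extracts it and passes
it through, which is what covers formats the generic enum doesn't know
about.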
I attempted to preserve backwards compatibility as much as possible:
* The PixelFormat and CompressedPixelFormat enums now contain generic
API-independent values. The GL-specific formats are present there,
but marked as deprecated. Use either the generic values or
GL::PixelFormat (together with GL::PixelType) and
GL::CompressedPixelFormat instead. There's a lot of ugliness caused
by this, but it seems to work well.
* *Image::type() functions are deprecated as they were too
GL-specific. Use formatExtra() and cast it to GL::PixelType instead.
* Image constructors take templated format or format+extra arguments,
so passing GL-specific values to them should still work.
The general part stays in the root namespace, while the GL-specific
stuff goes to the GL namespace. Also all GL-specific documentation was
moved to relevant APIs in the GL namespace. This finally allows me to
build PixelStorage.cpp as part of the root namespace.
The PixelStorage::pixelSize() function and the
PixelStorage::dataProperties() overload taking a pair of GL::PixelFormat
/ GL::PixelType enums are deprecated; use GL::pixelSize() and the
dataProperties() overload taking the pixel size directly instead. A lot
of code is still using these, including images; the deprecated aliases
are inlined in the header to avoid a compile-time dependency on the GL
library.
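What such a pixel-size calculation boils down to can be sketched as
follows (an illustrative subset with made-up structure -- the real
GL::pixelSize() also has to handle packed types such as
GL_UNSIGNED_SHORT_5_6_5, where the type alone defines the size):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

namespace GL {

enum class PixelFormat: std::uint32_t {
    Red = 0x1903, RGB = 0x1907, RGBA = 0x1908
};
enum class PixelType: std::uint32_t {
    UnsignedByte = 0x1401, Float = 0x1406
};

// Pixel size in bytes: channel count times per-channel size
std::size_t pixelSize(PixelFormat format, PixelType type) {
    std::size_t channels = 0;
    switch(format) {
        case PixelFormat::Red:  channels = 1; break;
        case PixelFormat::RGB:  channels = 3; break;
        case PixelFormat::RGBA: channels = 4; break;
    }
    std::size_t channelSize = 0;
    switch(type) {
        case PixelType::UnsignedByte: channelSize = 1; break;
        case PixelType::Float:        channelSize = 4; break;
    }
    return channels*channelSize;
}

}
```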
At the moment just the GL library itself w/o the tests, and without
backwards compatibility aliases. The following types were left in the
root namespace, despite being in the GL/ directory, as they will get
moved back soon:
* Image, CompressedImage and their dimensional typedefs
* ImageView, CompressedImageView and their dimensional typedefs
* PixelStorage
Not PixelFormat etc., that one will stay in the GL namespace and a
completely new PixelFormat enum will be provided in the root namespace.
Minimal updates (just the include guards) so Git is hopefully able to
detect the rename and track the history properly.
Everything except Magnum::GL doesn't compile now.
Since 98a676ef65, on ES2 and WebGL1 the
Texture::setStorage() emulation passes the pixel format to both <format>
and <internalFormat> of glTexImage() APIs. On desktop the go-to way to
create a sRGB texture is by passing GL_SRGB to <internalFormat> and
GL_RGB to <format>, but here GL_RGB was passed to both and thus the
information about sRGB was lost.
With the new PixelFormat::SRGB and PixelFormat::SRGBAlpha enums that
are present only on ES2/WebGL1 this case is fixed -- the sRGB texture
format gets translated to the sRGB pixel format, which is then used for
both <format> and <internalFormat>.
Another case is when EXT_texture_storage is available -- passing unsized
GL_SRGB_EXT or GL_SRGB_ALPHA_EXT to glTexStorageEXT() is an error and
there's apparently no mention of this in any extension, making it
impossible to create sRGB textures using EXT_texture_storage. I bit the
bullet and tried passing the (numerical value of) GL_SRGB8 and
GL_SRGB8_ALPHA8 to it. At least on my NV it worked, so I enabled these
two in TextureFormat for ES2. EXT_texture_storage is not available on
WebGL1, so they are enabled only on ES2.
These two sized TextureFormat::SRGB8 and TextureFormat::SRGB8Alpha8
formats are translated to GL_SRGB and GL_SRGB_ALPHA and so using them
unconditionally for all platforms (except WebGL1) "just works".
Fixes a case where passing TextureFormat::RGBA8 to
Texture::setStorage() on a platform w/o EXT_texture_storage would emit
an error. Now it passes GL_RGBA to both format and internalFormat
fields.
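The format selection in the setStorage() emulation can be sketched like
this (the function name and the fallback logic are assumptions based on
the description above; numeric values are from the GL headers):

```cpp
#include <cassert>
#include <cstdint>

enum class TextureFormat: std::uint32_t {
    RGBA8 = 0x8058,     // sized; not accepted by ES2 glTexImage2D()
    SRGB = 0x8C40,      // unsized GL_SRGB
    SRGBAlpha = 0x8C42  // unsized GL_SRGB_ALPHA
};

// For the ES2 setStorage() emulation through glTexImage2D(), both
// <format> and <internalFormat> get the same unsized value -- sized
// formats are mapped back to their unsized counterparts
std::uint32_t unsizedFormat(TextureFormat format) {
    switch(format) {
        case TextureFormat::RGBA8:     return 0x1908; // GL_RGBA
        case TextureFormat::SRGB:      return 0x8C40; // GL_SRGB
        case TextureFormat::SRGBAlpha: return 0x8C42; // GL_SRGB_ALPHA
    }
    return 0;
}
```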
SVGA3D has broken handling of glTex[ture][Sub]Image*D() for 1D arrays,
2D arrays, 3D textures and cube map textures where it uploads just the
first slice in the last dimension. This is only with copies from host
memory, not with buffer images. Seems to be fixed in Mesa 13, but I have
no such system to verify that on. Relevant commit in the Mesa sources:
2aa9ff0cda
This is one of the uglier workarounds -- I had to reintroduce multiple
code paths for glTexImage() that were removed when implementing ARB_DSA,
and the workaround consists basically of a bunch of functions that slice
the image and call the original implementations with each slice.
The GL queries are needed only if the user-provided pixel storage
doesn't contain enough information, in particular the dimensions and
byte size of the compression block.
So when these are present, no query for compressed image size is done
for full image queries (saving one API call) and no queries for block
dimensions and byte size are done for subimage queries (saving four API
calls and not requiring ARB_internalformat_query2).
The documentation was fixed and tests were updated to take the
ARB_internalformat_query2 requirement into account.
Practically all use cases use the returned value directly without
checking, so we might as well do the check ourselves. Added a new
function hasCurrent() and added deprecated backward-compatibility
conversion and -> operators.
Wow, that crept into a lot of places.
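The pattern boils down to something like this sketch (a minimal
stand-in, not the actual Context class):

```cpp
#include <cassert>

// current() returns a reference and does the check itself instead of
// returning a pointer and making every call site test for nullptr;
// hasCurrent() is for the rare caller that genuinely needs to check
class Context {
    public:
        static bool hasCurrent() { return _current != nullptr; }

        static Context& current() {
            assert(_current); // check done once here, not at every call site
            return *_current;
        }

        Context() { _current = this; }
        ~Context() { _current = nullptr; }

    private:
        static Context* _current;
};

Context* Context::_current = nullptr;
```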
Last dinosaur from the pointer age.
Pre-DSA code paths need to specify for which face we are querying the
level parameters, which meant that all other calls had to specify the
(implicit) target too. I'm also preparing to put a cubemap-specific
workaround in the level parameter query and that really shouldn't be
present in the generic implementation for all texture types.
The other place where a specific target is needed is in setImage()
implementations, but these are rather big chunks of code and I don't
feel like copying these verbatim to cubemap implementation just to
isolate the workaround in one place.
Pre-DSA code path needs to pass specific slice of a cube map to all
getters instead of just GL_TEXTURE_CUBE_MAP. I did that properly for
image size query, which weirdly enough, had its own implementation, but
forgot to do that in compressed image getters and, because I have DSA
drivers, never tested that on pre-DSA contexts.
Using a single implementation of the image size query with an explicit
target parameter now.
Pain and misery. The majority of functionality for 3D compressed images
now suddenly fails the test -- either this is very vaguely specified, or
I am very bad at understanding things, or there are bugs in my NVidia
drivers.
This was an awful feature. Kill me now.
Makes it far easier to detect pixel storage misconfigurations and
improperly sized data arrays. Data owning classes (Image,
Trade::ImageData) accept Containers::Array<char> while wrappers
(ImageView, BufferImage) accept Containers::ArrayView<const void>.
ImageView reinterprets the passed array as const char to enable pointer
arithmetic on the data.
The old way (constructor/setData() call accepting void*) is now marked as
deprecated and will be removed in some future release. Because decay of
fixed-size arrays to void* is preferred over calling the
Containers::ArrayView constructor, there are two more overloads to
properly handle const T(&)[n] and std::nullptr_t arguments.
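The overload dance can be sketched like this (with a minimal stand-in
for Containers::ArrayView and made-up member names; the real signatures
differ):

```cpp
#include <cassert>
#include <cstddef>

// Minimal stand-in for Containers::ArrayView<const void>
struct ArrayView {
    const void* data;
    std::size_t size;
};

struct ImageSketch {
    // Preferred: a sized view, allowing data size checks
    void setData(ArrayView view) { size = view.size; }

    // Without this overload a fixed-size array would decay to the
    // deprecated void* overload instead of becoming a sized view
    template<class T, std::size_t n> void setData(const T(&data)[n]) {
        setData(ArrayView{data, n*sizeof(T)});
    }

    // Disambiguates setData(nullptr), otherwise ambiguous between the
    // view and pointer overloads
    void setData(std::nullptr_t) { size = 0; }

    // Deprecated: size unknown, no checks possible
    void setData(const void*) { size = std::size_t(-1); }

    std::size_t size{};
};
```

The array overload wins over the void* one because binding the array
reference is an exact match, while decay to void* involves a pointer
conversion.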
Currently the TgaImporter and TgaImageConverter fail on images with row
length not aligned to 4, will fix that in followup commits.
Removed *Image::dataSize() and added *Image::dataProperties(), which
returns all properties separately for better introspection. This function
is now just a convenience alias to PixelStorage::dataProperties(), which
takes proper alignment and all other storage properties into account.
Yeah, sorry, I know, the enums are renamed for the second or third time
in a row: first they were Image::Format, then ImageFormat, then
ColorFormat and now PixelFormat. But this time it's final, this is the
last rename, and now everything is finally consistent:
* ColorFormat::DepthComponent -- depth is not a color, thus
PixelFormat::DepthComponent makes a lot more sense.
* There will be PixelStorage classes, which will be stored in images
alongside the PixelFormat/PixelType enums, making everything nicely
aligned.
* The GL documentation about glTexImage2D() etc. denotes the <format>
and <type> parameters as format and type of *pixel* data, so now we
are _finally_ consistent with the official naming.
I wonder why I did not choose PixelFormat originally. Anyway, the old
<Magnum/ColorFormat.h> header, ColorFormat, ColorType and
CompressedColorFormat types are now aliases to the new ones, are marked
as deprecated and will be removed in some future release (as always, I'm
waiting at least six months before removing the deprecated
functionality).
Apart from different include (<Magnum/Math/Color.h> instead of
<Magnum/Color.h>) there shouldn't be any visible change to the user. The
BasicColor3 and BasicColor4 classes are now Math::Color3 and
Math::Color4. The Color3, Color4, Color3ub and Color4ub typedefs in
Magnum namespace stayed the same.
BasicColor3 and BasicColor4 are now aliases to Math::Color3 and
Math::Color4, are marked as deprecated and will be removed in a future
release. The same goes for the <Magnum/Color.h> include, which now just
includes the <Magnum/Math/Color.h> header.
With pixel pack/unpack support it will be possible to create views onto
sub-images, so the class was renamed to reflect that.
The old Magnum/ImageReference.h and ImageReference types are now aliases
to the ImageView.h and ImageView types, are marked as deprecated and
will be removed in a future release.