* Calling Mesh::draw() with a parameter to start at an arbitrary vertex
  index might give users more freedom than they want to have (e.g.
  lines rendered where gaps should be, broken triangle strips...).
* Single-precision floats have a meaningful precision of ~6 decimal
  places; everything after that would be random garbage anyway, so we
  don't need anything "more precise" for the icosphere (see the sketch
  after this list).
* Texture1D can have only one target and it can be used as a framebuffer
  target.
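To illustrate the precision point: `FLT_DIG` is 6, so digits printed past roughly the seventh are noise. A standalone C++ snippet (not engine code):

```cpp
#include <cfloat>
#include <cstdio>

int main() {
    /* More decimal digits than a float can actually store */
    float phi = 1.6180339887498948f;
    std::printf("%.16f\n", phi);             /* digits past the ~7th are noise */
    std::printf("FLT_DIG = %d\n", FLT_DIG);  /* guaranteed decimal digits: 6 */
}
```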
Long-standing TODO; this can be used for in-game mirrors etc. I'm giving
up on shearing, as I think it makes sense only in 2D and I can't find any
reasonable use case for it yet.
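For the record, the math behind plane reflection, as a minimal standalone sketch (hypothetical helper, not the actual API): reflection across a plane through the origin with unit normal n is I - 2*n*nᵀ.

```cpp
#include <array>

/* Reflection across a plane through the origin with unit normal n:
   R = I - 2*n*n^T. Hypothetical helper, not the actual library API. */
std::array<std::array<float, 3>, 3> reflection(const std::array<float, 3>& n) {
    std::array<std::array<float, 3>, 3> r{};
    for(int i = 0; i != 3; ++i)
        for(int j = 0; j != 3; ++j)
            r[i][j] = (i == j ? 1.0f : 0.0f) - 2.0f*n[i]*n[j];
    return r;
}
```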
Framebuffer::attach*() doesn't need the target at all (meaning the
attachment will be used for both reading and drawing); another
misunderstanding on my side.
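A sketch of what this means at the GL level (raw GL calls, not the wrapper API; assumes a GL 3.x function loader such as GLEW or glad is already set up):

```cpp
/* Assumes a GL 3.x loader (GLEW, glad, ...) is already included */
void attachColorTexture(GLuint framebuffer, GLuint texture) {
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    /* The target parameter only selects which bound FBO gets modified;
       it says nothing about how the attachment will be used later: */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_TEXTURE_2D, texture, 0);

    /* The attachment now serves both as a read source (glReadPixels,
       blit) and as a draw target. */
}
```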
The extension is now used in all places where it can be used (except for
unimplemented features).
It prevents unwanted implicit conversions, e.g. from nullptr to Camera,
from Vector2 to Physics::Point etc. By making all constructors explicit
it is easier to routinely add the keyword to every new class than to
think about when it's needed and when not.
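A minimal sketch of what explicit buys us, with hypothetical class names echoing the ones above:

```cpp
class Object {};

class Camera {
    public:
        /* explicit prevents e.g. render(nullptr) from silently
           constructing a temporary Camera */
        explicit Camera(Object* parent = nullptr): _parent{parent} {}

    private:
        Object* _parent;
};

void render(const Camera&) {}

int main() {
    Object scene;
    render(Camera{&scene});  /* fine, the conversion is spelled out */
    /* render(nullptr); */   /* compiled silently before, error now */
}
```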
Viewport position and size are managed separately for each framebuffer,
and glViewport() is called in bind() (and also from setViewport(), if
the framebuffer is currently bound) if the viewport differs from the
current state. If only one framebuffer size is used through the whole
application lifetime, glViewport() doesn't need to be called at all.
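A simplified sketch of that tracking logic; all names here are illustrative, not the actual implementation:

```cpp
/* Illustrative names only, not the real classes */
struct Viewport {
    int x{}, y{}, width{}, height{};
    bool operator==(const Viewport& o) const {
        return x == o.x && y == o.y && width == o.width && height == o.height;
    }
};

Viewport currentViewport; /* what GL has set right now */

struct FramebufferSketch {
    Viewport viewport;

    void bind() {
        /* glBindFramebuffer(GL_FRAMEBUFFER, id); */
        if(!(viewport == currentViewport)) {
            /* glViewport(viewport.x, viewport.y,
                          viewport.width, viewport.height); */
            currentViewport = viewport;
        }
    }
};
```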
The default framebuffer is now accessible through the defaultFramebuffer
global variable; named framebuffers are handled the same way as before.
All operations (clear, setViewport, blit, read) are now member functions,
so they cannot be mistakenly applied while an unwanted framebuffer is
bound. Further rework (DSA, state tracking...) is on the way.
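A sketch of the design idea, with illustrative names only; the real class looks different:

```cpp
#include <cstdint>

/* Illustrative sketch of the design: operations are members, so they
   (re)bind first and always act on the framebuffer they're called on. */
class FramebufferSketch {
    public:
        explicit FramebufferSketch(std::uint32_t id = 0): _id{id} {}

        void clear() {
            bindInternal(); /* ensure *this* is bound first */
            /* glClear(...); */
        }

        void read(/* rectangle, target image */) {
            bindInternal();
            /* glReadPixels(...); */
        }

    private:
        void bindInternal() { /* glBindFramebuffer(GL_FRAMEBUFFER, _id); */ }
        std::uint32_t _id;
};

/* The default framebuffer (id 0) becomes just a global instance: */
FramebufferSketch defaultFramebufferSketch{0};
```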
Buffered* hinted that it had something to do with caching, streaming or
whatever. "Buffer texture" is now also consistent with the naming in the
specification.
It is now ambiguous whether data passed as `std::int8_t` are of
Type::Byte, Type::ByteInteger or whatnot, so the user must now
explicitly specify both format and type.
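A sketch of the new calling convention, with hypothetical enum and function names:

```cpp
#include <cstdint>
#include <vector>

/* Hypothetical enums echoing the ones mentioned above */
enum class Format { Red };
enum class Type { Byte, ByteInteger };

/* The caller states both explicitly instead of the overload guessing
   from std::int8_t alone: */
void setData(Format, Type, const std::vector<std::int8_t>&) {}

int main() {
    std::vector<std::int8_t> pixels(64);
    setData(Format::Red, Type::ByteInteger, pixels); /* no guessing */
}
```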
Got rid of the InternalFormat class; now it is all one big enum, as each
version (desktop, ES2, ES3) has different requirements and it couldn't
be done the old way anymore (moreover, that was a terribly ugly
solution).
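Roughly how such a single enum can look with per-target preprocessor guards; the guard names here are illustrative, the hex values are the real GL ones:

```cpp
/* Guard names are illustrative; values match the GL enums */
enum class TextureFormat: unsigned int {
    RGBA8 = 0x8058,     /* GL_RGBA8, available everywhere */
    #ifndef TARGET_GLES
    RGBA16 = 0x805B,    /* GL_RGBA16, desktop only */
    #endif
    #ifndef TARGET_GLES2
    RGBA32F = 0x8814    /* GL_RGBA32F, desktop and ES3, not ES2 */
    #endif
};
```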
On an OpenGL 3.3 context it was checking for support of all 3.0, 3.1 and
3.2 extensions even though isExtensionSupported() didn't use that
information at all.
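A sketch of the fix, with assumed types: only extensions not yet in core need an actual string lookup, everything already in core is supported by definition:

```cpp
#include <string>
#include <vector>

/* Illustrative types; the real version/extension machinery differs */
struct Extension {
    std::string name;
    int coreVersion; /* version in which it became core, e.g. 310 */
};

/* Collect only the extensions that still need a string lookup on this
   context; anything already in core is skipped entirely: */
std::vector<Extension> extensionsToQuery(int contextVersion,
                                         const std::vector<Extension>& all) {
    std::vector<Extension> result;
    for(const Extension& e: all)
        if(e.coreVersion > contextVersion) result.push_back(e);
    return result;
}
```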