Add `HrtfStatus` and queries in `Audio::Renderer`, `ALC::SOFT::HRTF` and
`@requires_alc_extension` notes for relevant methods in `Configuration`.
Signed-off-by: Squareys <Squareys@googlemail.com>
`ALC_SOFTX_HRTF` is the "in dev" HRTF extension present in OpenAL Soft
version 1.16.0, whereas `ALC_SOFT_HRTF` is the finished version of the HRTF
extension, probably released with OpenAL Soft version 1.17.0.
Signed-off-by: Squareys <Squareys@googlemail.com>
Left todo: once there is a common OpenAL extension registry again,
use that for the al_extension and alc_extension aliases/expansions.
Signed-off-by: Squareys <Squareys@googlemail.com>
Analogous to Magnum::Extensions, but without Version, since OpenAL has only
one version in use (OpenAL 1.1).
Signed-off-by: Squareys <Squareys@googlemail.com>
AMD is behaving the same as NVidia (at least on Windows) -- when
creating a core context with the minimum specified version set to 3.1, it
forces that version instead of going with the largest available version,
which, again, is pretty useless behavior.
Enabling the workaround on both Linux and Windows; the behavior is
confirmed on Windows, but I bet the driver does the same on Linux.
GL 3.2 has texelFetch() and layout(pixel_center_integer), which means
we can use integer coordinates with no precision loss when addressing
individual pixels in the source texture. In the versions before we have
to craft floating-point coordinates for texture() to grab the value of
the wanted pixel with no jumping around or interpolation.
This change improves the behavior *a bit*, but not fully. I'm postponing
this to the point when I have a unit test that compares the output with
ground truth.
For some reason this was causing the inner for cycle to loop
indefinitely on AMD cards. Not a problem on NVidia drivers, Intel
Windows drivers or Mesa. Thanks a lot to @LB-- for the investigation.
The shader code took the image size from a uniform if GL was older
than version 3.2 (GLSL 1.50). But the shader class was setting that
uniform only if GL was older than version 3.0. Thus the distance
field converter worked only on GL 2.1 and GL >= 3.2.
This was discovered only by accident, thanks to the quite recent
attempts to create core contexts by default. Because apparently setting
an explicit version requirement (core GL 3.1) on AMD will make
wglCreateContext() stay on that version instead of choosing any later
compatible version, similarly to how NVidia behaves. That's another bug
for later.
Before, the application was just creating the context the old way, so
all cards compatible with GL 3 were on at least GL 3.2, the missing
uniform setting did not affect anybody, and the bug was effectively
hidden.
Do I have to repeat it? Oh, and it also produces warnings only if a given
function is used, so I guess the user code has *a lot* more warnings of
this kind.
Making use of sincos() for Dual numbers, constructing DualQuaternion
from dual vector and scalar parts and using
DualQuaternion::isNormalized(). Also updated the math equation to be
consistent with conventions elsewhere.
Mainly a convenience function for when you want to compute the sin and
cos of the same, potentially long expression without repeated code or
temporary variables. On some architectures it might use a faster
instruction that computes both values in one shot.
- Use explicit conversion to `T`
- Use `std::` for `acos`, `cos`, `sin` to avoid use of double-only functions
- Do not mutate variables in math code to avoid confusion
Signed-off-by: Squareys <Squareys@googlemail.com>
Should help people understand the code and counteract, at least a bit,
the unreadability caused by the optimization commit.
Signed-off-by: Squareys <Squareys@googlemail.com>
Optimized with simple code tricks, some very complex math (like `2*0.5=1`)
and the principle of locality. Things the compiler would probably do for me
anyway. Was able to remove about six useless float multiplications.
Signed-off-by: Squareys <Squareys@googlemail.com>
Clang 3.7 complained that it would prevent copy elision optimizations.
Thanks, Clang! On the other hand, I have a weird feeling that this would
break the build somewhere else...
Now works both ways. The base class works with virtually any combination
that is supported by the underlying types, so e.g. Dual<Matrix3<T>>
could be multiplied/divided with Vector3<T> (result is Vector3<T>), with
Matrix3<T> (result is Matrix3<T>) or with T (result is Matrix3<T>).
The macros, on the other hand, because they are there only to help with
the implementation of *my* subclasses, restrict that to the only two
cases I need (i.e. multiplication with Dual<T> and Dual<T::Type> and
nothing else). Could be extended in the future if it needs to be.
When PlayableGroup.h is used without Playable.h, the compiler spits out a
very unintuitive error message, which would probably confuse unfamiliar
users.
This change fixes that so the error message no longer appears.
Signed-off-by: Squareys <Squareys@googlemail.com>