This avoids allocating a potentially large array when just the first
device is needed. Originally I did this in the hope of avoiding a stall
+ CPU power management issues due to some bad shit in the AMD driver,
but it seems that enumerating even just one device still makes it
stall. Sigh.
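For the record, querying just the first device is a single call with
the count clamped to one, roughly as in this sketch (raw Vulkan calls,
hypothetical helper name):

    #include <cstdint>
    #include <vulkan/vulkan.h>

    /* Hypothetical helper: asks the driver for at most one device instead
       of enumerating all of them. VK_INCOMPLETE is the expected result
       when more devices exist, so it's not treated as an error here. */
    VkPhysicalDevice pickFirstDevice(VkInstance instance) {
        VkPhysicalDevice device{};
        std::uint32_t count = 1;
        const VkResult result =
            vkEnumeratePhysicalDevices(instance, &count, &device);
        if((result == VK_SUCCESS || result == VK_INCOMPLETE) && count == 1)
            return device;
        return {};
    }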
Similarly to Corrade's Assert.h, so it's possible to include this
header without having to worry about irrelevant overhead when asserts
are disabled.
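The pattern boils down to something like the following sketch (not
Corrade's actual macros, just an illustration of the idea): when
asserts are compiled out, the wrapped expression still has to be
evaluated because the Vulkan call has side effects, so only the check
itself is dropped.

    #include <cassert>
    #include <vulkan/vulkan.h>

    /* Sketch of an assert macro for VkResult-returning calls. With NDEBUG
       the call is still evaluated, only the check is dropped -- a plain
       assert((call) == VK_SUCCESS) would discard the call itself too. */
    #ifdef NDEBUG
    #define VK_INTERNAL_ASSERT_SUCCESS(call) static_cast<void>(call)
    #else
    #define VK_INTERNAL_ASSERT_SUCCESS(call) assert((call) == VK_SUCCESS)
    #endif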
Also test it properly, as it should have been from the start.
I'm a bit unsure why there's a device extension that actually gets used
on an instance and doesn't need any enabling (and thus there's
currently no way to disable it to test all code paths, hmm).
I have more and more cases where I need to query device properties
later down the road (memory capabilities, device name, ...), and
leaving all this up to the user / making it impossible to do in the
library internals complicates everything too much.
Since there's a shitton of device properties, with a new bag of props
coming with every other new extension, I expect the queries to get
quite involved / complicated over time (chaining hundreds of structs
and such), so let's design this upfront in a way that avoids repeatedly
querying the same thing just because we needlessly discarded a fully
populated instance before.
It also means the users don't need to drag their own DeviceProperties
instance along anymore and can just let the Device take care of that.
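A sketch of the lazy-query idea (class shape and names are
illustrative, not the actual API):

    #include <vulkan/vulkan.h>

    /* Illustrative wrapper that queries VkPhysicalDeviceProperties at most
       once and serves all later lookups (device name, limits, ...) from
       the cached struct instead of hitting the driver again. */
    class DeviceProperties {
        public:
            explicit DeviceProperties(VkPhysicalDevice handle): _handle{handle} {}

            const VkPhysicalDeviceProperties& properties() {
                if(!_queried) {
                    vkGetPhysicalDeviceProperties(_handle, &_properties);
                    _queried = true;
                }
                return _properties;
            }

            const char* name() { return properties().deviceName; }

        private:
            VkPhysicalDevice _handle;
            VkPhysicalDeviceProperties _properties{};
            bool _queried = false;
    };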
Unfortunately the only nice way to make this work with DeviceCreateInfo
method chaining is to add & and && overloads for each method. But it's
quite easy to test that all of them work and properly return an r-value
reference, so it shouldn't be too much of a maintenance nightmare.
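A simplified sketch of the pattern (the real methods take arguments, of
course):

    #include <utility>

    class DeviceCreateInfo {
        public:
            /* The & overload is picked for named instances and returns an
               l-value reference for further chaining */
            DeviceCreateInfo& addQueues(/* ... */) & {
                /* ... record the queue request ... */
                return *this;
            }

            /* The && overload is picked for temporaries; it delegates to
               the & overload (*this is an l-value inside a member function)
               and casts back, so the chained result stays an r-value */
            DeviceCreateInfo&& addQueues(/* ... */) && {
                return std::move(addQueues());
            }
    };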
You won't believe it, but it took me over a month of sitting on the
shitter until this design idea materialized out of [..] air. The whole
story, in order:
- Vulkan doesn't allow one VkDeviceMemory to be mapped more than once.
  This is rather sad: since Vulkan best practices suggest allocating a
  large block and suballocating from that, the engine needs an extra
  layer that "emulates" mapping the suballocations for the users, but
  behind the scenes it inevitably has to map the whole VkDeviceMemory
  anyway and keep it mapped for as long as any of the sub-mappings is
  active (see the sketch after this list).
- Because if it mapped just a certain suballocation and the user then
  wanted to map another suballocation, it would have to discard the
  original mapping and create a new one spanning both suballocations,
  which risks the new mapping suddenly being in a different VM block,
  making all pointers to the previous mapping invalid.
- The Vulkan Memory Allocator implements this approach of mapping the
  whole thing, and because of all the bookkeeping it doesn't give
  direct access to the underlying VkDeviceMemory, making it rather
  hard to integrate.
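What such an emulation layer boils down to is refcounted mapping of the
whole block, something like this sketch (hypothetical names, thread
safety and error handling omitted):

    #include <cassert>
    #include <cstdint>
    #include <vulkan/vulkan.h>

    /* Hypothetical emulation layer: the whole VkDeviceMemory gets mapped
       on first use and stays mapped while any sub-mapping is alive, so
       pointers handed out earlier never get invalidated. */
    class MappedBlock {
        public:
            MappedBlock(VkDevice device, VkDeviceMemory memory):
                _device{device}, _memory{memory} {}

            /* First mapping maps everything, subsequent ones just offset
               into the existing pointer */
            void* map(VkDeviceSize offset) {
                if(!_refcount++)
                    vkMapMemory(_device, _memory, 0, VK_WHOLE_SIZE, 0, &_pointer);
                return static_cast<char*>(_pointer) + offset;
            }

            /* Actually unmapped only once the last sub-mapping is gone */
            void unmap() {
                assert(_refcount);
                if(!--_refcount) vkUnmapMemory(_device, _memory);
            }

        private:
            VkDevice _device;
            VkDeviceMemory _memory;
            void* _pointer{};
            std::uint32_t _refcount{};
    };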
Here I realized that:
- Most allocations won't ever need to be mapped, so the hiding and
  obfuscation done by VMA isn't needed for those -- and we want
  interoperability with 3rd party code, so preventing access to
  VkDeviceMemory is out of the question.
- There's KHR_dedicated_allocation, which (probably?) wasn't around
  when VMA was originally designed. The extension was created because
  a dedicated allocation actually *does* make sense in certain cases
  and on certain architectures, so providing a way to make those
  shouldn't be something "temporary, until a real allocator exists"
  but rather a well-designed API that's there to stay (see the sketch
  after this list).
- Except for iGPUs, the usual way to populate a GPU buffer is to
  first copy the data to some host-accessible scratch buffer and then
  do a GPU-side copy of that buffer to device-local memory. The
  scratch buffer is very likely to have a vastly different
  suballocation scheme than GPU buffers (grow & discard everything
  once it's all uploaded, for example), so again trying to put the two
  under the same allocator umbrella doesn't make sense.
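For context, a dedicated allocation is just one extra struct chained
into the allocation info -- a minimal sketch, with memory type
selection left to the caller and error handling omitted:

    #include <vulkan/vulkan.h>

    /* Allocates memory tied to one particular image. With
       VK_KHR_dedicated_allocation (core in Vulkan 1.1) the driver knows
       the memory will back only this image and can take a more optimal
       path. The memory type index is assumed to be picked by the caller
       from VkMemoryRequirements::memoryTypeBits. */
    VkDeviceMemory allocateDedicated(VkDevice device, VkImage image,
        uint32_t memoryTypeIndex)
    {
        VkMemoryRequirements requirements;
        vkGetImageMemoryRequirements(device, image, &requirements);

        VkMemoryDedicatedAllocateInfo dedicated{};
        dedicated.sType = VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO;
        dedicated.image = image;

        VkMemoryAllocateInfo info{};
        info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
        info.pNext = &dedicated;
        info.allocationSize = requirements.size;
        info.memoryTypeIndex = memoryTypeIndex;

        VkDeviceMemory memory{};
        vkAllocateMemory(device, &info, nullptr, &memory);
        vkBindImageMemory(device, image, memory, 0);
        return memory;
    }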
Thus:
- To avoid implementing a full-blown allocator right from the start,
  we'll first provide convenience APIs only for dedicated allocations
  -- making it possible to transfer memory ownership to an
  Image/Buffer so it can be treated the same way as in GL (see the
  sketch below), and later having the Image/Buffer constructor
  implicitly allocate a dedicated VkDeviceMemory.
- This default allocation will be subsequently equipped with
KHR_dedicated_allocation bits.
- Thanks to the extensible/layered nature of the design, the user can
  still be completely in control of allocations, managing
  VkDeviceMemory sub-allocations by hand.
Finally, once allocator APIs are figured out, the default Buffer/Image
behavior gets switched from a dedicated allocation to using an
allocator, and a dedicated allocation will only be used if the
KHR_dedicated_allocation bit is requested.
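In code, the ownership-transfer part of the plan could look something
like the following -- an entirely hypothetical sketch, not the actual
API:

    #include <optional>
    #include <utility>
    #include <vulkan/vulkan.h>

    /* Hypothetical move-only owner of a VkDeviceMemory */
    class Memory {
        public:
            Memory(VkDevice device, VkDeviceMemory handle):
                _device{device}, _handle{handle} {}
            Memory(Memory&& other) noexcept: _device{other._device},
                _handle{std::exchange(other._handle, VkDeviceMemory{})} {}
            ~Memory() { if(_handle) vkFreeMemory(_device, _handle, nullptr); }

            VkDeviceMemory handle() const { return _handle; }

        private:
            VkDevice _device;
            VkDeviceMemory _handle;
    };

    /* An Image that can take over the memory ownership, so the allocation
       is freed together with the image -- the same mental model as in GL */
    class Image {
        public:
            Image(VkDevice device, VkImage handle):
                _device{device}, _handle{handle} {}

            void bindDedicatedMemory(Memory&& memory) {
                vkBindImageMemory(_device, _handle, memory.handle(), 0);
                _memory.emplace(std::move(memory));
            }

        private:
            VkDevice _device;
            VkImage _handle;
            std::optional<Memory> _memory;
    };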
Memory type flags are put into a new, separate Memory.h header as those
will be needed more often than the (ever-growing) DeviceProperties --
from Image and Buffer constructors, in particular.
It's there only to create the instance right now. Not using it to test
instance-level functionality yet, as we'd only be working around it
anyway.
Otherwise I'd need to duplicate that for clang-cl, MinGW and so on.
It's enabled only on MSVC 2019 and newer, because I don't really care
about older versions -- those are mainly for compatibility with
existing code, and Vulkan is not existing code yet.
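For reference, the compiler check is just a preprocessor guard along
these lines (macro name is hypothetical; _MSC_VER >= 1920 corresponds
to MSVC 2019, and clang-cl has to be excluded explicitly since it
defines _MSC_VER too):

    /* Sketch of the guard. clang-cl defines _MSC_VER as well, so it's
       excluded via __clang__; MinGW doesn't define _MSC_VER at all. */
    #if defined(_MSC_VER) && !defined(__clang__) && _MSC_VER >= 1920
    #define MYLIB_MSVC2019_WORKAROUND /* hypothetical name */
    #endif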
It's not using Android's native Vulkan library because that's available
only since Android API 24 and the emulator for that version doesn't
even start, so we use our own minimal driver instead.
On Ubuntu, because SwiftShader's Vulkan doesn't compile on GCC 4 (or 5)
anymore, we use GCC 4.8 on 16.04 to compile-test against libvulkan-dev
and 18.04 to render-test with SwiftShader.
On Mac, because we want to work with both MoltenVK and the Vulkan
Loader, we compile-test against MoltenVK in the generic "all build" and
render-test against SwiftShader on a new, Vulkan-specific build. That
requires us to supply our own loader as well -- apparently no such
thing is on Homebrew, which is kinda silly (doesn't anybody do real
Vulkan work on this crazy platform?!).