mirror of https://github.com/mosra/magnum.git
apple-crashy-msaa-default-framebuffer
audio-import
catastrophic-cross
chainsaw-surgery
dpi-change-events
euler-xxx
findsdl-include-root
gltestlib-symbol-duplication
gpu-preference
inverted-ranges
ktx1-detection
master
meshdata-cereal-killer
mousecapture
multiwindow
next
sceneconverter
scenedata-optimizations
simd
vectorfields
zerocopy
snapshot-2013-08
snapshot-2013-10
snapshot-2014-01
snapshot-2014-01-compatibility
snapshot-2014-06
snapshot-2014-06-compatibility
snapshot-2015-05
snapshot-2015-05-compatibility
v2013.08
v2013.10
v2014.01
v2014.06
v2015.05
v2018.02
v2018.04
v2018.10
v2019.01
v2019.10
v2020.06
2 Commits (e44d5af48d8f19164d533acfa3f25a77d04c9eb7)
b6e41ab1a7 (6 years ago): Vk: initial APIs for binding a memory to an image.

You won't believe it, but it took me over a month of sitting on the shitter until this design idea materialized out of [..] air. The whole story, in order:

- Vulkan doesn't allow one VkDeviceMemory to be mapped more than once. This is rather sad: since Vulkan best practices suggest allocating a large block and suballocating from it, the engine needs an extra layer that "emulates" mapping the suballocations for users, while behind the scenes it inevitably has to map the whole VkDeviceMemory anyway and keep it mapped for as long as any of the sub-mappings is active.
- If it instead mapped just one suballocation and the user then wanted to map another, it would have to discard the original mapping and create a new one spanning both suballocations, which risks suddenly landing in a different VM block, invalidating all pointers into the previous mapping.
- The Vulkan Memory Allocator implements this map-the-whole-thing approach, and because of all the bookkeeping it doesn't give direct access to the underlying VkDeviceMemory, making it rather hard to integrate.

Here I realized that:

- Most allocations won't ever need to be mapped, so the hiding and obfuscation done by VMA isn't needed for those --- and since we want interoperability with 3rd-party code, preventing access to VkDeviceMemory is out of the question.
- There's KHR_dedicated_allocation, which (probably?) wasn't around when VMA was originally designed. The extension was created because a dedicated allocation actually *does* make sense in certain cases and on certain architectures. Providing a way to make those thus shouldn't be something "temporary, until a real allocator exists" but rather a well-designed API that's there to stay.
- Except on iGPUs, the usual way to populate a GPU buffer is to first copy the data to a host-accessible scratch buffer and then do a GPU-side copy from that buffer into device-local memory. The scratch buffer is very likely to have a vastly different suballocation scheme than GPU buffers (grow, then discard everything once it's all uploaded, for example), so again trying to put the two under the same allocator umbrella doesn't make sense.

Thus:

- To avoid implementing a full-blown allocator right from the start, we'll first provide convenience APIs only for dedicated allocations -- making it possible to transfer memory ownership to an Image/Buffer so it can be treated the same way as in GL, and later having the Image/Buffer constructor implicitly allocate a dedicated VkDeviceMemory.
- This default allocation will subsequently be equipped with KHR_dedicated_allocation bits.
- Thanks to the extensible/layered nature of the design, the user is still capable of being completely in control of allocations, managing VkDeviceMemory sub-allocations by hand.

Finally, once allocator APIs are figured out, the default Buffer/Image behavior gets switched from a dedicated allocation to using an allocator, and a dedicated allocation will be used only when the KHR_dedicated_allocation bit is requested.
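The first bullet above — map the whole VkDeviceMemory once and hand out offset pointers, unmapping only when the last sub-mapping is released — can be sketched roughly as follows. This is a hypothetical illustration, not Magnum's actual API: `MemoryBlock`, `mapSubRange()` and `unmapSubRange()` are made-up names, and a `std::vector` stands in for the device memory where real code would call `vkMapMemory()`/`vkUnmapMemory()`.

```cpp
// Hypothetical sketch: emulating per-suballocation mapping on top of a single
// whole-block mapping. Vulkan forbids mapping one VkDeviceMemory twice, so the
// first sub-mapping maps the whole block and later ones just offset into it;
// the block stays mapped until the last sub-mapping is released, keeping all
// outstanding pointers valid.
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

class MemoryBlock {
  public:
    explicit MemoryBlock(std::size_t size): _storage(size) {}

    // Map a [offset, offset + size) sub-range; maps the whole block on first use
    char* mapSubRange(std::size_t offset, std::size_t size) {
        assert(offset + size <= _storage.size());
        if(_mapRefCount++ == 0)
            _mapped = _storage.data(); // stand-in for vkMapMemory() on the whole block
        return _mapped + offset;
    }

    // Release one sub-mapping; unmap the block only when none remain
    void unmapSubRange() {
        assert(_mapRefCount > 0);
        if(--_mapRefCount == 0)
            _mapped = nullptr;         // stand-in for vkUnmapMemory()
    }

    bool isMapped() const { return _mapped != nullptr; }

  private:
    std::vector<char> _storage;        // stand-in for the device memory contents
    char* _mapped = nullptr;
    std::uint32_t _mapRefCount = 0;
};
```

Note how two overlapping sub-mappings return pointers into the same underlying mapping, so releasing one never invalidates the other — the hazard the second bullet describes.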
fcd0afb306 (6 years ago): Vk: add an Image wrapper.

Not exactly sure about the usage, so no docs yet. Docs will come once memory allocation is complete.
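The ownership-transfer idea from the first commit — let an Image either reference externally managed memory or take over a dedicated allocation so it behaves like a GL texture — can be sketched like this. Everything here is hypothetical: the class and method names are made up for illustration, and plain integers stand in for VkImage/VkDeviceMemory handles; real code would call vkBindImageMemory() and free the memory in the destructor.

```cpp
// Hypothetical sketch, not the actual Magnum Vk API: an Image that can bind
// memory either non-owningly (user manages VkDeviceMemory suballocations by
// hand) or owningly (a dedicated allocation the Image frees on destruction).
#include <cassert>
#include <utility>

using VkImageHandle = int;        // stand-in for VkImage
using VkDeviceMemoryHandle = int; // stand-in for VkDeviceMemory

class Memory {
  public:
    explicit Memory(VkDeviceMemoryHandle handle): _handle{handle} {}
    VkDeviceMemoryHandle handle() const { return _handle; }
    // Give up ownership of the underlying handle
    VkDeviceMemoryHandle release() { return std::exchange(_handle, 0); }
  private:
    VkDeviceMemoryHandle _handle;
};

class Image {
  public:
    explicit Image(VkImageHandle handle): _handle{handle} {}

    // Bind without ownership transfer: the caller keeps managing the memory,
    // e.g. when suballocating one big VkDeviceMemory by hand
    void bindMemory(const Memory& memory) { _memory = memory.handle(); }

    // Bind and take over ownership: the Image now owns its (dedicated)
    // allocation and would free it on destruction, like a GL texture
    void bindAndOwnMemory(Memory&& memory) {
        _memory = memory.release();
        _ownsMemory = true;
    }

    bool ownsMemory() const { return _ownsMemory; }
    VkDeviceMemoryHandle memory() const { return _memory; }

  private:
    VkImageHandle _handle;
    VkDeviceMemoryHandle _memory = 0;
    bool _ownsMemory = false;
};
```

The two overloads mirror the layered design the commit message describes: the convenience path (dedicated allocation, owned by the Image) coexists with the expert path (user-controlled suballocation), instead of one hiding the other.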