Each HMD driver now has to implement compute_distortion(), which the compositor
implementation calls (usually to generate a distortion mesh).
u_distortion_mesh contains implementations for the defaults (panotools, OpenHMD, vive).
Also adds a compute_distortion() function for the Vive distortion.
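As a rough illustration, a driver-side callback could look like the sketch below. The
struct names, the signature, and the simple radial polynomial are assumptions made for
illustration; they are not the exact Monado prototype or one of the u_distortion_mesh
defaults.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative per-channel UV result; not the exact Monado type. */
struct example_uv_triplet
{
	float r[2], g[2], b[2];
};

/* Illustrative distortion parameters for one view. */
struct example_distortion_values
{
	float lens_center[2]; /* normalized UV of the lens center */
	float k[3];           /* radial polynomial coefficients */
	float aberr[3];       /* per-channel chromatic scale (r, g, b) */
};

/*
 * Called by the compositor, usually once per vertex while generating the
 * distortion mesh: map a normalized view UV to one UV per color channel.
 */
static bool
example_hmd_compute_distortion(const struct example_distortion_values *vals,
                               uint32_t view, float u, float v,
                               struct example_uv_triplet *out)
{
	(void)view; /* same parameters for both views in this sketch */

	const float dx = u - vals->lens_center[0];
	const float dy = v - vals->lens_center[1];
	const float r2 = dx * dx + dy * dy;

	/* Simple radial polynomial: 1 + k0*r^2 + k1*r^4 + k2*r^6. */
	const float scale =
	    1.0f + r2 * (vals->k[0] + r2 * (vals->k[1] + r2 * vals->k[2]));

	float *dst[3] = {out->r, out->g, out->b};
	for (int i = 0; i < 3; i++) {
		dst[i][0] = vals->lens_center[0] + dx * scale * vals->aberr[i];
		dst[i][1] = vals->lens_center[1] + dy * scale * vals->aberr[i];
	}
	return true;
}
```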
There are differences between the OpenHMD and Panotools values; the main differences for now:
* psvr has 5 pano coefficients, ohmd has 3
* psvr uses viewport size and lens center in pixels for distortion calculation, ohmd in meters
* psvr uses different distortion scaling than ohmd
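To make the differences concrete, the two parameter sets could be laid out roughly as
below; the field names are illustrative and not necessarily the structs used by
u_distortion_mesh.

```c
/* Illustrative only; not necessarily the u_distortion_mesh structs. */
struct example_panotools_values /* psvr path */
{
	float distortion_k[5];  /* 5 pano coefficients */
	float viewport_size[2]; /* in pixels */
	float lens_center[2];   /* in pixels */
	float scale;            /* psvr-specific distortion scale */
};

struct example_ohmd_values /* ohmd path */
{
	float distortion_k[3];  /* 3 coefficients */
	float viewport_size[2]; /* in meters */
	float lens_center[2];   /* in meters */
	float scale;            /* scaled differently than the psvr path */
};
```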
It was left in as a debug measure, but is more confusing than useful,
especially with northstar directly generating a mesh and vive with its own shader.
For now add only the depth formats mandated by OpenGL to maximize the
chances of the Vulkan driver supporting a reasonable set of usage flags
for the formats.
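For reference, the OpenGL-mandated depth/stencil formats map onto Vulkan roughly as in
the sketch below; whether this is exactly the set and mapping exposed here is an
assumption.

```c
#include <vulkan/vulkan.h>

/* Vulkan formats commonly used for the depth/stencil formats that OpenGL
 * mandates; the exact set exposed by the compositor is an assumption here. */
static const VkFormat example_depth_formats[] = {
	VK_FORMAT_D16_UNORM,           /* GL_DEPTH_COMPONENT16 */
	VK_FORMAT_X8_D24_UNORM_PACK32, /* GL_DEPTH_COMPONENT24 */
	VK_FORMAT_D32_SFLOAT,          /* GL_DEPTH_COMPONENT32F */
	VK_FORMAT_D24_UNORM_S8_UINT,   /* GL_DEPTH24_STENCIL8 */
	VK_FORMAT_D32_SFLOAT_S8_UINT,  /* GL_DEPTH32F_STENCIL8 */
};
```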
This adds some Android support in the composition clients and fixes the breakage
from two commits ago.
Thanks to Jakob for finding my error in an earlier version.
As before, on the service side the GPU index the compositor runs on can be selected with
* XRT_COMPOSITOR_FORCE_GPU_INDEX=INDEX1
By default xrGetVulkanGraphicsDevice() will suggest the same GPU the compositor runs on.
It is also possible to override the GPU index suggested to applications with
* XRT_COMPOSITOR_FORCE_CLIENT_GPU_INDEX=INDEX2
The reason both of these are handled on the service side is that if the compositor and the
client run on different GPUs, the swapchains use linear tiling instead of optimal tiling.
To make the chosen GPUs comparable across the compositor's and the client's Vulkan instances,
VkPhysicalDeviceIDProperties.deviceUUID is used.
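A minimal sketch of how the service side could honor the override and obtain a UUID that
is comparable from the client's instance is shown below; error handling is omitted and
this is not the Monado implementation.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <vulkan/vulkan.h>

/* Pick the physical device (optionally forced via the environment variable)
 * and return its deviceUUID so another Vulkan instance can find the same GPU. */
static bool
example_select_gpu(VkInstance instance, uint8_t out_uuid[VK_UUID_SIZE])
{
	uint32_t count = 0;
	vkEnumeratePhysicalDevices(instance, &count, NULL);
	if (count == 0) {
		return false;
	}

	VkPhysicalDevice devices[16];
	if (count > 16) {
		count = 16;
	}
	vkEnumeratePhysicalDevices(instance, &count, devices);

	uint32_t index = 0; /* default: first enumerated GPU */
	const char *forced = getenv("XRT_COMPOSITOR_FORCE_GPU_INDEX");
	if (forced != NULL) {
		index = (uint32_t)atoi(forced);
		if (index >= count) {
			index = 0;
		}
	}

	/* deviceUUID is comparable across Vulkan instances (and processes). */
	VkPhysicalDeviceIDProperties id_props = {
	    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES,
	};
	VkPhysicalDeviceProperties2 props2 = {
	    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2,
	    .pNext = &id_props,
	};
	vkGetPhysicalDeviceProperties2(devices[index], &props2);

	memcpy(out_uuid, id_props.deviceUUID, VK_UUID_SIZE);
	return true;
}
```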
This patch passes the offset and extent properties to the layer shader
by extending the uniform. The fragment shader stage now also receives
the transformation uniform, which contains a has_sub boolean indicating
whether these properties are set, i.e. distinguishing between projection
and quad layers.
To avoid color bleeding, the subimage is sampled at global pixel
coordinates (as an ivec2) using the GLSL texelFetch function.
Projection layers will be sampled as before.
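A sketch of what the extended uniform could look like on the C side, assuming std140
layout, is shown below; the field names and exact layout are illustrative, not the
actual struct.

```c
#include <stdint.h>

/*
 * Illustrative C-side mirror of the extended layer-shader uniform (std140):
 * ivec2 members are 8-byte aligned and the bool travels as a 32-bit value.
 */
struct example_layer_transformation
{
	float mvp[4][4];       /* vertex transform, as before */

	/* Sub-image rectangle inside the swapchain image, in pixels, so the
	 * fragment shader can texelFetch() at global pixel coordinates. */
	int32_t sub_offset[2];
	int32_t sub_extent[2];

	/* Non-zero for quad layers with a sub-image; projection layers leave
	 * this at zero and are sampled as before. */
	uint32_t has_sub;
	uint32_t _pad[3];      /* pad the UBO block to a 16-byte multiple */
};
```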