Find and capture input from the cameras, and split it according to
frame type. Send long-exposure tracking frames through the AEG
module and on to the SLAM and hand-tracking trackers.
Add controller-emulated hand devices.
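Roughly, the split is a fan-out on frame type. Below is a minimal C
sketch of that fan-out, assuming simple callback sinks; frame_kind,
splitter, and the sink signatures are hypothetical stand-ins, not
Monado's actual frame-sink interface:

```c
/* Hypothetical frame-type tags: the driver distinguishes frames by
 * their exposure settings. */
enum frame_kind
{
	FRAME_LONG_EXPOSURE,  /* tracking frames */
	FRAME_SHORT_EXPOSURE, /* everything else */
};

struct frame
{
	enum frame_kind kind;
	/* image data, timestamps, ... */
};

/* Stand-in sink callback for the AEG, SLAM and hand-tracking consumers. */
typedef void (*sink_fn)(struct frame *f, void *userdata);

struct splitter
{
	sink_fn aeg_sink;  /* auto exposure/gain */
	sink_fn slam_sink; /* SLAM tracker */
	sink_fn hand_sink; /* hand tracker */
	void *userdata;
};

/* Fan a captured frame out by type: long-exposure tracking frames go
 * through AEG and on to both trackers; other frames are handled
 * elsewhere and ignored here. */
static void
splitter_push_frame(struct splitter *s, struct frame *f)
{
	if (f->kind != FRAME_LONG_EXPOSURE) {
		return;
	}
	s->aeg_sink(f, s->userdata);
	s->slam_sink(f, s->userdata);
	s->hand_sink(f, s->userdata);
}
```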
The native "fisheye62" camera distortion model is converted at
runtime to OpenCV Kannala-Brandt parameters using a TinyCeres
solver.
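The conversion amounts to a small least-squares fit: sample rays,
project them through the native model, and solve for the
Kannala-Brandt coefficients that reproduce the same pixel
coordinates. Here is a C sketch of the KB4 forward model and the
per-sample residual a solver such as TinyCeres would minimize;
kb4_project, kb4_fit_residual, and the parameter layout are
illustrative, and the fisheye62 evaluation and the solver loop are
omitted:

```c
#include <math.h>

/* Kannala-Brandt ("OpenCV fisheye") model: the distorted angle is a
 * polynomial in the incidence angle theta. */
struct kb4_params
{
	double k1, k2, k3, k4;
	double fx, fy, cx, cy;
};

/* Project a ray (x, y, z) through the KB4 model:
 * theta_d = theta * (1 + k1*theta^2 + k2*theta^4 + k3*theta^6 + k4*theta^8) */
static void
kb4_project(const struct kb4_params *p,
            double x, double y, double z,
            double *u, double *v)
{
	double r = sqrt(x * x + y * y);
	double theta = atan2(r, z);
	double t2 = theta * theta;
	double theta_d =
	    theta * (1.0 + t2 * (p->k1 + t2 * (p->k2 + t2 * (p->k3 + t2 * p->k4))));
	double scale = (r > 1e-8) ? theta_d / r : 1.0;
	*u = p->fx * x * scale + p->cx;
	*v = p->fy * y * scale + p->cy;
}

/* Residual for one sample: (u_ref, v_ref) is where the native
 * fisheye62 model projects the sampled ray; the solver searches for
 * KB4 parameters that drive these residuals to zero. */
static void
kb4_fit_residual(const struct kb4_params *p,
                 double x, double y, double z, /* sampled ray */
                 double u_ref, double v_ref,   /* fisheye62 projection */
                 double *ru, double *rv)
{
	double u, v;
	kb4_project(p, x, y, z, &u, &v);
	*ru = u - u_ref;
	*rv = v - v_ref;
}
```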
Port the Oculus Rift S driver across from OpenHMD as a native
Monado driver.
This is mostly the same as the OpenHMD 3DOF driver, with
slightly better HMD distortion correction, various small
fixes, and some capsense touch-detection support.
Controller poses are rotated 40° to match the grip pose.
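The grip-pose fixup is just a constant orientation offset applied to
each controller pose. A hedged C sketch follows; the rotation axis
(pitch about local X) and the quaternion helpers are assumptions for
illustration, not the driver's actual code:

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

struct quat
{
	double x, y, z, w;
};

/* Hamilton product: result = a * b. */
static struct quat
quat_mul(struct quat a, struct quat b)
{
	struct quat r;
	r.w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z;
	r.x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y;
	r.y = a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x;
	r.z = a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w;
	return r;
}

/* Rotate a controller orientation by 40 degrees about its local X
 * axis (the axis choice is an assumption; the driver uses whatever
 * offset lines the tracked frame up with the grip pose). */
static struct quat
apply_grip_offset(struct quat controller_orientation)
{
	double half = (40.0 * M_PI / 180.0) / 2.0;
	struct quat offset = {sin(half), 0.0, 0.0, cos(half)};
	return quat_mul(controller_orientation, offset);
}
```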
* Removed depthai_tracked_device; now you create a "SLAM" device, plug any frameserver into it, and you're done
* Consolidated the grayscale frameservers into a single one that gives you SLAM sinks
* Allowed different framerates and half-size modes for OV9282 sensors
* Added debug frame sinks
* Added the ability to wait at startup for a number of frames so the streams can stabilize before submitting them to SLAM (see the sketch after this list)
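A minimal C sketch of that startup wait, assuming a simple gating
sink placed in front of the SLAM sinks; all names here are
illustrative, not the driver's actual types:

```c
#include <stdint.h>

struct frame; /* opaque image frame */

typedef void (*push_fn)(struct frame *f, void *userdata);

/* A gate in front of the SLAM sinks: the first wait_frames frames
 * are dropped so exposure and stream timing can settle before the
 * tracker sees any data. */
struct startup_gate
{
	uint32_t wait_frames; /* frames still to discard */
	push_fn downstream;   /* the real SLAM sink */
	void *userdata;
};

static void
startup_gate_push(struct startup_gate *gate, struct frame *f)
{
	if (gate->wait_frames > 0) {
		gate->wait_frames--;
		return; /* still stabilizing: drop the frame */
	}
	gate->downstream(f, gate->userdata);
}
```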
Use a sink in the middle of the stream to replace the V4L2
timestamps with hardware timestamps converted to the monotonic clock.
This sink, together with other utilities related to data streaming,
lives in a new vive_source entity with similar functionality to
wmr_source or rs_source.
The vive_source lifetime is managed by the builder xfctx, which
prevents deallocation-order dependencies between vive_device and the
v4l2_fs from causing segfaults.
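Conceptually the sink is a pass-through that rewrites each frame's
timestamp. Here is a hedged C sketch that estimates the offset
between the hardware clock and the monotonic clock from the first
sample pair; the real vive_source logic is more careful than this,
and the names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

struct frame
{
	uint64_t timestamp_ns; /* what downstream consumers read */
	/* image data, ... */
};

typedef void (*push_fn)(struct frame *f, void *userdata);

/* Pass-through sink that shifts the device's hardware timestamps
 * into the monotonic clock domain, replacing the V4L2 timestamps. */
struct ts_corrector
{
	bool have_offset;
	int64_t hw_to_mono_ns; /* monotonic - hardware */
	push_fn downstream;
	void *userdata;
};

static void
ts_corrector_push(struct ts_corrector *c,
                  struct frame *f,
                  uint64_t hw_ts_ns,   /* device hardware timestamp */
                  uint64_t mono_ts_ns) /* monotonic time at arrival */
{
	if (!c->have_offset) {
		/* First sample pair fixes the clock offset. */
		c->hw_to_mono_ns = (int64_t)mono_ts_ns - (int64_t)hw_ts_ns;
		c->have_offset = true;
	}
	f->timestamp_ns = (uint64_t)((int64_t)hw_ts_ns + c->hw_to_mono_ns);
	c->downstream(f, c->userdata);
}
```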
We now have a cmake-format config file.
We no longer use list variables for sources; instead we use
target_sources() when we need to add files, in accordance with
current best practice. (This also makes the build files much easier
to edit.) There are no more include_directories(), add_definitions(),
or other gently-deprecated directory-scoped commands, nor any CMake
scripts that reference a parent directory (named targets are used
instead).
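As an illustration of the pattern, sketched in CMake since that is
what this section describes; the target, file, and definition names
here are made up:

```cmake
# Create the target first, then attach sources to it by name instead
# of accumulating them in a list variable.
add_library(drv_example STATIC)
target_sources(drv_example PRIVATE example_device.c example_prober.c)

# Target-scoped commands replace the deprecated directory-scoped
# include_directories() and add_definitions().
target_include_directories(drv_example PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
target_compile_definitions(drv_example PRIVATE XRT_FEATURE_EXAMPLE)

# Later additions extend the same named target, with no need for
# parent-directory references.
target_sources(drv_example PRIVATE example_extras.c)
```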