What if you could train any 3D gesture for your game or software in 30 seconds?
Introducing our 3D Gesture Recognition Library, a patented machine learning library designed to help game, app, and experience developers quickly and reliably program gesture input for Unity, Unreal, and (basically) any software in their pipeline.
How our patented machine learning library shaves crucial hours off your production time
The biggest bottleneck in production is often user input. A perfect example is the modern QWERTY keyboard layout – a relic of the past, deliberately designed to slow down typing speed because of the constraints of its era (typewriter jamming). One of the biggest promises of VR is the ability not only to perceive the world in 360 degrees, but also to use both controllers as free three-dimensional input devices, allowing far greater input fidelity.
Still, programming 3D gestures manually is extremely tedious. That's where MiVRy's AI steps in, slashing that manual programming time: its neural network can learn any gesture with 98% reliability from just 30 repetitions, which take about 30 seconds to perform.
This frees up precious development time so you can work on the things that matter instead of endlessly tweaking gestures. Any gesture, trained and implemented, in the time it takes your coffee to brew.
- Want to draw arrows and shoot them in your VR game? You can do that.
- Hoping to easily implement a “reloading” gesture for your shooter? Done in seconds.
- Want to let players program their own spells that cast specific effects in your spellcaster game? Easy.
- How about a series of exercises for a VR fitness app? Piece of cake.
Our 3D Gesture Recognition AI turns what would have taken dozens or perhaps hundreds of hours of manual programming time into something you can do in minutes.
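To make the train-by-example workflow concrete, here is a minimal sketch in plain Python. This is not MiVRy's actual API – all names here are hypothetical, and a simple nearest-template matcher stands in for the neural network – but the developer-facing idea is the same: record a few repetitions of each gesture, then classify new strokes against them.

```python
import math

def resample(points, n=16):
    """Resample a 3D stroke to n evenly spaced points along its path."""
    dists = [0.0]  # cumulative arc length at each input point
    for a, b in zip(points, points[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total, out = dists[-1], []
    for i in range(n):
        target = total * i / (n - 1)
        j = 1
        while j < len(dists) - 1 and dists[j] < target:
            j += 1
        seg = dists[j] - dists[j - 1] or 1.0  # guard duplicate points
        t = (target - dists[j - 1]) / seg
        a, b = points[j - 1], points[j]
        out.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    return out

def normalize(points):
    """Translate the stroke's centroid to the origin and scale to unit size."""
    n = len(points)
    cx, cy, cz = (sum(p[k] for p in points) / n for k in range(3))
    centered = [(p[0] - cx, p[1] - cy, p[2] - cz) for p in points]
    scale = max(math.dist((0, 0, 0), p) for p in centered) or 1.0
    return [(x / scale, y / scale, z / scale) for x, y, z in centered]

class TemplateRecognizer:
    """Toy stand-in for a learned gesture recognizer: nearest template wins."""
    def __init__(self):
        self.templates = {}  # gesture name -> list of normalized sample strokes

    def add_sample(self, name, stroke):
        """Record one repetition of the named gesture."""
        self.templates.setdefault(name, []).append(normalize(resample(stroke)))

    def identify(self, stroke):
        """Return the name of the closest trained gesture."""
        probe = normalize(resample(stroke))
        def dist(t):
            return sum(math.dist(a, b) for a, b in zip(probe, t)) / len(probe)
        return min(
            ((name, min(dist(t) for t in ts)) for name, ts in self.templates.items()),
            key=lambda x: x[1],
        )[0]
```

For example, after feeding a few recorded "swipe" and "vee" strokes into `add_sample`, a freshly drawn stroke anywhere in the scene is classified with a single `identify` call – which is roughly the shape of workflow the library promises, minus all the manual tuning.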
Gestures can be either direction-specific (“swipe left” vs. “swipe right”) or direction-independent (“draw an arrow facing in any direction”) – either way, you receive the direction, position, and scale at which the user performed the gesture!
Draw a large 3D cube and there it will appear, at the appropriate scale and orientation.
One-handed gestures, two-handed gestures, and multi-part sequential gestures are all supported.
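How can matching be position- and scale-independent while still reporting position and scale back to you? The trick is that the values stripped off during normalization are exactly the values returned. A minimal sketch in plain Python (again hypothetical names, not the library's API):

```python
import math

def stroke_transform(points):
    """Split a raw 3D stroke into (position, scale, normalized shape).

    Matching would use only the normalized shape, making it translation-
    and scale-independent; the position and scale removed here are what
    gets reported back (e.g. to spawn a cube exactly where, and as large
    as, the user drew it).
    """
    n = len(points)
    position = tuple(sum(p[k] for p in points) / n for k in range(3))  # centroid
    centered = [tuple(p[k] - position[k] for k in range(3)) for p in points]
    scale = max(math.dist((0.0, 0.0, 0.0), p) for p in centered) or 1.0
    shape = [tuple(c / scale for c in p) for p in centered]
    return position, scale, shape
```

A small square drawn near the origin and a large square drawn across the room yield the same `shape` but different `position` and `scale` – so the recognizer can match them as the same gesture while still telling you where and how big to place the result.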