
3D Gesture Recognition

What if you could train any gesture as input for your game or software in 30 seconds?

Introducing our 3D Gesture Recognition Library, a machine learning library designed to help game developers and 3D artists quickly and reliably program gesture input for Unity, Unreal, and (basically) any software in their pipeline.

How this patent-pending machine learning library makes your production process faster

The biggest bottleneck in a production process is often user input. A perfect example is the modern keyboard layout – a relic of the past, designed around outdated constraints (preventing typewriter jams) that slow your typing down. One of the biggest promises of VR is the ability not only to look around in full 360 degrees, but also to use both controllers as free-moving 3D input devices.

Still, programming gestures by hand is tedious. That’s where 3D Gesture Recognition steps in: it hands the work over to an advanced neural network that can learn any gesture with 98% reliability after just 30 repetitions – about 30 seconds of your time.
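
To make that workflow concrete, here is a minimal sketch of what recording the ~30 training repetitions could look like in Unity C#. The IGestureRecognizer interface and all member names below are assumptions made for this illustration, not the library's actual API.

```csharp
using UnityEngine;

// Hypothetical interface used only for this sketch – these names are
// assumptions for illustration, not the library's actual API.
public interface IGestureRecognizer
{
    int CreateGesture(string name);                          // register a new gesture, returns its id
    void StartStroke(Vector3 headPos, Quaternion headRot);   // begin recording one repetition
    void AddPoint(Vector3 handPos, Quaternion handRot);      // stream controller samples during the motion
    void EndStrokeAsSample(int gestureId);                   // store the finished stroke as a training sample
    int EndStrokeAndIdentify(out Vector3 position,           // or: recognize the stroke and report where,
                             out Quaternion rotation,        //     how large, and in which orientation
                             out float scale);               //     it was performed
    void Train();                                            // train the neural network on all samples
}

public class GestureTrainingExample : MonoBehaviour
{
    public Transform head;        // HMD transform
    public Transform controller;  // tracked controller transform

    private IGestureRecognizer recognizer;  // supplied elsewhere (e.g. by the plug-in)
    private int fireballId;
    private int samplesRecorded;
    private bool recording;

    void Start()
    {
        fireballId = recognizer.CreateGesture("fireball");
    }

    void Update()
    {
        // Hold the trigger, perform the gesture, release – repeat ~30 times (about 30 seconds).
        if (Input.GetButtonDown("Fire1"))
        {
            recognizer.StartStroke(head.position, head.rotation);
            recording = true;
        }
        if (recording)
        {
            recognizer.AddPoint(controller.position, controller.rotation);
        }
        if (Input.GetButtonUp("Fire1"))
        {
            recognizer.EndStrokeAsSample(fireballId);
            recording = false;
            if (++samplesRecorded >= 30)
            {
                recognizer.Train();  // done – the gesture is ready to recognize
            }
        }
    }
}
```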

The possibilities this opens up for developers of VR games, software, and tools are nearly endless – and a new gesture is trained in less time than it takes your coffee to brew.

Want to draw arrows and shoot them in your VR game? You can do that.
Want to easily implement a “reloading” gesture for your shooter? Done in seconds.
Want to let players program their own spells that cast specific effects in your spellcaster game? Easy.
Want to create a series of exercises for a VR fitness app? Also possible.

Our 3D Gesture Recognition turns what would have taken dozens or perhaps hundreds of hours of manual programming time into something you can do in minutes.

The gestures can be either direction-specific (“swipe left” vs. “swipe right”) or direction-independent (“draw an arrow facing in any direction”) – either way, you will receive the position, scale, and orientation at which the user performed the gesture!
Draw a large 3D cube and it will appear right there, with the appropriate scale and orientation.
One-handed gestures, two-handed gestures, and multi-part sequential gestures are all supported.
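
To show how that reported pose could be used, here is a continuation of the earlier sketch that spawns a cube at the position, orientation, and scale the gesture was drawn at. It reuses the same hypothetical IGestureRecognizer interface; all names are illustrative assumptions, not the library's actual API.

```csharp
using UnityEngine;

public class GestureCastExample : MonoBehaviour
{
    public Transform head;
    public Transform controller;
    public GameObject cubePrefab;           // spawned when the "cube" gesture is recognized

    private IGestureRecognizer recognizer;  // hypothetical interface from the training sketch above
    private int cubeGestureId;              // id returned when the "cube" gesture was trained
    private bool performing;

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
        {
            recognizer.StartStroke(head.position, head.rotation);
            performing = true;
        }
        if (performing)
        {
            recognizer.AddPoint(controller.position, controller.rotation);
        }
        if (Input.GetButtonUp("Fire1"))
        {
            performing = false;
            // The recognizer reports which gesture was drawn, plus where, how large,
            // and in which orientation the user performed it.
            int id = recognizer.EndStrokeAndIdentify(out Vector3 pos,
                                                     out Quaternion rot,
                                                     out float scale);
            if (id == cubeGestureId)
            {
                GameObject cube = Instantiate(cubePrefab, pos, rot);
                cube.transform.localScale *= scale;  // draw it big, get a big cube
            }
        }
    }
}
```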

3D Gesture Recognition

  • Trains gestures in 30 seconds with 98% reliability
  • Real 3D gestures - like waving a magic wand
  • Record your own gestures - simple and straightforward
  • Easy to use - single C# class
  • Multiple sets of gestures can be used simultaneously (for example: different sets of gestures for different buttons – see the sketch after this list)
  • High recognition fidelity
  • Outputs the position, scale, and orientation at which the gesture was performed
  • High performance (back-end written in optimized C/C++)
  • Integrate it as a library
  • Integrate it as a Unity plug-in
  • Integrate it as a UE4 plug-in
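
As a rough illustration of the "multiple sets of gestures" bullet above, the sketch below keeps one recognizer per controller button, so the same motion can mean different things depending on which button is held. The class and member names are assumptions made for this example, not the library's actual API.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical: one recognizer (gesture set) per controller button, so the same
// motion can trigger a spell on one button and a menu command on another.
public class GestureSetsExample : MonoBehaviour
{
    private readonly Dictionary<string, IGestureRecognizer> setsByButton =
        new Dictionary<string, IGestureRecognizer>();

    public void RegisterSet(string button, IGestureRecognizer recognizer)
    {
        // e.g. "Fire1" -> combat gestures, "Fire2" -> menu gestures
        setsByButton[button] = recognizer;
    }

    public IGestureRecognizer SetFor(string button)
    {
        return setsByButton.TryGetValue(button, out IGestureRecognizer set) ? set : null;
    }
}
```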