New Products


UC-win/Road Kinect Plug-in

Optional plug-in that loads data from a Kinect™ sensor into UC-win/Road for hands-free driving simulation


●Price US$2,320 (Subscription fee from 2nd year: US$490) ●Release date: December 2016

 Theory of a Depth Sensor
The Kinect Plug-in makes full use of data imported from a depth sensor. The devices that work with the plug-in are the Kinect™ Sensor for Xbox, the Xtion Pro, and the Xtion Pro Live. All three sensors derive depth using PrimeSense technology: an infrared projector casts a pattern of infrared light that falls on surrounding objects like a sea of dots, and an infrared camera sees those dots. Because the infrared light has a wavelength of approximately 780 nm, the dots are invisible to the human eye but visible to a night-vision camera (see Figure 1). The camera sends its video feed of this distorted dot pattern to the depth sensor's processor, which works out depth from the displacement of the dots: on near objects the pattern is spread out, on far objects it is dense. The resulting depth map can then be read from the sensor into the computer and analyzed, matching the dot pattern on each person the sensor detects against a skeletal model to distinguish who is who.
Figure 1 Pattern of infrared light
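The depth-from-displacement principle described above can be sketched with simple triangulation: the projector and infrared camera sit a fixed baseline apart, so a dot on a nearer surface appears shifted further in the camera image, and depth follows from similar triangles. The focal length and baseline below are illustrative assumptions, not the sensors' actual calibration values.

```python
# Illustrative structured-light triangulation: z = f * b / d.
# FOCAL_LENGTH_PX and BASELINE_M are assumed numbers, not real
# calibration data for any of the sensors named above.

FOCAL_LENGTH_PX = 580.0   # assumed IR camera focal length, in pixels
BASELINE_M = 0.075        # assumed projector-to-camera baseline, in metres

def depth_from_disparity(disparity_px: float) -> float:
    """Recover depth (metres) from the observed dot displacement."""
    if disparity_px <= 0:
        raise ValueError("dot not displaced: surface out of range")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# With these assumed values, a dot displaced by 29 px
# corresponds to a surface about 1.5 m away:
print(round(depth_from_disparity(29.0), 2))
```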
 Kinect Plug-in
The Kinect Plug-in was developed as an interface that accesses the data from a depth sensor simply through UDP sockets. Once a depth map (Figure 2) is built and skeletal tracking has defined a subject's skeletal structure, with visible joints, from that depth map, the sensor can detect the movement of the subject's hands and feet. UC-win/Road interprets these movements as a natural user interface, using gestures to control the steering wheel, accelerator, and brake pedal during driving simulation in a virtual environment (Figure 3).
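Since the plug-in's transport is plain UDP, the data flow can be sketched as a small receiver. The actual packet layout used by the plug-in is not documented in this article, so the "joint name, x, y, z" text format, the port number, and the function names below are all assumptions for illustration only.

```python
# Hedged sketch of a UDP joint-data receiver. The real Kinect Plug-in
# packet format is not published here; this assumes one ASCII datagram
# holding "name,x,y,z" lines (coordinates in metres).
import socket

PORT = 4545  # assumed port number, not the plug-in's actual setting

def parse_joints(datagram: bytes) -> dict:
    """Decode "name,x,y,z" lines into joint name -> (x, y, z)."""
    joints = {}
    for line in datagram.decode("ascii").splitlines():
        name, x, y, z = line.split(",")
        joints[name] = (float(x), float(y), float(z))
    return joints

def receive_joints(sock: socket.socket) -> dict:
    """Block until one datagram arrives and return the parsed joints."""
    data, _ = sock.recvfrom(4096)
    return parse_joints(data)
```

A consumer would bind a socket to the assumed port and call `receive_joints` in a loop, feeding each frame of joint positions into the simulation.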

Figure 2 Kinect Plug-in (Depth Map)   Figure 3 Kinect Plug-in (Air Driving)
 Applications
Air Driving, a hands-free driving simulation in which UC-win/Road, connected to a depth-sensing device via the Kinect Plug-in, recognizes hand and foot gestures to control an invisible steering wheel and gas/brake pedals, is one application of the plug-in. Research is also underway on a skeletal tracking system that could be used to control a robot working in a harsh environment. If successfully developed, it would put into practice a whole new class of systems that no longer require sensors to be attached to the human body. Through vigorous R&D and testing, FORUM8 has successfully controlled a robot arm (Figure 4), a radio-controlled car one tenth the size of a real car, and a UAV (Figure 5) from a remote environment. The results were spectacular, with plenty of potential for practical applications.
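One way Air Driving's gesture mapping could work is to derive the steering angle from the tilt of the line joining the two tracked hands, and the pedal position from how far forward a foot has travelled. The plug-in's actual mapping is not published in this article; the functions and calibration values below are illustrative assumptions.

```python
# Hedged sketch of gesture-to-control mapping for an invisible wheel
# and pedals. All names and thresholds are assumptions, not the
# Kinect Plug-in's real implementation.
import math

def steering_angle(left_hand, right_hand):
    """Angle (degrees) of the invisible wheel; 0 means hands level."""
    dx = right_hand[0] - left_hand[0]
    dy = right_hand[1] - left_hand[1]
    return math.degrees(math.atan2(dy, dx))

def pedal_input(foot_z, rest_z, full_z):
    """Map forward foot travel onto a 0..1 pedal position.

    foot_z is the foot's current distance from the sensor; rest_z and
    full_z are assumed calibration distances for released / fully
    pressed positions.
    """
    t = (rest_z - foot_z) / (rest_z - full_z)
    return max(0.0, min(1.0, t))
```

For example, hands level gives a steering angle of 0, raising the right hand turns the wheel, and a foot halfway between its rest and full-press calibration points yields a pedal value of 0.5.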

▲Figure 4 Remote control of a robot arm (LynxMotion AL5D)   ▲Figure 5 Remote control of a UAV (DJI Phantom 3 Pro)
 Recent Developments using Kinect for Windows
Kinect for Windows builds on the features of Kinect™ and allows users to develop customized applications or environments at will. Linking Kinect™ with UC-win/Road enables the operation of UC-win/Road without any physical controller such as a mouse or a steering wheel.

A City Building Giant
The new Kinect™ V2 can detect more human joints; for instance, the sensor can tell whether the subject's palm is open or closed. FORUM8 has developed an innovative system for 3D city modeling that lets UC-win/Road users build civil infrastructure within a virtual city model from a giant's perspective, as if they were a friendly giant.
  • Display of cursor: The subject standing in front of a depth-sensing device with a hand raised is detected as a unique skeletal structure from the depth map. A cursor is then displayed at the point on the screen where an invisible straight line, perpendicular to the screen and passing through the subject's left or right hand, would meet it.
  • Position an infrastructure (right hand): From a menu of icons representing the types of structure that can be placed in the virtual city, move the cursor to an icon of your choice, close your palm to grab that structure, move your fist to the location where you want to place it, and open your palm to drop the structure there.
  • Control the camera or model (left hand): If you put the cursor over an icon of your choice with your left hand and make a fist, you can control the camera position (rotate, move, or jump) or manipulate the model, for example removing a positioned structure (undo) or resetting the city model to its default state.
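The steps above combine two mechanisms: projecting the hand onto the screen to drive a cursor, and turning the open/closed palm stream into grab-and-drop events. The following is a minimal sketch under assumed simplifications (screen in the sensor's x-y plane, a fixed hand-travel range mapped across the screen); the names and scaling are illustrative, not the system's actual code.

```python
# Hedged sketch of the cursor-and-grab interaction. SCREEN_W/H and
# REACH_M are assumed values; the real system's projection is the
# perpendicular line described in the article.

SCREEN_W, SCREEN_H = 1920, 1080
REACH_M = 0.6   # assumed hand travel (metres) mapped across the screen

def hand_to_cursor(hand_x, hand_y):
    """Project the hand position straight onto the screen, in pixels."""
    u = int((hand_x / REACH_M + 0.5) * SCREEN_W)
    v = int((0.5 - hand_y / REACH_M) * SCREEN_H)  # screen y grows downward
    return max(0, min(SCREEN_W - 1, u)), max(0, min(SCREEN_H - 1, v))

class GrabTracker:
    """Turn the open/closed palm stream into grab and drop events."""
    def __init__(self):
        self.holding = False

    def update(self, palm_closed: bool) -> str:
        if palm_closed and not self.holding:
            self.holding = True
            return "grab"   # pick up the structure under the cursor
        if not palm_closed and self.holding:
            self.holding = False
            return "drop"   # place the structure at the cursor
        return "none"
```

Closing the palm once latches a grab, holding it closed reports nothing new, and opening it again releases the structure, matching the place-an-infrastructure flow described above.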

A visitor building a bridge within a virtual city during an expo   Cursor image settings window
(Up&Coming '17 New Years Issue)