Today's post shows off a Microsoft patent filing that hints at how they might bring out much better Kinect-based gesture selection, a more natural way of selecting things on screen...
Microsoft wants to make it easier to select and activate objects in a GUI using the Kinect
How many of you have tried to use the Kinect sensor to navigate your Xbox One home screen and found it a bit frustrating? It can be difficult to select and activate objects in a graphical user interface (GUI) using a natural user input such as the Kinect. According to Microsoft, users are naturally inclined to perform a pressing gesture to select and activate an on-screen object, and sometimes this causes the wrong object to be selected and pressed. As we discovered in a patent filing today, Microsoft wants to improve this.
Microsoft suggests using the Kinect's depth camera to model the user as a virtual skeleton. Once that model exists, a cursor in the GUI moves based on the position of one of the skeleton's joints. In other words, the user's physical movements are interpreted as controls that operate a cursor, which the user can then use to select and activate information presented in a pressable user interface.
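As a rough illustration of that cursor-mapping step (this is just my sketch, not code from the filing; the joint choice, reach values, and screen size are all assumptions), a tracked hand joint could drive the cursor along these lines:

    # Hypothetical sketch: map a hand joint's position, in meters relative to
    # the shoulder, onto a 2D cursor position. Reach values and screen size
    # are illustrative assumptions, not values from the patent.
    def joint_to_cursor(hand_xyz, screen_w=1920, screen_h=1080,
                        reach_x=0.6, reach_y=0.4):
        x, y, _z = hand_xyz
        # Normalize the comfortable arm range to 0..1 and clamp at the edges.
        nx = min(max(x / reach_x + 0.5, 0.0), 1.0)
        ny = min(max(0.5 - y / reach_y, 0.0), 1.0)  # screen y grows downward
        return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))

    # Hand a little to the right of and above the shoulder:
    print(joint_to_cursor((0.15, 0.10, 0.45)))  # -> (1439, 269)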
"A cursor in a user interface is moved based on the position of a joint of the virtual skeleton. The user interface includes an object pressable in a pressing mode but not in a targeting mode. If a cursor position engages the object, and all immediately-previous cursor positions within a mode-testing period are located within a timing boundary centered around the cursor position, operation transitions to the pressing mode. If a cursor position engages the object but one or more immediately-previous cursor positions within the mode-testing period are located outside of the timing boundary, operation continues in the targeting mode," the patent application explains.
...
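Reading the claim language above, the targeting-to-pressing switch boils down to a test over recent cursor history: did the cursor hold still near the object long enough? Here's a minimal sketch of how that could look; the mode-testing period, the timing-boundary radius, and the sampling approach are all my assumptions, since the filing doesn't spell out concrete values:

    import time
    from collections import deque

    MODE_TESTING_PERIOD = 0.25   # seconds of cursor history to inspect (assumed)
    TIMING_BOUNDARY = 30.0       # pixel radius around the current cursor (assumed)

    _history = deque()           # (timestamp, x, y) cursor samples

    def update_mode(cursor, engages_object):
        """Return 'pressing' when the cursor engages an object and every sample
        within the mode-testing period stayed inside the timing boundary centered
        on the current cursor position; otherwise stay in 'targeting'."""
        now = time.monotonic()
        _history.append((now, *cursor))
        # Drop samples older than the mode-testing period.
        while _history and now - _history[0][0] > MODE_TESTING_PERIOD:
            _history.popleft()

        if not engages_object:
            return "targeting"
        cx, cy = cursor
        steady = all((x - cx) ** 2 + (y - cy) ** 2 <= TIMING_BOUNDARY ** 2
                     for _, x, y in _history)
        return "pressing" if steady else "targeting"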
Project Information URL: http://www.winbeta.org/news/microsoft-wants-make-it-easier-select-and-activate-objects-gui-using-kinect
USPTO - TARGET AND PRESS NATURAL USER INPUT
A cursor is moved in a user interface based on a position of a joint of a virtual skeleton modeling a human subject. If a cursor position engages an object in the user interface, and all immediately-previous cursor positions within a mode-testing period are located within a timing boundary centered around the cursor position, operation in a pressing mode commences. If a cursor position remains within a constraining shape and exceeds a threshold z-distance while in the pressing mode, the object is activated.
...
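The second half of the abstract, the press itself, reads like a two-part check once you're in pressing mode: the cursor has to stay inside some constraining shape while the hand travels far enough along z toward the screen. A hedged sketch of that test (the circular shape, its radius, and the z threshold are my assumptions, not values from the filing):

    CONSTRAINING_RADIUS = 40.0   # pixels around where the press began (assumed)
    PRESS_Z_THRESHOLD = 0.12     # meters of forward hand travel needed (assumed)

    def press_activates(press_origin, cursor, z_travel):
        """True when the press should activate the engaged object: the cursor
        stayed inside the constraining shape and the hand moved past the
        threshold z-distance."""
        dx, dy = cursor[0] - press_origin[0], cursor[1] - press_origin[1]
        inside_shape = dx * dx + dy * dy <= CONSTRAINING_RADIUS ** 2
        return inside_shape and z_travel >= PRESS_Z_THRESHOLD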
