
Two Kinect Speech Tips


Today we're highlighting two recent posts from Abhijit Jana, both speech-related.

If you've been following along, you'll know my feelings about speech and why it's a killer feature. Abhijit's two posts will help you understand and better use speech in your next app...

Get the list of recognized words from Kinect speech commands

When speech is recognized by the Speech Recognizer engine, the recognizer returns the recognized words as a collection of RecognizedWordUnit objects. This set of words is extremely useful for dealing with full sentences. In my previous post, I discussed recognizing a statement like “draw a red circle” or “draw a green circle”, where we had to identify the sentence from the Kinect-captured audio and then split it into a series of words.

With the Microsoft Speech API, the SpeechRecognitionEngine class handles all operations related to speech. You can attach an event handler to the SpeechRecognized event, which fires whenever the audio is internally converted into text and identified by the recognizer. The following code shows how you can create an instance of SpeechRecognitionEngine and register the SpeechRecognized event handler.

...
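Since the post's snippet is elided above, here is a minimal hedged sketch of that setup using the Microsoft.Speech API; the recognizer lookup, grammar construction and Kinect audio plumbing are assumed to happen elsewhere:

using Microsoft.Speech.Recognition;

// Sketch only: 'recognizerInfo' and 'grammar' are assumed to exist already.
var engine = new SpeechRecognitionEngine(recognizerInfo.Id);
engine.LoadGrammar(grammar);

engine.SpeechRecognized += (s, e) =>
{
    // e.Result.Words is the collection of RecognizedWordUnit items
    foreach (RecognizedWordUnit word in e.Result.Words)
    {
        Console.WriteLine("{0} (confidence {1})", word.Text, word.Confidence);
    }
};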


...

[Read the full post]

Project Information URL: http://dailydotnettips.com/2014/01/20/get-the-list-of-recognized-words-from-kinect-speech-commands/

Accepting Kinect Speech Commands after a specific level of confidence

In my Kinect for Windows SDK Tips series, over the last few posts I have been discussing speech recognition using the Kinect for Windows SDK. You have seen how to load and unload multiple grammars, how to use wildcards with the grammar builder, and even how to get the list of recognized words from Kinect. This post deals with the confidence level of recognized words, which I think matters for every speech-enabled application using Kinect.

In a speech-enabled Kinect application, whenever speech is recognized we usually invoke a method to parse the command, and then perform an action based on the recognized command. But before we parse it:

...

The speech recognizer also provides information on the confidence level of the speech that was identified from the sound source. If speech is detected but does not match properly, or matches only with a very low confidence level, the SpeechRecognitionRejected event handler will fire.
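As a hedged illustration of that check (the 0.8 threshold here is an arbitrary value for the sketch, not a recommendation from the post, and ParseCommand is a hypothetical handler):

private const float ConfidenceThreshold = 0.8f; // illustrative value

void Engine_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    // Ignore recognitions the engine itself is not confident about
    if (e.Result.Confidence < ConfidenceThreshold)
        return;

    ParseCommand(e.Result.Text); // hypothetical command handler
}

void Engine_SpeechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)
{
    // Fires when speech was detected but nothing matched with enough confidence
}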


...

[Read the rest...]

Project Information URL: http://dailydotnettips.com/2014/01/23/accepting-kinect-speech-commands-after-a-specific-level-of-confidence/

Kinect on the Korean border


Today's inspirational project provides another example of how the Kinect has exploded so far beyond mere gaming that it's almost scary (in a very cool way)...

This article from Tom Warren about Jae Kwan Ko's Kinect project is pretty awesome. Also, notice the Kinect for Windows v2 device? If he can do this with v1...

Kinect is helping guard the Korean border


Microsoft’s Kinect sensor has been used in many weird and wonderful ways, including turning a bathtub into a giant liquid touch screen, and its obvious use alongside the Oculus Rift. While it’s moving rapidly away from its origins as a games console accessory, self-taught programmer Jae Kwan Ko is extending its use even further as a method of border protection. South Korea and North Korea are separated by a heavily armed border and Demilitarized Zone (DMZ), and Ko has developed software and hardware for a system that uses Kinect to detect moving objects.

Hankook Ilbo reports that Ko’s system was supplied to the US Army back in August, and it has since been installed at some parts of the DMZ. The Kinect-powered system can detect whether a moving object is human or an animal, and it will automatically trigger alerts at the army base if it detects human movement." ...

Project Information URL: http://www.theverge.com/2014/2/3/5373798/kinect-is-helping-guard-the-korean-border

Deep Diving into the Kinect for Windows v2, Part 2...


Zubair Ahmed is back with the second part (the first part was Dive into Developing with the Kinect for Windows v2) of his exploration of the Kinect for Windows v2 SDK, fixing a few bugs and providing more tips and tricks too...

Kinect for Windows v2 Deep dive–Part 2

In my previous K4W deep dive post I drew the body and joints in WPF and overlaid them on top of the color stream. Notice that in that post I am using two Image controls, one to render the color data and the other for the body info.

...

Also in that post I talked about a hack to position body tracking drawings ‘properly’ over the color stream.

In this post, however, I fix the above two problems, which means I no longer need the 80px offset hack from the previous post and I use only one Image control to render both the color and body data.

Fixing Body tracking offset – Using MapCameraPointToColorSpace

First things first: in the previous post I used a method of the CoordinateMapper class in the Kinect for Windows v2 SDK that takes camera points and maps them to depth space

...

The problem is that this method does not accurately translate points to the color frame that we receive from the Kinect sensor, which is why I came up with the hack. Fortunately the CoordinateMapper class has another method that works perfectly for this.

...

This time I use the ColorSpacePoint returned by the mapper to get the X and Y coordinates for each joint.
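A hedged sketch of that mapping call, assuming a KinectSensor named 'sensor' and a tracked Body named 'body':

// Sketch: map a joint's 3D camera-space position to 2D color-space pixels.
CoordinateMapper mapper = sensor.CoordinateMapper;
Joint joint = body.Joints[JointType.Head];
ColorSpacePoint colorPoint = mapper.MapCameraPointToColorSpace(joint.Position);

double x = colorPoint.X; // pixel X in the 1920x1080 color frame
double y = colorPoint.Y; // pixel Y in the 1920x1080 color frame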

Merge Color and Body frames to use one Image control

Alright, so I got the body info mapped properly to the color frames, but one problem remains: how to merge these two frames and use a single Image control to render them.

...

Also, in the above code I am merging the two writeable bitmaps using the helpful open source library WriteableBitmapEx, whose WriteableBitmapExtensions class has many useful extension methods; the one I am using is Blit.
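As a rough illustration (not the post's actual code), blitting the body overlay onto the color bitmap with that library might look like this; the bitmap and control names are assumed:

// Sketch: draw the body-drawing bitmap on top of the color bitmap,
// so one Image control can display the combined result.
var destRect = new Rect(0, 0, colorBitmap.PixelWidth, colorBitmap.PixelHeight);
var srcRect = new Rect(0, 0, bodyBitmap.PixelWidth, bodyBitmap.PixelHeight);
colorBitmap.Blit(destRect, bodyBitmap, srcRect, WriteableBitmapExtensions.BlendMode.Alpha);

image.Source = colorBitmap; // the single Image control (name assumed)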

[Read the full post and see all the code]

Project Information URL: http://www.zubairahmed.net/?p=1682



Looking at the Kinect for Windows v2


This week it's all Kinect for Windows v2. The first two posts will be introductions to the device and the last, some Kinecting to C++...

First Vangos Pterneas provides a nice quick overview...

Kinect for Windows version 2: overview

Well, I have been lucky enough to be selected by Microsoft for early access to the new Kinect for Windows version 2 sensor. Today, I want to share some facts and figures regarding the new device.

The hardware

The new sensor features a radically different hardware design. The first thing to notice is that the tilt motor is now gone. However, the new cameras provide a wider field of view and deliver higher-resolution frames. Above, you can see my Developer Preview unit, unboxed. Below, you can watch a quick video I made demonstrating the new color, depth, infrared and body streams.


The software...
Better camera streams...
More joints

Yeah, the new sensor tracks up to 25 body joints, along with their corresponding orientations...

Hand tracking

That’s right! Apart from joint tracking, the new sensor lets us determine the state of the users’ hands. The state is just an enumeration with values of “Open”, “Closed”, “Lasso”, “Unknown” and “NotTracked”. ...
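As a quick hedged sketch of consuming that enumeration, assuming a tracked Body named 'body':

// Sketch: inspect the right-hand state of a tracked body.
switch (body.HandRightState)
{
    case HandState.Open:
        // all five fingers extended
        break;
    case HandState.Closed:
        // a fist
        break;
    case HandState.Lasso:
        // two fingers extended, like a pointer
        break;
    case HandState.Unknown:
    case HandState.NotTracked:
        break;
}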

Facial expressions

Kinect for Windows version 1 could track 40 points of the human face. Kinect for Windows version 2 goes one step further and can even recognize some very basic facial expressions, activities and accessories! Here is the supported facial data:

  • Eyes closed
  • Eyes looking away
  • Mouth open
  • Mouth moved
  • Glasses accessory
  • Happy expression
  • Neutral expression


...

Overall

Let me clarify that I am not paid by Microsoft, though Kinect for Windows version 2 is my gadget of choice for 2014. Everything has been dramatically improved, and new features will be popping up all the time. Now, the only limit of the software is your imagination.

PS: New Kinect book – 20% off

Well, I am publishing a new ebook about Kinect development in a couple of months. It is an in-depth guide to Kinect, using simple language and step-by-step examples. You’ll learn usability tips, performance tricks and best practices for implementing robust Kinect apps. Please meet Kinect Essentials, the essence of my 3 years of teaching, writing and developing for the Kinect platform. Oh, did I mention that you’ll get a 20% discount if you simply subscribe now? Hurry up!

[Read the full post]

Project Information URL: http://pterneas.com/2014/02/08/kinect-for-windows-version-2-overview/


Kinect for Windows – What’s new, a view from a Kinect for Windows MVP


The next in our v2 theme week comes from our newly minted Kinect for Windows MVP, Tom Kerkhove.

Second Gen. Kinect for Windows – What’s new?

It has been a while since the alpha version of the second generation of Kinect for Windows was released. At first I was not going to write a 101 post, because there are already a lot of them out there, but why not? In this post I will give a theoretical overview of what is included in the November version of the new SDK.

Everything in this post is based on alpha hardware & alpha SDK; this is still a work in progress.

Disclaimer

“This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

I. Hardware

The hardware is improved in several ways, e.g. the new IR technology: the sensor now uses time-of-flight technology to calculate the distance between the sensor and objects for each pixel of the image, by measuring the time a light signal takes to travel to an object and back to the sensor.

Tilt motor is no more...

...

II. Features

The focus of the first alpha SDK is on the core elements: color, depth, IR & body.
Unfortunately nothing else has been implemented yet, and no news concerning audio, face tracking, fusion & interaction is available yet.

For interaction you can build your own controls, but this requires some effort if you're new to Kinect.

Color...
Depth & IR ...

...

Body, the new skeletal tracking

Skeletal tracking has been renamed to “Body” and is now capable of tracking 6 people completely, each with a total of 25 joints.

The biggest improvement is that each hand now has a separate joint for the thumb and one for the other four fingers.


  • Hand tracking - Each hand of a body now has an indication of what state it is in, e.g. Open, Closed or Lasso, where lasso is pointing with two fingers
  • Activities - An indication of the facial activity of the user, e.g. left eye closed, mouth open, etc. (more might be added later)
  • Leaning - An indication of whether the user is leaning to the left or right
  • Appearance - Tells more about the user, e.g. whether he/she is wearing glasses (more might be added later)
  • Expressions - The expression of the current person, e.g. happy or neutral (more might be added later)
  • Engaged - An indication of whether the user is looking at the sensor or not
III. Other

...

Supported systems

Here is a small overview of the supported systems for the alpha SDK, whether on a VM or a native machine.

  • Windows Embedded or Windows 8+ is required for the SDK. It is still not possible to create Windows Store apps, for the same reason as in v1: you can stream the data from desktop mode to your app, but your app won’t pass certification because of this streaming.
  • Windows 7 is not officially supported at the moment, because Win8+ has improved USB 3.0 support
  • The .NET Micro Framework is not supported due to insufficient processing power
Multiple sensor applications

Since the developer program is still in progress, nobody has tried to combine multiple sensors yet, but my guess is that it will support 4 sensors on one machine, like v1.

V. Conclusion

The first version looks like a big step ahead in terms of specs, but some functionality is still unclear; time will tell.

In my next post I will show you how to create your first Kinect v2 application, demonstrating all the core data streams.

[Read the entire post]

Project Information URL: http://www.kinectingforwindows.com/2014/02/12/draft-second-gen-kinect-introduction/


Kinect for Windows v2 Events Sample in C++


The last in our v2 week is a CodePlex project for you C++ junkies, who also have a Kinect for Windows v2 device...

Kinect 4 Windows v2 Events Sample in C++

This project is a simple example of how to listen for Kinect for Windows v2 API events using C++. It was built with Visual Studio 2013 and the K4W v2 API alpha bits issued 11/2013, and is meant as a tutorial of sorts, showing how to write modern C++ code that listens for the Kinect for Windows v2 frame-arrived events.

The sample is a Win32-based project which uses a menu command to "Start" the Kinect v2 sensor. The sensor is started, and the infrared events are received and written to the output window. You can use Sysinternals DebugView to see the events as they arrive in real time.

This sample shows how to use a message loop based on the "gamer's loop" design and the non-blocking parameters of MsgWaitForMultipleObjects to listen for normal window events such as paint events, mouse clicks and menu commands, as well as the Kinect v2 sensor events.

All code is written from the default Visual Studio 2013 C++ Win32 project template, using modern C++11 libraries and conventions.

Project Information URL: https://k4wv2eventsample.codeplex.com/

Project Download URL: https://k4wv2eventsample.codeplex.com/releases/

Project Source URL: https://k4wv2eventsample.codeplex.com/SourceControl/latest

Getting a Continuous Grip [aka ContinousGrippedState] with the Kinect.Reactive


Today's project comes to us from another Friend of the Gallery, Marcus Kohnert, who shows us how we can continue to take advantage of the Kinect.Reactive library in our Kinect for Windows v1 SDK applications.

Other times we've highlighted Marcus;

ContinousGrippedState in Kinect.Reactive

For a while now I have been wondering why the Kinect’s InteractionStream sends only one InteractionHandEventType.Grip when the user closes their hand. While the user still holds their hand in a closed state, the SDK will fire events that have a HandEventType of None. This confused me from the very beginning. Compared to mouse events, you’ll get continuous mousedown events while the user does not release the mouse button.

So I thought about a way to get the same behavior when using the Kinect for Windows SDK 1.x InteractionStream.

This extension method solved my problem and is now part of Kinect.Reactive:
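The actual extension method is behind the source link below; as a rough sketch of the underlying idea using Rx (the method name here is illustrative, not Kinect.Reactive's actual API):

using System;
using System.Reactive.Linq;
using Microsoft.Kinect.Toolkit.Interaction;

public static class ContinuousGripSketch
{
    // Sketch: fold the SDK's discrete Grip/GripRelease events into a
    // per-event "is the hand currently gripped?" stream, so None events
    // keep reporting the held state instead of losing it.
    public static IObservable<bool> ToContinuousGrippedState(
        this IObservable<InteractionHandEventType> handEvents)
    {
        return handEvents.Scan(false, (wasGripped, e) =>
            e == InteractionHandEventType.Grip ? true
            : e == InteractionHandEventType.GripRelease ? false
            : wasGripped); // None: hold the last known state
    }
}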

[Check out the source]

Project Information URL: http://passiondev.wordpress.com/2014/02/17/continousgrippedstate-in-kinect-reactive/


Mysteries of Kinect for Windows Face Tracking...


Today's post from Carmine Sirignano, a Developer Support Escalation Engineer on the Kinect for Windows team, explains in detail how Kinect for Windows Face Tracking actually works, providing some cool code samples too...

Mysteries of Kinect for Windows Face Tracking output explained

Since the release of Kinect for Windows version 1.5, developers have been able to use the Face Tracking software development kit (SDK) to create applications that can track human faces in real time. Figure 1, an illustration from the Face Tracking documentation, displays 87 of the points used to track the face. Thirteen points are not illustrated here—more on those points later.

[Figure 1: the tracked face points, from the Face Tracking documentation]

You have questions...

Based on feedback we received via comments and forum posts, it is clear there is some confusion regarding the face tracking points and the data values found when using the SDK sample code. The managed sample, FaceTrackingBasics-WPF, demonstrates how to visualize mesh data by displaying a 3D model representation on top of the color camera image.

Figure 2: Screenshot from FaceTrackingBasics-WPF

By exploring the sample source code, you will find a set of helper functions under the Microsoft.Kinect.Toolkit.FaceTracking project, in particular GetProjected3DShape(). What many have found is that the function returns a collection whose array length is 121 values. Additionally, some have also found an enum list, called “FeaturePoint”, that includes 70 items.

We have answers...

As you can see, we have two main sets of numbers that don't seem to add up. This is because these are two sets of values that are provided by the SDK:

  1. 3D Shape Points (mesh representation of the face): 121
  2. Tracked Points: 87 + 13

The 3D Shape Points (121 of them) are the mesh vertices that make a 3D face model based on the Candide-3 wireframe.

...

To get the 100 tracked points mentioned above, we need to dive more deeply into the APIs. The managed APIs provide an FtInterop.cs file that defines an interface, IFTResult, which contains a Get2DShapePoints function. FtInterop is a wrapper for the native library that exposes its functionality to managed languages. Users of the unmanaged C++ API may have already seen this and figured it out. Get2DShapePoints is the function that will provide the 100 tracked points.

If we have a look at the function, it doesn’t seem to be useful to a managed code developer:

...

Pulling it all together...

As we have seen, there are three types of data points available from the Face Tracking SDK:

  • Shape Points: data used to track the face
  • Mesh Data: vertices of the 3D model from the GetProjected3DShape() function
  • FeaturePoints: named vertices on the 3D model that play a significant role in face tracking

To get the shape point data, we have to extend the current managed wrapper with a new function that will handle the interop with the native API.

[Read the entire post, check out the code and explanations]

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/01/31/clearing-the-confusion-around-kinect-for-windows-face-tracking-output.aspx



More Tips and Tricks for the Kinect for Windows SDK


Today we're bringing Abhijit Jana back with a round-up of all the recent Kinect for Windows tips he's been publishing on Daily .NET Tips.

Many of our past posts from Abhijit;

14 Tips and Tricks on Kinect for Windows SDK

Here is a list of 14 cool Kinect for Windows SDK tips and tricks that you may find very useful. A couple of months back I published a post sharing the list of Kinect for Windows SDK tips and tricks that I was writing at Daily .NET Tips. In the past few days, I have also shared another set of tips, mainly related to Kinect speech recognition.

Here is the updated list:

  1. Accepting Kinect Speech Commands after a specific level of confidence
  2. Get the list of recognized words from Kinect speech commands
  3. Using Wildcard with Grammar Builder – Kinect Speech Recognition
  4. Dynamically Loading/Unloading Grammar – Kinect Speech Recognition Engine
  5. Applying RGB color filtering in Kinect color stream data
  6. How to control the frame interval of the Kinect color data stream?
  7. How to adjust the Kinect sensor automatically based on user positions?
  8. Identify Kinect device index from device instance id
  9. How to check if Kinect data streams are already enabled?
  10. Do we really need to install the Kinect for Windows SDK on production / end-user systems?
  11. Using Kinect Instance Id to Initialize the Kinect Sensor
  12. How to get a list of all connected Kinect sensors?
  13. How to turn off the Kinect IR light forcefully?
  14. How to check if any Kinect device is connected to the system?

Project Information URL: http://abhijitjana.net/2014/01/24/14-tips-and-tricks-on-kinect-for-windows-sdk/


Color, depth and infrared streams in the Kinect for Windows v2 world (here's how)


This week we're doing another Kinect for Windows v2 week: from today's example, to tomorrow's cool project, to, finally, the alpha version of a coming third-party product.

But first, today we again highlight a post from Vangos Pterneas, who shows us how easy it is to get and display the different Kinect for Windows v2 streams...

Kinect for Windows version 2: Color, depth and infrared streams


A month ago, I was happy to receive a brand-new Kinect for Windows version 2 Developer Preview sensor. You can read my previous blog post about the capabilities of the new device. Kinect v2 now includes 5 main types of input streams:

  • Color
  • Depth
  • Infrared
  • Body
  • Audio

Today I will show you how you can acquire and display each bitmap input in a Windows application. In the next blog post, we’ll talk about body tracking. Here is a quick video I made which demonstrates the different streams provided by the new sensor.

Requirements

Creating a new project

...

Initializing the sensor

Let’s now dive into the C# code! ...

Reading the streams

That’s it! We have now connected to the sensor...

Color stream

The raw color images have been increased to 1920×1080 resolution...

Depth stream

The depth stream provides us with the depth value of every point in the visible area...

Infrared stream

The infrared sensor gives us the ability to see clearly in the dark...

That’s it! You can now display every bitmap stream! The only thing left to do is call the corresponding method and display the frame. This is how to display the color frame, for example:
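The post's snippet is elided here; as a hedged sketch, displaying a color frame against the later public v2 API shape (the Developer Preview API the post targets differs slightly) might look like this, where 'camera' is an assumed name for the XAML Image control:

void Reader_FrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (ColorFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return; // frames can be missed

        int width = frame.FrameDescription.Width;
        int height = frame.FrameDescription.Height;
        byte[] pixels = new byte[width * height * 4]; // BGRA

        frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);

        camera.Source = BitmapSource.Create(width, height, 96, 96,
            PixelFormats.Bgra32, null, pixels, width * 4);
    }
}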

Big hint: Kinect v2 requires you to start the KinectService.exe program before running any Kinect v2 apps. I always forget this detail, so I open this executable using a single line of C# code:
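Something like the following; the path below is purely illustrative, since it depends on where KinectService.exe lives in your Developer Preview installation:

// Sketch: launch the Kinect service if it isn't already running.
System.Diagnostics.Process.Start(@"C:\Program Files\Microsoft SDKs\Kinect\KinectService.exe");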

PS 1: Vitruvius

If you want to automate the above bitmap-conversion processes, consider downloading Vitruvius. Vitruvius is a free & open-source library I built, which provides many utilities for your Kinect applications, including gesture detection, voice recognition and drawing extensions. Give it a try, enjoy and even contribute yourself!

PS 2: New Kinect book – 20% off

Well, I am publishing a new ebook about Kinect development in a couple of months. It is an in-depth guide to Kinect, using simple language and step-by-step examples. You’ll learn usability tips, performance tricks and best practices for implementing robust Kinect apps. Please meet Kinect Essentials, the essence of my 3 years of teaching, writing and developing for the Kinect platform. Oh, did I mention that you’ll get a 20% discount if you simply subscribe now? Hurry up!

[Read the entire post]

Project Information URL: http://pterneas.com/2014/02/20/kinect-for-windows-version-2-color-depth-and-infrared-streams/

Project Source URL: http://pterneas.com/wp-content/uploads/2014/01/KinectStreams.zip


Kinecting to your Heart[rate]


Today's Kinect for Windows v2 project from D Goins Espiriance is currently binary-only (though hopefully source is coming) and shows off one of the coolest, weirdest, freakiest features of the Kinect v2: using it to detect your heart rate...

Kinect Heart Rate Detector

This project is a sample application that uses the raw data feed from the infrared sensor of the Kinect v2 device to determine a person's heart rate.

The source code is not provided as of yet, only the executable.

Note: This is based on preliminary software and/or hardware, subject to change

Project Information URL: https://k4wv2heartrate.codeplex.com/

Project Download URL: https://k4wv2heartrate.codeplex.com/releases/ 



GesturePak 2.0 Alpha for the Kinect for Windows v2


Today we close out our Kinect v2 week with a project from the one and only Carl Franklin. He's carrying his cool GesturePak project forward to support the future: the Kinect for Windows v2 device. The really great news is that he's going to release the source with this new release too!

GesturePak 2.0 Alpha

When the Microsoft Kinect for Windows team sent all of its MVPs (myself included) the new Kinect sensor and access to the Developer Preview edition of the Kinect for Windows SDK v2, it didn't take me long to refactor the GesturePak Matcher to work with the new sensor.

The difference is amazing. This new sensor is so much more accurate, so much faster (lower latency) and can see you in practically no light. Clearly, there will be a demand for robust and easy gesture recognition software.

Background
I wrote GesturePak to make it easier for developers and end users to create and recognize gestures using Kinect for Windows. You essentially "record" a gesture using movements, the data is saved to an XML file, and you can then load those files in your code and use the aforementioned GesturePak Matcher to tell you (in real time) whether your user has made any of those gestures.
GesturePak 2.0

GesturePak 1.0 was fun to write. It works, but it's a little clunky. The device itself is frustrating to use because of lighting restrictions, tracking problems, jitters, and all that. The biggest issue I had was that if the Kinect stopped tracking you for whatever reason, it took a long time to re-establish communication. This major limitation forced me into a design where you really couldn't walk away from tracking to edit the gesture parameters. Everything had to be done with speech. Naming your gesture had to be done by waving your hands over a huge keyboard to select letters. Because of this, you had to break down a gesture into "Poses", a set of "snapshots" which are matched in series to make a gesture.

For version 2.0 I wanted to take advantage of the power in the device to make the whole experience more pleasant. Now you can simply record yourself performing the gesture from beginning to end, and then sit down at the machine to complete the editing process.

image

...

GesturePak File Format v2 ...
POSE is now FRAME ...
Recording a Gesture ...
Editing your Gesture ...
Using the GestureMatcher in your code ...
Source will be included in v2 ...

The price has not been set, but I plan to ship the C# source with GesturePak 2.0 at some level. You will be free to modify it for your own apps and use it however you like. You will get the source code to the API, the recorder/editor, and the tester app. The recorder/editor can be modified and included in your own app if you want to give your end-users the ability to create their own gestures. If you have code to contribute back to GesturePak, I would welcome it!

Get the bits!

Do you have the Kinect for Windows Developer Preview SDK and the Kinect Sensor v2? Would you like to take GesturePak 2.0 for a test run? Send me an email at carl@franklins.net with the subject GesturePak 2.0 Alpha and I'll gladly send you the latest bits. I only ask that you are serious about it, and send me feedback, good and bad.

[Read the entire post]

Project Information URL: http://carlfranklin.net/blog/2013/12/30/gesturepak-20-alpha.html

Contact Information:

A Bridge Not Too Far... The Kinect Common Bridge gets face tracking and voice recognition


We last mentioned the Kinect Common Bridge here, Kids, Kinect, Cinder and some C++ too... Meet the Kinect Common Bridge. Today we return to it, with a fresh new update...

Kinect Common Bridge adds Face Tracking and Voice recognition!

The newest release of Kinect Common Bridge makes it even easier to track faces and recognize speech in your C++ applications with Kinect for Windows.

This is the first update to the open source Kinect Common Bridge (KCB) released recently by MS Open Tech to make it simple to integrate Kinect for Windows scenarios and experiences in creative software development. The openFrameworks and Cinder communities have already adopted the Kinect Common Bridge. If you have been using either framework and experimented with KCB, you will find yourself right at home with its added capabilities. In the spirit of “focusing on the cool stuff” that motivates creative developers, starting the sensor and displaying a simple video treatment with face tracking can now be achieved in less than 10 lines of code! Incorporating Kinect for Windows magic in software experiences couldn’t be any easier…


Kinect for Windows SDK comes with support for C++ and C# development. In addition to the Kinect for Windows SDK, you can download the Kinect for Windows Developer Toolkit that offers precious guidance and samples to get started coding with Kinect for Windows (both are a prerequisite for KCB use).

The Kinect Common Bridge then aggregates the various helper libraries available in the Kinect for Windows Developer Toolkit samples, and exposes them through a single C++ API adapted to creative development.

...

MSOpenTech / KinectCommonBridge

Introduction

Kinect Common Bridge is a complement to the Kinect for Windows SDK that makes it easy to integrate Kinect scenarios in creative development libraries and toolkits.

Why KCB?

When working with the openFrameworks and Cinder community members, it was evident that they needed something similar to the managed APIs, but for C++. The graphics libraries they use are written entirely in native C++ for "down to the bare metal" performance. As for functionality, they wanted something minimal, to keep the extensions to their libraries as lightweight as possible. If you are not familiar with these libraries or any type of game development model, they do not have a typical application design pattern. They need to run as fast as possible: run simulations, update the positions of objects, and then render those on screen, either as fast as possible or locked in sync with the refresh of the display. This typically runs at 60 frames per second (fps), and as high as the CPU/GPU can handle.

Many familiar with Kinect know its maximum frame rate is 30 fps. An event-based model doesn't work well for this type of development, since the framework needs to grab the frame of data when it wants, regardless of what Kinect is doing; if the frame isn't there, it will catch it the next time around. It cannot block the thread that does this update/query cycle.

Taking a look at the common use case scenarios, the common tasks when working with the Kinect for Windows SDK and the device are:

  1. Select a sensor
  2. Get the color/IR, depth, and skeleton data from it.

That was the goal of KCB: allow any framework that is capable of loading the DLL direct access to the data.

Requirements

The Hardware and Software below are required to build the library:

Additionally, to take advantage of the face tracking and speech recognition capabilities you need to install:

  • Speech Server SDK: It is available at http://www.microsoft.com/en-us/download/details.aspx?id=27226. Note that depending on the OS version and target platform that you are building for, you may need to have either x86, or x64, or both on your machine.

  • Kinect for Windows Developer Toolkit: It is available at http://go.microsoft.com/fwlink/?LinkID=323589 and is necessary for face tracking functionality. After the installation, make sure that the KINECT_TOOLKIT_DIR environment variable is set. Usually its value will be something like C:\Program Files\Microsoft SDKs\Kinect\Developer Toolkit v1.8.0. Tip: reboot your machine after installation even if Windows does not prompt you. Environment variables may not be updated until you do so, causing build errors.

Getting Started

...

More advanced functionality: face tracking and voice recognition

KCB has additional support for more advanced features of the sensor such as face tracking and voice recognition. Check out the samples folder for working code that illustrates how to get up and running quickly.

KCB builds with both face tracking and voice recognition enabled. To disable these items remove the following preprocessor defines from the C++ preprocessor properties of the KinectCommonBridge project:

...

Additional Resources

Accessible Kinect and Yoga


Today's inspirational post comes to us from Kyle Rector, who shows off one of the coolest Kinect usage examples...

Accessible yoga for the blind using Kinect

[GD: Post copied in full below]
Yoga is not an easy exercise for those who are vision-impaired. Fortunately, Kyle Rector, a fourth-year PhD student at the University of Washington, has developed a way to make it more accessible. After sustaining a running injury that made her turn to yoga for exercise, she and her advisor, Julie Kientz, realized the potential of using Kinect to detect yoga poses. Using skeletal tracking, voice recognition, and software that can help people improve their body alignment, they developed Eyes-Free Yoga.

The program uses Microsoft Kinect software to track body movements and quickly offer verbal feedback for various yoga poses. A mix of video game and exercise, Eyes-Free Yoga makes a typically visual exercise accessible to people without sight, allowing them to interact verbally with an instructor. Not only has Kyle helped create a new way for the vision-impaired to exercise, but she's also paved the way for others to create things for the visually impaired using the same technology.

Kyle is featured on the Microsoft Facebook Page in #ICreatedThis, an ongoing series that showcases people doing interesting things at Microsoft and with Microsoft technology.  Know someone else doing something amazing?  Tweet us @Microsoft using the #ICreatedThis hashtag or email the story to cmgsocial@microsoft.com.


Project Information URL: http://blogs.technet.com/b/firehose/archive/2014/03/07/accessible-yoga-for-the-blind-using-kinect.aspx

Kinect 1 vs. Kinect 2, a quick side-by-side reference


Today we're highlighting a quick post from James Ashley who does a nice Kinect 1 vs. 2 side-by-side comparison.

Quick Reference: Kinect 1 vs. Kinect 2

This information is preliminary as Kinect for Windows SDK 2.0 has not been released in final form and some of this may change.  Some things, such as no tilt motor and supported USB standards, are probably impossible to change.

[Table image: side-by-side comparison of Kinect 1 vs. Kinect 2 specifications]

Project Information URL: http://www.imaginativeuniversal.com/blog/post/2014/03/05/Quick-Reference-Kinect-1-vs-Kinect-2.aspx


Body Tracking with the Kinect for Windows v2


It's Kinect for Windows v2 Thursday, with another great piece of work from Vangos Pterneas. Today he continues by showing us the body tracking features and capabilities found in the Kinect for Windows v2 device.

Some of the other times we've highlighted Vangos Pterneas's work;

Kinect for Windows version 2: Body tracking

NOTE: This is preliminary software and/or hardware and APIs are preliminary and subject to change.

In my previous blog post, I showed you how to display the color, depth and infrared streams of Kinect version 2 by transforming the raw binary data into Windows bitmaps.

This time, we’ll dive into the most essential part of Kinect: Body tracking.

The initial version of Kinect allowed us to track up to 20 body joints. The second version allows up to 25 joints. The new joints include the fists and thumbs! Moreover, due to the enhanced depth sensor, the tracking accuracy has been significantly improved. Experienced users will notice less jittering and much better stability. Once again, I would like to remind you of my video, which demonstrates the new body tracking capabilities:

Next, we are going to implement body tracking and display all of the new joints on-screen. We’ll extend the project we created previously. You can download the source code here.

Extending the project

In the previous blog post, we created a project with an <Image> element for displaying the streams. We now need to add a <Canvas> control for drawing the body. Here is the updated XAML code:

...

The Reader_MultiSourceFrameArrived method will be called whenever a new frame is available. Let’s specify what will happen in terms of the body data:

  1. Get a reference to the body frame
  2. Check whether the body frame is null – this is crucial
  3. Initialize the _bodies list
  4. Call the GetAndRefreshBodyData method, so as to copy the body data into the list
  5. Loop through the list of bodies and do awesome stuff!

Always remember to check for null values. Kinect provides you with approximately 30 frames per second – anything could be null or missing! Here is the code so far:

...
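Roughly, those five steps amount to something like the following hedged sketch, written against the released v2 API shape (the preview API the article targets differs slightly; _bodies here is assumed to be a Body[] field):

void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    MultiSourceFrame reference = e.FrameReference.AcquireFrame();
    if (reference == null) return;

    using (BodyFrame frame = reference.BodyFrameReference.AcquireFrame()) // 1. get a reference
    {
        if (frame == null) return;                                        // 2. null check - crucial

        if (_bodies == null)
            _bodies = new Body[frame.BodyCount];                          // 3. initialize the list

        frame.GetAndRefreshBodyData(_bodies);                             // 4. copy the body data

        foreach (Body body in _bodies)                                    // 5. loop through the bodies
        {
            if (body != null && body.IsTracked)
            {
                // do awesome stuff!
            }
        }
    }
}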

This is it! We now have access to the bodies Kinect identifies. The next step is to display the skeleton information on-screen. Each body consists of 25 joints. The sensor provides us with the position (X, Y, Z) and the rotation information for each one of them. Moreover, Kinect lets us know whether the joints are tracked, inferred or not tracked. It's good practice to check whether a body is tracked before performing any critical functions. The following code illustrates how we can access the different body joints:
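The snippet is elided here; roughly, it amounts to something like this sketch (not the article's exact code):

// Sketch: read the position and tracking state of a single joint.
Joint head = body.Joints[JointType.Head];

if (head.TrackingState == TrackingState.Tracked)
{
    float x = head.Position.X; // camera-space units
    float y = head.Position.Y;
    float z = head.Position.Z; // distance from the sensor
}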

The supported joints by Kinect 2 are the following:

  • SpineBase
  • SpineMid
  • Neck
  • Head
  • ShoulderLeft
  • ElbowLeft
  • WristLeft
  • HandLeft
  • ShoulderRight
  • ElbowRight
  • WristRight
  • HandRight
  • HipLeft
  • KneeLeft
  • AnkleLeft
  • FootLeft
  • HipRight
  • KneeRight
  • AnkleRight
  • FootRight
  • SpineShoulder
  • HandTipLeft
  • ThumbLeft
  • HandTipRight
  • ThumbRight

Neck and thumbs are new joints added in the second version of Kinect.

Knowing the coordinates of every joint, we can now draw some objects using XAML and C#. However, Kinect provides distances in millimetres, so we need to map the millimetres to screen pixels. In the attached project, I have done this mapping for you, so the only method you need to call is DrawPoint or DrawLine. Here is DrawPoint:

...
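The implementation is elided above; a hedged sketch of what such a DrawPoint helper might do (mapping camera space to color-space pixels via CoordinateMapper, then placing an ellipse on the canvas; this is illustrative, not the article's exact code):

public void DrawPoint(Canvas canvas, Joint joint, CoordinateMapper mapper)
{
    if (joint.TrackingState == TrackingState.NotTracked) return;

    // Map the 3D camera-space position to 2D color-space pixels.
    ColorSpacePoint point = mapper.MapCameraPointToColorSpace(joint.Position);

    var ellipse = new Ellipse { Width = 20, Height = 20, Fill = Brushes.LightBlue };
    Canvas.SetLeft(ellipse, point.X - ellipse.Width / 2);
    Canvas.SetTop(ellipse, point.Y - ellipse.Height / 2);
    canvas.Children.Add(ellipse);
}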


...

[Click through to see more, the code and more]

Project Information URL: http://www.codeproject.com/Articles/743862/Kinect-for-Windows-version-Body-tracking

Project Source URL: https://workspaces.codeproject.com/pterneas/kinect-for-windows-version-body-tracking


Jitter Filter for the Kinect


Today's project comes to us from Marc Drossaers, who teaches us a good bit about jitter: what it is and, best of all, how we can add a jitter filter to our next Kinect project. This is a smaller piece of a bigger project he's working on, one that I'm sure we'll be highlighting soon...

A Jitter Filter for the Kinect

This blog post introduces a filter for the jitter caused by the Kinect depth sensor. The filter works essentially by applying a dynamic threshold. Experience shows that a threshold works much better than averaging, which has the disadvantage of negatively influencing motion detection and yields only moderate results. The presented DiscreteMedianFilter removes the jitter. A problem that remains to be solved is the manifestation of depth shadows. Performance of the filter is fine; it is great in the absence of depth-shadow countermeasures.

Introduction

Kinect depth images show considerable jitter; see e.g. the depth samples from the SDK. Jitter degrades image quality. But it also makes compression (Run Length Encoding) harder; compression for the Kinect Server System will be discussed in a separate blog post. For these reasons we want to reduce the jitter, if not eliminate it.

Kinect Depth Data

What are the characteristics of Kinect depth data?

Literature on Statistical Analysis of the Depth Sensor

Internet search delivers a number of papers reporting on thorough analysis of the depth sensor. In particular:

...

Depth Data

We are interested in the depth properties of the 640×480 spatial image that the Kinect produces at 30 FPS in the Default range. From the SDK documentation we know that the Kinect provides depth measurements in millimeters. A depth value measures the distance between a coordinate in the spatial image and the corresponding coordinate in the parallel plane at the depth sensor; see the image below from the Kinect SDK documentation.

...

Jitter

The Kinect depth measurements are characterized by some uncertainty that is expressible as a random error. One can distinguish between errors in the x,y-plane on the one hand, and on the z-axis (depth values) on the other. It is the latter that is referred to as the depth jitter. The random error in the x,y-plane is much smaller than the depth jitter. I suspect it manifests itself as the color jitter in the KinectColorDepthServer through the mapping of color onto depth, but that still has to be sorted out. Nevertheless, the filter described here is also applied to the color data, after mapping onto depth.

The depth jitter has the following characteristics:

...

A Kinect Produces a Limited Set of Discrete Depth Values

It is not the goal of the current project to correct the Kinect depth data; we just want to send it over an Ethernet network. What helps a lot is, and you could see this one coming:

The Kinect produces a limited set of depth values.

The Kinect for Windows produces 345 different depth values in the Default range, not counting the special values for unknown and out-of-range measurements. The depth values for my Kinect for Windows are (divide by 8 to get the depth distance in mm):

[Image: table of the 345 raw depth values]

Design

I’ve experimented with several approaches: a sliding window of temporal averages, and a bilateral filter. But these were unsatisfactory:

- Jitter reduction is much poorer than with a threshold.

- Movement detection is reduced as much as the jitter, which is an undesirable effect.

A simple threshold, of about the breadth of the error function, proved the best solution. As noted above, the jitter is typically limited to a few values above and below the ‘real’ value. We could ...

...
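The author's implementation is a C++ template class (see below); as a rough sketch of the thresholding idea itself, in C# for consistency with the other snippets here, and with an illustrative threshold value:

// Sketch: keep the previous depth value unless the new reading moves
// beyond a small threshold, suppressing per-pixel jitter while still
// letting genuine movement through.
const int Threshold = 4; // illustrative, in raw depth units

void Filter(ushort[] previous, ushort[] current)
{
    for (int i = 0; i < current.Length; i++)
    {
        int delta = Math.Abs(current[i] - previous[i]);
        if (delta <= Threshold)
            current[i] = previous[i]; // treat the change as jitter
        else
            previous[i] = current[i]; // accept it as real movement
    }
}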

The DiscreteMedianFilter Removes Jitter

In practice we see no more jitter when the filter is applied: the DiscreteMedianFilter ends the jitter, period. However, the filter is not applicable to (the edges of) depth shadows.

Noise

Actually, it turned out that this filter is in fact too good. If the Kinect registers a moving object, we get a moving depth shadow. The filter cannot deal with illegal depth values, so we are stuck with a depth-shadow smudge.

A modest level of noise solves this problem. In each frame, 10% of the pixels the filter skips are selected at random and updated. This works fine, but it should be regarded as a temporary solution: the real problem is, of course, the depth shadow, and that should be taken up.

Implementation

The DiscreteMedianFilter was implemented in C++ as a template class, with a traits template class (struct, actually), one specialization for the depth value type and one for the color value type, to set the parameters that are typical for each data type; and with a policy template which holds the variant of the algorithm that is specific to the depth and color data respectively. For evaluation purposes, I also implemented traits and policy classes for unsigned int.

...

Channels and Offset

Color data is made up of RGBA data channels: e.g. R is a channel. Working with channels is inspired by data compression. More on this subject in the blog post on data compression.

The advantages of working with channels for the DiscreteMedianFilter are:

...

Code

The code is complex at points, so it seems to me that printing it here would raise more questions than it would answer. Interested readers may download the code from The Byte Kitchen Open Sources at CodePlex. If you have a question about the code, please post a comment.

Performance

How much space and time do we need for filtering?

A small test program was built to run the filter on a number of generated arrays simulating successive depth and color frames. The program size never gets above 25.4 megabytes. The processing speed (without noise) is:

...

[Click through for the whole thing...]

Project Information URL: http://thebytekitchen.com/2014/03/17/a-jitter-filter-for-the-kinect/

Project Source URL: The Byte Kitchen Open Sources at CodePlex.

 


Face Swap... with a little help from the Kinect for Windows v2


Today's inspirational project is one that I really wish we had the source or a download for, and while I try to focus on projects that have them, this was just too cool not to share. It sure whets our appetite for Apache's new library...

Swap your face…really

Ever wish you looked like someone else? Maybe Brad Pitt or Jennifer Lawrence? Well, just get Brad or Jennifer in the same room with you, turn on the Kinect for Windows v2 sensor, and presto: you can swap your mug for theirs (and vice versa, of course). Don’t believe it? Then take a look at this cool video from Apache, in which two developers happily trade faces.

According to Adam Vahed, managing director at Apache, the ability of the Kinect for Windows v2 sensor and SDK to track multiple bodies was essential to this project, as the solution needed to track the head position of both users. In fact, Adam rates the ability to perform full-skeletal tracking of multiple bodies as the Kinect for Windows v2 sensor’s most exciting feature, observing that it “opens up so many possibilities for shared experiences and greater levels of game play in the experiences we create.”

...

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/03/17/swap-your-face-really.aspx

An Apache Labs project to demonstrate dynamic face swapping using the Kinect.
Since the Kinect doesn't track head rotation, both users need to be looking in the same direction for the illusion to work best.

This uses the Apache Kinect Library (alpha version), which integrates the Kinect for Windows v2 (dev preview) sensor and SDK with the Unity3D gaming engine.

The 1920 x 1080 colour feed from the Kinect is pushed to a Unity Texture and is displayed using an orthographic camera.

The users' head positions in 3D space are mapped to the relevant portions of the 2D video feed and these are then cut out and applied to two planes using an oval mask to blur the edges.

Please note: This is preliminary software and/or hardware and APIs are preliminary and subject to change.


Getting down and dirty coding with the Kinect for Windows v2


Today Tom Kerkhove takes us back to the Kinect for Windows v2, this time helping us get our hands a little dirty digging into some (well, more than just some!) coding...

[Tutorial] Gen. II Kinect for Windows – Basics Overview

After the theoretical overview, it is time to get our hands dirty and start with a basic application that will visualize the basic streams: color, depth, infrared & body tracking.

Disclaimer

Although this is a tutorial, I am bound by the Kinect for Windows developer program, which means I can't share the SDK/DLL.

“This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

What you will learn

This tutorial covers the following aspects -

  • Introduction to the alpha SDK
  • Visualize the camera
  • Depth indication
  • Display the infrared stream
  • Body/Skeletal tracking on top of the camera output


Prerequisites

In order to follow the tutorial you will need the following -

  • Windows 8/8.1
  • Visual Studio 2013
  • Basic C# & WPF knowledge
  • Kinect for Windows alpha sensor & SDK
Template

For the sake of this tutorial I’ve created a basic WPF template that we will use in this tutorial, you can download it here.

I. Introduction to the new SDK

This tutorial is based on the v2 alpha version (Nov-13) of the SDK, and some core functionality has changed due to the SDK “architecture”.

The SDK is built on top of the Kinect Core API, while Xbox One applications will use a separate SDK built on top of that same core API.

...


II. Getting started

...

III. Visualizing the camera

IV. Depth indication


V. Displaying the Infrared stream


VI. Body tracking, the new skeletal tracking

Conclusion

In this post we've learned how to implement the basic streams (color, depth, infrared & body) and visualize them for the user.
I hope you've noticed that each output type uses the same principles and that it is only a matter of processing the data!

Remember this – Connect, listen, acquire, process & disconnect.
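As a hedged sketch of that mantra in code, written against the API shape of the later public v2 SDK (which differs slightly from the alpha bits this tutorial targets):

// Connect
KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

// Listen
ColorFrameReader reader = sensor.ColorFrameSource.OpenReader();
reader.FrameArrived += (s, e) =>
{
    // Acquire
    using (ColorFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return; // a missed frame; just catch the next one

        // Process the frame data here
    }
};

// ... and when you're done: disconnect
reader.Dispose();
sensor.Close();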

You can download my complete demo here.

[Make sure you click through to read the entire thing...]

Project Information URL: http://www.kinectingforwindows.com/2014/03/03/gen-ii-kinect-basics-overview/

Project Source URL: https://github.com/KinectingForWindows/G2KBasicOverview


Final Kinect for Windows v2 Hardware Revealed


We've been highlighting some of the work being done by those with early versions of the Kinect for Windows v2 device. Last week the Kinect for Windows team revealed the final hardware and more...

Revealing Kinect for Windows v2 hardware

As we continue the march toward the upcoming launch of Kinect for Windows v2, we’re excited to share the hardware’s final look.

Sensor

The sensor closely resembles the Kinect for Xbox One, except that it says “Kinect” on the top panel, and the Xbox Nexus—the stylized green “x”—has been changed to a simple, more understated power indicator:


Hub and power supply
The sensor requires a couple of other components to work: the hub and the power supply. Tying everything together is the hub (top item pictured below), which accepts three connections: the sensor, USB 3.0 output to the PC, and power. The power supply (bottom item pictured below) does just what its name implies: it supplies all the power the sensor requires to operate. The power cables will vary by country or region, but the power supply itself supports voltages from 100 to 240 volts.


Kinect for Windows v2 hub (top) and power supply (bottom)

As this first look at the Kinect for Windows v2 hardware indicates, we're getting closer and closer to launch. So stay tuned for more updates on the next generation of Kinect for Windows.

Kinect for Windows Team

Key links

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/03/27/revealing-kinect-for-windows-v2-hardware.aspx
