Channel: Coding4Fun Kinect Projects (HD) - Channel 9

"Coding for Kinect with Scratch" MVA Course


It's been a long time, too long, since we've covered Kinect2Scratch from Stephen Howell.

You've all heard of Scratch?

Scratch is a programming language that makes it easy to create your own interactive stories, animations, games, music, and art -- and share your creations on the web.

As young people create and share Scratch projects, they learn important mathematical and computational ideas, while also learning to think creatively, reason systematically, and work collaboratively.

And you'll remember the times we've highlighted Kinect2Scratch;

Now what if I were to tell you Stephen has created and shared a full, free course on it? Woot!

Coding for Kinect with Scratch

Would you like to know how to build natural user interface (NUI) programs using Microsoft Kinect and the Scratch programming language? Check out this course, and explore how NUI applications can respond to users' movements and gestures and how the Kinect motion-sensing camera enables you to build cool body-tracking software.

Even if you can't program in an advanced language yet, you can use Scratch, the programming environment (from MIT) for beginners. Learn how to set up your computer to use Scratch and Kinect, and then see how to build NUI applications with ease. Start with tracking a single point, like a user's hand, and end by building motion-sensitive multiplayer games for Kinect using Scratch. Don't miss it!

(NOTE: To follow along with the course, you should have Kinect v1 or Kinect v2, along with Windows 7, Windows 8, or Windows 8.1, and associated SDKs, plus Scratch.)

Instructors | Stephen Howell - Microsoft Ireland Academic Engagement Manager

image

image

Project Information URL: http://scratch.saorog.com/, http://www.microsoftvirtualacademy.com/training-courses/coding-for-kinect-with-scratch

Contact Information:





Handpose - Look Ma, No Keyboard!


Today's inspirational post shows something that could become awesome... We all gesture at our PCs; wouldn't it be awesome if our PCs understood them? (Then again, maybe not... lol :)

All hands, no keyboard: New technology can track detailed hand motion

Or, let’s say you speak sign language and are trying to communicate with someone who doesn’t. Imagine a world in which a computer could track your hand motions to such a detailed degree that it could translate your sign language into the spoken word, breaking down a substantial communication barrier.

Researchers at Microsoft have developed a system that can track – in real time – all the sophisticated and nuanced hand motions that people make in their everyday lives.

The Handpose system could eventually be used by everyone from law enforcement officials directing robots into dangerous situations to office workers who want to sort through e-mail or read documents with a few flips of the wrist instead of taps on a keyboard.

It also opens up vast possibilities for the world of virtual reality video gaming, said Lucas Bordeaux, a senior research software development engineer with Microsoft Research, which developed Handpose. For one thing, it stands to resolve the disorienting feeling people get when they’re exploring virtual reality and stick their own hand in the frame, but see nothing.

Microsoft researchers will present the Handpose paper at this year’s CHI conference on human-computer interaction in Seoul, where it has received a Best of CHI Honorable Mention Award.

Handpose uses a camera to track a person’s hand movements. The system is different from previous hand-tracking technology in that it has been designed to accommodate much more flexible setups. That lets the user do things like get up and move around a room while the camera follows everything from zig-zag motions to thumbs-up signs, in real time.

The system can use a basic Kinect system, just like many people have on their own Xbox game console at home. But unlike the current home model, which tracks whole body movements, this system is designed to recognize the smaller and more subtle movements of the hand and fingers.

It turns out, it’s a lot more difficult for the computer to figure out what a hand is doing than to follow the whole body.

...

In the long run, the ability for computers to understand hand motions also will have important implications for the future of artificial intelligence, said Jamie Shotton, a principal researcher in computer vision who worked on the project.

That’s because it provides another step toward helping computers interpret our body language, including everything from what kind of mood we are in to what we want them to do when we point at something.

In addition, the ability for computers to understand more nuanced hand motions could make it easier for us to teach robots how to do certain things, like open a jar.

“The whole artificial intelligence space gets lit up by this,” Shotton said.

Project Information URL: http://blogs.microsoft.com/next/2015/04/17/all-hands-no-keyboard-new-technology-can-track-detailed-hand-motion/

Accurate, Robust, and Flexible Real-time Hand Tracking

Abstract

We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.

image

image

image

Project Information URL: http://research.microsoft.com/apps/pubs/default.aspx?id=238453



Kinect 2 Unity 5


Kinect and Unity, Peanut Butter and Chocolate (or you'd think so, given how often we cover them here... ;).

Now James Ashley has put together something just as yummy with his Unity 5 and Kinect 2 tutorial...

Unity 5 and Kinect 2 Integration

image

Until just this month one of the best Kinect 2 integration tools was hidden, like Rappaccini’s daughter, inside a walled garden. Microsoft released a Unity3D plugin for the Kinect 2 in 2014. Unfortunately, Unity 4 only supported plugins (bridges to non-Unity technology) if you owned a Unity Pro license, which typically cost over a thousand dollars per year.

On March 3rd, Unity released Unity 5, which includes plugin support in their free Personal edition, making it suddenly very easy to start building complex experiences like point cloud simulations that would otherwise require a decent knowledge of C++. In this post, I’ll show you how to get started with the plugin and start running a Kinect 2 application in about 15 minutes.

(As an aside, I always have trouble keeping this straight: Unity has plugins, openFrameworks has addons, while Cinder has blocks. Visual Studio has extensions and add-ins as well as NuGet packages after a confusing few years of rebranding efforts. There may be a difference between them but I can’t tell.)

1. First you are going to need a Kinect 2 and the Unity 5 software. If you already have a Kinect 2 attached to your XBox One, then this part is easy. You’ll just need to buy a Kinect Adapter Kit from the Microsoft store. This will allow you to plug your XBox One Kinect into your PC. The Kinect for Windows 2 SDK is available from the K4W2 website, though everything you need should automatically install when you first plug your Kinect into your computer. You don’t even need Visual Studio for this. Finally, you can download Unity 5 from the Unity website.

...

image

9. To build the app, select File | Build & Run from the top menu. Select Windows as your target platform in the next dialog and click the Build & Run button at the lower right corner. Another dialog appears asking you to select a location for your executable and a name. After selecting an executable name, click on Save in order to reach the final dialog window. Just accept the default configuration options for now and click on “Play!”. Congratulations. You’ve just built your first Kinect-enabled Unity 5 application!
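The elided steps 2-8 import the plugin and wire up a script. Purely as a sketch of the kind of component that ends up driving the scene (assuming the Microsoft plugin's Windows.Kinect namespace and its usual polling pattern; James' post has the real code), a minimal body reader looks something like this:

    using UnityEngine;
    using Windows.Kinect;   // namespace used by Microsoft's Kinect 2 plugin for Unity

    public class BodySourceManager : MonoBehaviour
    {
        private KinectSensor sensor;
        private BodyFrameReader reader;
        private Body[] bodies;

        void Start()
        {
            sensor = KinectSensor.GetDefault();
            if (sensor == null) return;
            reader = sensor.BodyFrameSource.OpenReader();
            bodies = new Body[sensor.BodyFrameSource.BodyCount];
            if (!sensor.IsOpen)
                sensor.Open();
        }

        void Update()
        {
            if (reader == null) return;
            using (var frame = reader.AcquireLatestFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);
            }
            foreach (var body in bodies)
            {
                if (body == null || !body.IsTracked) continue;
                // camera-space position of the right hand, in metres
                var hand = body.Joints[JointType.HandRight].Position;
                Debug.Log(string.Format("Right hand: {0:F2} {1:F2} {2:F2}",
                                        hand.X, hand.Y, hand.Z));
            }
        }

        void OnApplicationQuit()
        {
            if (reader != null) reader.Dispose();
            if (sensor != null && sensor.IsOpen) sensor.Close();
        }
    }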

Project Information URL: http://www.imaginativeuniversal.com/blog/post/2015/03/27/Unity-5-and-Kinect-2-Integration.aspx

Contact Information:




Kinect 2, Sound Detection with C++


Peter Daukintis, Friend of the Gallery, posted a short but sweet Kinect C++ example.

Here's some of the other posts from Peter we've highlighted recently;

Kinect V2 – Simple Sound Detection C++

This is a very simple post showing an implementation of the detection of a sound, any sound above a threshold, using the Kinect V2 SDK. The sample is written as a C++ XAML Windows Store app. The code uses the usual pattern for retrieving data from the Kinect via the SDK; that is,

  • Get Sensor
  • Choose Datasource
  • Open a Reader on the DataSource
  • Subscribe to the FrameArrived event on the Reader
  • Open the Sensor

In code, that pattern looks like this:
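Peter's actual listing is C++/XAML, so see the post for it. Purely as a sketch of those same five steps in managed code (C#, with an arbitrary amplitude threshold), the pattern comes out like this:

    using System;
    using Microsoft.Kinect;

    class SoundDetector
    {
        KinectSensor sensor;
        AudioBeamFrameReader reader;
        const float Threshold = 0.05f;   // arbitrary; tune for your room

        public void Start()
        {
            sensor = KinectSensor.GetDefault();        // 1. get the sensor
            var source = sensor.AudioSource;           // 2. choose the data source
            reader = source.OpenReader();              // 3. open a reader on it
            reader.FrameArrived += OnFrameArrived;     // 4. subscribe to FrameArrived
            sensor.Open();                             // 5. open the sensor
        }

        void OnFrameArrived(object s, AudioBeamFrameArrivedEventArgs e)
        {
            var frames = e.FrameReference.AcquireBeamFrames();
            if (frames == null) return;
            using (frames)
            {
                foreach (var subFrame in frames[0].SubFrames)
                {
                    var buffer = new byte[subFrame.FrameLengthInBytes];
                    subFrame.CopyFrameDataToArray(buffer);
                    // Kinect audio samples are 32-bit IEEE floats at 16 kHz
                    for (int i = 0; i < buffer.Length; i += sizeof(float))
                    {
                        if (Math.Abs(BitConverter.ToSingle(buffer, i)) > Threshold)
                        {
                            Console.WriteLine("Sound detected!");
                            return;
                        }
                    }
                }
            }
        }
    }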

Project Information URL: http://peted.azurewebsites.net/kinect-v2-simple-sound-detection-c/

Project Source URL: https://github.com/peted70/kinectv2-detectsound.git

Contact Information:





RoomAlive Toolkit & Hacking Augmented Reality with Kinect


We've mentioned IllumiRoom and RoomAlive before, but at Build 2015 there was a dedicated session AND the release of the RoomAlive Toolkit too!

Hacking Augmented Reality with Kinect

IllumiRoom (http://research.microsoft.com/en-us/projects/illumiroom/) demonstrated how projection mapping can enhance entertainment. RoomAlive (http://research.microsoft.com/en-us/projects/roomalive/) prototyped turning any room into an interactive, augmented experience. In this session, Andy Wilson (Microsoft Research) teaches the key concepts behind these projects including:

  • Kinect and projector calibration
  • Networking multiple Kinect sensors together
  • Displaying dynamic AR objects in real-time.

Additionally, all tools and source code used in the session will be released on GitHub to enable you to make use of these techniques in your own projects.

Project Information URL: http://channel9.msdn.com/Events/Build/2015/3-87

Kinect/RoomAliveToolkit

RoomAlive Toolkit README

The RoomAlive Toolkit calibrates multiple projectors and cameras to enable immersive, dynamic projection mapping experiences such as RoomAlive. It also includes a simple projection mapping sample.

This document has a few things you should know about using the toolkit's projector/camera calibration, and gives a tutorial on how to calibrate one projector and Kinect sensor (AKA 'camera').

Prerequisites

  • Visual Studio 2013
  • Kinect for Windows v2 SDK

The project uses SharpDX and Math.NET Numerics packages. These should be downloaded and installed automatically via NuGet when RoomAlive Toolkit is built.

Tutorial: Calibrating One Camera and One Projector

...

ProjectionMapping Sample

...

Calibrating Multiple Cameras and Multiple Projectors

...

More Online Resources

Source Code URL: https://github.com/Kinect/RoomAliveToolkit




Kinect to China Imagine Cup


Today's inspirational post from the Kinect for Windows team provides a peek into the great stuff coming from China and the Imagine Cup, and the amazing ways the Kinect is being used to change the world, one gesture at a time...

Kinect-based student projects shine at China Imagine Cup

Microsoft’s Imagine Cup has become a global phenomenon. Since its inception in 2003, this technology competition for students has grown from about 1,000 annual participants to nearly half a million in 2014. Now the 2015 competition is underway, and projects that utilize Kinect for Windows are coming on strong, as can be seen in the results of the competitions in China. Of the 405 Imagine Cup projects that made it to the second round of the China National Competition, 46 (11 percent) used Kinect for Windows technology.

image

Ten of these Kinect-based projects made it through the national semifinals, comprising 20 percent of the 49 projects that moved on to the national finals, where they competed for prizes in the Innovation, World Citizenship, and Games categories, as well as for three prizes in a special Kinect-technology category. Six of the ten Kinect-enabled projects came away with prizes, including two First Prizes in the Innovation category and two Second Prizes in the World Citizenship category (the top prize in all categories was the Grand Prize). 

Watch a video overview of the national finals of the China Imagine Cup 2015 competition

The table below provides information about the winning projects (two of which share a similar name—Laputa—which is a reference to a popular Japanese anime film). As you can see, the Pmomo project earned both a First Prize in the Innovation category and an Excellence Prize in the Kinect for Windows special category.

Kinect projects that earned prizes in the China Imagine Cup National Finals

image

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/05/08/kinect-based-student-projects-shine-at-china-imagine-cup.aspx

Contact Information:




Kinect v2 Avateering


Peter Daukintis, Friend of the Gallery, posted another great example of using the Kinect v2, this time using its capabilities to start an avateering journey...

Here's some of the other posts from Peter we've highlighted recently;

Avateering with Kinect V2 – Joint Orientations

For my own learning I wanted to understand the process of using the Kinect V2 to drive the real-time movement of a character made in 3D modelling software. This post is the first part of that learning: taking the joint-orientation data provided by the Kinect SDK and using it to position and rotate ‘bones’, which I will represent by rendering cubes, since this is a very simple way to visualise the data. (I won’t cover smoothing the data or modelling/rigging in this post). So the result should be something similar to the Kinect Evolution Block Man demo, which can be discovered using the Kinect SDK browser.

image

To follow this along you would need a working Kinect V2 sensor with USB adapter, a fairly high-specced machine running Windows 8.0/8.1 with USB3 and a DirectX11-compatible GPU and also the Kinect V2 SDK installed. Here are some instructions for setting up your environment. 

To back up a little, there are two main ways to represent body data from the Kinect: the first is to use the absolute positions provided by the SDK, which are values in 3D camera space measured in metres; the other is to use the joint orientation data to rotate a hierarchy of bones. The latter is the one we will look at here. Now, there is an advantage in using joint orientations: as long as your model has the same overall skeleton structure as the Kinect data, it doesn’t matter so much what the relative sizes of the bones are, which frees up the modelling constraints. The SDK has done the job of calculating the rotations from the absolute joint positions for us, so let’s explore how we can apply those orientations in code.

Code

I am going to program this by starting with the DirectX and XAML C++ template in Visual Studio which provides a basic DirectX 11 environment, with XAML integration, basic shaders and a cube model described in code ...

Body Data

Let’s start by getting the body data into our program from the sensor. As always we start with getting a KinectSensor object which I will initialise in the Sample3DSceneRenderer class constructor, then we open a BodyFrameReader on the BodyFrameSource, for which there is a handy property on the KinectSensor object. ...
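Peter's project is C++, but the managed SDK mirrors those calls closely. As a rough C# sketch of the same setup (class and member names other than the SDK's are mine):

    using Microsoft.Kinect;

    class BodySource
    {
        KinectSensor sensor;
        BodyFrameReader bodyReader;
        Body[] bodies;

        public void Init()
        {
            sensor = KinectSensor.GetDefault();
            // BodyFrameSource is the handy property mentioned above
            bodyReader = sensor.BodyFrameSource.OpenReader();
            bodyReader.FrameArrived += (s, e) =>
            {
                using (BodyFrame frame = e.FrameReference.AcquireFrame())
                {
                    if (frame == null) return;
                    if (bodies == null) bodies = new Body[frame.BodyCount];
                    // after this call, each tracked Body carries both Joints
                    // and JointOrientations, which is what we need here
                    frame.GetAndRefreshBodyData(bodies);
                }
            };
            sensor.Open();
        }
    }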

Kinect Joint Hierarchy

The first subject to consider is how the Kinect joint hierarchy is constructed as it is not made explicit in the SDK. Each joint is identified by one of the following enum values:...
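The enum list is elided here. Since the SDK never spells the hierarchy out, a common trick is a hand-built parent map from each joint to the joint it hangs off, with SpineBase as the root. The table below is a sketch of the commonly used hierarchy; verify it against the Block Man sample for your SDK version before relying on it:

    using System.Collections.Generic;
    using Microsoft.Kinect;

    static class KinectHierarchy
    {
        // each joint keyed to its parent joint; SpineBase is the root
        public static readonly Dictionary<JointType, JointType> Parent =
            new Dictionary<JointType, JointType>
        {
            { JointType.SpineMid,      JointType.SpineBase },
            { JointType.SpineShoulder, JointType.SpineMid },
            { JointType.Neck,          JointType.SpineShoulder },
            { JointType.Head,          JointType.Neck },
            { JointType.ShoulderLeft,  JointType.SpineShoulder },
            { JointType.ElbowLeft,     JointType.ShoulderLeft },
            { JointType.WristLeft,     JointType.ElbowLeft },
            { JointType.HandLeft,      JointType.WristLeft },
            { JointType.ShoulderRight, JointType.SpineShoulder },
            { JointType.ElbowRight,    JointType.ShoulderRight },
            { JointType.WristRight,    JointType.ElbowRight },
            { JointType.HandRight,     JointType.WristRight },
            { JointType.HipLeft,       JointType.SpineBase },
            { JointType.KneeLeft,      JointType.HipLeft },
            { JointType.AnkleLeft,     JointType.KneeLeft },
            { JointType.FootLeft,      JointType.AnkleLeft },
            { JointType.HipRight,      JointType.SpineBase },
            { JointType.KneeRight,     JointType.HipRight },
            { JointType.AnkleRight,    JointType.KneeRight },
            { JointType.FootRight,     JointType.AnkleRight },
        };
    }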

Bones

To draw each separate bone I modified the original cube model that was supplied with the default project template. I modified the coordinates of the original cube so that one end was at the origin and the other was 4 units in the y-direction; so when rendered ...

...

...this shows the end result:

image

Project Information URL: http://peted.azurewebsites.net/avateering-with-kinect-v2-joint-orientations/

Project Source URL: https://github.com/peted70/kinectv2-avateer-jointorientations

Contact Information:





Computational Hydrographic Printing


Today's inspirational project shows off the Kinect being used in new, exciting and unanticipated ways...

Computational Hydrographic Printing (SIGGRAPH 2015)

image

image

Abstract:
Hydrographic printing is a well-known technique in industry for transferring color inks on a thin film to the surface of a manufactured 3D object. It enables high-quality coloring of object surfaces and works with a wide range of materials, but suffers from the inability to accurately register color texture to complex surface geometries. Thus, it is hardly usable by ordinary users with customized shapes and textures.

We present computational hydrographic printing, a new method that inherits the versatility of traditional hydrographic printing, while also enabling precise alignment of surface textures to possibly complex 3D surfaces. In particular, we propose the first computational model for simulating the hydrographic printing process. This simulation enables us to compute a color image to feed into our hydrographic system for precise texture registration. We then build a physical hydrographic system upon off-the-shelf hardware, integrating virtual simulation, object calibration and controlled immersion. To overcome the difficulty of handling complex surfaces, we further extend our method to enable multiple immersions, each with a different object orientation, so the combined colors of individual immersions form a desired texture on the object surface. We validate the accuracy of our computational model through physical experiments, and demonstrate the efficacy and robustness of our system using a variety of objects with complex surface textures.

Project Information URL: http://www.cs.columbia.edu/~cxz/publications/hydrographics.pdf




Dark Olive Green Skin...


This is a great title for a post from Dwight Goins that I've been meaning to highlight for a while...

My Kinect told me I have Dark Olive Green Skin…

Did you know the Kinect for Windows v2 has the ability to determine your skin pigmentation and your hair color? Yes, I’m telling you the truth. One of the many features of the Kinect device is the ability to read the skin complexion and hair color of a person who is being tracked by the device.

If you ever need or require the ability to read the skin complexion of a person or determine the color of a person’s hair, this posting will show you how to do just that.

image

The steps are rather quick and simple. Determining the skin color requires you to access Kinect’s HD Face features.

Kinect has the ability to detect facial features in 3-D. This is known as “HD Face”. It can detect depth, height, and width. The Kinect can also use its high-definition camera to detect colors, such as the red, green, and blue intensities that reflect back, and infer the actual skin tone of a tracked face. Along with the skin tone, the Kinect can also detect the hair color on top of a person’s head…

So What’s Your Skin Tone? Click Here to download the source code and try it out.

If you want to include this feature inside your application, the steps you must take are:

1. Create a new WPF or Windows 8.1 WPF application

2. Inside the new application, add a reference to the Microsoft.Kinect and Microsoft.Kinect.Face assemblies.

...
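The elided steps open an HD Face model builder and wait for it to capture a model; the skin and hair colors then come back as packed 32-bit values on the FaceModel. A rough C# sketch of that flow, assuming the Microsoft.Kinect.Face builder API (the packed byte order shown is an assumption, so check Dwight's source):

    using System.Diagnostics;
    using System.Windows.Media;       // Color, for a WPF app
    using Microsoft.Kinect;
    using Microsoft.Kinect.Face;

    class SkinToneReader
    {
        KinectSensor sensor;
        HighDefinitionFaceFrameSource hdFaceSource;
        FaceModelBuilder builder;

        public void Start()
        {
            sensor = KinectSensor.GetDefault();
            hdFaceSource = new HighDefinitionFaceFrameSource(sensor);
            // NOTE: hdFaceSource.TrackingId must also be set from a tracked
            // body (via a BodyFrameReader) before face data can be collected.
            builder = hdFaceSource.OpenModelBuilder(
                FaceModelBuilderAttributes.SkinColor |
                FaceModelBuilderAttributes.HairColor);
            builder.CollectionCompleted += (s, e) =>
            {
                FaceModel model = e.ModelData.ProduceFaceModel();
                uint skin = model.SkinColor;   // packed 32-bit color
                uint hair = model.HairColor;
                // assumed AARRGGBB order -- verify against the sample
                Color skinColor = Color.FromArgb(
                    (byte)(skin >> 24), (byte)(skin >> 16),
                    (byte)(skin >> 8), (byte)skin);
                Debug.WriteLine(string.Format("skin: {0}, hair: 0x{1:X8}",
                                              skinColor, hair));
                // compare against named colors, e.g. Colors.DarkOliveGreen...
            };
            builder.BeginFaceDataCollection();
            sensor.Open();
        }
    }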

Once your application runs it should look similar to this (Minus the FrameStatus):

image

...

Project Information URL: https://dgoins.wordpress.com/2015/03/21/my-kinect-told-me-i-have-dark-olive-green-skin/

Contact Information:




Resolving "Kinect Monitor (KinectMonitor) failed to start."


Today is a quick post from the one and only Bruno Capuano...

[#KINECTSDK] Error: Kinect Monitor (KinectMonitor) failed to start.

Today is (again) a quick post. I hope this one is my last error fix of the year 2014. Today's issue is related to the installation process of the Kinect SDK v2. If you were using old SDKs, you'll probably find this error message:

Error code: 1920

Kinect Monitor (KinectMonitor) failed to start. Verify that you have sufficient privileges to start system services

image

So it's time to check the log in the temp folder. There is a message suggesting that previous versions of the Kinect SDK did not delete some files during the uninstall process, and that's why the current installer had problems deploying and registering the new Kinect service.

...

Project Information URL: http://elbruno.com/2014/12/19/kinectsdk-error-kinect-monitor-kinectmonitor-failed-to-start-2/

Contact Information:




"Avatar Car Driving with Microsoft Kinect V2"

KinemotoSDK (Kinect v2 Web Player)


Today's commercial product was just recently announced and is something pretty new and interesting...

KinemotoSDK just released! (Kinect v2 in the browser)

We're proud to announce the release of our first product that is now also available at the Unity Asset Store: https://www.assetstore.unity3d.com/en/#!/content/35136

We use the SDK to create all our own Kinect v2.0 games and think it can benefit the dev community. The $30 fee we ask for the package allows us to maintain and support the SDK. The USP of this SDK, in combination with the Kinemoto server (Windows 8.1 only), is the fact that it allows you to run Unity apps within a browser and still use the different Kinect streams.

image

We created a dedicated developer page that contains several tutorial videos to get you started. Have a look at http://developer.kinemoto.com.

If you're interested in this SDK and want to check it out, let us know. We give away vouchers to non-profit organizations or people that help us improve the product by providing valuable feedback ...

Project Information URL: https://social.msdn.microsoft.com/Forums/en-US/5704cb38-d063-48d8-b354-b782835b59f0/kinemotosdk-just-released-kinect-v2-in-the-browser?forum=kinectv2sdk

KinemotoSDK (Kinect Web Player)

The KinemotoSDK enables developers to use Kinect-enabled Unity apps/games in the Unity Web Player. With the KinemotoSDK developers can easily add Kinect streams to their app/game, make use of Kinemoto functions and methods, and build for the Unity Web Player and standalone.

Developers only need to download and install the KinemotoServer and voila!

image

The currently available streams are: Body, BodyIndex and Color. More streams will be added over time.
Future releases will also include WebGL and Android support.

Getting started

In order to work with the KinemotoSDK, you need to download and install the SDK, Kinect drivers and KinemotoServer first.

Have a look at our video tutorials!

image

Project Information URL: http://developer.kinemoto.com




Kinect to HD Face


Friend of the Gallery and Kinect MVP Vangos Pterneas is back with a great and detailed post on developing with the HD Face API.

Some of the other times we've highlighted Vangos Pterneas's work;

How to use Kinect HD Face

image

Throughout my previous article, I demonstrated how you can access the 2D positions of the eyes, nose, and mouth, using Microsoft’s Kinect Face API. The Face API provides us with some basic, yet impressive, functionality: we can detect the X and Y coordinates of 4 eye points and identify a few facial expressions using just a few lines of C# code. This is pretty cool for basic applications, like Augmented Reality games, but what if you need more advanced functionality from your app?

Recently, we decided to extend our Kinetisense project with advanced facial capabilities. More specifically, we needed to access more facial points, including lips, jaw and cheeks. Moreover, we needed the X, Y and Z position of each point in the 3D space. Kinect Face API could not help us, since it was very limited for our scope of work.

Thankfully, Microsoft has implemented a second Face API within the latest Kinect SDK v2. This API is called HD Face and is designed to blow your mind!

At the time of writing, HD Face is the most advanced face tracking library out there. Not only does it detect the human face, but it also allows you to access over 1,000 facial points in the 3D space. Real-time. Within a few milliseconds. Not convinced? I developed a basic program that displays all of these points. Creepy, huh?!

In this article, I am going to show you how to access all these points and display them on a canvas. I’ll also show you how to use Kinect HD Face efficiently and get the most out of it.

Prerequisites

Source Code

Tutorial

Although Kinect HD Face is truly powerful, you’ll notice that it’s badly documented, too. Insufficient documentation makes it hard to understand what’s going on inside the API. Actually, this is because HD Face is supposed to provide advanced, low-level functionality. It gives us access to raw facial data. We, the developers, are responsible for properly interpreting the data and using it in our applications. Let me guide you through the whole process.

Step 1: Create a new project

Let’s start by creating a new project. Launch Visual Studio and select File -> New Project. Select C# as your programming language and choose either the WPF or the Windows Store app template. Give your project a name and start coding.

...

But wait!

OK, we drew the points on screen. So what? Is there a way to actually understand what each point is? How can we identify where the eyes are? How can we detect the jaw? The API has no built-in mechanism to get a human-friendly representation of the face data. We need to handle over 1,000 points in the 3D space manually!

Don’t worry, though. Each one of the vertices has a specific index number. Knowing the index number, you can easily deduce what it corresponds to. For example, the vertex numbers 1086, 820, 824, 840, 847, 850, 807, 782, and 755 belong to the left eyebrow.
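For example, assuming a faceModel and faceAlignment that are being refreshed from the HighDefinitionFaceFrameReader's FrameArrived handler (via GetAndRefreshFaceAlignmentResult), picking those eyebrow vertices out is just an index lookup. A sketch:

    // assumes: KinectSensor sensor; FaceModel faceModel; FaceAlignment faceAlignment;
    // all initialized and refreshed as in the article
    int[] leftEyebrow = { 1086, 820, 824, 840, 847, 850, 807, 782, 755 };

    var vertices = faceModel.CalculateVerticesForAlignment(faceAlignment);
    foreach (int index in leftEyebrow)
    {
        CameraSpacePoint vertex = vertices[index];      // 3D, in metres
        // project to 2D color space to draw on a canvas over the camera image
        ColorSpacePoint point =
            sensor.CoordinateMapper.MapCameraPointToColorSpace(vertex);
        System.Diagnostics.Debug.WriteLine(string.Format(
            "vertex {0}: {1:F0}, {2:F0}", index, point.X, point.Y));
    }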

Similarly, you can find accurate semantics for every point. Just play with the API, experiment with its capabilities and build your own next-gen facial applications!

If you wish, you can use the Color, Depth, or Infrared bitmap generator and display the camera view behind the face. Keep in mind that simultaneous bitmap and face rendering may cause performance issues in your application. So, handle with care and do not over-use your resources.

image

Project Information URL: http://pterneas.com/2015/06/06/kinect-hd-face/

Project Source URL: https://github.com/Vangos/kinect-2-face-hd

Contact Information:




Finger Tracking with Metrilus Aiolos Finger Tracking Library


Today's library is one I've seen asked for a number of times on different forums and comments. Best of all, you can get it free and help them flesh it out...

Metrilus Aiolos Finger Tracking

We are excited to share our Finger Tracking library Aiolos for Kinect v2 with you. At this time, Aiolos is still in an experimental stage. Feel free to play with it, but don’t expect it to be perfect, yet. To improve #Aiolos we are interested in your feedback! What do you use it for? How would you like to use it? Please also tell us if you find bugs. This is especially important for us to further develop Aiolos.

Features

  • 2-D positions of the fingertip, middle, and root joints
  • 2-D contour points of the hand
  • 3-D positions of the fingertip, middle, and root joints
  • 3-D contour points of the hand
  • finger labeling (experimental)

image

Usage

Aiolos for Kinect v2 works side by side with the Kinect SDK. Get the infrared and depth images, put them into Aiolos, and get three 3D points for each finger. The download also includes a small sample program.
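The Aiolos entry point itself isn't shown in the announcement, so the tracker call below is a placeholder (check the bundled sample program for the real one); the Kinect side, though, is the SDK's standard multi-source reader pattern for grabbing matched infrared and depth frames:

    using Microsoft.Kinect;

    var sensor = KinectSensor.GetDefault();
    var multiReader = sensor.OpenMultiSourceFrameReader(
        FrameSourceTypes.Depth | FrameSourceTypes.Infrared);

    ushort[] depthData = new ushort[512 * 424];   // Kinect v2 depth/IR resolution
    ushort[] irData    = new ushort[512 * 424];

    multiReader.MultiSourceFrameArrived += (s, e) =>
    {
        var multi = e.FrameReference.AcquireFrame();
        if (multi == null) return;
        using (var depth = multi.DepthFrameReference.AcquireFrame())
        using (var ir    = multi.InfraredFrameReference.AcquireFrame())
        {
            if (depth == null || ir == null) return;
            depth.CopyFrameDataToArray(depthData);
            ir.CopyFrameDataToArray(irData);
            // hypothetical Aiolos call: feed both images in, get three
            // 3D points per finger back -- see the Aiolos sample program
            // var fingers = FingerTracker.Track(irData, depthData);
        }
    };
    sensor.Open();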

Project Information URL: http://www.metrilus.de/blog/portfolio-items/aiolos/





Unity Asset - Kinect [v1] with MS-SDK


Last week I was a little taken to task for not covering the many Kinect assets in the Unity Asset Store. Sure I've blogged about a few, but I'd never actually searched the Store for Kinect assets. I know, "Bad Greg..."

image

I have to thank Rumen Filkov (aka RF Solutions) for pointing this out. Rumen has a number of assets there in the store, free and paid, which I'll be covering in the coming week to make up for missing this great resource... :)

The first is his Kinect v1 asset. Sure, the Kinect v1 has been out for years and been superseded by the Kinect v2, but there are still a good number of v1's out there...

Kinect with MS-SDK

image

This is a set of Kinect v1 examples that uses several major scripts, grouped in one folder. It demonstrates how to use Kinect-controlled avatars, Kinect-detected gestures or other Kinect-related stuff in your own Unity projects. This asset uses the Kinect SDK/Runtime provided by Microsoft. For more Kinect v1-related examples, utilizing Kinect Interaction, Kinect Speech Recognition, Face Tracking or Background Removal, see the KinectExtras package. These two packages work with Kinect v1 only and can be used with both Unity Pro and Unity Free editors.

Project Download URL: https://www.assetstore.unity3d.com/en/#!/content/7747 

Kinect with MS-SDK

...

How to Run the Example:
1. Download and install Kinect SDK 1.8 or Kinect Runtime 1.8 as explained in Readme-Kinect-MsSdk.pdf, located in Assets-folder.
2. Download and import the package.
3. Open and run scene KinectAvatarsDemo, located in Assets/AvatarsDemo-folder.
4. Open and run scene KinectGesturesDemo, located in Assets/GesturesDemo-folder.
5. Open and run scene KinectOverlayDemo, located in Assets/OverlayDemo-folder.

Download:
The official release of ‘Kinect with MS-SDK’-package is available in the Unity Asset Store.
The project’s Git-repository is located here. The repository is private and its access is limited to contributors and donators only.

Troubleshooting:
* If you need integration with the KinectExtras, see ‘How to Integrate KinectExtras with the KinectManager’-section here.
* If you get DllNotFoundException, make sure you have installed the Kinect SDK 1.8 or Kinect Runtime 1.8.
* Kinect SDK 1.8 and tools (Windows-only) can be found here.
* The example was tested with Kinect SDK 1.5, 1.6, 1.7 and 1.8.
* Here is a link to the project’s Unity forum: http://forum.unity3d.com/threads/218033-Kinect-with-MS-SDK

What’s New in Version 1.11:
1. Added max-user-distance setting to KinectManager, to allow max-distance limitation.
2. Added maps-width-percent setting to KinectManager, to allow specifying of depth & color maps width as percent of the game-window width.
3. Added colliders to the avatars in KinectAvatarsDemo-scene.
4. Updated KinectOverlayDemo-scene to use full-screen background.
5. Updated calls to the KinectExtras-functions, in order to sync them to the latest Extras’ version.
6. Fixed Playmaker-Kinect actions.
7. Converted package to Unity v.4.5.

Playmaker Actions for ‘Kinect with MS-SDK’ and ‘KinectExtras with MsSDK':
And here is “one more thing”: A great Unity-package for designers and developers using Playmaker, created by my friend Jonathan O’Duffy from HitLab Australia and his team of talented students. It contains many ready-to-use Playmaker actions for Kinect and a lot of example scenes. The package integrates seamlessly with ‘Kinect with MS-SDK’ and ‘KinectExtras with MsSDK’-packages. I can only recommend it!

...

Project Information URL: http://rfilkov.com/2013/12/16/kinect-with-ms-sdk/




Kinect 2 Computer Vision


Kinect MVP James Ashley is back with a great example of using OpenCV v3 (which we highlighted in OpenCV turns 3 and Intel(R) INDE OpenCV), Emgu, and the Kinect v2 to implement computer vision/face detection.

Some of our other posts where we highlight James;

Emgu, Kinect and Computer Vision

image

Last week saw the announcement of the long awaited OpenCV 3.0 release, the open source computer vision library originally developed by Intel that allows hackers and artists to analyze images in fun, fascinating and sometimes useful ways. It is an amazing library when combined with a sophisticated camera like the Kinect 2.0 sensor. The one downside is that you typically need to know how to work in C++ to make it work for you.

This is where EmguCV comes in. Emgu is a .NET wrapper library for OpenCV that allows you to use some of the power of OpenCV on .NET platforms like WPF and WinForms. Furthermore, all it takes to make it work with the Kinect is a few conversion functions that I will show you in the post.

Emgu gotchas

The first trick is just doing all the correct things to get Emgu working for you. Because it is a wrapper around C++ classes, there are some not so straightforward things you need to remember to do.

1. First of all, Emgu downloads as an executable that extracts all its files to your C: drive. This is actually convenient since it makes sharing code and writing instructions immensely easier.

2. Any CPU isn’t going to cut it when setting up your project. You will need to specify your target CPU architecture since C++ isn’t as flexible about this as .NET is. Also, remember where your project’s executable is being compiled to. For instance, an x64 debug build gets compiled to the folder bin/x64/Debug, etc.

3. You need to grab the correct OpenCV C++ library files and drop them in the appropriate target project file for your project. Basically, when you run a program using Emgu, your executable expects to find the OpenCV libraries in its root directory. There are lots of ways to do this such as setting up pre-compile directives to copy the necessary files. The easiest way, though, is to just go to the right folder, e.g. C:\Emgu\emgucv-windows-universal-cuda 2.4.10.1940\bin\x64, copy everything in there and paste it into the correct project folder, e.g. bin/x64/Debug. If you do a straightforward copy/paste, just remember not to Clean your project or Rebuild your project since either action will delete all the content from the target folder.

4. Last step is the easiest. Reference the necessary Emgu libraries. The two base ones are Emgu.CV.dll and Emgu.Util.dll. I like to copy these files into a project subdirectory called libs and use relative paths for referencing the dlls, but you probably have your own preferred way, too.

WPF and Kinect SDK 2.0

I’m going to show you how to work with Emgu and Kinect in a WPF project. The main difficulty is simply converting between image types that Kinect knows and image types that are native to Emgu. I like to do these conversions using extension methods. I provided these extensions in my first book Beginning Kinect Programming about the Kinect 1 and will basically just be stealing from myself here.

I assume you already know the basics of setting up a simple Kinect program in WPF. In MainWindow.xaml, just add an image to the root grid and call it rgb:

...

image

You should now be able to plug in any of the sample code provided with Emgu to get some cool CV going. As an example, in the code below I use the Haarcascade algorithms to identify heads and eyes in the Kinect video stream. I’m sampling the data every 10 frames because the Kinect is sending 30 frames a second while the Haarcascade code can take as long as 80ms to process. Here’s what the code would look like:

...
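The full listing is in James' post. As a condensed sketch of the conversion-plus-cascade flow he describes, assuming the Emgu 3.x-style CascadeClassifier API and the stock OpenCV cascade file, it looks roughly like this:

    using System.Drawing;                 // Rectangle
    using Emgu.CV;                        // CascadeClassifier, Image<,>
    using Emgu.CV.Structure;              // Bgra, Gray
    using Microsoft.Kinect;               // ColorFrame, ColorImageFormat

    class FaceFinder
    {
        readonly byte[] pixels = new byte[1920 * 1080 * 4];   // Kinect v2 color, BGRA
        readonly CascadeClassifier faces =
            new CascadeClassifier("haarcascade_frontalface_default.xml");
        int frameCount;

        public Image<Bgra, byte> Process(ColorFrame frame)
        {
            frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);
            // copy the Kinect bytes into an Emgu image
            var image = new Image<Bgra, byte>(1920, 1080) { Bytes = pixels };

            if (++frameCount % 10 == 0)                       // sample every 10th frame
            {
                // Haar detection works on grayscale
                using (Image<Gray, byte> gray = image.Convert<Gray, byte>())
                {
                    foreach (Rectangle r in faces.DetectMultiScale(gray, 1.1, 4))
                        image.Draw(r, new Bgra(0, 0, 255, 255), 2);   // red box
                }
            }
            // copy image.Bytes into a WriteableBitmap to show in the 'rgb' element
            return image;
        }
    }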

Project Information URL: http://www.imaginativeuniversal.com/blog/post/2015/06/11/Emgu-and-Kinect-and-Computer-Vision.aspx

Contact Information:




Unity Asset - Kinect v2 with MS-SDK


Last week I introduced you to the Unity Kinect assets of Rumen Filkov (aka RF Solutions), with Unity Asset - Kinect [v1] with MS-SDK. As I said, here's another of his Unity assets, today focusing on the Kinect v2.

Kinect v2 with MS-SDK

image

Kinect v2 with MS-SDK is a set of Kinect v2 examples that uses several major scripts, grouped in one folder. The package contains over ten demo scenes. The avatars-demo demonstrates how to utilize Kinect-controlled avatars in your Unity projects. The gestures-demo shows how to use Kinect gestures in your scenes. The interaction demo presents hand controlled cursors and utilization of the hand grips to drag and drop 3d-objects. The overlay-demos show how to align 3d-objects to the Kinect video stream. The face-tracking demos present Kinect face tracking and HD face models. The speech recognition demo shows how to use Kinect speech recognition to control the player with voice commands. There are many other demo scenes too, like the background removal demo, depth collider demo, multi-scenes demo and fitting-room demo. This package works with Kinect v2 and v1, supports 32- and 64-bit builds and can be used in Unity Pro and Unity Personal editors.

This package is free to schools, universities, students and teachers. If you match this criterion, send me an e-mail to get the Kinect-v2 package directly from me.

Customer support: First, see if you can find the answer you’re looking for on this page, in the comments below the articles or in the Unity forum. If it is not there, you may contact me, but please don’t do it on weekends or holidays. Like everybody else, I also need some free time to rest.

How to Run the Example:

...

Download:
The official release of ‘Kinect v2 with MS-SDK’-package is available at the Unity Asset Store.

...

Troubleshooting:
* If you get exceptions at the scene start-up, make sure ...

...

What’s New in Version 2.5:

...

Videos worth 1000 Words:
Here is a video by Ricardo Salazar, created with Unity5, Kinect v2 and “Kinect v2 with MS-SDK”, v.2.3:


 

Project Information URL: http://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/

Project Download URL: https://www.assetstore.unity3d.com/en/#!/content/18708




Kinect to your Heart


Today Dwight Goins shares a great example of using one of the coolest capabilities of the Kinect v2, heart rate detection...

Detecting heart rate with Kinect

When the latest Kinect sensor was unveiled more than a year ago at Build 2014, demos showed how it could determine a user’s heart rate without attaching sensors or wires to his or her body. But that was old news to regular followers of D Goins Insperience, the personal blog of Dwight Goins, a Microsoft Kinect for Windows MVP and founder of Dwight Goins Inc. As Goins revealed in February 2014, he had already devised his own application for detecting a person’s heart rate with the preview version of the latest Kinect sensor.  

Goins’ app, which he has subsequently refined, takes advantage of three of the latest sensor’s key features: its time-of-flight infrared data stream, its high-definition-camera color data stream, and face tracking. The infrared stream returns an array of infrared (IR) intensities from 0 to 65,535, the color stream returns RGB data pixels, and the face tracking provides real-time location and positioning of a person’s face. He thus knew how to capture a facial image, measure its infrared intensity, and gauge the RGB color brightness level in each of its pixels. The following video shows Goins' Kinect v2 heart rate detector in action.

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/06/12/detecting-heart-rate-with-kinect.aspx

Kinectv2HeartRate

Kinect for Windows v2 Heart Rate Library

image

This application is a .NET WPF application which uses the R statistical programming language engine, version > 3.12. The R engine must be installed on the system running the application; R can be installed from here: http://cran.r-project.org/ The WPF application utilizes the Kinect RGB, IR, and Face streams of data to determine a region around the face and calculate a spatially averaged brightness over time. The averaged values are then divided by their respective standard deviations to provide unit-variance values. These values are required for feeding into ICA algorithms. The values are saved into a csv file for processing with other machine learning techniques and algorithms.

The basic approach is simple. When a person's heart pumps blood, a volume of blood is pushed through various veins and muscles. As blood pumps through the muscles, particularly in the face, more light is absorbed, and less brightness is picked up by a web camera sensor. This change in brightness is very minute and can be extracted using mathematical tricks. The change in brightness is periodic; in other words, a signal or wave. If we can match the signal/wave to that of a blood pulse, we can calculate the heart rate.

In order to match the change in brightness to a blood pulse we use the Independent Component Analysis (ICA) concept. This is the cocktail-party concept, and it is the basis for finding hidden signals within a set of mixed signals. If you have two people talking in a crowded room, and you have microphones placed at various locations around the room, ICA algorithms let you take a mixed sample of signals, such as sound waves, and calculate an estimated separation of the component signals. If you match a separated component to the original signal of a person speaking, you have found that person in the crowded room.

This ICA concept is also known as blind source separation, and this project uses the JADE algorithm for R to provide the separation matrix of components for the R, G, B, IR mixture of data. The separated components then have their frequencies extracted using a fast Fourier transform, to find a component whose dominant frequency falls in the range of a human heart rate.
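As a feel for that last step, here is a minimal sketch (not Dwight's code, which runs JADE ICA in R first) of turning a few seconds of normalized brightness samples into a BPM estimate with an FFT, using MathNet.Numerics:

    using System;
    using System.Numerics;
    using MathNet.Numerics.IntegralTransforms;

    static class HeartRate
    {
        // brightness: one spatially averaged sample per color frame
        // sampleRateHz: the camera frame rate, ~30 for Kinect v2
        public static double EstimateBpm(double[] brightness, double sampleRateHz)
        {
            // normalize to zero mean and unit variance, as the project describes
            double mean = 0.0, sd = 0.0;
            foreach (double b in brightness) mean += b;
            mean /= brightness.Length;
            foreach (double b in brightness) sd += (b - mean) * (b - mean);
            sd = Math.Sqrt(sd / brightness.Length);

            var signal = new Complex[brightness.Length];
            for (int i = 0; i < brightness.Length; i++)
                signal[i] = new Complex((brightness[i] - mean) / sd, 0.0);

            Fourier.Forward(signal, FourierOptions.Matlab);

            // strongest frequency in the plausible heart-rate band wins
            double bestFreq = 0.0, bestMag = 0.0;
            for (int i = 1; i < signal.Length / 2; i++)
            {
                double freq = i * sampleRateHz / signal.Length;
                if (freq < 0.75 || freq > 4.0) continue;   // 45..240 bpm
                if (signal[i].Magnitude > bestMag)
                {
                    bestMag = signal[i].Magnitude;
                    bestFreq = freq;
                }
            }
            return bestFreq * 60.0;   // Hz -> beats per minute
        }
    }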

Project Source URL: https://github.com/dngoins/Kinectv2HeartRate

A couple of other times we've highlighted Dwight's work;

Contact Information:




Unity Asset - Kinect v2 with MS-SDK Tips, Tricks and Examples


For the last couple of weeks I've been highlighting the Kinect Unity assets of Rumen Filkov (aka RF Solutions), Unity Asset - Kinect [v1] with MS-SDK and Unity Asset - Kinect v2 with MS-SDK.

Today I'm wrapping up the series by sharing a great blog post from Rumen on using his “Kinect v2 with MS-SDK” asset...

Kinect v2 Tips, Tricks and Examples

After answering so many different questions about how to use various parts and components of the “Kinect v2 with MS-SDK”-package, I think it would be easier if I share some general tips, tricks and examples. I’m going to expand this article in time with more tips and examples. Please drop by from time to time to check it out.

Table of Contents:

What is the purpose of all managers in the KinectScripts-folder
How to use the Kinect v2-Package functionality in your own Unity project
How to use your own model with the AvatarController
How to make the avatar hands twist around the bone
How to utilize Kinect to interact with GUI buttons and components
How to get the depth- or color-camera textures
How to get the position of a body joint
How to make a game object rotate as the user
How to make a game object follow user’s head position and rotation
How to get the face-points’ coordinates
How to mix Kinect-captured movement with Mecanim animation
How to add new model to the FittingRoom-demo

...
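As a taste of the answers, here's the "How to get the position of a body joint" item, sketched from memory of the package's KinectManager API (treat it as approximate and check the article for the authoritative version):

    using UnityEngine;

    public class JointPositionExample : MonoBehaviour
    {
        void Update()
        {
            KinectManager manager = KinectManager.Instance;
            if (manager && manager.IsInitialized() && manager.IsUserDetected())
            {
                long userId = manager.GetPrimaryUserID();
                // joint positions come back in metres, in Kinect camera space
                Vector3 pos = manager.GetJointPosition(
                    userId, (int)KinectInterop.JointType.HandRight);
                Debug.Log("Right hand at: " + pos);
            }
        }
    }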

Project Information URL: http://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/

Finally, remember the Unity Asset Store is your friend... For example, check out all these Kinect Assets

image

Contact Information:



