This is a very simple post showing how to detect a sound, any sound above a given threshold, using the Kinect v2 SDK. The sample is written as a C++ XAML Windows Store app. The code uses the usual pattern for retrieving data from the Kinect via the SDK; that is, get the default sensor, open a reader on the relevant source, and handle its FrameArrived event.
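The original sample is C++/XAML, but the acquisition pattern is identical in the managed API. Here is a rough C# sketch of the idea; the energy threshold is an arbitrary placeholder you would tune for your environment:

```csharp
using System;
using Microsoft.Kinect;

class SoundThresholdDetector
{
    const float Threshold = 0.05f; // arbitrary; tune for your room and microphone gain

    KinectSensor sensor;
    AudioBeamFrameReader reader;
    byte[] audioBuffer;

    public void Start()
    {
        sensor = KinectSensor.GetDefault();
        audioBuffer = new byte[sensor.AudioSource.SubFrameLengthInBytes];
        reader = sensor.AudioSource.OpenReader();
        reader.FrameArrived += OnFrameArrived;
        sensor.Open();
    }

    void OnFrameArrived(object s, AudioBeamFrameArrivedEventArgs e)
    {
        var frameList = e.FrameReference.AcquireBeamFrames();
        if (frameList == null) return;
        using (frameList)
        {
            foreach (AudioBeamSubFrame subFrame in frameList[0].SubFrames)
            {
                subFrame.CopyFrameDataToArray(audioBuffer);

                // Samples are 32-bit IEEE floats; average the squares for a crude energy measure.
                float energy = 0;
                for (int i = 0; i < audioBuffer.Length; i += sizeof(float))
                {
                    float sample = BitConverter.ToSingle(audioBuffer, i);
                    energy += sample * sample;
                }
                energy /= audioBuffer.Length / sizeof(float);

                if (energy > Threshold)
                    Console.WriteLine("Sound detected!");
            }
        }
    }
}
```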
Additionally, all tools and source code used in the session will be released on GitHub to enable you to make use of these techniques in your own projects.
The RoomAlive Toolkit calibrates multiple projectors and cameras to enable immersive, dynamic projection mapping experiences such as RoomAlive. It also includes a simple projection mapping sample.
This document covers a few things you should know about using the toolkit's projector/camera calibration, and provides a tutorial on how to calibrate one projector and Kinect sensor (AKA 'camera').
Prerequisites
Visual Studio 2013
Kinect for Windows v2 SDK
The project uses SharpDX and Math.NET Numerics packages. These should be downloaded and installed automatically via NuGet when RoomAlive Toolkit is built.
Tutorial: Calibrating One Camera and One Projector
...
ProjectionMapping Sample
...
Calibrating Multiple Cameras and Multiple Projectors
Today's inspirational post comes from the Kinect for Windows team and provides a peek into the great stuff coming from China and the Imagine Cup, and the amazing ways the Kinect is being used to change the world, one gesture at a time...
Kinect-based student projects shine at China Imagine Cup
Microsoft’s Imagine Cup has become a global phenomenon. Since its inception in 2003, this technology competition for students has grown from about 1,000 annual participants to nearly half a million in 2014. Now the 2015 competition is underway, and projects that utilize Kinect for Windows are coming on strong, as can be seen in the results of the competitions in China. Of the 405 Imagine Cup projects that made it to the second round of the China National Competition, 46 (11 percent) used Kinect for Windows technology.
Ten of these Kinect-based projects made it through the national semifinals, comprising 20 percent of the 49 projects that moved on to the national finals, where they competed for prizes in the Innovation, World Citizenship, and Games categories, as well as for three prizes in a special Kinect-technology category. Six of the ten Kinect-enabled projects came away with prizes, including two First Prizes in the Innovation category and two Second Prizes in the World Citizenship category (the top prize in all categories was the Grand Prize).
The table below provides information about the winning projects (two of which share a similar name—Laputa—which is a reference to a popular Japanese anime film). As you can see, the Pmomo project earned both a First Prize in the Innovation category and an Excellence Prize in the Kinect for Windows special category.
Kinect projects that earned prizes in the China Imagine Cup National Finals
Peter Daukintis, Friend of the Gallery, posted another great example of using the Kinect v2, this time using its capabilities to start an Avatar journey...
Here are some of the other posts from Peter that we've highlighted recently:
For my own learning I wanted to understand the process of using the Kinect V2 to drive the real-time movement of a character made in 3D modelling software. This post is the first part of that learning: taking the joint orientation data provided by the Kinect SDK and using it to position and rotate ‘bones’, which I will represent by rendering cubes, since this is a very simple way to visualise the data. (I won’t cover smoothing the data or modelling/rigging in this post.) So the result should be something similar to the Kinect Evolution Block Man demo, which can be discovered using the Kinect SDK browser.
To follow along you will need a working Kinect V2 sensor with USB adapter; a fairly high-specced machine running Windows 8.0/8.1 with USB 3.0 and a DirectX 11-compatible GPU; and the Kinect V2 SDK installed. Here are some instructions for setting up your environment.
To back up a little, there are two main ways to represent body data from the Kinect: the first is to use the absolute positions provided by the SDK, which are values in 3D camera space measured in metres; the other is to use the joint orientation data to rotate a hierarchy of bones. The latter is the one we will look at here. There is an advantage in using joint orientations: as long as your model has the same overall skeleton structure as the Kinect data, it doesn’t matter so much what the relative sizes of the bones are, which frees up the modelling constraints. The SDK has done the job of calculating the rotations from the absolute joint positions for us, so let’s explore how we can apply those orientations in code.
Code
I am going to program this by starting with the DirectX and XAML C++ template in Visual Studio which provides a basic DirectX 11 environment, with XAML integration, basic shaders and a cube model described in code ...
Body Data
Let’s start by getting the body data into our program from the sensor. As always we start with getting a KinectSensor object which I will initialise in the Sample3DSceneRenderer class constructor, then we open a BodyFrameReader on the BodyFrameSource, for which there is a handy property on the KinectSensor object. ...
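Peter's sample is C++/CX; the same wiring in the managed C# API looks roughly like this (the class and member names here are placeholders, not from his post):

```csharp
using Microsoft.Kinect;

class BodySource
{
    KinectSensor sensor;
    BodyFrameReader reader;
    Body[] bodies;

    public void Start()
    {
        sensor = KinectSensor.GetDefault();
        bodies = new Body[sensor.BodyFrameSource.BodyCount];
        reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += OnBodyFrame;
        sensor.Open();
    }

    void OnBodyFrame(object s, BodyFrameArrivedEventArgs e)
    {
        using (BodyFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return;            // frames can be dropped
            frame.GetAndRefreshBodyData(bodies);  // reuses the same array every frame

            foreach (Body body in bodies)
            {
                if (!body.IsTracked) continue;
                // Each joint carries an absolute orientation quaternion.
                Vector4 q = body.JointOrientations[JointType.ElbowLeft].Orientation;
            }
        }
    }
}
```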
Kinect Joint Hierarchy
The first subject to consider is how the Kinect joint hierarchy is constructed as it is not made explicit in the SDK. Each joint is identified by one of the following enum values:...
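Since the SDK gives you the JointType enum but not the parent/child relationships, a common approach is to declare the hierarchy yourself. A partial sketch of such a map (my own encoding, not part of the SDK):

```csharp
using System.Collections.Generic;
using Microsoft.Kinect;

static class KinectHierarchy
{
    // Parent of each joint; SpineBase is the root. Shown here for the spine,
    // head, and left arm; the legs and right arm follow the same pattern.
    public static readonly Dictionary<JointType, JointType> Parent =
        new Dictionary<JointType, JointType>
    {
        { JointType.SpineMid,      JointType.SpineBase },
        { JointType.SpineShoulder, JointType.SpineMid },
        { JointType.Neck,          JointType.SpineShoulder },
        { JointType.Head,          JointType.Neck },
        { JointType.ShoulderLeft,  JointType.SpineShoulder },
        { JointType.ElbowLeft,     JointType.ShoulderLeft },
        { JointType.WristLeft,     JointType.ElbowLeft },
        { JointType.HandLeft,      JointType.WristLeft },
        // ...
    };
}
```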
Bones
To draw each separate bone I modified the original cube model that was supplied with the default project template, changing its coordinates so that one end was at the origin and the other was 4 units along the y-axis; so when rendered ...
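The per-bone transform then falls out naturally: scale the cube to the bone's length, rotate by the child joint's absolute orientation, and translate to the parent joint's position. Here is that composition sketched with System.Numerics rather than the post's DirectX math (the function name and signature are mine):

```csharp
using System.Numerics;

// World matrix for one bone: scale the 4-unit cube to the bone's length,
// rotate by the child joint's absolute orientation quaternion, then move
// the bone's origin to the parent joint's position.
static Matrix4x4 BoneWorld(Vector3 parentPosition, Quaternion jointOrientation, float boneLength)
{
    return Matrix4x4.CreateScale(1f, boneLength / 4f, 1f)
         * Matrix4x4.CreateFromQuaternion(jointOrientation)
         * Matrix4x4.CreateTranslation(parentPosition);
}
```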
Abstract: Hydrographic printing is a well-known technique in industry for transferring color inks on a thin film to the surface of a manufactured 3D object. It enables high-quality coloring of object surfaces and works with a wide range of materials, but suffers from the inability to accurately register color texture to complex surface geometries. Thus, it is hardly usable by ordinary users with customized shapes and textures.
We present computational hydrographic printing, a new method that inherits the versatility of traditional hydrographic printing, while also enabling precise alignment of surface textures to possibly complex 3D surfaces. In particular, we propose the first computational model for simulating the hydrographic printing process. This simulation enables us to compute a color image to feed into our hydrographic system for precise texture registration. We then build a physical hydrographic system upon off-the-shelf hardware, integrating virtual simulation, object calibration and controlled immersion. To overcome the difficulty of handling complex surfaces, we further extend our method to enable multiple immersions, each with a different object orientation, so the combined colors of individual immersions form a desired texture on the object surface. We validate the accuracy of our computational model through physical experiments, and demonstrate the efficacy and robustness of our system using a variety of objects with complex surface textures.
Did you know the Kinect for Windows v2 has the ability to determine your skin pigmentation and your hair color? Yes, I'm telling you the truth. One of the many features of the Kinect device is the ability to read the skin complexion and hair color of a person who is being tracked by the device.
If you ever need the ability to read the skin complexion of a person or determine the color of a person's hair, this post will show you how to do just that.
The steps are rather quick and simple. Determining the skin color requires you to access Kinect’s HD Face features.
Kinect has the ability to detect facial features in 3D. This is known as "HD Face". It can detect depth, height, and width. The Kinect can also use its high-definition camera to detect colors, such as the red, green, and blue intensities that reflect back, and infer the actual skin tone of a tracked face. Along with the skin tone, the Kinect can also detect the hair color on top of a person's head…
So What’s Your Skin Tone? Click Here to download the source code and try it out.
If you want to include this feature inside your application, the steps you must take are:
1. Create a new WPF or Windows 8.1 application.
2. Inside the new application, add a reference to the Microsoft.Kinect and Microsoft.Kinect.Face assemblies (the remaining wiring is sketched below).
...
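The remaining steps revolve around HD Face's FaceModel, whose SkinColor and HairColor properties only become meaningful once a FaceModelBuilder capture completes. A condensed sketch, assuming the standard Microsoft.Kinect.Face types (the uint unpacking helper is my own guess at the packing order):

```csharp
using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

// NB: faceSource.TrackingId must be fed from a tracked Body before the
// builder can finish collecting; that body-tracking wiring is omitted here.
var faceSource = new HighDefinitionFaceFrameSource(sensor);
FaceModelBuilder builder = faceSource.OpenModelBuilder(
    FaceModelBuilderAttributes.SkinColor | FaceModelBuilderAttributes.HairColor);

builder.CollectionCompleted += (s, e) =>
{
    FaceModel model = e.ModelData.ProduceFaceModel();
    Console.WriteLine("Skin: " + ToRgb(model.SkinColor));
    Console.WriteLine("Hair: " + ToRgb(model.HairColor));
};
builder.BeginFaceDataCollection();

// Assumption: the packed uint is laid out 0xAARRGGBB.
static string ToRgb(uint c) =>
    string.Format("R={0} G={1} B={2}", (c >> 16) & 0xFF, (c >> 8) & 0xFF, c & 0xFF);
```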
Once your application runs it should look similar to this (Minus the FrameStatus):
Today is (again) a quick post. I hope this one is my last error fix of 2014. Today's issue is related to the installation process of the Kinect SDK v2. If you were using older SDKs, you'll probably find this error message:
Error code: 1920
Kinect Monitor (KinectMonitor) failed to start. Verify that you have sufficient privileges to start system services
So it's time to check the log in the temp folder. There is a message suggesting that previous versions of the Kinect SDK did not delete some files during the uninstall process, and that's why the current installer had problems deploying and registering the new Kinect service.
We use the SDK to create all our own Kinect v2.0 games and think it can benefit the dev community. The $30 fee we ask for the package allows us to maintain and support the SDK. The USP of this SDK, in combination with the Kinemoto server (Windows 8.1 only), is the fact that it allows you to run Unity apps within a browser and still use the different Kinect streams.
We created a dedicated developer page that contains several tutorial videos to get you started. Have a look at http://developer.kinemoto.com.
If you're interested in this SDK and want to check it out, let us know. We give away vouchers to non-profit organizations or people who help us improve the product by providing valuable feedback ...
This KinemotoSDK enables developers to use Kinect-enabled Unity apps/games in the Unity Web Player. With the KinemotoSDK, developers can easily add Kinect streams to their app/game, make use of Kinemoto functions and methods, and build for Unity Web Player and Standalone.
Developers only need to download and install the KinemotoServer and voila!
The currently available streams are: Body, BodyIndex and Color. More streams will be added over time. Future releases also include WebGL and Android support.
Getting started
In order to work with the KinemotoSDK, you need to download and install the SDK, Kinect drivers and KinemotoServer first.
Throughout my previous article, I demonstrated how you can access the 2D positions of the eyes, nose, and mouth, using Microsoft’s Kinect Face API. The Face API provides us with some basic, yet impressive, functionality: we can detect the X and Y coordinates of the eye, nose, and mouth points and identify a few facial expressions using just a few lines of C# code. This is pretty cool for basic applications, like Augmented Reality games, but what if you need more advanced functionality from your app?
Recently, we decided to extend our Kinetisense project with advanced facial capabilities. More specifically, we needed to access more facial points, including lips, jaw and cheeks. Moreover, we needed the X, Y and Z position of each point in the 3D space. Kinect Face API could not help us, since it was very limited for our scope of work.
Thankfully, Microsoft has implemented a second Face API within the latest Kinect SDK v2. This API is called HD Face and is designed to blow your mind!
At the time of writing, HD Face is the most advanced face tracking library out there. Not only does it detect the human face, but it also allows you to access over 1,000 facial points in the 3D space. Real-time. Within a few milliseconds. Not convinced? I developed a basic program that displays all of these points. Creepy, huh?!
In this article, I am going to show you how to access all these points and display them on a canvas. I’ll also show you how to use Kinect HD Face efficiently and get the most out of it.
Although Kinect HD Face is truly powerful, you’ll notice that it’s badly documented, too. Insufficient documentation makes it hard to understand what’s going on inside the API. Actually, this is because HD Face is supposed to provide advanced, low-level functionality. It gives us access to raw facial data. We, the developers, are responsible for properly interpreting the data and using it in our applications. Let me guide you through the whole process.
Step 1: Create a new project
Let’s start by creating a new project. Launch Visual Studio and select File -> New Project. Select C# as your programming language and choose either the WPF or the Windows Store app template. Give your project a name and start coding.
...
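The elided steps boil down to opening an HD Face reader, refreshing a FaceAlignment each frame, and asking the FaceModel for its vertices. A skeleton of that flow, with the canvas drawing itself omitted:

```csharp
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

KinectSensor sensor = KinectSensor.GetDefault();
var faceSource = new HighDefinitionFaceFrameSource(sensor);
HighDefinitionFaceFrameReader faceReader = faceSource.OpenReader();
var alignment = new FaceAlignment();
var model = new FaceModel();

faceReader.FrameArrived += (s, e) =>
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null || !frame.IsFaceTracked) return;
        frame.GetAndRefreshFaceAlignmentResult(alignment);

        // Over 1,000 CameraSpacePoints (3D, in meters) describing the face mesh.
        var vertices = model.CalculateVerticesForAlignment(alignment);
        foreach (CameraSpacePoint v in vertices)
        {
            // Project each 3D point to 2D before drawing it on the canvas.
            ColorSpacePoint p = sensor.CoordinateMapper.MapCameraPointToColorSpace(v);
            // ... position an Ellipse at (p.X, p.Y)
        }
    }
};
// NB: faceSource.TrackingId must be fed from a tracked Body (via a BodyFrameReader), omitted here.
sensor.Open();
```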
But wait!
OK, we drew the points on screen. So what? Is there a way to actually understand what each point is? How can we identify where the eyes are? How can we detect the jaw? The API has no built-in mechanism to get a human-friendly representation of the face data. We need to handle over 1,000 points in the 3D space manually!
Don’t worry, though. Each one of the vertices has a specific index number. Knowing the index number, you can easily deduce what it corresponds to. For example, vertex numbers 1086, 820, 824, 840, 847, 850, 807, 782, and 755 belong to the left eyebrow.
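Using the article's own index numbers, a small lookup makes this concrete; the grouping and names below are mine, not an official API:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Kinect;

static class FaceRegions
{
    // Index numbers from the article; the grouping name is ours, not the SDK's.
    static readonly int[] LeftEyebrow = { 1086, 820, 824, 840, 847, 850, 807, 782, 755 };

    // Pick the left-eyebrow vertices out of the full HD Face vertex list.
    public static IEnumerable<CameraSpacePoint> GetLeftEyebrow(IReadOnlyList<CameraSpacePoint> vertices)
    {
        return LeftEyebrow.Select(i => vertices[i]);
    }
}
```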
Similarly, you can find accurate semantics for every point. Just play with the API, experiment with its capabilities and build your own next-gen facial applications!
If you wish, you can use the Color, Depth, or Infrared bitmap generator and display the camera view behind the face. Keep in mind that simultaneous bitmap and face rendering may cause performance issues in your application. So, handle with care and do not over-use your resources.
Today's library is one I've seen asked for a number of times on different forums and comments. Best of all, you can get it free and help them flesh it out...
We are excited to share our Finger Tracking library Aiolos for Kinect v2 with you. At this time, Aiolos is still in an experimental stage. Feel free to play with it, but don’t expect it to be perfect, yet. To improve #Aiolos we are interested in your feedback! What do you use it for? How would you like to use it? Please also tell us if you find bugs. This is especially important for us to further develop Aiolos.
Features
2D positions of the finger tip, middle, and root joints
2D contour points of the hand
3D positions of the finger tip, middle, and root joints
Aiolos for Kinect v2 works side by side with the Kinect SDK. Get the infrared and depth images, put them into Aiolos, and get three 3D points for each finger. The download also includes a small sample program.
I've mentioned that with the Kinect for Windows v2 SDK you can now create Windows Store apps (see Kinect to Windows Store App development). Recently the Kinect Team highlighted three real-world examples of this...
In case you hadn't noticed, the Windows Store added something really special to its line-up not too long ago: its first Kinect applications. The ability to create Windows Store applications had been a longstanding request from the Kinect for Windows developer community, so we were very pleased to deliver this capability through the latest Kinect sensor and the public release of the Kinect for Windows software development kit (SDK) 2.0.
The ability to sell Kinect solutions through the Windows Store means that developers can reach a broad and heretofore untapped market of businesses and consumers, including those with an existing Kinect for Xbox One sensor and the Kinect Adapter for Windows. Here is a look at three of the first developers to have released Kinect apps to the Windows Store.
Nayi Disha – getting kids moving and learning
You wouldn’t think that Nayi Disha needs to broaden its market—the company’s innovative, Kinect-powered early education software is already in dozens of preschools and elementary schools in India and the United States. But Nayi Disha co-founder Kartik Aneja is a man on a mission: to bring Nayi Disha’s educational software to as many young learners as possible. “The Windows Store gives us an opportunity to reach beyond the institutional market and into the home market. What parent doesn’t want to help their child learn?” asks Aneja, somewhat rhetorically. In addition, deployment in the Windows Store could help Nayi Disha reach schools and daycare centers beyond those in the United States and India.
...
YAKiT: bringing animation to the masses
It doesn’t take much to get Kyle Kesterson yakking about YAKiT—the co-founder and CEO of the Seattle-based Freak’n Genius is justifiably proud of what his company has accomplished in fewer than three years. “We started with the idea of enabling anybody to create animated cartoons,” he explains. But then reality set in. “We had smart, creative, funny people,” he says, “but we didn’t have the technology that would allow an untrained person to make a fully animated cartoon. We came up with a really neat first product, which let users animate the mouth of a still photo, but it wasn’t the full-blown animation we had set our sights on.”
Then something wonderful happened. Freak’n Genius was accepted into a startup incubation program funded by Microsoft’s Kinect for Windows group, and the funny, creative people at YAKiT began working with the developer preview version of the Kinect v2 sensor.
Now, Freak’n Genius is poised to achieve its founders’ original mission: bringing the magic of full animation to just about anyone. Its Kinect-based technology takes what has been highly technical, time consuming, and expensive and makes it instant, free, and fun. The user simply chooses an on-screen character and animates it by standing in front of the Kinect v2 sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character. What’s more, with the ability to create Windows Store apps, Kinect v2 stands to bring Freak’n Genius’s full animation applications to countless new customers.
...
3D Builder: commoditizing 3D printing
As any tech-savvy person knows, 3D printing holds enormous potential—from industry (think small-batch manufacturing) to medicine (imagine “bio-printing” of body parts) to agriculture (consider bio-printed beef). Not to mention its rapid emergence as a source of home entertainment and amusement, as in the printing of 3D toys, gadgets, and gimcracks. It was with these capabilities in mind that, last year, Microsoft introduced the 3D Builder app, which allows users to make 3D prints easily from a Windows 8.1 PC.
Now, 3D Builder has taken things to the next level with the incorporation of the Kinect v2 sensor. “The v2 sensor generates gorgeous 3D meshes from the world around you,” says Kris Iverson, a principal software engineer in the Windows 3D Printing group. “It not only provides precise depth information, it captures full-color images of people, pets, and even entire rooms. And it scans in real scale, which can then be adjusted for output on a 3D printer.”
Nayi Disha, YAKiT, and 3D Builder represent just a thin slice of the potential for Kinect apps in the Windows Store. Whether the apps are educational, entertainment, or tools, as in these three vignettes, or intended for healthcare, manufacturing, retailing, or other purposes, Kinect v2 and the Windows Store offer a new world of opportunity for both developers and users.
I'm not sure how I came across this, but I am glad I did as this is a great new series of labs (and source) to get you started building great Kinect for Windows v2 apps. This is now on the top of my Kinect Resources list... :)
This series will show you how to build a Windows 8.1 store app which uses almost every feature of the Kinect 2. The lessons in this series work the best when completed in order.
You can download a master copy of the complete app and all labs and referenced libraries through the github links on the left.
Or if you know a bit about development with the Kinect 2 already, you can skip to a particular lab by navigating to it at the top of the page. The running codebase is available through a link at the bottom of each page, which is complete and runnable as if you had just finished that lab.
If you've been following this blog for any length of time, you know how much I like the Kinect voice recognition and its potential. Every single time I use my Xbox One, it's just so natural to "Xbox Pause" or "Xbox Turn Off"...
Zubair Ahmed has just shared a simple sample, but simple in that it's easy to understand and to learn from and build on...
If you are new to Kinect for Windows v2 development, I have posted my Speech Recognition sample code on GitHub.
The sample demonstrates the Kinect for Windows v2 speech recognition capabilities: it shows how to set up the Kinect speech recognition initializers, add a grammar, and perform an action when speech is recognized.
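For a sense of the moving parts, here is a sketch along the lines of the SDK's SpeechBasics-WPF sample. It assumes the KinectAudioStream helper class from that sample (which converts Kinect's 32-bit float audio to 16-bit PCM) and the Microsoft.Speech runtime; the command words are placeholders:

```csharp
using System;
using System.Linq;
using Microsoft.Kinect;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;

KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

// KinectAudioStream comes from the SDK's SpeechBasics-WPF sample; it turns
// the sensor's 32-bit float audio stream into the 16-bit PCM the recognizer expects.
var kinectStream = new KinectAudioStream(sensor.AudioSource.AudioBeams[0].OpenInputStream());
kinectStream.SpeechActive = true;

RecognizerInfo ri = SpeechRecognitionEngine.InstalledRecognizers()
    .First(r => r.Culture.Name == "en-US");
var engine = new SpeechRecognitionEngine(ri.Id);

// Placeholder command words; a real app would load a proper grammar.
var commands = new Choices("red", "green", "blue");
engine.LoadGrammar(new Grammar(new GrammarBuilder(commands) { Culture = ri.Culture }));

engine.SpeechRecognized += (s, e) =>
{
    if (e.Result.Confidence > 0.6)   // ignore low-confidence recognitions
        Console.WriteLine("Heard: " + e.Result.Text);
};

engine.SetInputToAudioStream(kinectStream,
    new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
engine.RecognizeAsync(RecognizeMode.Multiple);
```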
It's not often we get a new development environment that gets Kinect Dev support, so when we do, and when it's focused on getting to coding quickly, well we have to highlight it!
Kinect for Small Basic is a set of extension objects for Small Basic which allow anyone to program with the Microsoft Kinect sensor and the information that it captures. Here are examples of what you can do with Kinect for Small Basic:
Show the color, infrared, depth, body index, and body sensor data
Capture images from the color, infrared, depth, and body index sensors
Replace the background behind people in the foreground with another image. This is similar to chroma key compositing or “green screen” processing.
Get the position and orientation of 26 different “joints” in up to 6 human bodies in both 3D space and on the screen
Get the open/closed state of the hands of up to 6 humans in front of the sensor
Get the lean angle of up to 6 humans in front of the sensor
Get the position and orientation of the faces of up to 6 humans in front of the sensor
Programmer's Reference
You will notice that three new objects now appear in the IntelliSense object list: KinectBodyList, KinectFaceList, and KinectWindow. All of the Kinect capabilities available in Small Basic are accessed through these objects. Some capabilities of the Kinect sensor are not available in Kinect for Small Basic at this time; those remain available to developers who use Visual Studio and the full Kinect for Windows SDK.
Last week the Kinect for Windows team made an important announcement, one that is actually great news.
Since the first Kinect device came out there's been a great deal of confusion about which device, Xbox or Windows, does what, which one can be "officially" used, etc. With this step, the confusion will hopefully fade away...
At Microsoft, we are committed to providing more personal computing experiences. To support this, we recently extended Kinect’s value and announced the Kinect Adapter for Windows, enabling anyone with a Kinect for Xbox One to use it with their PCs and tablets. In an effort to simplify and create consistency for developers, we are focusing on that experience and, starting today, we will no longer be producing Kinect for Windows v2 sensors.
Over the past several months, we have seen unprecedented demand from the developer community for Kinect sensors and have experienced difficulty keeping up with requests in some markets. At the same time, we have seen the developer community respond positively to being able to use the Kinect for Xbox One sensor for Kinect for Windows app development, and we are happy to report that Kinect for Xbox One sensors and Kinect Adapter for Windows units are now readily available in most markets. You can purchase the Kinect for Xbox One sensor and Kinect Adapter for Windows in the Microsoft Store.
The Kinect Adapter enables you to connect a Kinect for Xbox One sensor to Windows 8.0 and 8.1 PCs and tablets in the same way as you would a Kinect for Windows v2 sensor. And because both Kinect for Xbox One and Kinect for Windows v2 sensors are functionally identical, our Kinect for Windows SDK 2.0 works exactly the same with either.
Microsoft remains committed to Kinect as a development platform on both Xbox and Windows. So while we are no longer producing the Kinect for Windows v2 sensor, we want to assure developers who are currently using it that our support for the Kinect for Windows v2 sensor remains unchanged and that they can continue to use their sensor.
We are excited to continue working with the developer community to create and deploy applications ...
Vladimir Kolesnikov, Microsoft employee and part of the This Week on Channel 9 host team, has released a new wrapper that will make Python Kinect 2 devs smile... :)
Enables writing Kinect applications, games, and experiences using Python. Inspired by the original PyKinect project on CodePlex.
Only color, depth, body and body index frames are supported in this version. PyKinectBodyGame is a sample game. It demonstrates how to use Kinect color and body frames.
Tango Chen, Friend of the Gallery, provides an amazing inspirational post... I don't think I've seen anything like this!
Motion Server – One Kinect v2 on Multiple Devices
One day I felt that using one Kinect on one PC wasn't enough. There could be more interesting things to do with multiple screens.
So I considered sharing the Kinect data with multiple devices. You may know that I created an app called Kv2 Viewer that can access Kinect data on the phone. With similar technology, I successfully sent Kinect data to multiple devices.
First, I wrote a server program to make a Win 8.1 PC act as a server. Then all the clients that want the Kinect data just need to connect to this PC.
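Tango Chen doesn't share the server code, but the architecture he describes (serialize joint data on one PC, let any client connect and consume it) can be sketched as below. Every name and the wire format here are invented for illustration:

```csharp
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Text;
using Microsoft.Kinect;

// Hypothetical broadcaster: one text line of "joint,x,y,z;" entries per body frame.
class MotionServer
{
    readonly TcpListener listener = new TcpListener(IPAddress.Any, 9000); // arbitrary port
    readonly List<TcpClient> clients = new List<TcpClient>();

    public async void AcceptClients()
    {
        listener.Start();
        while (true)
        {
            TcpClient c = await listener.AcceptTcpClientAsync();
            lock (clients) clients.Add(c);
        }
    }

    // Call once per BodyFrame for each tracked body.
    public void Broadcast(Body body)
    {
        var sb = new StringBuilder();
        foreach (KeyValuePair<JointType, Joint> j in body.Joints)
        {
            CameraSpacePoint p = j.Value.Position;
            sb.AppendFormat("{0},{1:F3},{2:F3},{3:F3};", j.Key, p.X, p.Y, p.Z);
        }
        byte[] payload = Encoding.UTF8.GetBytes(sb.ToString() + "\n");

        lock (clients)
            foreach (TcpClient c in clients)
                c.GetStream().Write(payload, 0, payload.Length);
    }
}
```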
Here are the simple demos I made in the video:
Body views
...
They all get the joint positions and joint states first and generate the body views on screen, rather than receiving body-view images from the server.
The girls from different directions would stare at you (according to your head position). And if you act like you're pointing at something, they would stare at that (according to your hand position).
What do you do after you’ve built a great app? You make it even better. That’s exactly what Carl Franklin, a Microsoft Most Valuable Professional (MVP), did with GesturePak. Actually, GesturePak is both a WPF app that lets you create your own gestures (movements) and store them as XML files, and a .NET API that can recognize when a user has performed one or more of your predefined gestures. It enables you to create gesture-controlled applications, which are perfect for situations where the user is not physically seated at the computer keyboard.
Franklin’s first version of GesturePak was developed with the original Kinect for Windows sensor. For GesturePak v2, he utilized the Kinect for Windows v2 sensor and its related SDK 2.0 public preview, and as he did, he rethought and greatly simplified the whole process of creating and editing gestures. To create a gesture in the original GesturePak, you had to break the movement down into a series of poses, then hold each pose and say the word “snapshot,” during which a frame of skeleton data was recorded. This process continued until you captured each pose in the gesture, which could then be tested and used in your own apps.
...
Another big change is the code itself. GesturePak v1 is written in VB.NET. GesturePak v2 was re-written in C#. (Speaking of coding, see the green box above for Franklin’s advice to devs who are writing WPF apps.)
Franklin was surprised by how easy it was to adapt GesturePak to Kinect for Windows v2. He acknowledges there were some changes to deal with—for instance, “Skeleton” is now “Body” and there are new JointType additions—but he expected that level of change. “Change is the price we pay for innovation, and I don't mind modifying my code in order to embrace the future,” Franklin says.
He finds the Kinect for Windows v2 sensor improved in all categories. “The fidelity is amazing. It can...
Carl Franklin offered these words of technical advice for devs who are writing WPF apps:
If you want to convert the AVI to other formats, use FFmpeg (http://ffmpeg.org/)
When building an app with multiple windows/pages/user controls that use the Kinect sensor, only instantiate one instance of a sensor and reader, then bind to the different windows
Initialize the Kinect sensor object and all readers in the Form Loaded event handler of a WPF window, not the constructor (see the sketch below)
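Putting those last two tips together, a minimal shape for such a WPF window might look like this (the handler names are illustrative):

```csharp
using System.Windows;
using Microsoft.Kinect;

public partial class MainWindow : Window
{
    KinectSensor sensor;
    BodyFrameReader reader;

    public MainWindow()
    {
        InitializeComponent();
        Loaded += OnLoaded;     // defer Kinect setup until the window is ready
        Closed += (s, e) =>
        {
            reader?.Dispose();  // release the single shared reader...
            sensor?.Close();    // ...and the sensor, on shutdown
        };
    }

    void OnLoaded(object s, RoutedEventArgs e)
    {
        // One sensor and one reader for the whole app; other windows, pages,
        // and user controls should bind to these instances rather than opening their own.
        sensor = KinectSensor.GetDefault();
        reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += OnBodyFrame;
        sensor.Open();
    }

    void OnBodyFrame(object s, BodyFrameArrivedEventArgs e) { /* ... */ }
}
```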