Channel: Coding4Fun Kinect Projects (HD) - Channel 9

Kinect v2 Point Cloud


Today's project from Edgar Maass covers one of the favorite things people like to do with the Kinect for Windows: creating point clouds. His latest update brings the project to the Kinect for Windows v2 device and more...

Kinect v2 Point Cloud

The goal of this article is to extract a point cloud using the Microsoft Kinect v2 sensor, visualize it using the VTK toolkit, and save it for printing or further work (e.g. in MeshLab).

image

Introduction, Quick Start

  1. Prerequisites:
    -A Kinect v2 sensor
    -The Microsoft Kinect V2 SDK version 2.0 installed
    -Visual Studio 2012 or higher
  2. Start the program KinectPointCloud.exe
  3. Check the checkbox "Save if depth frame ..."
  4. Click on "Capture" – see below
  5. Hold still in front of the sensor while scanning.
    Capture will stop when the image quality is high – "Depth OK…" is over 45% – see below
  6. Click on "Show Point Cloud" to open the point cloud
  7. View the point cloud with or without color info
    image
  8. Open the point cloud for further editing in external tools like MeshLab (either the .ply file – which contains color info – or the .xyz file)
    image

Other source code used

Several pieces of open source code are used within the project:

-parts of the Microsoft Samples contained within the Kinect SDK

-The VTK library by means of the C# wrapper Activiz: Link

-parts of the code of Vangos Pterneas from his CodeProject article: Link

Coding

Grabbing the Data with Kinect

Extracting a point cloud from the Kinect using the standard Microsoft sample code gives quite poor results – e.g. up to 30% of the depth frame points lack depth information: either the depth is zero, or there are artefacts on the depth image caused by low depth precision.
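To make the "up to 30%" figure concrete, a depth frame can be copied into a ushort array and the zero-depth pixels counted; this is the same kind of "Depth OK" percentage the capture step waits on. A minimal sketch against the Kinect for Windows SDK 2.0 (illustrative only, not the article's code):

```csharp
using System;
using Microsoft.Kinect;

class DepthQualityMonitor
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();
        ushort[] depthData = new ushort[sensor.DepthFrameSource.FrameDescription.LengthInPixels];

        reader.FrameArrived += (s, e) =>
        {
            using (DepthFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.CopyFrameDataToArray(depthData);

                // Count pixels that carry a usable (non-zero) depth value.
                int valid = 0;
                foreach (ushort d in depthData)
                    if (d > 0) valid++;

                // A capture step could keep the frame only once this exceeds
                // a threshold (the quick start above mentions 45%).
                double percentOk = 100.0 * valid / depthData.Length;
                Console.WriteLine("Depth OK: {0:F1}%", percentOk);
            }
        };

        sensor.Open();
        Console.ReadLine();
    }
}
```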

...

Improve depth quality

...

Remarks

The depth statistics are only valid if the target and the scanner do not move.

If large movements occur, the percentage values may not add up to 100%. The reason is that the depth points cut out of the image differ when the sensor or the scan target moves.

Conclusion

A point cloud can be scanned with acceptable quality using the Microsoft Kinect v2 camera if one uses the procedures described in this article, such as image interpolation and saving the point cloud only when the depth precision is high.

Code Usage

The code and all information of this article may be used in any application as long as you cite this article in the acknowledgements.

Project Information URL: http://www.codeproject.com/Articles/824882/Kinect-v-Point-Cloud

Project Source URL: http://www.codeproject.com/Articles/824882/Kinect-v-Point-Cloud





Here's a hand for Kinect for Windows v2 and XNA


Frank McCown returns with a follow-up to his XNA and Kinect for Windows v2 post, XNA and the Kinect for Windows 2? Here's an example..., this time giving us a hand...

XNA and Kinect 2 hand motion demo

This demo will show you how to write a simple XNA application that reads hand motion from the Kinect v2. The Kinect sensor can detect motion for your entire body, but here I'll focus on just detecting hand motion and whether the hand is open (all fingers out) or closed (in a fist) as shown in the screenshot below.

image
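Frank's demo draws PNG sprites in XNA; as a reference for the underlying API, here is a minimal console sketch of the same hand-state query against the Kinect SDK 2.0 (not his code, just the calls the demo relies on):

```csharp
using System;
using Microsoft.Kinect;

class HandStateDemo
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];

        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);

                foreach (Body body in bodies)
                {
                    if (!body.IsTracked) continue;

                    // HandState is Open, Closed, Lasso, NotTracked or Unknown;
                    // the demo only cares about Open vs Closed.
                    CameraSpacePoint left = body.Joints[JointType.HandLeft].Position;
                    Console.WriteLine("Left hand {0} at ({1:F2}, {2:F2})",
                        body.HandLeftState, left.X, left.Y);
                    Console.WriteLine("Right hand {0}", body.HandRightState);
                }
            }
        };

        sensor.Open();
        Console.ReadLine();
    }
}
```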

Prerequisites

You must have the Kinect for Windows 2 correctly installed along with the SDK. There are plenty of online tutorials showing you how to program with the older Kinect; this is for the latest version.

See the Prerequisites section from my previous post on installing the necessary software to code this demo using Visual Studio 2013.

...

Press Ctrl-F5 to build and run the program. Stand in front of your Kinect, and you should see the PNG images move as you move your hands. Try opening and closing your hands to see the open/close images being displayed. If you have a friend nearby, ask them to join you so you can see four hands moving about the screen.

Project Information URL: http://frankmccown.blogspot.com/2014/12/xna-and-kinect-2-hand-motion-demo.html

Contact Information:




Kinect for Windows v1 device sales end this year...


This is a heads up that if you want a Kinect for Windows v1 device, the sooner you get it the better. The title of the below post says it all...

Original Kinect for Windows sensor sales to end in 2015

In October, we shipped the public release of the Kinect for Windows v2 sensor and its software development kit (SDK 2.0). The availability of the v2 sensor and SDK 2.0 means that we will be phasing out the sale of the original Kinect for Windows sensor in 2015. [GD: Emphasis added]

The move to v2 marks the next stage in our journey toward more natural human computing. The new sensor provides a host of new and improved features, including enhanced body tracking, greater depth fidelity, full 1080p high-definition video, new active infrared capabilities, and an expanded field of view. Likewise, SDK 2.0 offers scores of updates and enhancements, not the least of which is the ability to create and publish Kinect-enabled apps in the Windows Store. At the same time that we publicly released the v2 sensor and its SDK, we also announced the availability of the Kinect Adapter for Windows, which lets developers create Kinect for Windows applications by using a Kinect for Xbox One sensor. The response of the developer community to Kinect v2 has been tremendous: every day, we see amazing apps built on the capabilities of the new sensor and SDK, and since we released the public beta of SDK 2.0 in July, the community has been telling us that porting their original solutions over to v2 is smoother and faster than expected.

...

We hope everyone will embrace the latest Kinect technology as soon as possible, but we understand that some business customers have commitments to the original sensor and SDK. If you’re one of them and need a significant number of original Kinect for Windows sensors, please contact us as soon as possible. We will do our best to fill your orders, but no more original sensors will be manufactured after the current stock sells out.

...

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/12/30/original-kinect-for-windows-sensor-sales-to-end-in-2015.aspx

Contact Information:




Head Gesture Library Help Wanted...


Today's project is a call-out for your help. Dwight Goins is new to the blog, but is someone I'm sure we're going to see more of in the future. Today he needs your help beta testing a library he's building...

Head Gesture Library for Kinect enabled applications: Beta Testers Wanted

Hi everyone,

I'm currently working on a library to detect Head Nods (nodding in agreement) and Head Shakes (shaking in disagreement), and I would like to know who would be interested in beta testing the Head Gesture Library for Windows 8.1 Store applications. If this sounds like something you're interested in, please like or upvote this post and send an email to DNGoins at Hotmail.

I will provide you with the details on how to get the library, usage and functionality. A quick write-up can be found ...

Project Information URL: https://social.msdn.microsoft.com/Forums/en-US/0d3b9f45-8611-452d-88fc-733175bf5fc7/head-gesture-library-for-kinect-enabled-applications-beta-testers-wanted?forum=kinectv2sdk

Using Kinect HD Face to make the MicroHeadGesture Library

Currently, I am working on a medical project which requires detection of Head Nods (in agreement), Head Shakes (in disagreement), and Head Rolls (Asian/East Indian head gesture for agreement) within a computer application.

Being that I work with the Kinect for Windows device, I figured this device is perfect for this type of application.

This posting serves as explanation to how I built this library, the algorithm used, and how I used the Kinect device and Kinect for Windows SDK to implement it.

Before we get into the Guts of how this all works, let’s talk about why the Kinect is the device that is perfect for this type of application.

The Kinect v2.0 device has many capabilities, one of which allows it to capture a person's face in 3-D… that is, three dimensions:

image

Envision the Z-axis arrow pointing straight out towards you in one direction, and out towards the back of the monitor/screen in the other direction.

In Kinect terminology, this feature is called HD Face. In HD Face, the Kinect can track the eyes, mouth, nose, eyebrows, and other specific things about the face when a person looks towards the Kinect camera.

image

We can measure height, width, and depth of a face. Not only can we measure 3-d values and coordinates on various axes, with a little math and engineering we can also measure movements and rotations over time.

Think about normal head movements for a second. We as humans twist and turn our heads for various reasons. One such reason is proper driving technique: we twist and turn our heads when driving, looking for other cars on the road. We look up at the skies on beautiful days. We look down at floors when we drop things. We even slightly nod our heads in agreement, and shake our heads in disgust.

Question: So from a technical perspective what does this movement look like?

Answer: When a person moves their head, the head rotates around a particular axis. It's either the X, Y, Z, or even some combination of the three axes. This rotation is perceived from a point on the head. For our purposes, let's use the nose as the point of perspective.
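A nod then appears as an oscillation of the pitch angle (rotation about X) and a shake as an oscillation of the yaw angle (rotation about Y). The sketch below is not Dwight's library; it only illustrates the idea, assuming you already receive a head-orientation quaternion per frame (HD Face exposes one through FaceAlignment.FaceOrientation):

```csharp
using System;
using System.Collections.Generic;

// Converts a head-orientation quaternion to pitch/yaw angles and counts
// direction changes of the pitch over a short window to spot nod-like motion.
class HeadGestureEstimator
{
    readonly Queue<double> pitchHistory = new Queue<double>();
    const int WindowSize = 30;          // roughly one second at 30 fps
    const double MinAmplitudeDeg = 4.0; // ignore tiny jitters

    public static void QuaternionToPitchYaw(double x, double y, double z, double w,
                                            out double pitchDeg, out double yawDeg)
    {
        // Rotation about X (up/down nod) and about Y (left/right shake).
        pitchDeg = Math.Atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y)) * 180.0 / Math.PI;
        yawDeg   = Math.Asin(Math.Max(-1.0, Math.Min(1.0, 2.0 * (w * y - z * x)))) * 180.0 / Math.PI;
    }

    // Feed one pitch sample per frame; returns true when an oscillation with
    // enough amplitude and at least two reversals is seen in the window.
    public bool UpdateAndDetectNod(double pitchDeg)
    {
        pitchHistory.Enqueue(pitchDeg);
        if (pitchHistory.Count > WindowSize) pitchHistory.Dequeue();
        if (pitchHistory.Count < WindowSize) return false;

        double min = double.MaxValue, max = double.MinValue;
        double prev = double.NaN, prevDelta = 0;
        int directionChanges = 0;
        foreach (double p in pitchHistory)
        {
            if (p < min) min = p;
            if (p > max) max = p;
            if (!double.IsNaN(prev))
            {
                double delta = p - prev;
                if (Math.Sign(delta) != 0 && Math.Sign(delta) == -Math.Sign(prevDelta))
                    directionChanges++;
                if (delta != 0) prevDelta = delta;
            }
            prev = p;
        }
        return (max - min) >= MinAmplitudeDeg && directionChanges >= 2;
    }
}
```

The same window logic applied to the yaw angle gives a head-shake detector.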

...

If you’re interested in testing out this library, please contact me here through this blog.

Here’s the library and a sample Windows 8.1 store application using the library in action. In the picture below, on the right a windows 8.1 store application displays a 3-D cube that represents the person’s tracked face. It moves as the head moves. When the person shakes or nods it counts. On the left represents tracing data from Visual Studio .Net 2013, and KinectStudio recorded clip of me testing the application

image

Project Information URL: https://dgoins.wordpress.com/2015/01/10/using-kinect-hd-face-to-make-the-headgesture-library/

Contact Information:




Kinect to Point... Clouds


Edgar Maass continued his series, which we first highlighted here, Kinect v2 Point Cloud, showing more about the coolness that is Point Clouds and the Kinect in these two posts...

Point Cloud utility with C# and OpenGL, Kinect Point Cloud using OpenGL

image

This article is the follow-up to my article on grabbing a point cloud using the Microsoft Kinect v2. What's new is:

  • display of the point cloud in an OpenGL control
  • point cloud generation tools for geometric objects
  • point cloud manipulation tools for rotation, translation, and scaling

For the details of the OpenGL implementation, please read this article. It uses the OpenTK library as the C# interface for OpenGL.
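To give a flavour of what the OpenTK-based display boils down to, here is a minimal sketch that renders an array of XYZ/RGB points with legacy immediate-mode OpenGL calls through OpenTK (an illustration only, not the article's control, which adds camera interaction, scaling and translation on top):

```csharp
using System;
using OpenTK;
using OpenTK.Graphics.OpenGL;

// Minimal point-cloud viewer: one GameWindow that draws a static array of
// points every frame. pointsXyz/colorsRgb are assumed to be filled elsewhere
// (for example from the Kinect depth and color streams).
class PointCloudWindow : GameWindow
{
    readonly float[] pointsXyz;  // x0,y0,z0, x1,y1,z1, ...
    readonly float[] colorsRgb;  // r0,g0,b0, ... in the 0..1 range

    public PointCloudWindow(float[] xyz, float[] rgb) : base(800, 600)
    {
        pointsXyz = xyz;
        colorsRgb = rgb;
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        GL.ClearColor(0f, 0f, 0f, 1f);
        GL.Enable(EnableCap.DepthTest);
        GL.PointSize(2f);
    }

    protected override void OnRenderFrame(FrameEventArgs e)
    {
        base.OnRenderFrame(e);
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
        GL.MatrixMode(MatrixMode.Modelview);
        GL.LoadIdentity();

        // Immediate mode is slow but keeps the example tiny.
        GL.Begin(PrimitiveType.Points);
        for (int i = 0; i < pointsXyz.Length; i += 3)
        {
            GL.Color3(colorsRgb[i], colorsRgb[i + 1], colorsRgb[i + 2]);
            GL.Vertex3(pointsXyz[i], pointsXyz[i + 1], pointsXyz[i + 2]);
        }
        GL.End();

        SwapBuffers();
    }
}
```

It can be run with `new PointCloudWindow(xyz, rgb).Run(60.0);` once the two arrays are filled.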

...

image

I use the Kinect v2 to extract point clouds, and needed simple code to display and handle them. I did not find a simple utility for this on the internet.

I tried to use the Point Cloud Library, but a simple interface for .NET is missing. Using it in C++ is also not quite easy: compiling means including about a million header files. Compilation is slow, and there are a lot of things to take care of and a lot of group entries to read before you get it running on your particular system. Perhaps it is easier with Linux, but at least on 64-bit Windows you spend some time getting it running. This overhead is large if you simply want to load a point cloud and do some operations on it.

The VTK library seemed to be an alternative, but unfortunately the community is not that active any more, so it is seldom updated and the current version is quite old. I gave it a try, however – see the link to my Kinect article. Using VTK in .NET is not very pleasant, since the C++ implementation is not well encapsulated in the C# port (Activiz): for instance, you cannot inspect values in the debugger very well, and you need to take care of destroying objects yourself – things that slow down your development.
These are only the top two of my trials.

Then I found a very nice C# program which does almost everything I wanted, from Douglas Andrade at cmsoft. He gave me permission to take his code and extract part of it for use as a user control.

This is what I did; I made some improvements here and there, and the result is a nice small library which you can use in your project. The utility uses OpenTK, which encapsulates and extends OpenGL.

Now that I have extracted this user control, perhaps it will also be useful for other people doing point cloud handling. I think many people engaged in scientific 3D investigations are interested in having a simple library.
The disadvantage might be that it currently relies on the MS Windows operating system, but it could also be ported to Mono.

The great advantage is that you can start using the code right away, and do not need days of work for finding dependencies or other overhead, as you do with most of the other existing projects.

Features of the user control

For a brief description of the features, please follow these steps:

1. Start the test program

2. Load a point cloud, e.g. the bunny.obj file, which is available in the bin/TestData folder of the source and exe distribution. (The bunny.obj file is a freely available point cloud from Stanford University, used in a lot of projects on the web: Link).

...

Project Information URL: http://www.codeproject.com/Articles/839389/Point-Cloud-utility-with-Csharp-and-OpenGL, http://www.codeproject.com/Articles/861867/Kinect-Point-Cloud-using-OpenGL




Virtual Shoe Fitting


Tango Chen, Friend of the Gallery, is back with an interesting usage of the Kinect – think cosplay for your feet... :)

Virtual Shoe Fitting Experiment with Kinect v2

The virtual shoe fitting store from Goertz in 2012 is quite impressive. They used 3 original Kinects. I decided to make my own virtual shoe fitting application with a Kinect v2 last month. It's still something rare, though we've seen many virtual dressing room demos.

I placed the Kinect low on a box, so it can capture my feet only. In this case, we can see more details of the shoe. But I'll need to write my own foot-tracking algorithm, as the original body tracking won't work. This is one of the big challenges.

Another challenge is that I need to hide some parts of the shoes that are covered by my feet. If you've ever tried implementing a virtual dressing room, you'll also have faced this problem. Using single-sided materials is one solution, but it won't work in some cases and it's flawed. For some reasons, I'd keep my solution private. ;)

Overall, the application doesn’t work very well due to the CPU capability and the current algorithm. But I enjoy the process of making it!

Project Information URL: http://tangochen.com/blog/?p=2004

Contact Information:

Related past posts you might find interesting;




Kinect without a Kinect


It's been a long time since we've highlighted Dan Hanan – the last time was in 2011, with .Net Rocks rocks WinRT & Kinect (yeah, wow).

Today he provides another example of Kinecting without a Kinect... (The other being the post from Bruno Capuano, Two Kinect v2 Tips from El Bruno - Disconnected Dev and Body Counts)

Kinect Development Without a Kinect

Huh? How can you develop software that integrates with the Microsoft Kinect if you don’t have a physical Kinect? We have a number of Kinect devices around the office, but they’re all in use. I want to test and develop on an application we’re writing, … there is another way.

Enter Kinect Studio v2.0. This application is installed with the Kinect v2.0 SDK, and allows you to record and playback streams from the Kinect device. It’s usually used to debug a repeatable scenario, but we’ve been using it to spread the ability to develop Kinect-enabled applications to engineers that don’t have a physical Kinect device. There are just a couple settings to be aware of to get this to work.

Someone has to record the streams in the first place. They can select which streams (RGB, Depth, IR, Body Index, etc. – the list of streams is shown below) to include in the recording. The recording is captured in an XEF file that can get large quickly depending on what streams are included (on the order of 4 GB+ for 1 minute). Obviously, you need to include the streams that you're looking to work with in the application you're developing.

image

So I have my .XEF file to play back – what next?

  • Open the XEF file in Studio.
  • Go to the PLAY tab
  • IMPORTANT: Select which of the available streams you want playback to contain (see screenshot below)
  • Click the settings gear next to the playback window, and select what output you want to see during playback. This does not affect what your application code receives from the Kinect. It controls display in the Studio UI only.
  • Click the Connect to Service button
  • Click PLAY

You should now start getting Kinect events in your application code.
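The nice part is that the consuming code is identical whether the frames come from a physical sensor or from Kinect Studio playback. A minimal sketch of that consuming side (Kinect SDK 2.0; the depth + body stream choice is just an example and must match the streams selected for playback):

```csharp
using System;
using Microsoft.Kinect;

class PlaybackConsumer
{
    static void Main()
    {
        // Works identically with a physical Kinect or with Kinect Studio
        // playing back an XEF file after "Connect to Service" + PLAY.
        KinectSensor sensor = KinectSensor.GetDefault();
        MultiSourceFrameReader reader = sensor.OpenMultiSourceFrameReader(
            FrameSourceTypes.Depth | FrameSourceTypes.Body);

        reader.MultiSourceFrameArrived += (s, e) =>
        {
            MultiSourceFrame frame = e.FrameReference.AcquireFrame();
            if (frame == null) return;

            using (DepthFrame depth = frame.DepthFrameReference.AcquireFrame())
            using (BodyFrame body = frame.BodyFrameReference.AcquireFrame())
            {
                if (depth != null && body != null)
                    Console.WriteLine("Got a depth + body frame pair at {0}",
                        depth.RelativeTime);
            }
        };

        sensor.Open();
        Console.ReadLine();
    }
}
```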

Here’s what my studio UI looks like (with highlights calling out where to change settings).
Hope that helps.

image

...

Project Information URL: http://blogs.interknowlogy.com/2015/01/14/kinect-development-without-a-kinect/

Contact Information:




Kinect to MatLab


Today's project has been in my queue to be highlighted since I first saw it in September, but there was always something...

Well enough of that (better late than never and all that). While this may be a niche use case, if you DO need this, this will really come in handy...

Kinect Version 2 Depth Frame to .mat File Exporter Tool

Tool for extracting depth frames from Kinect v2 to .mat files, with point cloud generator script. Ready to use!

Introduction

The new Kinect v2 is awesome; however, for people who are not coding experts, it can be hard to get the data from the Kinect into a workable setting like MATLAB. This tool is meant to solve the problem of getting depth data from the Kinect SDK into MATLAB. And no external libraries are needed (except for the ones needed for the Kinect and Windows)!

Furthermore, a class is also provided, which can be used to export any ushort (or uint16) array to a loadable .mat file.

Background

I tried a few libraries (including csmatio and matio) for extracting the depth frames to .mat files. None of them seemed to work and therefore I decided to make my own .mat file writer. It's not meant to be an example of good coding, but rather a usable tool.

Main Tool

The main tool is called "KinectMLConnect", which is provided both as source code, ready to be built in VS (tested in VS 2013), and as an .exe that can be run directly in Windows.
(The .exe is located in: "KinectMLConnect\KinectMLConnect\KinectMLConnect\bin\Release".)
The tool simply listens for an active sensor (or for the Kinect Studio sensor emulator), grabs the stream and exports each frame as a .mat file.
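For reference, the Kinect side of that loop is small: each depth frame is copied into a ushort (uint16) array, which is the type the MATWriter class consumes. A minimal sketch of the grab step (an illustration, not KinectMLConnect's actual code; the hand-off to MATWriter is left as a comment since its constructor is shown later in the article):

```csharp
using System;
using System.Threading;
using Microsoft.Kinect;

class DepthGrabber
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();
        FrameDescription desc = sensor.DepthFrameSource.FrameDescription;
        ushort[] depthData = new ushort[desc.LengthInPixels];

        for (int frameIndex = 0; frameIndex < 100; )
        {
            using (DepthFrame frame = reader.AcquireLatestFrame())
            {
                if (frame == null) { Thread.Sleep(10); continue; }

                // Depth arrives as uint16 millimetre values, one per pixel -
                // exactly the ushort array the MATWriter class consumes.
                frame.CopyFrameDataToArray(depthData);

                string fileName = string.Format("depth_{0:D4}.mat", frameIndex++);
                // Hand depthData (plus desc.Width / desc.Height) to MATWriter here;
                // the writer's constructor is shown later in the article.
                Console.WriteLine("Captured frame -> {0}", fileName);
            }
        }

        sensor.Close();
    }
}
```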

The interface is quite simple and self-explanatory, and is shown here:

image

image

MATWriter Class

Also included is a class file for the MATWriter class, which is the one used for the actual export of the frames. Its constructor (and only callable code) is given here:

...

DepthToXYZ

To make it even simpler, I've added a MATLAB script, ...

image

Project Information URL: http://www.codeproject.com/Tips/819613/Kinect-version-depth-frame-to-mat-file-exporter-to

Project Source URL: http://www.codeproject.com/Tips/819613/Kinect-version-depth-frame-to-mat-file-exporter-to

Contact Information:





CatchEye - Kinect to Eye-to-Eye Chatting


You've heard about how the Xbox One can magically make it look like you are making eye contact when video chatting, even though the Kinect may be above or below the screen?

How cool would it be if you could use your Kinect for Windows to do that in Skype for Windows?

CatchEye

The Video Chat problem

Have you ever noticed when you use a video chat system (such as Skype or Hangouts) that the person at the other end is not looking you in the eye, and sometimes not even looking at you at all? Usually she seems to be looking down at you! How annoying! Why is this? Because she is looking at the image of you in the video window in the center of her screen, while her webcam is viewing her from the top of the screen. The difference in view angle between the video window and the webcam is precisely the angle by which she seems to be looking down.

The CatchEye solution

The CatchEye solution takes advantage of the emerging depth (RGBD) cameras, soon to be available on every mobile device. You may have already used one of these – the Kinect – as part of Microsoft's Xbox gaming system. These cameras produce not only a standard video stream, but also a "depth" value per pixel. Using this extra information, the CatchEye software tracks and maps the shape of the subject's face, and seamlessly rotates this colored 3D object to its correct position in the image. All this is done in real-time as the video is streamed. The result is that your chat partner is now looking you straight in the eye!

The CatchEye history

The CatchEye technology was conceived and developed at the computer graphics laboratory of ETH Zurich (Switzerland's premier engineering institute) in 2012 by a research team consisting of computer graphics professors and graduate students. The team commercialized this deep, patent-pending technology, spinning it off into a startup company. The first goal is to release an add-on for Microsoft Skype and Google Hangouts by the end of 2014, enabling users of these video chat systems to finally look each other in the eye!

CatchEye 1.0 is now available!

Trouble installing or running CatchEye? Consult the CatchEye manual here.

Be among the first to try out our CatchEye add-on for Skype or Hangouts.

image

To use CatchEye, you will need a Kinect for Windows (available for purchase here) installed on your PC, including its accompanying software, and a Windows PC equipped with:

–  Windows 8 or 8.1
–  A 64-bit (x64), dual-core 2.66-GHz or faster processor
–  A dedicated USB 3.0 controller
–  At least 2 GB RAM
–  A graphics card that supports DirectX 11 and OpenGL 4.0, e.g. an NVIDIA GeForce

With CatchEye, you will be able to look your video chat partner straight in the eye. Your partner will need to also have CatchEye in order to reciprocate.

To download CatchEye, please fill in your name and e-mail address below, and we will send you a link to the download, containing an installer and a manual.

The CatchEye manual is also available here.

Project Information URL: http://catch-eye.com/

Project Download URL: http://catch-eye.com/




Using Kinect for Windows v2 Sensor with openFrameworks in WinRT applications


Today we highlight a new Microsoft Virtual Academy Quick Start Challenge...

Quick Start Challenge: Kinect v2 Sensor and openFrameworks

Want to work with the Kinect sensor v2? In this hands-on lab, learn how to use the Kinect sensor v2 in an openFramework application running on Windows 8. We use an openFramework version available on GitHub, in MSOpenTech repositories. Build on this knowledge to implement a C++ modern class that allows you to use the Kinect v2 WinRT object. And find out how to transpose the sensor data (pixel, depth, and body) into openFrameworks graphic classes. Don't miss this informative Quick Start Challenge!

Overview

In this challenge, you will learn how to use the Kinect for Windows v2 sensor in an openFramework application running on Windows 8. We use an openFramework version available on GitHub, in MSOpenTech repositories. You also learn how to implement a C++ modern class that allows you to use the Kinect v2 WinRT object and then how to transpose the sensor data (pixel, depth, body) into openFrameworks graphic classes.

What You’ll Learn

· How to use the Kinect v2 sensor in an openFramework application on WinRT (Modern app)

· How to use C++ modern with the Kinect SDK

· How to display Video and Body frames from the sensor

Tools You’ll Use

· Kinect for Windows v2 sensor http://www.microsoft.com/en-us/kinectforwindows/default.aspx

· Latest SDK for Kinect installed from http://www.microsoft.com/en-us/kinectforwindows/default.aspx

· openFramework installed from https://github.com/MSOpenTech/openFrameworks/tree/universal

The Challenge

This challenge includes the following exercises:

1. Create the initial OF project files

2. Displaying color frame from the v2 sensor

3. Accessing depth frame

4. Accessing body frame

5. Optional exercise: Track the hand state

...

Project Information URL: http://www.microsoftvirtualacademy.com/training-courses/quick-start-challenge-kinect-v2-sensor-and-openframeworks




Face Tracking without a Kinect


Dan Hanan is back and continues to help us develop Kinect for Windows v2 applications without actually having a Kinect for Windows v2 device.

Here's a couple related posts...

Kinect Development (Face tracking) – Without a Kinect

In a previous post I talked about how you can use the Kinect Studio v2 software to "play back" a recorded file that contains Kinect data. Your application will react to the incoming data as if it were coming from a Kinect, enabling you to develop software for a Kinect without actually having the device.

This of course requires that you have a recorded file to playback. Keep reading…

More specifically, Kinect for Windows v2 supports the ability to track not only bodies detected in the camera view, but FACES as well. Even better, there are a number of properties on the detected face metadata that tell you if the person is (a small sketch of reading these flags follows the list below):

  • looking away from the camera
  • happy
  • mouth moving
  • wearing glasses
  • …etc…
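As a point of reference, here is a minimal sketch of reading those flags with the Microsoft.Kinect.Face API from SDK 2.0 (illustrative only; a real app would also run a body reader to feed the face source a tracking id, as noted in the comment):

```csharp
using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

class FacePropertiesDemo
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();

        // Ask only for the face properties we care about.
        FaceFrameSource faceSource = new FaceFrameSource(sensor, 0,
            FaceFrameFeatures.Happy |
            FaceFrameFeatures.LookingAway |
            FaceFrameFeatures.MouthMoved |
            FaceFrameFeatures.Glasses);
        FaceFrameReader faceReader = faceSource.OpenReader();

        faceReader.FrameArrived += (s, e) =>
        {
            using (FaceFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null || frame.FaceFrameResult == null) return;

                // Each property comes back as a DetectionResult: Yes, No, Maybe or Unknown.
                foreach (var kvp in frame.FaceFrameResult.FaceProperties)
                    Console.WriteLine("{0}: {1}", kvp.Key, kvp.Value);
            }
        };

        // The face source needs a tracking id from the body stream; a body
        // reader (omitted here) would set faceSource.TrackingId = body.TrackingId.
        sensor.Open();
        Console.ReadLine();
    }
}
```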

Here at IK, we have been doing a lot of Kinect work lately. It turns out the Kinect v2 device and driver are super picky when it comes to compatible USB 3 controllers. We have discovered that our laptops (Dell Precision m4800) do not have one of the approved controllers. Through lots of development trial and error, we have narrowed this down to mostly being a problem only with FACE TRACKING (the rest of the Kinect data and functionality seem to work fine).

So … even though I have a Kinect, if I'm working on face tracking, I'm out of luck on my machine in terms of development. However, using the technique described in the previous post, I can play back a Kinect Studio file and test my software just fine.

To that end, we have recorded a short segment of a couple of us in view, with and without faces engaged, happy, looking and not, … and posted it here for anyone to use in their Kinect face tracking software. This recording has all the feeds turned on, including RGB, so it’s a HUGE file. Feel free to download it (below) and use it for your Kinect face tracking development.

DOWNLOAD HERE:... [Click through for the download link]

image

[GD: Post copied almost in full]

Project Information URL: http://blogs.interknowlogy.com/2015/01/22/kinect-face-tracking-without-a-kinect/

Contact Information:




Prepose, a Kinect for Windows v2 Scripting Language


Today's project is from Dave Voyles sharing a new possible scripting language for the Kinect coming from Microsoft Research, Prepose...

New “Prepose” scripting language for Kinect 2 from Microsoft Research

Prepose Scripting Language

Microsoft Research has taken this one step further though, and introduced a scripting language called “Prepose”, which allows for building Kinect gesture recognizers. You can find more information in the external tech report.

So how does it work?

“You create Kinect gesture recognizers by scripting high-level movements such as “raise your left leg to the side” instead of using machine learning or hand-tuned code. Prepose is the work of a team in Microsoft Research, powered by the Microsoft constraint solver Z3.”

” As examples (internally to Microsoft), we (David Molnar’s team) created Prepose scripts for tai chi, ballet, and physical therapy gestures this past summer, each tens of lines of code. I’ve included an example below to show what a Prepose script looks like.”

The team is trying to figure out what to do next with the project, so any feedback is helpful. You can always reach me here or find me on Twitter, @DaveVoyles.

Here’s an annotated example to show Prepose syntax and concepts from David Molnar:

...

Project Information URL: http://www.davevoyles.com/new-prepose-scripting-langauge-kinect-2/,

Prepose: Security and Privacy for Gesture-Based Programming

Abstract - With the rise of sensors such as the Microsoft Kinect, Leap Motion, and hand motion sensors in phones such as the Samsung Galaxy S5, natural user interface (NUI) has become practical. NUI raises two key challenges for the developer: First, developers must create new code to recognize new gestures, which is a time consuming process. Second, to recognize these gestures, applications must have access to depth and video of the user, raising privacy problems. We address both problems with Prepose, a novel domain-specific language (DSL) for easily building gesture recognizers, combined with a system architecture that protects user privacy against untrusted applications by running Prepose code in a trusted core, and only interacting with applications via gesture events.

Prepose lowers the cost of developing new gesture recognizers by exposing a range of primitives to developers that can capture many different gestures. Further, Prepose is designed to enable static analysis using SMT solvers, allowing the system to check security and privacy properties before running a gesture recognizer. We demonstrate that Prepose is expressive by creating novel gesture recognizers for 28 gestures in three representative domains: physical therapy, tai-chi, and ballet. We further show that matching user motions against Prepose gestures is efficient, by measuring on traces obtained from Microsoft Kinect runs.

Because of the privacy-sensitive nature of always-on Kinect sensors, we have designed the Prepose language to be analyzable: we enable security and privacy assurance through precise static analysis. In Prepose, we employ a sound static analysis that uses an SMT solver (Z3), something that works well on Prepose but would be hardly possible for a general-purpose language. We demonstrate that static analysis of Prepose code is efficient, and investigate how analysis time scales with the complexity of gestures. Our Z3-based approach scales well in practice: safety checking is under 0.5 seconds per gesture; average validity checking time is only 188 ms; lastly, for 97% of the cases, the conflict detection time is below 5 seconds, with only one query taking longer than 15 seconds.

...

image

image

...

Contact Information:




Haro3D - Kinect for Windows v2 for LabVIEW


Today's project is another one of those where, if you need this, you really need it...

Labview library for Kinect V2

For those interested, I have developed a library of Labview VI's to access data from the Kinect V2. It seemed that such a library was not available even though one was developed for the Kinect V1.

You can get more information about the library at www.harotek.com and the library can be downloaded for free from National Instruments website at https://decibel.ni.com/content/docs/DOC-40832

Project Information URL: https://social.msdn.microsoft.com/Forums/en-US/9e34c8f7-02a5-480b-8df7-56f3adf5eba1/labview-library-for-kinect-v2?forum=kinectv2sdk

Haro3D

image

Haro3D™ is the first library for National Instruments LabVIEW™ providing access to the functionalities of the Kinect for Windows V2.

The Haro3D™ library gives access to the following functionalities of the Microsoft Kinect V2:

  • Bodies (people and joints tracking)
  • Colored 3D clouds of points.
  • Depth sensing
  • Color high-definition camera
  • Active infrared imaging
  • 3D volume reconstruction
In addition, the Haro3D™ library offers utilities to complement the Kinect functionalities:
  • Real-time interactive 3D display of bodies (skeletons)
  • Real-time interactive 3D display of colored clouds of points
  • File I/O of Clouds and Meshes in STL, PLY, XYZ formats
  • Fully functional examples for each of the functionalities.
  • Youtube video demonstrations of examples and programming tips.
The library can be downloaded for free from National Instruments with this link.
...
Download the Haro3D™ manual in pdf format here.

Project Information URL: http://harotek.weebly.com/products.html

Project Download URL: http://harotek.weebly.com/products.html

Kinect 2 - Haro3D™ VI Library

The Kinect is a sensor developed by Microsoft for the Xbox game console. Its main goal is to be able to interpret human positions and gestures. To accomplish this task, the Kinect is equipped with a depth measurement system based on active illumination. This feature makes the Kinect a low cost three-dimensional camera that can be used for applications outside the gaming industry. More information about the Kinect can be found at http://www.microsoft.com/en-us/kinectforwindows/meetkinect/features.aspx.

When the first Kinect came out, the great Kinesthesia library was rapidly made available by the University of Leeds. That library is based on .NET assemblies that can be used directly from Labview. The Kinect for Windows v2 was first made available at the end of 2013 within a beta program, and the public release was in July 2014. To our knowledge, no VI library to access the Kinect v2 features from Labview has been available so far. One reason might be that there are apparently some issues accessing the Kinect v2 from within Labview using the .NET assemblies.

We believe that the Kinect is a great piece of hardware, and that Labview has great but under-exploited 3D visualization tools. At HaroTek, we developed a VI library called Haro3D™. This VI library contains API VI's to access some features of the Kinect v2. These API VI's are basically wrappers around two DLL's (one for 32-bit and the other for 64-bit versions of Labview) that were developed in C++.

Redistribution and use in source and binary forms of the Haro3D™ library, with or without modification, are permitted provided that the conditions expressed in the accompanying license are met (3-clause BSD license with add-ons for NI and Microsoft).

HaroTek is committed to maintain and keep growing the Haro3D™ library over time.

The Haro3D™ library can be downloaded from the current page. A copy of the manual is also available. The manual is simply the pdf version of the help file accessible from Labview.

Comments are welcome.

More information can be obtained at www.harotek.com.

Description

The Haro3D™ library gives access to the following functionality of the Kinect v2:

  • Bodies (people and joints tracking)
  • Colored 3D cloud of points.
  • Depth sensing
  • Color high-definition camera
  • Active infrared imaging
  • 3D volume reconstruction

In addition, the Haro3D™ library offers Utilities VI's to complement the functionalities of the Kinect 2 like VI's to display and interact with the 3D data, and VI's to save and read back the 3D data (clouds of points and meshes). Fully functional examples for each of the functionalities are also provided.

...

image

image

image

Project Information URL: https://decibel.ni.com/content/docs/DOC-40832

Project Download URL: https://decibel.ni.com/content/docs/DOC-40832




Kinect4NES & Mike Tyson’s Punch-Out!


Paul DeCarlo, Kinect4NES (Yes, Kinect to a Classic NES), is back and continues to push forward on his cool project, Kinect4NES...

Training Kinect4NES to Control Mike Tyson’s Punch-Out!

In a previous post, I talked about how to create an interface to send controller commands to an NES based on interaction with the Kinect v2.  The idea was successful, but I received a bit of feedback on the control being less than optimal and a suggestion that it would likely work well with a game like Mike Tyson’s Punch-Out.

This raised an interesting challenge: could I create a control mechanism that would allow me to play Mike Tyson's Punch-Out using Kinect4NES with enough stability to reliably beat the first couple of characters?

Let's first look at how control was achieved in the first iteration of Kinect4NES. There are essentially two ways of reacting to input on the Kinect: a heuristic-based approach built on relatively inexpensive positional comparison of tracked joints, or gesture-based tracking (either discrete or continuous). For my initial proof of concept, I used the following heuristic-based approach:

...

image

From here, I incorporated the relevant bits into GestureDetector.cs. In my original implementation, I iterated through all recorded gestures and employed a switch to perform the button press when one was detected. This proved to be inefficient and created inconsistent button presses. I improved this significantly in my second update using a dictionary to hold a series of Actions (anonymous functions that return void) and a parallel foreach, allowing me to eliminate the cyclomatic complexity of the previous switch while processing all potential gestures in parallel. I also created a Press method for simulating presses. This allowed me to send in any combination of buttons to perform behaviors like HeadBlow_Right (UP + A). I also implemented a Hold method to make it possible to perform the duck behavior (press down, hold down). In the final tweak, I implemented a method to produce a RapidPress for the Recover gesture. This allowed me to reproduce a well-known tip in Punch-Out where you can regain health in between matches by rapidly pressing Select.
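The dictionary-of-Actions pattern he describes looks roughly like the sketch below. The gesture keys and the Press/Hold/RapidPress helpers are illustrative stand-ins, not the actual Kinect4NES source (that lives on GitHub):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Maps detected gesture names to the NES button behaviour they trigger.
// Replacing a switch with this table plus Parallel.ForEach lets every
// candidate gesture be evaluated each frame without a long if/else chain.
class GestureDispatcher
{
    readonly Dictionary<string, Action> gestureActions;

    public GestureDispatcher()
    {
        gestureActions = new Dictionary<string, Action>
        {
            { "HeadBlow_Right", () => Press("Up", "A") },   // UP + A
            { "BodyBlow_Left",  () => Press("B") },
            { "Duck",           () => Hold("Down") },
            { "Recover",        () => RapidPress("Select") } // regain health between rounds
        };
    }

    // Called once per frame with the gestures the detector reports.
    public void Dispatch(IEnumerable<string> detectedGestures)
    {
        Parallel.ForEach(detectedGestures, gesture =>
        {
            Action action;
            if (gestureActions.TryGetValue(gesture, out action))
                action();
        });
    }

    // These would write to the NES controller interface in the real project.
    void Press(params string[] buttons) { /* pulse the given buttons together */ }
    void Hold(string button)            { /* keep the button held down */ }
    void RapidPress(string button)      { /* hammer the button repeatedly */ }
}
```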

This was a rather interesting programming exercise: imagine coding at 2 in the morning with the goal of optimizing code for the intent of knocking out Glass Joe in a stable, repeatable manner. The end result wound up working well enough that a 'seasoned' player can actually TKO the first two characters with relative regularity. In the video at the top of this post, the player had actually never used the Kinect4NES and TKO'd Glass Joe on his first try. As a result, I am satisfied with this experiment; it was certainly a fun project that allowed me to become more familiar with programming for the Kinect while also having the joy of merging modern technology with the classic NES. For those interested in replicating it, you can find the source code on GitHub. If you have any ideas on future games that you would like to see controlled with Kinect4NES, please let me know in the comments!

Project Information URL: http://pjdecarlo.com/2015/02/training-kinect4nes-to-control-mike-tysons-punch-out.html

Project Source URL: https://github.com/toolboc/Kinect4NES

Contact Information:




"Connecting with beer lovers..."


Today's inspirational project is one that, well, sells itself... (and one that I want in my Man Cave :)

Connecting with beer lovers, Kinect-style

A lost puppy and his Clydesdale stablemates may have commanded the advertising spotlight during Super Bowl XLIX, but for our money, the real marketing magic from brewer Anheuser-Busch was on display at the "House of Whatever," a gigantic tent set up for three days outside the stadium. Inside was a huge bar, behind which hung a Kinect v2 sensor oriented toward the crowd of thirsty football (and beer) fans, with a large video screen above it.

As patrons walked into view of the sensor, the screen served up signage asking, "Can we interest you in a drink?" Stepping up a little closer, the fan was presented with onscreen options among the freshly poured, free glasses of Anheuser-Busch beers sitting on the bar. As the thirsty patron happily picked up a beverage, the screen displayed the choice, along with anonymous age and gender analytics of all visitors that day and a pie chart showing which beers had been the most popular.

image

The patron was then offered a chance to raise the glass and say “cheers,” at which point the Kinect sensor captured the image and displayed it onscreen. The fan could then retrieve that photo using a QR code and Instagram it with hashtag #getkinected, in order to be in the running to win a new Microsoft Surface Pro. 

The Kinect-enabled system was developed by Microsoft and incorporates world-leading biometric technology from NEC, which uses face recognition and measures the age, gender, and total headcount of patrons. NEC and Microsoft have been working together closely on this new breed of interactive retail systems, which offers a compelling shopping experience for customers and invaluable backend demographic and engagement data for the retailer. A version of the system was displayed earlier in January at the National Retail Federation’s (NRF) annual event in New York City.

The system can even recognize previous customers—provided the retailer has obtained express permission to store the customer’s facial image, as, say, part of a loyalty program. This feature allows the retailer to serve up ads and offers that tie directly to that patron’s past purchases. (It also recognizes store employees, allowing the system to ignore their presence.)

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/02/03/connecting-with-beer-lovers-kinect-style.aspx





Face Frame Data Dev Tip


Scott Young, from new Friend of the Gallery InterKnowlogy, provides a nice tip for face detection with the Kinect...

Some other recent InterKnowlogy blogger posts;

Kinect 2.0 Face Frame Data with Other Frame Data

Do you have a project in which you are using the Kinect 2.0 Face detection as well as one or more of the other feeds from the Kinect? Well I do, and I was having issues obtaining all the Frames I wanted from the Kinect. Let's start with a brief, high-level overview: I needed to obtain the data relating to the Color Image, the Body Tracking, and the Face Tracking. Seems very straightforward, until I realized that the Face Data was not included in the MultiSourceFrameReader class. That reader only provided me the Color and Body frame data. In order to get the Face data I needed to use a FaceFrameReader, which required me to listen for the arrival of two frame events.

For example, I was doing something like this.

...

In theory, this should not be a problem because the Kinect is firing off all frames at around 30 per second and, I assume, in unison. However, I was running into the issue that if my processing of the Face data took longer than the 30th of a second I had to process it in, the color or body data could be at a different point in the cycle. What I was seeing was images that appeared to be striped between two frames. Now, I understand that this behavior could be linked to various issues that I am not going to dive into in this post. But what I had noticed was that the more processing I tried to pack into the Face Frame arrival handler, the more frequently I saw bad images. It is worth noting that my actual project will process all six of the faces that the Kinect can track, and when having to iterate through more than one face per frame, the bad, striped images were occurring more often than good images. This led me to my conclusion (and my solution led me to write this blog post.)

I also did not like the above approach because it forced me to process frames in different places, and possibly on different cycles. So when something wasn’t working I had to determine which Frame was the offender, then go to that processing method. No bueno.

In troubleshooting the poor images, I had the thought, "I just want the color and body frames that the Face frame is using." Confused? I'll try to explain. Basically, the Kinect Face tracking is using some conglomeration of the basic Kinect feeds (Color, Depth, Body) to figure out what is a face, and the features of that face. I know this because if a body is not being tracked, a face is not being tracked. The depth is then used to track whether the eyes are open or closed and other intricacies of the face. Anyways, back on track, I had a feeling that the Kinect Face Frame had at least some link back to the other frames that were used to determine the state of the face for that 30th of a second. That is when I stumbled upon FaceFrame.BodyFrameReference and FaceFrame.ColorFrameReference (FaceFrame.DepthFrameReference also exists, it's just not needed for my purposes). From those references you can get the respective frames.

After my epiphany my code turned into:

...
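In sketch form, the "after" version handles everything in the face reader's event and pulls the matching color and body frames straight off the FaceFrame's references (a minimal outline, not Scott's full code):

```csharp
using System;
using Microsoft.Kinect;
using Microsoft.Kinect.Face;

class SynchronizedFaceProcessor
{
    // Handle everything inside the face reader's FrameArrived event and pull
    // the matching color and body frames from the FaceFrame itself, so all
    // three streams describe the same capture instant.
    public void FaceReader_FrameArrived(object sender, FaceFrameArrivedEventArgs e)
    {
        using (FaceFrame faceFrame = e.FrameReference.AcquireFrame())
        {
            if (faceFrame == null) return;

            using (ColorFrame colorFrame = faceFrame.ColorFrameReference.AcquireFrame())
            using (BodyFrame bodyFrame = faceFrame.BodyFrameReference.AcquireFrame())
            {
                if (colorFrame == null || bodyFrame == null) return;

                // Process color, body and face data together here.
                Console.WriteLine("Face + color + body from {0}", faceFrame.RelativeTime);
            }
        }
    }
}
```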

And with that, the occurrence of bad images appears to be greatly reduced. At least for now. We will see how long it lasts. I still get some bad frames, but I am at least closer to being able to completely blame the Kinect, or poor USB performance, or something other than me (which is my ultimate goal.)

Not a big thing, just something I could not readily find on the web.

Project Information URL: http://blogs.interknowlogy.com/2015/02/04/kinect-2-0-face-frame-data-with-other-frame-data/


Contact Information:




Determining Kinect Capabilities at Runtime


So you've got a Kinect and all Kinect devices are the same, right? Nope. While the Kinect for Windows v2 and the Kinect for Xbox One are very close, much closer than the 360-vs-Windows v1, they are still a little different (the Windows v2 device doesn't have the IR Blaster).

Should you always just assume that a feature or capability is available? You know what they say about "assume"... Luckily Friend of the Gallery Abhijit Jana provides a nice tip on how you don't have to assume....

How to Identify the Kinect Sensor Capabilities ?

When you are developing a Kinect application, you must ensure that the Kinect device attached to your application supports all the required capabilities. If the attached device does not support the capabilities that your application needs, you can prompt the user with a message to that effect. Now, how do we identify the capabilities of the Kinect sensor? The Kinect for Windows SDK has a property, KinectCapabilities, that returns the capabilities of the attached Kinect sensor.

Following code snippet shows how to get the default connected sensor and the capabilities that the sensor supports.

KinectSensor sensor = KinectSensor.GetDefault();
if (sensor != null)
{
    // KinectCapabilities is a [Flags] enum describing what the attached sensor supports.
    KinectCapabilities sensorCapabilities = sensor.KinectCapabilities;
}


Run the above code and set a breakpoint to inspect sensorCapabilities.

image

KinectCapabilities is a Flags enum defined in the Microsoft.Kinect assembly, with the following list of members.

...
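Because KinectCapabilities is a [Flags] enum, individual capabilities can be tested with HasFlag or a bitwise AND. A small sketch follows; the member names used (Face, Audio) are assumptions based on the SDK 2.0 enum, so check them against the list in the object browser for your SDK version:

```csharp
using System;
using Microsoft.Kinect;

class CapabilityCheck
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        if (sensor == null) return;

        KinectCapabilities caps = sensor.KinectCapabilities;

        // HasFlag (or a bitwise &) tests whether a particular capability
        // is present on the attached device.
        if (caps.HasFlag(KinectCapabilities.Face))
            Console.WriteLine("Face tracking is supported.");
        else
            Console.WriteLine("Face tracking is NOT supported - warn the user.");

        if (caps.HasFlag(KinectCapabilities.Audio))
            Console.WriteLine("Audio capture is supported.");
    }
}
```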

Project Information URL: http://dailydotnettips.com/2015/02/09/how-to-identify-the-kinect-sensor-capabilities/

Many of our past posts from Abhijit;

Contact Information:




Kinecting to the Classroom


Today's inspirational post provides a great view of how Ubi Interactive's very cool looking product can be used to enhance the classroom experience, without breaking the bank...

Ubi’s Kinect-powered touchscreens: an affordable solution for the classroom

Victor Cervantes was searching for a solution. As IT director for COBAEP, a public high school system in the Mexican state of Puebla, he was committed to introducing digital technology into the system's 37 high schools. Cervantes firmly believed that the use of technology would both improve students' learning and prepare them for the tech-heavy demands of college and the modern workplace.

image

The problem was finding a technology solution that was pedagogically sound and user friendly—and that wouldn’t bust his budget. He considered interactive white boards, but was put off by their high price tag and the steep learning curve for teachers. He was already exploring the potential of Kinect for Windows when he learned about the educational promise of Ubi Interactive, an innovative, Kinect-based system that turns virtually any surface into a touchscreen.

He contacted Anup Chathoth, co-founder and CEO of Ubi, and arranged for a month-long trial of the product. Cervantes soon realized that Ubi was just what he was seeking. The product would allow teachers to project teaching materials onto their existing classroom whiteboard, turning it into a fully interactive touchscreen. Teachers and students could then page through the content with simple, intuitive touch gestures. Moreover, by using an Ubi Pen, a specialized stylus that runs on the Ubi Annotation Tool software app, students and teachers could mark up materials right on their giant touchscreen and save their annotations to the digital file. 

Cervantes recognized that the immersive, fun experience of Ubi would engage students and draw them into the learning process. And he liked the simplicity of the product; the fact that it uses intuitive hand gestures and the familiar action of writing with a pen meant that teachers could master the system almost effortlessly. Moreover, he appreciated the broad applicability of the application. It could work on any digital materials, including published educational products, materials created by the teacher, homework submitted by the students, websites, and any Microsoft Office documents.

...

image

...

Chathoth also notes that the Kinect v2 sensor also enabled a new Ubi feature: a simple way to control any Windows application by using gestures. “A user can turn toward the Kinect sensor and control the interactive display by simply waving their hands in the air,” he explains. “If the user hovers over a spot and makes a fist, Ubi will tell the Windows application that the user is touch-activating that interactive part of the onscreen display. This is especially useful for teachers, allowing them to roam more freely while presenting a lesson.”

All of which makes Cervantes eager to deploy Ubi in the remaining unequipped classrooms. “We’ve had great success with Kinect for Windows and Ubi software, and we plan to put the v2 version in the classrooms at our other 17 schools over the coming year. This has been a great partnership with Ubi Interactive.”

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/02/13/ubi-s-kinect-powered-touchscreens-an-affordable-solution-for-the-classroom.aspx

Contact Information:




WebSocketing the Kinect


Peter Daukintis, Microsoft Technical Evangelist, fills in a Kinect for Windows v2 sample gap, providing a means to connect a Kinect to the web via WebSockets, and thereby to a web browser...

Kinect 4 Windows V2 – in the browser

For the older Kinect v1.0 there was a sample included with the Kinect SDK which provided browser compatibility. The general idea is that the Kinect SDK can be used to retrieve the various Kinect data streams, i.e. the RGB stream, body tracking data, etc., and from within a console or desktop app a webserver serves up that data for consumption over localhost by a web app running in the browser. This opens up compatibility with various JavaScript frameworks such as Babylon.js (another option might be to run inside a Windows Store JavaScript app). The official sample is pretty comprehensive, so I thought it might be useful to have something simpler to hand. Anyway, after a bit of searching around I found a few examples for v1.0 but nothing with v2.0 support, so I decided to hack one together. This might come in handy at the London Kinect Hackathon on 21st and 22nd March – REGISTER here.

I used SuperWebSocket from within a .NET console app to retrieve the Kinect data and broadcast it to any connected web sockets. The socket server code is simple

...
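In outline, the server is a WebSocketServer plus a broadcast helper that pushes serialized Kinect data to every connected session. The sketch below is not Peter's actual code (that is in the GitHub repo); the port and the serialization step are placeholders:

```csharp
using System;
using SuperWebSocket;

class KinectSocketServer
{
    static WebSocketServer server;

    static void Main()
    {
        server = new WebSocketServer();
        if (!server.Setup(8181) || !server.Start())   // port is a placeholder
        {
            Console.WriteLine("Failed to start web socket server");
            return;
        }

        server.NewSessionConnected += session =>
            Console.WriteLine("Browser connected: " + session.SessionID);

        // Wherever the Kinect frame handlers produce data (e.g. JSON-serialized
        // body joints), call Broadcast(json) to push it to every open socket.
        Console.ReadLine();
        server.Stop();
    }

    static void Broadcast(string json)
    {
        foreach (WebSocketSession session in server.GetAllSessions())
            session.Send(json);
    }
}
```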

image

Project Information URL: http://peted.azurewebsites.net/kinect-4-windows-v2-in-the-browser/

Project Source URL: https://github.com/peted70/kinectv2-webserver

Here are some of the other posts from Peter we've highlighted recently;

Contact Information:




Getting Going with Kinect v2 Development


Mike Taulty, long-time Coding4Fun Friend, recently presented at a London Kinect Hack event on how to quickly get started developing with the Kinect for Windows v2. Lucky for us, he's shared it with us...;

Here are just a few times we've highlighted Mike's work;

Get Set Up For Kinect for Windows V2 Development

Here’s a quick guide to getting yourself set up for Kinect for Windows V2 development.

It was put together specifically for the Kinect Hack for Windows London event but it’s generally applicable.

Here’s what you’re going to need to get going

Video

Here's a video walkthrough of the rest of the post and a few other bits – for the XEF file that I used in the video, see the bottom of the post;

[Video embedded from Vimeo]

A Computer ...

Windows 8.1 (or 8.0) ...

Visual Studio ...

Kinect for Windows V2 SDK ...

Kinect for Windows V2 Sensor ...

Kinect Samples ...

Kinect Studio ...

A Sample XEF File ...

Project Information URL: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2015/02/01/get-set-up-for-kinect-for-windows-v2-development.aspx

Kinect for Windows V2 SDK: 3 ‘Hello World’ Videos

There’s a bunch of posts on the site around the Kinect for Windows V2 but as part of getting ready for the Kinect for Windows Hack London;

Dan asked whether I’d help with a few “getting started” materials (beyond the excellent info that’s already in the SDK with samples and so on).

...

The next step is to start building out some code and the SDK supports you in doing lots of different things including;

  • Building desktop applications in .NET or C++
  • Building Windows Store applications in .NET, JavaScript or C++

and I thought I'd put together the same 'Hello World' sample in a few of those technologies, walking through from scratch what it looks like to put something together that gathers body data from the Kinect for Windows V2 sensor and displays it in a simple way.

Here’s the 3 videos…

Windows Store App in C# with Windows/XAML

Windows Desktop App in C# with WPF

Windows Store App in JavaScript with HTML

...

Project Information URL: http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2015/02/20/kinect-for-windows-v2-sdk-3-hello-world-videos.aspx

Contact Information:



