Channel: Coding4Fun Kinect Projects (HD) - Channel 9

Making Kinect Data Distributable via Data Compression


Today we highlight part of Marc Drossaers's Kinect Client Server (KCS) project, in which he discusses how to deal with the sheer amount of data you get from the Kinect for Windows device and how to apply data compression to make it manageable.

We've highlighted Marc's work a couple of times before:

Data Compression for the Kinect

Transmitting uncompressed Kinect depth and color data requires a network bandwidth of about 460Mbit/s. Using the RleCodec or the LZ4 library we achieve tremendous compression – a compression ratio of 10 or 22 respectively, at lightning speed – over 1600Mbytes/s. We achieve this not so much by the compression algorithms, but by removing undesirable effects (jitter, by the DiscreteMedianFilter) and redundancy (already sent data, by taking the Delta).

Introduction

From the start, one goal of the Kinect Client Server (KCS) project was to provide a version of the KCS viewer, called 3D-TV, from the Windows Store. Because of certification requirement 3.1 (V5.0)

“Windows Store apps must not communicate with local desktop applications or services via local mechanisms,..”

3D-TV has to connect to a KinectColorDepth server application on another PC. In practice, the network bandwidth that is required to transfer uncompressed Kinect depth and color data over Ethernet LAN using TCP is about 460Mbit/s, see e.g. the blog post on the jitter filter. This is a lot, and we would like to reduce it using data compression.

This is the final post in a series of three on the Kinect Client Server system, an Open Source project at CodePlex, where the source code of the software discussed here can be obtained.

...

Data Compression: Background, Existing Solutions

Theory

If you are looking for an introduction to data compression, you might want to take a look at Rui del-Negro’s excellent three-part introduction to data compression. In short: there are lossless compression techniques and lossy compression techniques. The lossy ones achieve better compression, but at the expense of some loss of the original data. This loss can be a nuisance, or irrelevant, e.g. because it concerns information that cannot be detected by our senses. Both types of compression are applied, often in combination, to images, video and sound.

The simplest compression technique is Run Length Encoding, a lossless technique that simply replaces a sequence of identical tokens with one occurrence of the token and a count of occurrences. A very popular, somewhat more complex family of lossless, dictionary-based compression techniques is the LZ (Lempel-Ziv) family (e.g. LZ, LZ77, LZ78, LZW). For video, the MPEG family of codecs is a well-known solution.

Existing Solutions

...

The RleCodec

I decided to write my own data compression codec, and chose the Run Length Encoding algorithm as a starting point. Why?

Well, I expected that a custom algorithm, tailored to the situation at hand, would outperform the general-purpose LZ4 library. And the assumption turned out to be correct: a prototype implementation of the RleCodec, supported by both the DiscreteMedianFilter and by creating a Delta before compressing data, really did outperform the LZ4 reference implementation, as can be read from the performance data in the Performance section.

It only dawned on me much later that removing undesired effects (like jitter, by the DiscreteMedianFilter) and redundant information (already sent data, by taking the Delta) before compressing and transmitting data is not an improvement of just the RLE algorithm, but should be applied before any compression and transmission takes place. So, I adjusted my approach and in the performance comparison below, we compare the core RLE and LZ4 algorithms, and see that LZ4 is indeed the better algorithm.

...

Implementation

Algorithm

In compressing, transmitting, and decompressing data the KinectColorDepth server application takes the following steps:

  1. Apply the DiscreteMedianFilter.
  2. Take the Delta of the current input with the previous input.
  3. Compress the data.
  4. Transmit the data over Ethernet using TCP.
  5. Decompress the data at the client side.
  6. Update the previous frame with the Delta.

Since the first frame has no predecessor, it is a Delta itself and is sent over the network as a whole.

Code

The RleCodec was implemented in C++ as a template class. As with the DiscreteMedianFilter, traits classes have been defined to inject the properties specific to color and depth data at compile time.

The interface consists of:

...

Performance

How does our custom RLE codec perform in a test environment and in the practice of transmitting Kinect data over a network? How does its performance compare to that of LZ4? Let’s find out.

...

Conclusions

Using the RleCodec or the LZ4 library we achieve tremendous compression, a compression ratio of 10 or 22 respectively, at lightning speed – over 1600Mbytes/s. We achieve this not so much by the compression algorithms, but by removing undesirable effects (jitter, by the DiscreteMedianFilter) and redundancy (already sent data, by taking the Delta).

...

[Click through for the full post, details, tips and more]

Project Information URL: http://thebytekitchen.com/2014/03/24/data-compression-for-the-kinect/

Contact Information:




Kinecting to an Orchestra of Obedient Deltabots


Today's inspirational project is another vision of how the Kinect is impacting performance art...

Orchestra of Obedient Deltabots Sways at Your Command

More fun with inverted delta kinematics!

Sarah Petkus is an illustrator, graphic designer, and robotics artist from Las Vegas, Nevada. And this weekend she’s planning to unveil something big — a darkroom “stationary swarm” installation called Robot Army.


As of this writing, Robot Army consists of 30 soldiers which can be controlled, en masse, via a gestural interface based on Kinect.

Project updates, info, and kit sales are available online at robot-army.com.

Project Information URL: http://makezine.com/2014/05/16/orchestra-of-obedient-deltabots-sways-at-your-command/


Contact Information:



Everyone can be a Super Hero


Today's inspirational project is one that makes advertising fun, all with the power of the Kinect...

SUPER HERO EXPERIENCE


Superhero-like experience

Every time a Hollywood superhero film appears, it captures the attention of people all over the world, while those handsome helmets and stunning powers often leave us envious. Now, using motion-capture sensor technology that tracks the skeleton, you can control the superheroes on the big screen – handsome looks, cool skills – and have the wonderful experience of being a super hero yourself.

YESTERDAY'S SCIENCE FICTION, TODAY'S REALITY.

THE DREAM OF YESTERDAY


We watch Hollywood science fiction films as if they were trailers for a future world. Movie robots, stereoscopic images hanging in empty space, and the always compelling human interactivity are the most eye-catching focus.

In the Iron Man movies, Tony Stark puts on his armor and uses the computer "Jarvis" to control his helmet and suit. Now, with the Kinect device's skeletal-capture technology, we can do the same thing: an Iron Man that synchronizes with your every action, raising a hand to activate the laser cannons or flying with open arms.


Project Information URL: http://next-digital.cn/ShowWorks.asp?ArticleID=3



Kinxct Ray - A Kinect for Windows v2 X-Ray


Medical uses of Kinect Workshop


Using the Kinect for Windows in the medical industry is one that, you know, I find interesting and hopeful. This workshop is another example of how that's happening...

Workshop highlights medical uses of Kinect technology

In keeping with the January ritual of reflecting on the past year’s accomplishments, we’re eager to tell you about a very special event that Microsoft Research Cambridge hosted in November: the Body Tracking in Healthcare workshop. This occasion celebrated the completion of a two-year collaboration between Microsoft Research Cambridge and Lancaster University, during which we explored the use of touchless interactions in surgical settings, allowing images to be viewed, controlled, and manipulated without physical contact via the Kinect for Windows sensor.


The Kinect for Windows-based system, which has been widely covered in the popular press, enables surgeons to navigate through and manipulate X-rays and scans during operations, literally with a wave of the hands, without touching the non-sterile surface of a mouse or keyboard. It’s a prime example of the burgeoning field of natural user interface (NUI), which promises to change our relationship with today’s ubiquitous devices.

The workshop brought together experts from academia and industry to discuss the use of Kinect for Windows in medicine—in applications that extend well beyond the operating room. Kinect’s body tracking abilities are already being harnessed for clinical assessments of, for example, children with motor disabilities. One talk at the workshop demonstrated a system in which youngsters with cerebral palsy play simple computer games while Kinect for Windows monitors their movements, providing data that physicians can use to assess the state of the disease.

...

We hope to publish a comprehensive report on the projects shown at the workshop, either via a special issue of a journal or in a book. Meanwhile, a cover story in the January 2014 issue of Communications of the ACM features some of this work.

Learn more

[Click through for the entire post]

Project Information URL: http://blogs.msdn.com/b/msr_er/archive/2014/01/10/workshop-highlights-medical-uses-of-kinect-technology.aspx



"Comparing MultiSourceFrameReader and XSourceFrameReader"


Tom Kerkhove, Kinect MVP, is back, this time comparing two different frame readers...

Gen. II Kinect for Windows – Comparing MultiSourceFrameReader and XSourceFrameReader

In a previous post I talked about how you can implement the basic data streams into your application. Next to that, I also explained the new streaming model they introduced, with a frame reader for each type of stream, e.g. ColorSourceFrameReader.


With the new model came an equivalent to the AllFramesReady event called a MultiSourceFrameReader that is capable of reading on several different data streams at once.

In this post I will go deeper into the differences between the MultiSourceFrameReader and the single SourceFrameReaders.

Disclaimer

I am bound to the Kinect for Windows Developer Program and cannot share the new SDK/DLL.

“This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

MultiSourceFrameReader vs XSourceFrameReader

In my basics overview I visualized the camera with a Body overlay that showed a couple of joints by using code that is similar to the one below -

...

In the first generation each frame had a property FrameNumber that you could use to determine if both data frames are in sync. In the second generation this has been replaced with a RelativeTime that indicates the correlation between frames.
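For illustration, a correlation check on relative timestamps might look like the sketch below. The constant and function names are hypothetical; in the actual SDK, RelativeTime is exposed as a TimeSpan (100-nanosecond ticks) in C#:

```cpp
#include <cstdint>
#include <cstdlib>

// Hypothetical sketch: treat two frames as correlated when their relative
// timestamps differ by less than half a frame interval.
// 100 ns ticks (TimeSpan resolution), ~33 ms per frame at 30 FPS.
constexpr int64_t kTicksPerFrame = 10'000'000 / 30;

bool framesInSync(int64_t colorRelativeTimeTicks, int64_t depthRelativeTimeTicks) {
    return std::llabs(colorRelativeTimeTicks - depthRelativeTimeTicks)
           < kTicksPerFrame / 2;
}
```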

Kinect v1 offered a more efficient way to do this for us with the AllFramesReady event, which includes all the enabled stream data. I’m glad to tell you that this concept is also available in the new source/reader model, via the MultiSourceFrameReader.

...

We can easily replace our two XSourceFrameReaders with one MultiSourceFrameReader, where we specify the data streams by using the FrameSourceTypes enumeration. The rest of the processing is pretty much the same – you listen to the MultiSourceFrameArrived event, then retrieve the MultiSourceFrameReference and the MultiSourceFrame. With the MultiSourceFrame you are then able to retrieve the corresponding frames for your selected FrameSourceTypes.

Here is an example on how you can use a MultiSourceFrameReader –

...

Conclusion

You should use the MultiSourceFrameReader when your data needs to be synced, or use the RelativeTime property to check the correlation.

If that is not required, you can still use an XSourceFrameReader for a specific data stream.

You can also use a combination of MultiSourceFrameReaders and XSourceFrameReaders, depending on your requirements.

[Make sure you click through for the source and more]

Project Information URL: http://www.kinectingforwindows.com/2014/05/19/second-gen-kinect-comparing-multisourceframereader-and-xsourceframereader/

Contact Information:

Other posts from Tom you might also find interesting;

 



Kinect Translation Tool: Sign to Spoken and back again...


Today's project is another look at using the Kinect to translate sign language into spoken text and the reverse. [Note: the text below was quoted from the project site and contained some unfortunate phrasing. I believe this was simply a language issue on the part of the author, but regardless it has been edited to avoid offense.]

Kinect Translation Tool: From Sign Language to spoken text and vice versa

Software System Component
1. Kinect SDK ver.1.7 for the Kinect sensor.

2. Windows 7 standard APIs- The audio, speech, and media APIs in Windows 7

This project aims to help hearing-impaired people communicate with [non-hearing-impaired] people effectively by simply using their sign language. By utilizing Microsoft Kinect technology, hearing-impaired people only need to stand in front of the Kinect camera and begin delivering their message in sign language. The Kinect Sign Language tool then translates the sign language into written and spoken language. Vice versa, [non-hearing-impaired] people may reply directly in their common spoken language, which is translated into written language along with animated 3D sign-language gestures.

You can use this code or any original or modified portion of it in any project that you like. We only ask that you keep the authors' names in the copyright notice because a lot of thought and effort went into this.


Project Information URL: http://kinecttranslation.codeplex.com/

Project Download URL: http://kinecttranslation.codeplex.com/releases/view/122603

Project Source URL: http://kinecttranslation.codeplex.com/SourceControl/latest



"3D Movies with Kinect for Windows v2"


Today James Ashley provides a glimpse at what might be a very cool use of the Kinect for Windows: making 3D movies – well, more like flat holographic movies. Combine this with some of the recent holographic advances and you've really got something! There's no download yet, but the idea behind this alone is well worth highlighting...

3D Movies with Kinect for Windows v2


To build 3D movies with Kinect, you first have to import all of your depth data into a point cloud.  A point cloud is basically what it sounds like: a cloud of points in 3D space.  Because the Kinect v2 has roughly 3 times the depth data provided by the Kinect v1, the cloud density is much richer using it.

The next step in building up a 3D movie is to color in the pixels of the point cloud.  Kinect v1 used an SD camera for color images.  For many people, this resolution was too low, so they came up with various ways to sync the data from a DSLR camera with the depth data.  This required precision alignment to make sure the color images lined up with, and then scaled to, the depth pixels.  This alignment also tended to be done in post-production rather than in real time.  One of the most impressive tools created for this purpose is called the RGBD Toolkit, which was used to make the movie Clouds by James George and Jonathan Minard.  The images in this post, however, come from an application I wrote over Memorial Day weekend.

Unlike its predecessor, Kinect for Windows v2 is equipped with an HD video camera.  The Kinect for Windows v2 SDK also has facilities to map this color data to the depth positions in real-time, allowing me to record in 3D and view that recording at the same time.  I can even rotate and scale the 3D video live.

...


I don’t really know what this would be used for – for now it’s just a toy I’m fiddling with – but I think it would at least be an interesting way to tape my daughter’s next high school musical.  On the farther end of the spectrum, it might be an amazing way to do a video chat or to take the corporate video presentation to the next level.

[Click through to see all the images, and the rest of the post]

Project Information URL: http://www.imaginativeuniversal.com/blog/post/2014/05/27/3D-Movies-with-Kinect-for-Windows-v2.aspx

Contact Information:




"Frames Monitor" Utility from Tom Kerkhove


Today we highlight another Kinect for Windows v2 post from Kinect MVP, Tom Kerkhove.

Gen. II Kinect for Windows – Introducing ‘Frames Monitor’

In my previous post I introduced you to the MultiSourceFrameReader and how it syncs the frames between different data sources.

Disclaimer

I am bound to the Kinect for Windows Developer Program and cannot share the new SDK/DLL.

“This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

What is Frames Monitor?

I created a simple tool that visualizes the amount of frames/second that are received when using the MultiSourceFrameReader in a certain environment.

It enables you to run the test in a specific environment and take into account how many of the frames you are going to receive. The result will depend on whether you selected the color data stream and on the amount of light in the scene.

This tool is only supported with the Gen. II Kinect for Windows and is created with C# & WPF and uses the MVVM-pattern.

You can download this tool here.

Screenshots

This monitor has been run in an environment where there was a lot of light in the room and the result is a total of 30 FPS.


This monitor has been run in an environment where there was not much light in the room, and the result is a total of 15 FPS.

...

Project Information URL: http://www.kinectingforwindows.com/2014/05/25/gen-ii-kinect-for-windows-introducing-frames-monitor/

Project Download URL: https://github.com/KinectingForWindows/G2KFramesMonitor

Project Source URL: https://github.com/KinectingForWindows/G2KFramesMonitor

Contact Information:

Other posts from Tom you might also find interesting;



Pre-order your Kinect for Windows v2 now!


It's here, finally! The Kinect for Windows v2 device is now available for pre-order, while supplies last. Buy it today, and in July you'll get the sensor along with the beta of the Kinect for Windows SDK.

What, you're still here? Go get it! Pre-order the v2 sensor 

Pre-order your Kinect for Windows v2 sensor


We are pleased to announce that beginning today, Kinect for Windows v2 sensors are available for purchase during a pre-order period in 22 countries, before general availability later this year.  Sensors purchased during the pre-order phase will be shipped in July, at which time we will also release a public beta of our software development kit (SDK) 2.0. All of this will happen a few months ahead of general availability, giving developers who pre-order a head start on using the v2 sensor’s new and improved features, including increased depth-sensing capabilities, full 1080p video, improved skeletal tracking, and enhanced infrared technology.

Recent Background on Kinect for Windows v2:

  • The Kinect for Windows Developer Preview Program was launched in November 2013 which gave qualifying participants access to a pre-released v2 sensor and an alpha SDK. 
  • We received thousands of applications and selected participants based on the applicants’ expertise, passion, and the raw creativity of their ideas. Due to extremely high demand from the community, we expanded access to the program earlier this calendar year. 
  • This past April at Build 2014, we announced that the Kinect for Windows v2 sensor and SDK would be coming this summer and with them, the ability for developers to start creating Windows Store apps with Kinect for the first time.
  • With the opportunity to pre-order starting today, developers and businesses will be able to engage with the technology and start using the v2 sensor and SDK this July.

At BUILD in April, we told the world that the Kinect for Windows v2 sensor and SDK would be coming this summer, and with them, the ability for developers to start creating Windows Store apps with Kinect for the first time. Well here in Redmond, Washington, it’s not summer yet. But today we are pleased to announce that developers can pre-order the Kinect for Windows v2 sensor. Developers who take advantage of this pre-order option will be able to start building solutions ahead of the general public.

Sensors purchased during the pre-order phase will be shipped in July, at which time we will also release a public beta of our software development kit (SDK). All of this will happen a few months ahead of general availability of sensors and the SDK, giving pre-order customers a head start on using the v2 sensor’s new and improved features, including increased depth-sensing capabilities, full 1080p video, improved skeletal tracking, and enhanced infrared technology.

Thousands of developers wanted to take part in our Developer Preview program but were unable to do so—in fact, we’re still receiving requests from all around the world. So for these and other developers who are eager to start using the Kinect for Windows v2, the pre-order option offers access to the new sensor ahead of general availability. Bear in mind, however, that we have limited quantities of pre-order sensors, so order while supplies last.

The v2 sensors will also be shipped in July to those who participated in the Developer Preview program. For these early adopters, it’s been an amazing six months: we’ve seen more stunning designs, promising prototypes, and early apps than we can count—from finger tracking to touch-free controls for assembly line workers to tools for monitoring the environment. At BUILD, we showed you what Reflexion Health and Freak’n Genius were able to achieve with the v2 sensor in just a matter of weeks. And in July, when the sensor and SDK are more broadly available, we can only imagine what’s next.

Key links

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/06/05/pre-order-your-kinect-for-windows-v2-sensor-starting-today.aspx

Contact Information:




"The Future of Kinect"


The Future of Kinect

Zombies don’t have to be scary – especially when kids can create them in their own image. Using the Kinect for Windows v2 sensor and an app called YAKiT, children can step into the role of the undead and see it come to life using performance-based animation. Like so many who use the Kinect sensor, kids don’t need a laundry list of instructions to use it. They just step in front of it, creep like zombies and instantly, their animated figures move like them, sparking a cacophony of giggles.

While the latest version of Kinect has been available since the launch of Xbox One, pre-orders of the Kinect for Windows version become available to all developers today. Both sensors are built on a set of shared technologies.

Companies such as Freak’n Genius, the Seattle-based company behind YAKiT, have already had the chance to try the Kinect for Windows v2 sensor through its Developer Preview Program. “It’s so magical, honestly,” says Kyle Kesterson, Freak’n Genius founder. “We put people in front of it, and they light up without even having to do anything.”

But behind that magic is the culmination of years of machine learning. It’s all part of a complex 24-7 process that involves a legion of people and resources that gather data on voices, body gestures and facial expressions, then test the information and analyze it before the software makes its way to your living room.

...

Machine learning: Teaching software how to behave

At Microsoft, there’s a whole group of people in the NUI group focused on taking requests from different teams and gathering information about how people move and express themselves.

“We start with designing the hardware, getting the best eyes and ears into the living room. Then we go through the process of building the software for it – the brain that takes that raw signal and takes it into an understanding of the room and the people in it,” says Evans.

When it was released as part of Xbox One, Kinect was already programmed to recognize certain movements and objects as a baseline. But in order to improve that software, first Microsoft needs to document real people using it in their natural environments, then manually compare what Kinect sees with reality (“ground truth”). That data is then fed into a system, which runs algorithms to find where its software recognition doesn’t match the ground truth – and that’s where it knows to improve.

Collecting data for Kinect means bringing volunteers to labs on the Microsoft campus, suiting up for motion capture sessions and visiting Microsoft employees’ homes – a diverse group that spans age, gender, languages and ethnicity – to record video clips of bodies in natural motion.


...

The Ground Truth

All that data then goes to taggers who establish “ground truth.” It’s a tedious but necessary set of tasks involving skeleton tracking: tagging 25 joints on the human body electronically, defined on a frame-by-frame basis. This is how movement is documented in 3D space and fed into machine learning. About 20 in-house taggers have to define where the head, shoulders, hands, and feet are, as well as other areas of the body.

...

Passing the Gauntlet

Vince Ortado’s team at Microsoft processes up to 180,000 video clips an hour, running machine learning algorithms that improve Kinect’s software. More than 300 Xbox developer kits operate 24-7, divided into groups testing anything from hand gestures to identity.

It’s important to have all these millions of frames of video go through as fast as possible, as the teams working on Kinect can only act after they’ve received the results. And they’re on a schedule to act at a brisk pace with monthly software releases that give users an experience that continuously improves.

...

Right now, people can experience Kinect through Xbox One: playing games, choosing movies and using Skype. Or they might be out and about and interact with a Kinect for Windows sensor as part of a retail experience, or in other spaces such as museums, hotels or corporate offices. Or they may happen upon interactive animation experiences such as those Freak’n Genius has staged, that put people on stage dancing as a company mascot. The availability of preorders on Thursday will allow even more Kinect for Windows v2 sensors to get into the hands of developers and enable a wider variety of user scenarios.

As for the teams of people who continue working to improve Kinect, Kinect’s Evans says, “It’s all about making Kinect work whether or not you have a puffy couch or a ficus in your living room that might look like a person. Being able to always get it right and understand who you are in your natural environment, in every living room with every person. That’s the investment we make in doing the machine learning. It’s to get it right for everybody.”

[Click through to read the full article]

Project Information URL: http://www.microsoft.com/en-us/news/features/2014/jun14/06-05kinect.aspx




Kinect Cursor WPF Control


Today Friend of the Gallery Vangos Pterneas is back with a cool control you can use in your next Kinect for Windows WPF app.

Other recent Gallery posts from Vangos Pterneas:

Kinect cursor for hand tracking


Navigating a Natural User Interface using your palm is quite common – after all, it’s the primary navigation mechanism Xbox uses. Many Windows Kinect applications implement hand tracking for similar purposes. Today, I would like to share a Kinect hand cursor control I developed that you can use in your own apps. This hand cursor control will save you tons of time, and you’ll be able to integrate it right into your existing WPF code!

Here is the final result of this handy user control:


Using the control in your project is fairly easy. Read on!

Prerequisites

The code

OK, let’s type some quick code now.

Step 1: Download the project from GitHub

Download the source code and build it using Visual Studio. Locate the assembly named KinectControls.dll.

Step 2: Import the assembly to your project ...
Step 3: Import the assembly to your XAML code ...
Step 4: Move the cursor using C# ...

..

Copyrights

You are free to use the user control as you wish for your personal and commercial projects, just by making a simple attribution in your project or buying me a beer.

PS: New Kinect book - 20% off

This blog post is part of a new ebook I am publishing in a few days. The book is an in-depth developer guide to Kinect, using simple language and step-by-step examples. You'll learn usability tips, performance tricks, and best practices for implementing robust Kinect apps. Please meet Kinect Essentials, the essence of my three years of teaching, writing and developing for the Kinect platform. Oh, did I mention that you'll get a 20% discount if you simply subscribe now? Hurry up!

[Click through to see the samples, read the entire post and more]

Project Information URL: http://pterneas.com/2014/06/06/kinect-cursor-for-hand-tracking/

Project Download URL: Download from GitHub

Project Source URL: Download from GitHub

Contact Information:




Kinect Common Bridge v2 Beta now available


With the coming release of the Kinect for Windows v2 device, which you can pre-order now, we should start seeing more of our favorite frameworks, utilities and tools being updated.

We've highlighted Kinect Common Bridge a couple of times before: Kids, Kinect, Cinder and some C++ too..., Meet the Kinect Common Bridge, A Bridge not too far... and The Kinect Common Bridge gets face tracking and voice recognition. Well, it's now time to highlight it again, with its new support for the Kinect for Windows v2 device...

Get Your Hands on the Kinect Common Bridge v2 Beta!

MS Open Tech first released the Kinect Common Bridge last fall to support creative developers looking to harness the capabilities of Microsoft Kinect. We updated this toolkit again earlier this year  to help developers integrate Kinect capabilities within their code.

The MS Open Tech Hub’s close coordination with Kinect development cycles has made it possible for us to release the Kinect Common Bridge v2 beta today via GitHub. The primary focus of this new version is to enable developers to quickly integrate the Kinect v2’s new sensor capabilities through a simplified set of C-based APIs. Kinect Common Bridge v2 complements the Kinect for Windows SDK v2, a set of resources designed to integrate Kinect scenarios into a variety of creative development libraries and toolkits.

Integrated Innovation of the Platform

We are also pleased to share that the innovation of the Kinect Common Bridge platform is inspiring others – notably framework creators like Cinder, openFrameworks and Unity, each of whom leverage Kinect Common Bridge (or the way KCB works) to integrate Kinect support – as well as the Kinect team itself.

As a result, there are several new resources available for developers to integrate Kinect v2 sensor functionality into their code:

Please note that a separate version for WinRT will not be necessary. Since MS Open Tech has ported openFrameworks to Windows 8, we have also produced samples that use Kinect v2 within openFrameworks applications running on WinRT; check our sample repo here.

Kinect v2: Building an Enhanced Sensory Experience

The Kinect for Windows v2 SDK brings the sensor’s new capabilities to life:

  • Windows Store app development: ...
  • Unity Support: ...
  • Improved anatomical accuracy: ...
  • Simultaneous, multi-app support: ...
Getting Started

You can now pre-order your Kinect for Windows v2 sensor, and start using the SDK right away!

[Click through to read the entire post]

Project Information URL: http://msopentech.com/blog/2014/06/11/get-your-hands-on-the-kinect-common-bridge-v2-beta/

Project Download URL: https://github.com/MSOpenTech/KinectCommonBridge/tree/2.0

Project Source URL: https://github.com/MSOpenTech/KinectCommonBridge/tree/2.0




Programming the Kinect for Windows [v2] Jump Start, July 15th


Friends of the Gallery, Ben Lower and Rob Relyea are hosting a free, live, day-long Jump Start just for you, the Kinect for Windows v2 developer...

Programming the Kinect for Windows Jump Start

Devs, are you looking forward to building apps with Kinect for Windows v2 this summer? In this Jump Start, explore the brand new beta Software Development Kit with experts from the Kinect engineering team, and see how Kinect v2 enables speech, gesture, and human understanding in applications and experiences.

Learn about the new APIs and app model, and see fascinating demos and samples (plus source code) for both desktop and Windows Store apps. Get the details on Kinect Fusion (real-time 3D modeling), Face Tracking, and Visual Gesture Builder. Discover the new sensor technology, natural user interface (NUI), accessibility potential, and practical applications.

Even if you don't have a Kinect device, you won't want to miss this entertaining event. The instructors even show you how you can start building an app without a sensor. Be sure to bring your questions!

Course Outline:

  • Introducing Kinect Development
  • Kinect Data Sources and Programming Model
  • Kinect Interactions and Speech
  • Using Kinect with Other Frameworks or Libraries
  • Face Tracking, HD Face, and Kinect Fusion
  • Kinect Studio and Visual Gesture Builder
  • Advanced Topics: Custom Sources, Filtering, and More

Live Event Details

July 15, 2014, 9:00am–5:00pm PDT

What: Fast-paced live virtual session

Cost: Free

Audience: Devs interested in programming for the Kinect for Windows v2 hardware.

Prerequisites: No previous Kinect experience necessary, but attendees should be familiar with developing apps for Windows. Experience with Visual Studio, along with C#, C++, Visual Basic, or JavaScript, is highly recommended.

Project Information URL: http://www.microsoftvirtualacademy.com/liveevents/programming-the-kinect-for-windows-jump-start




Handle your frames with caution...


Bruno Capuano, Friend of the Gallery, has a quick Kinect for Windows v2 tip that might save you from pulling out your hair.

[#KINECTSDK] Caution when you work with frames! (dispose objects correctly)

Hello!

Some time ago I wrote a post about the importance of properly disposing of Kinect SDK objects when you close an app. Today's lesson is similar, but limited to working with a Frame.

For example, the following piece of code shows two ways of processing a frame with the Kinect SDK v2:


The first function (lines 1 to 19 in the screenshot) uses the frame within a using() block; the second option does not. The strange thing is that the second option does not give an error or anything. However, when the frame is never disposed, the sensor never delivers a new frame, so the application stops receiving Kinect data.
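The lesson can be sketched in code. This is a minimal illustration, not the code from the original screenshot; it assumes the Kinect for Windows SDK v2's ColorFrameReader event model (the handler names and processing details are hypothetical):

```csharp
// Correct: wrap the frame in a using block so it is disposed
// even if processing throws. Disposing releases the frame back
// to the sensor, allowing the next frame to arrive.
void Reader_FrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (ColorFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            // ... process the frame ...
        }
    } // frame.Dispose() is called here
}

// Risky: no using block and no explicit Dispose(). No error is
// raised, but the reader silently stops delivering new frames,
// so the application stops receiving Kinect data.
void Reader_FrameArrivedLeaky(object sender, ColorFrameArrivedEventArgs e)
{
    ColorFrame frame = e.FrameReference.AcquireFrame();
    if (frame != null)
    {
        // ... process the frame ...
    }
}
```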

Lesson learned live!

Project Information URL: http://elbruno.com/2014/06/11/kinectsdk-caution-when-you-work-with-frames-dispose-objects-correctly/





Kinect to Oculus Rift with Kintinuous


Today's kind of, sort of inspirational project is from a Hack-a-Day post by Brian Benchoff. It's kind of inspirational in that, since the project's authors can't release the source, they can at least release their background research...

Virtual Physical Reality With Kintinuous And An Oculus Rift

The Kinect has long been able to create realistic 3D models of real, physical spaces. Combining these Kinect-mapped spaces with an Oculus Rift is something brand new entirely.

[Thomas] and his fellow compatriots within the Kintinuous project are modeling an office space with the old Xbox 360 Kinect’s RGB+D sensors, then using an Oculus Rift to inhabit that space. ...

While Kintinuous is very, very good at mapping large-scale spaces, the software itself is locked up behind some copyright concerns the authors and devs don’t have control over. This doesn’t mean the techniques behind Kintinuous are locked up, however: anyone is free to read the papers (here’s one, and another, PDF of course) and re-implement Kintinuous as an open source project. ...

Project Information URL: http://hackaday.com/2014/06/21/virtual-physical-reality-with-kintinuous-and-an-oculus-rift/





Hand Gestures, Kinect and Conversations


This inspirational project provides another view of how the Kinect might be used to extend and enhance communication and interaction (and as a "hand talker" myself I think it's just kind of cool :)

Kinect for Windows helps decode the role of hand gestures during conversations

We all know that human communication involves more than speaking—think of how much an angry glare or an acquiescent nod says. But apart from those obvious communications via body language, we also use our hands extensively while talking. While ubiquitous, our conversational hand gestures are often difficult to analyze; it’s hard to know whether and how these spontaneous, speech-accompanying hand movements shape communication processes and outcomes. Behavioral scientists want to understand the role of these nonverbal communication behaviors. So, too, do technology creators, who are eager to build tools that help people exchange and understand messages more smoothly.

To decipher what our hands are doing when we talk to others, researchers need to obtain traces of hand movements during the conversation and be able to analyze the traces in a reliable yet cost-efficient way. Professor Hao-Chuan Wang and his team at National Tsing Hua University in Taiwan realized that they could solve this problem by using a Kinect for Windows sensor to capture and record both the hand gestures and spoken words of a person-to-person conversation.


“We thought to use Kinect because it’s one of the most popular and available motion sensors in the market. The popularity of Kinect can increase the potential impact of the proposed method,” Wang explains. “It will be easy for other researchers to apply our method or replicate our study. It's also possible to run large-scale behavioral studies in the field, as we can collect behavioral data of users remotely as long as they are Kinect users. Kinect's software development kit is also … easy to work with.”

With the advantages of Kinect for Windows in mind, ...

...

During the resulting collaborative research, the team placed two Kinect sensors back-to-back between two conversational participants to document the session. The sensors captured the speech and hand movements of each of the interlocutors simultaneously, providing a time-stamped recording of the spoken words and hand traces of the interacting individuals.


Schematic depicting the placement of the Kinect for Windows sensors during the experiments

To demonstrate the utility of the approach, the researchers compared the amount and similarity of hand movements under three conditions: face-to-face conversation, video-mediated chat, and audio-mediated chat. The two participants could see each other during the face-to-face and video chat conversations, but they had no visibility of one another during the audio chat.

...

“It's easy to set up and program Kinect, so it greatly reduces the overhead of applying it to cross-disciplinary research, where the goal is to spend time on studying and solving the domain problems rather than technical troubleshooting,” Wang explains.

A full paper about Wang’s collaboration project with Microsoft Research Asia was presented at CHI 2014, the ACM SIGCHI Conference on Human Factors in Computing Systems, which was held in Toronto, Canada, this April.

“I really enjoyed working with Microsoft Research Asia. I received both great support and freedom to pursue the topics of interest to me. This makes the collaboration really unique and valuable,” Professor Wang says, “and I hope to closely collaborate with Microsoft researchers to scale up the current work. The proposed method has the potential to help us better understand communication behaviors in unconventional communication settings, such as cross-cultural and cross-linguistic communications, and in educational discourse, such as teacher-student interactions. Because language-based communication often doesn't go well in these situations, the non-verbal part may become more functional. Deeper understanding of the processes is likely to inform the design of technologies to better support these situations.”

Winnie Cui, Senior Program Manager, Microsoft Research Asia

[Click through to read the entire article]

Project Information URL: http://blogs.msdn.com/b/msr_er/archive/2014/06/10/kinect-for-windows-helps-decode-the-role-of-hand-gestures-during-conversations.aspx




Tilt and Smoothing Parameters WPF controls for your next Kinect Project


Today's Kinect for Windows v1 project is a work in progress found on CodePlex. Why re-invent when you can just reuse? :)

WPF Kinect User Controls

The current release contains two Kinect-related WPF UI controls to simplify developers' lives:

- Kinect tilt control - provides the ability to tilt the controller up and down using a slider,

- Kinect smoothing parameters - provides the ability to set advanced parameters for skeleton tracking; see http://msdn.microsoft.com/en-us/library/jj131024.aspx for details.

The controls are designed to minimize the amount of code and time needed to start working with them. All you need to do is instantiate them in XAML, set a few parameters and ... that's it. Both controls require a KinectSensor object instantiated in the code-behind, bound to their dependency properties.

A sample ColorStream solution is provided to explain how to use the controls; see the WpfKinectUserControlSample solution in the download section.

What can I do with those controls?

KinectTiltControl - provides a visual component (a slider) to set the current tilt of the attached Kinect sensor. It synchronizes with the sensor hardware in both directions (reading the current value on sensor attachment). The control can handle dynamic sensor changes at runtime.

KinectFilteringParamsControl - provides a visual component to configure the tracking parameters of the skeleton tracking mechanism embedded in the Kinect SDK. This includes all the parameters specified by the Kinect SDK (based on Kinect SDK v1.8): http://msdn.microsoft.com/en-us/library/jj131024.aspx . It also supports serialization and deserialization of those parameters to an XML file, to simplify experiment management.

Each control is bound to a KinectSensor instance, so you can use more than one to manage multiple Kinects independently.

Each control disables and enables itself automatically, based on whether an active Kinect sensor is attached.

Install and usage guide

Starting Kinect development with those controls is pretty simple:

  1. Download latest DLL
  2. Reference in your project
  3. Add namespace to your xaml window / control:
    xmlns:uc="clr-namespace:WpfKinectUserControls;assembly=WpfKinectUserControls"
  4. Instantiate the control in XAML (if you're using more than one Kinect in your project, that's no problem - put in as many as you need).
  5. Configure the controls - the only required parameter in the case of the KinectTiltControl is
    CurrentKinectSensor="{Binding Path=Kinect}"
    which binds it to your KinectSensor property.
    • In the case of the KinectFilteringParamsControl, there are five more properties you may want to set to provide default values to the control.
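Putting steps 3 to 5 together, a minimal window might look like the following sketch (the window class name is hypothetical; the CurrentKinectSensor property and the Kinect binding follow the project page):

```xml
<Window x:Class="KinectSample.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:uc="clr-namespace:WpfKinectUserControls;assembly=WpfKinectUserControls">
    <StackPanel>
        <!-- Both controls bind to a KinectSensor property named Kinect in the code-behind -->
        <uc:KinectTiltControl CurrentKinectSensor="{Binding Path=Kinect}" />
        <uc:KinectFilteringParamsControl CurrentKinectSensor="{Binding Path=Kinect}" />
    </StackPanel>
</Window>
```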

The SOURCE CODE section provides a sample solution.

Project Information URL: https://wpfkinect.codeplex.com/

Project Download URL: https://wpfkinect.codeplex.com/releases

Project Source URL: https://wpfkinect.codeplex.com/SourceControl/latest




Kinect for Windows v2 Live('ish)


Last week Ben Lower gave a live Kinect for Windows v2 presentation during dotnetConf 2014, which shows a lot of the new features of the Kinect for Windows v2 device and the coming SDK.

dotnetConf 2014 - Kinect for Windows

We will take a look at what's new in Kinect for Windows v2 including the improvements in core sources like Infrared and Depth data.  We will also show how the new Kinect Studio enables Kinect development even while travelling via plane, train, or automobile (note: you should not dev and drive) and how Kinect Interactions can be used to add a new input modality to Windows Store applications.

Project Information URL: http://channel9.msdn.com/Events/dotnetConf/2014/Kinect-for-Windows





Kinecting to Dinos


Today's inspirational project, highlighted by Rob Wolf, shows a pretty unique real-world implementation of the Kinect, and one that's really pretty cool. Like Rob says, who doesn't like dinos, especially Kinect-powered ones!

#ICreatedThis: Kinect-powered dinosaurs


Who doesn’t love dinosaurs? Matt Fisher and his team at KumoTek Robotics took the traditional interactive exhibit experience one step further in their recent Red Dirt Dinos exhibit for the Oklahoma Museum Network. Instead of creating dinosaurs that would only respond to actions based on the location, position and number of faces in a crowd, the dinosaurs in Matt’s exhibit detect full body gestures with the help of a Microsoft Kinect. This allows visitors to hold out their hands to experience friendly horse-like responses from herbivores and menacing lion-like responses from predators, raising the bar in interactivity for large scale robotics exhibits.

“Understanding the limitations of using only face detection, we decided to incorporate new sensors and interactive programs into our exhibits," Matt says. "Microsoft Kinect was the first and final candidate for a combined hardware and software platform that could both detect full-body gestures, while providing our development team with an open software framework to interface with our Windows 7 based Human Interaction System.”


Matt Fisher is featured on the Microsoft Facebook Page in #ICreatedThis, an ongoing series that showcases people doing interesting things at Microsoft and with Microsoft technology.  Know someone else doing something amazing? Tweet us @Microsoft using the #ICreatedThis hashtag or email the story to cmgsocial@microsoft.com.

Project Information URL: http://blogs.technet.com/b/firehose/archive/2014/06/26/icreatedthis-kinect-powered-dinosaurs.aspx

New Interactive Dinosaur Experience: Robots with Personality!

We are pleased to announce the successful launch of our next generation interactive robotic dinosaur exhibit: Red Dirt Dinos.


These fully interactive robotic dinosaurs are controlled exclusively by KumoTek's Human Interaction System and incorporate all of the capabilities seen in previous interactive dinosaur shows, plus full-body recognition, blazing fast Intel processors and never-before-seen behaviors.


Previous versions of KumoTek's Human Interaction System combined facial recognition software with our interactive behavior program to animate robotic dinosaurs that would detect and track guests in real-time. This technology was first deployed as RoboSUE at The Field Museum in Chicago in 2010 and welcomed over 500,000 guests during its first year of operation. RoboSUE was a huge success for KumoTek as it was featured prominently on the Discovery Channel and PBS.

Red Dirt Dinos is a significant step up in interactivity and technology. Through advanced sensor technology and recently added behavioral features, the interactive robotic dinosaurs are now capable of detecting full bodies, as well as hand, arm and leg gestures, while responding based on the location, size and respective behaviors of each guest.


Loud noises and movement around the dinosaur's field of view also play a part in determining how the creatures react and adds to their animal-like personalities. The effect is a much more immersive and seemingly realistic interactive experience.

Experience the cute and cuddly Tenontosaurus:

...hold a camera in front of her and watch her rear up on two hind legs and put on a spectacular show.

...hold out your hand and see her respond like a horse eating from your hand.

...gather around closely with your friends and watch her shy away into the nearby bushes.

These and many other interactive behaviors can be seen within the Red Dirt Dinos experience at the Science Museum Oklahoma or the Oklahoma Museum Network.

Project Information URL: http://kumotek.com/index.htm



