Channel: Coding4Fun Kinect Projects (HD) - Channel 9

Kinect'ing to you with BioGaming


Today's inspirational project provides another glimpse into how the Kinect continues to impact the medical industry. Who would have thought it would have this kind of impact when it was released a couple of years ago?

Playing for better health with BioGaming

Have you ever hurt your back, had a sports related injury, suffered a strain or sprain, broken a bone, or had surgery? If so, you have probably experienced some form of therapy delivered by clinicians who specialize in physical medicine and rehabilitation. Depending on your injury or condition, you may have started therapy in the hospital or in an outpatient clinic. Your therapy may have gone on for months and required frequent trips back and forth between your home and a physical therapy center. And let’s face it, the last thing you want to do when you’re already hurting is to haul yourself into a car or other transportation and go somewhere for therapy.

It’s no wonder then that as I travel the world I see so many technology solutions aimed at making physical therapy a better and easier experience. This is particularly true for solutions that include some sort of online, computer-based therapy that uses motion sensing technology such as Microsoft Kinect for Xbox or Windows PCs. In fact, I am aware of dozens of companies doing work in this space. Perhaps that’s not surprising considering that the US physical therapy market alone is valued at $29.4 billion. Market research puts a value of $0.5 to $1.0 billion annually for these new computer or gaming-based adjunct solutions for physical therapy and rehabilitation. The home-use market is expected to be an even larger opportunity.

One of the newer companies (2012) working in this space is an Israeli company called BioGaming. The company says its BioGaming platform is a cloud-based solution that lets physical therapists and trainers create personalized exercise programs that are automatically transformed into interactive, engaging games and instructions. BioGaming currently is piloting its technology at centers in the US and Israel.

Take particular note of the word “engaging”. You see,...


BioGaming creates fun, interactive therapy sessions that look and feel more like some of the best video games you’ve ever played. For patients, the BioGaming platform offers a variety of games from very basic and easy to exciting, graphics-rich, sophisticated challenges. The Kinect 3D sensor is able to monitor and analyze performance in real time so patients can receive feedback as they exercise and complete assignments.

...


If one examines the so-called “triple aim” of the health reform movement (higher quality, improved access to care, and lower cost of care) it is easy to see how BioGaming and other companies with a focus on improving the patient experience in physical medicine and rehabilitation programs are contributing to all three. Game on!

Project Information URL: http://blogs.msdn.com/b/healthblog/archive/2014/03/24/playing-for-better-health-with-biogaming.aspx

"Turning Microsoft Kinect into a physical therapy tool"


Today we highlight another example of the Kinect making it possible to provide cutting-edge therapy without a cutting-edge price.

Microsoft CityNext: Turning Microsoft Kinect into a physical therapy tool

In many cities around the world, officials are looking for ways to cope with the cost of caring for an aging population. The challenge will require health organisations to reimagine how they deliver care and track results. Often officials are worried they’ll be faced with a choice between lowering costs and improving care – but the city of Esbjerg, Denmark, found a way to do both.

Esbjerg is the first of about 20 Danish cities that are rolling out a solution from Welfare Denmark that uses Kinect for Windows and Microsoft Lync 2010 to allow patients to perform their physical therapy exercises from home. The Kinect’s camera allows physiotherapists to monitor patients’ progress through a routine incorporating any of 103 different exercises. In addition to verifying that the regimen is being followed, a therapist can use Microsoft Lync 2010 to offer feedback, provide specific instructions, and ensure a routine is performed correctly.

By allowing patients to do their therapy exercises in the comfort and safety of their own home, care providers can dramatically lower the cost of providing such services. In one case, a patient in Esbjerg was able to complete three months of therapy for almost $2,500 less than it would cost to provide the same service at a medical facility.

But the savings may not be limited to less expensive treatments. Welfare Denmark says that these kinds of telemedicine solutions can help prevent health problems from reoccurring, which means fewer costly hospital visits.

“This generation of elderly people are not interested in going to elderly homes,” says Ulrik Møll, Managing Director of Welfare Denmark. “We can help them remain at home and rehabilitate in a modern way through this solution.”

[See the full case study, Municipalities Adopt Home Health Care Solution, Reduce Rehabilitation Costs]

Project Information URL: http://blogs.msdn.com/b/microsoft_uk_health_blog/archive/2013/12/30/microsoft-citynext-turning-microsoft-kinect-into-a-physical-therapy-tool.aspx


"Kinect for Windows v2 Developer Preview入門 ― C++プログラマー向け連載 [C++ programmer series]"


Today's article series and project comes to us from Tsukasa Sugiura, a Kinect application developer in Japan whose work we've recently highlighted: "Kinect for Windows SDK 実践プログラミング [Practical Programming]"

Today there's more code and an entirely new article series on the Kinect for Windows v2 (in Japanese, but it machine translates well):

Kinect for Windows v2 Developer Preview入門 ― C++プログラマー向け連載 [C++ programmer series]


Project Information URL: http://www.buildinsider.net/small/kinectv2cpp

What's nice is that each article translates well enough to be easily readable.

Then there's the code, which always translates... :)

UnaNancyOwen / Kinect2Sample

Structure
The structure of this sample program is as follows.

    Kinect2Sample
    │
    │  // Sample Program
    ├─Sample
    │  │
    │  ├─Sample.sln
    │  │
    │  ├─Color
    │  │  ├─Color.vcxproj
    │  │  └─Color.cpp
    │  │
    │  ├─Depth
    │  │  ├─Depth.vcxproj
    │  │  └─Depth.cpp
    │  │
    │  ├─BodyIndex
    │  │  ├─BodyIndex.vcxproj
    │  │  └─BodyIndex.cpp
    │  │
    │  ├─Body
    │  │  ├─Body.vcxproj
    │  │  └─Body.cpp
    │  │
    │  ├─Infrared
    │  │  ├─Infrared.vcxproj
    │  │  └─Infrared.cpp
    │  │
    │  │  // Property Sheet
    │  ├─Sample.props
    │  │
    │  │  // License
    │  ├─License.txt
    │  │
    │  │  // Read Me
    │  └─ReadMe.txt
    │
    └─README.md


About Sample Program
This sample program is set up to build in the following environments.
Please adjust the settings as appropriate for your environment.

    Visual Studio Express 2013 for Windows Desktop
    Kinect for Windows SDK v2 Developer Preview 1311
    OpenCV 2.4.8

...

Project Source URL: https://github.com/UnaNancyOwen/Kinect2Sample



3D Scanner, with a little help from some LEGOs, a wiper motor and the Kinect


How can you go wrong combining LEGOs, a wiper motor, off the shelf software, 3d scanning, 3d printing and the Kinect?

Kinect + Wiper Motor + LEGO = 3D Scanner

[Christopher] from the Bamberg, Germany hackerspace, [Backspace], wrote in to tell us about one of the group’s most recent projects. It’s a Kinect-based 3D scanner (translated) that has been made mostly from parts lying around the shop.

There are two main components to the hardware side of this build: the Kinect Stand and the Rotating Platform. The Kinect sits atop a platform made from LEGO pieces. This platform rides up and down an extruded aluminum rail, powered by an old windshield wiper motor.

The Rotating Platform went through a couple of iterations. The first was an unpowered platform supported by five roller blade wheels. The lack of automatic rotation didn’t work out so well for scanning, so out came another windshield wiper motor, which was strapped to an old office chair with the seat replaced by a piece of MDF. This setup may not be the best for the acrophobic, but the scan results speak for themselves.

[Christopher] also shares the software workflow that the group uses to complete the 3D scans and print the models: Skanect for scanning, Meshlab and Meshmixer for editing the model, and KISSslicer to generate the G-code for the 3D printer.

Project Information URL: http://hackaday.com/2014/03/09/kinect-wiper-motor-lego-3d-scanner/ -> https://www.hackerspace-bamberg.de/3dme


Kinect for Windows v2 and the Kinect Studio


Bruno Capuano, Friend of the Gallery, shows off a new piece of the Kinect for Windows v2 SDK, an updated Kinect Studio...

[#KINECTONE] #KinectStudio in the refresh of #KinectSDK V2 (cool!)

[Machine Translated]

...a new release of the SDK for Kinect One is out. There were many requests in the backlog, and the product team decided to go for productivity. This release includes a version of Kinect Studio for Kinect One.

If you have worked with Kinect SDK 1.7 or higher, you surely know this app. Kinect Studio allows you to “record” a session with the Kinect sensor and then reuse it when you debug an application. This way, you avoid getting up in front of the Kinect every 5 minutes to test your application again and again.

Note: I’ve written a couple of posts about it; you can see them here and here.

This refresh of the SDK includes some other improvements as well… including this excellent application. One detail: it is much more complete than the Kinect v1 version; in this case the source is separated into IR, Depth, Camera, etc., and it is possible to analyze, frame by frame, what is received from the sensor.


Project Information URL: http://elbruno.com/2014/03/29/kinectone-kinectstudio-in-the-refresh-of-kinectsdk-v2-cool/


Build on the Kinect for Windows v2


The recent Build 2014 conference had some very interesting announcements for the Kinect for Windows v2 device. Tom Kerkhove, Kinect MVP, provided a great summary and roll-up in his post...

//BUILD/ 2014 – Introduction to Kinect, releasing this summer and support for Unity & Windows Store apps

This week the annual Microsoft event //build/ took place in San Francisco with tons of new announcements! A few of these announcements are focusing on Kinect for Windows.

Here is a small summary of BUILD 2014!

Develop Windows Store apps with Kinect for Windows!

The biggest announcement was that you will be able to develop Windows Store apps that use Kinect for Windows! You can do this with your favorite set of tools, e.g. XAML & C#.

Unfortunately, you will not be able to target Windows RT devices; this is because the app still requires the Kinect drivers & runtime to be installed on the device. On top of that, RT tablets lack the CPU/GPU bandwidth that is required. Keep in mind that you will have to check for the sensor when your app starts in order to maintain a nice user experience.

At BUILD they also announced the universal apps that you can build for Windows 8.1, Windows Phone 8.1 and Xbox One. This is not possible at the moment for Kinect for Windows, only desktop and Windows Store apps!

Introducing Unity support

On top of the currently supported tools you are now also able to develop Unity applications for the desktop or Windows Store apps by using a Unity-plugin!

...

//BUILD/ Sessions
Kinect 101 – Introduction to Kinect for Windows

Chris White gave an introduction session on Kinect for Windows, talking about the capabilities of the sensor & what the hardware offers. Next, he gave the audience an idea of what applications are currently in use and how they are built. Last but not least, he gave some demos comparing the two generations and pointed out the differences between them!

You can watch his session here.

Bringing Kinect into Your Windows Store App

The second Kinect for Windows session at //build/ was Kevin Kennedy's, on how you can build Windows Store apps that use Kinect for Windows!

He starts off with a quick introduction to Kinect and a high-level overview of the architecture, after which he explains how the Kinect methodology works. In the coding demos he creates a Windows Store app that displays the infrared stream and maps the HandLeft joint onto it by using the BodyStream!

Very nice introduction to Windows Store apps and well explained!
This session is available on-demand here.

Start coding this summer!

They also announced that Kinect for Windows Gen. II will be generally available this summer! ...

Getting Started & Additional information

Want to get started? Feel free to read some of my introduction posts -

  • Create your own Kinect Television with Gen. I Kinect for windows here!
  • Read my introduction to the Kinect for Windows Gen. II streams here
  • Read how I started developing an application to control my AR Drone here

Here is some additional information -

  • Read the official statement – Windows Store app development is coming to K4W (post)
  • Read the official statement – BUILDing business with Kinect for Windows v2 (post)
  • Download the Kinect for Windows Gen. I SDK (link)
  • Download the Kinect for Windows Gen. I Toolkit (link)
  • Download the Human Interface Guidelines (link)

Project Information URL: http://www.kinectingforwindows.com/2014/04/07/build-2014-introduction-to-kinect-releasing-this-summer-and-support-for-unity-windows-store-apps/


Heremo - Modern rehabilitation with Kinect for Windows


Today's project is one that reaches out to me on a number of levels: it's a commercial app for the Kinect for Windows v1, it has a free version, it shows how the Kinect continues to be used outside of the gaming industry, it helps people help themselves, and it's physical therapy related (my daughter is a Dr. of PT ;)

Heremo - Modern rehabilitation with Kinect for Windows

Modern rehabilitation with Kinect for Windows.

No clutter. Straight to the point.

Quick and reliable

Heremo is compatible with computers running Windows 7 or Windows 8.

Undefined number of exercises.

The exercise sets are tailored to your needs and demands. Choose the perfect one!

Biofeedback

Heremo will inform you about the accuracy of the performed exercises as well as award you points for them.

What is Heremo?

Heremo (derived from Health Rehabilitation Motion) is a project that utilizes the Microsoft Kinect motion sensor to supervise exercises performed by the user. The range of possible uses is quite wide, as the software can be used for rehabilitation exercises as well as physiotherapy. Fans of stationary physical activities will also be able to participate in Heremo by choosing a routine that suits them.

How does Heremo work?

Heremo utilizes the data received from the Microsoft Kinect motion sensor, which focus on the movement of different body parts. The system compares them to pre-recorded data, which – after crucial calculations – allows it to assess the accuracy of particular exercises.
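Heremo's actual comparison algorithm isn't published in this post. Purely as an illustration of the idea of comparing live joint data against pre-recorded reference data, here is a minimal sketch; the function name, joint names, and the 10 cm tolerance are my assumptions, not Heremo's:

```python
import math

def pose_accuracy(live, reference, tolerance=0.10):
    """Fraction of reference joints whose live position (x, y, z in meters)
    is within `tolerance` meters of the pre-recorded position."""
    hits = sum(math.dist(live[j], reference[j]) <= tolerance for j in reference)
    return hits / len(reference)

# Reference pose: both hands raised; live pose: right hand has dropped too low.
ref  = {"hand_left": (0.2, 1.1, 1.5), "hand_right": (-0.2, 1.1, 1.5)}
live = {"hand_left": (0.22, 1.08, 1.5), "hand_right": (-0.2, 0.7, 1.5)}
print(pose_accuracy(live, ref))  # 0.5 – left hand matches, right hand does not
```

A real system would of course score whole movement sequences over time, not a single frame, but the per-joint distance check is the core of the idea.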

What kinds of issues does Heremo address?

  1. Patients with motor dysfunctions, who have difficulties with paying frequent visits to specialists, will now be able to conduct many of the necessary exercises in their own homes.
  2. Having recommended a particular set of exercises to be performed in a defined period of time, the specialist is granted access to the statistics allowing him to follow and measure his patient’s progress.
  3. Hospitals, clinics or institutions collaborating with NatuMed Sp. z o.o. [Ltd] are available to develop their own exercise sets dedicated to particular ailments. The aforementioned establishments are then able to recommend to their patients the performing of exercises prior to their appointment or – depending on the ailment – perform the exercises at home and brief their doctor electronically.
  4. Those wishing to develop their physique can work out at home utilizing technology’s latest inventions, design their own workout routines and select exercise sets they currently feel like performing. Preparing for the skiing or bicycling season? Feeling the need for a quick warm-up before going out for a swim or a run? Choose the right exercise set and start your workout.
  5. Clients of fitness clubs or gyms, who want to properly warm-up before their workout, can receive the assistance of a virtual trainer. The implementation of Heremo into such places based on exercises prepared by the institutions might serve as a refreshing functionality and emphasize the business’ modern spirit.

What do I need to start using Heremo?

To start using Heremo you need a Kinect for Windows sensor, a computer running Windows 7 or Windows 8, as well as a monitor/television set to see yourself on the screen. If you wish to brief your doctor about your progress, you will also need Internet access.

Is Heremo available for individual clients?

Indeed, Heremo is available for individuals, who will be given the option to buy particular exercise sets and run them on their private computers in their homes.

...

Project Information URL: http://heremo.com/

Project Download URL: http://heremo.com/


Kinect to Windows Store App development


Today's post goes into a little more depth on the Kinect for Windows v2 Windows Store App future

Windows Store app development is coming to Kinect for Windows

Today at Microsoft BUILD 2014, Microsoft made it official: the Kinect for Windows v2 sensor and SDK are coming this summer (northern hemisphere). With it, developers will be able to start creating Windows Store apps with Kinect for the first time. The ability to build such apps has been a frequent request from the developer community. We are delighted that it’s now on the immediate horizon—with the ability for developers to start developing this summer and to commercially deploy their solutions and make their apps available to Windows Store customers later this summer.

The ability to create Windows Store apps with Kinect for Windows not only fulfills a dream of our developer community, it also marks an important step forward in Microsoft’s vision of providing a unified development platform across Windows devices, from phones to tablets to laptops and beyond. Moreover, access to the Windows Store opens a whole new marketplace for business and consumer experiences created with Kinect for Windows.

The Kinect for Windows v2 has been re-engineered with major enhancements in color fidelity, video definition, field of view, depth perception, and skeletal tracking. In other words, the v2 sensor offers greater overall precision, improved responsiveness, and intuitive capabilities that will accelerate your development of voice and gesture experiences.

Specifically, the Kinect for Windows v2 includes 1080p HD video, which allows for crisp, high-quality augmented scenarios; a wider field of view, which means that users can stand closer to the sensor—making it possible to use the sensor in smaller rooms; improved skeletal tracking, which opens up even better scenarios for health and fitness apps and educational solutions; and new active infrared detection, which provides better facial tracking and gesture detection, even in low-light situations.

The Kinect for Windows v2 SDK brings the sensor’s new capabilities to life:

  • Windows Store app development: Being able to integrate the latest human computing technology into Windows apps and publish those to the Windows Store will give our developers the ability to reach more customers and open up access to natural user experiences in the home.
  • Unity Support: We are committed to supporting the broader developer community with a mix of languages, frameworks, and protocols. With support for Unity this summer, more developers will be able to build and publish their apps to the Windows Store by using tools they already know.
  • Improved anatomical accuracy: With the first-generation SDK, developers were able to track up to two people simultaneously; now, their apps can track up to six. And the number of joints that can be tracked has increased from 20 to 25 joints per person. Lastly, joint orientation is better. The result is skeletal tracking that’s greatly enhanced overall, making it possible for developers to deliver new and improved applications with skeletal tracking, which our preview participants are calling “seamless.”
  • Simultaneous, multi-app support: Multiple Kinect-enabled applications can run simultaneously. Our community has frequently requested this feature and we’re excited to be able to give it to them with the upcoming release.

Developers who have been part of the Kinect for Windows v2 Developer Preview program praise the new sensor’s capabilities, which take natural, human computing to the next level. We are in awe and humbled by what they’ve already been able to create.

Technologists from a few participating companies are on hand at BUILD, showing off the apps they have created by using the Kinect for Windows v2. See what two of them, Freak’n Genius and Reflexion Health, have already been able to achieve, and learn more about these companies.

The v2 sensor and SDK dramatically enhance the world of gesture and voice control that were pioneered in the original Kinect for Windows, opening up new ways for developers to create applications that transform how businesses and consumers interact with computers. If you’re using the original Kinect for Windows to develop natural voice- and gesture-based solutions, you know how intuitive and powerful this interaction paradigm can be. And if you haven’t yet explored the possibilities of building natural applications, what are you waiting for? Join us as we continue to make technology easier to use and more intuitive for everyone.

The Kinect for Windows Team


[GD: Post copied in full]

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/04/02/windows-store-app-development-is-coming-to-kinect-for-windows.aspx




Kinect.ReactiveV2 for the Kinect for Windows v2


This week is going to be a v2 theme week, with Friend of the Gallery, Marcus Kohnert kicking it off, showing off his Rx skills with the Kinect.

Other times he's been featured;

Kinect.ReactiveV2 – Rx-ing the Kinect for Windows SDK

A few weeks ago I was finally able to get my hands on to the new Kinect for Windows V2 SDK. There are a few API changes compared to V1. So I started to port Kinect.Reactive to the new Kinect for Windows Dev Preview SDK and Kinect.ReactiveV2 was born.

Kinect.ReactiveV2 is, like its older brother, a project that contains a bunch of extension methods that should ease development with the Kinect for Windows SDK. The project uses the Reactive Extensions (an open source framework built by Microsoft) to transform the various Kinect reader events into IObservable<T> sequences. This transformation enables you to use Linq style query operators on those events.

Here is an example of how to use the BodyIndexFrame data as an observable sequence.


You’ll also get an extension method called SceneChanges() on every KinectSensor instance, which notifies all its subscribers whenever a person enters or leaves the scene.
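The library itself is C# built on Rx, and its real examples live in the linked post and repository. Purely to illustrate the underlying idea — wrapping a push-based "frame arrived" event into a composable, subscribable sequence — here is a small language-agnostic sketch; the class and method names are mine, not Kinect.ReactiveV2's:

```python
class FrameObservable:
    """A toy observable: the sensor event handler pushes frames in,
    subscribers receive them, and operators like where() compose."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, on_next):
        self._subscribers.append(on_next)

    def where(self, predicate):
        # Build a downstream sequence that only sees frames matching the predicate.
        filtered = FrameObservable()
        self.subscribe(lambda frame: predicate(frame) and filtered.push(frame))
        return filtered

    def push(self, frame):
        # This is what the sensor's frame-arrived event handler would call.
        for on_next in list(self._subscribers):
            on_next(frame)

frames = FrameObservable()
seen = []
frames.where(lambda f: f["bodies"] > 0).subscribe(seen.append)
frames.push({"bodies": 0})
frames.push({"bodies": 2})
print(seen)  # [{'bodies': 2}] – only frames with at least one tracked body
```

Rx provides exactly this kind of filtering and projection over the Kinect reader events, which is why Linq-style queries become possible.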


...

Please be aware that “This is preliminary software and/or hardware and APIs are preliminary and subject to change”.

[Click through for the entire post]

Project Information URL: http://passiondev.wordpress.com/2014/04/07/kinect-reactivev2-rx-ing-the-kinect-for-windows-sdk/

Project Download URL: https://www.nuget.org/packages/Kinect.ReactiveV2/

Project Source URL: https://github.com/MarcusKohnert/Kinect.ReactiveV2


"BUILDing business with Kinect for Windows v2"


Today the Kinect for Windows team highlights two companies that are building their business on the Kinect for Windows v2.

BUILDing business with Kinect for Windows v2

BUILD—Microsoft’s annual developer conference—is the perfect showcase for inventive, innovative solutions created with the latest Microsoft technologies. As we mentioned in our previous blog, some of the technologists who have been part of the Kinect for Windows v2 developer preview program are here at BUILD, demonstrating their amazing apps. In this blog, we’ll take a closer look at how Kinect for Windows v2 has spawned creative leaps forward at two innovative companies: Freak’n Genius and Reflexion Health.


Freak’n Genius is a Seattle-based company whose current YAKiT and YAKiT Kids applications, which let users create talking photos on a smartphone, have been used to generate well over a million videos.

But with Kinect for Windows 2, Freak’n Genius is poised to flip animation on its head, by taking what has been highly technical, time consuming, and expensive and making it instant, free, and fun. It’s performance-based animation without the suits, tracking balls, and room-size setups. Freak’n Genius has developed software that will enable just about anyone to create cartoons with fully animated characters by using a Kinect for Windows v2 sensor. The user simply chooses an on-screen character—the beta features 20 characters, with dozens more in the works—and animates it by standing in front of the Kinect for Windows sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character.

What’s more, with the ability to create Windows Store apps, Kinect for Windows v2 stands to bring Freak’n Genius’s improved animation applications to countless new customers. ...

...

Reflexion Health, based in San Diego, uses Kinect for Windows to augment their physical therapy program and give the therapists a powerful, data-driven new tool to help ensure that patients get the maximum benefit from their PT. Their application, named Vera, uses Kinect for Windows to track patients’ exercise sessions. The initial version of this app was built on the original Kinect for Windows, but the team eagerly—and easily—adapted the software to the v2 sensor and SDK. The new sensor’s improved depth sensing and enhanced skeletal tracking, which delivers information on more joints, allows the software to capture the patient’s exercise moves in far more precise detail.  It provides patients with a model for how to do the exercise correctly, and simultaneously compares the patient’s movements to the prescribed exercise. The Vera system thus offers immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way.  The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.

Not only does the Kinect for Windows application provide better results for patients and therapists, it also fills a need in an enormous market. PT is a $30 billion business in the United States alone—and a critical tool in helping to manage the $127 billion burden of musculoskeletal disorders. By extending the expertise and oversight of the best therapists, Reflexion Health hopes to empower and engage patients, helping to improve the speed and quality of recovery while also helping to control the enormous costs that come from extra procedures and re-injury. Moreover, having the Kinect for Windows v2 supported in the Windows Store stands to open up home distribution for Reflexion Health. 

Mark Barrett, a lead software engineer at Reflexion Health, is struck by the rewards of working on the app. Coming from a background in the games industry, he now enjoys using Kinect technology to “try and tackle such a large and meaningful problem. That’s just a fantastic feeling.”  ...

...

From creating your own animations to building a better path for physical rehabilitation, the Kinect for Windows v2 sensor is already in the hands of thousands of developers. We can’t wait to make it publicly available this summer and see what the rest of you do with the technology.

The Kinect for Windows Team


[Click through for the entire post]

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/04/02/build-more-with-kinect-for-windows-v2.aspx


Kinect for Windows v2 will make you green (with envy at its background removal features)


The last project in our v2 week comes from Vangos Pterneas, who shows off the new background removal and replacement capabilities...

Other recent Gallery Posts for Vangos Pterneas;

Background removal using Kinect 2 (green screen effect)

Throughout the past few days, I got many requests about Kinect color to depth pixel mapping. As you probably already know, Kinect streams are not properly aligned. The RGB and depth cameras have a different resolution and their point of view is slightly shifted. As a result, more and more people have been asking me (either in the blog comments or by email) about properly aligning the color and depth streams. The most common application they want to build is a cool green-screen effect, just like the following video:

As you can see, the pretty girl is tracked by the Kinect sensor and the background is totally removed. I can replace the background with a solid color, a gradient fill, or even a random image!

Nice, huh? So, I created a simple project that maps a player’s depth values to the corresponding color pixels. This way, I could remove the background and replace it with something else. The source code is hosted on GitHub as a separate project. It is also part of Vitruvius.

Requirements

How background removal works

When we refer to “background removal”, we need to keep the pixels that form the user and remove everything else that does not belong to the user. The depth camera of the Kinect sensor comes in handy for determining a user’s body. However, we need to find the RGB color values, not the depth distances. We need to specify which RGB values correspond to the user’s depth values. Confused? Don’t be.

Using Kinect, each point in space has the following information:

  • Color value: Red + Green + Blue
  • Depth value: The distance from the sensor

The depth camera gives us the depth value and the RGB camera provides us with the color value. We map those values using CoordinateMapper. CoordinateMapper is a useful Kinect property that determines which color value corresponds to each depth distance (and vice versa).

Please note that the RGB frames (1920×1080) are wider than the depth frames (512×424). As a result, not every color pixel has a corresponding depth mapping. However, body tracking is performed primarily using the depth sensor, so no need to worry about missing values.
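The mapping-plus-masking idea can be sketched in a few lines. This is not the SDK's CoordinateMapper API or Vangos's actual code, just an illustration of the logic on tiny hand-made frames: walk the depth-to-color mapping and keep only the color pixels whose depth pixel belongs to a tracked body (in the Kinect v2 body index frame, 255 means "no body"):

```python
GREEN = (0, 255, 0)

def remove_background(color_pixels, body_index, depth_to_color_map, background=GREEN):
    """Keep color pixels that map back to a user's depth pixel; paint the rest.
    depth_to_color_map[i] is the color-pixel index for depth pixel i,
    or None where no mapping exists (the frames have different sizes)."""
    output = [background] * len(color_pixels)
    for depth_idx, color_idx in enumerate(depth_to_color_map):
        if color_idx is None:
            continue
        if body_index[depth_idx] != 255:       # this depth pixel is part of a user
            output[color_idx] = color_pixels[color_idx]
    return output

# A 4-pixel "frame": depth pixels 1 and 2 belong to a tracked body.
color   = [(10, 10, 10), (20, 20, 20), (30, 30, 30), (40, 40, 40)]
body    = [255, 0, 0, 255]
mapping = [0, 1, 2, 3]                          # identity map for this sketch
print(remove_background(color, body, mapping))  # user pixels kept, rest green
```

Swapping the solid `background` color for pixels from another image gives the green-screen replacement effect the video shows.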

The code

In the GitHub project I shared, you can use the following code to remove the background and get the green-screen effect:

...


PS: Vitruvius

The BackgroundRemovalTool is part of Vitruvius, an open-source library that will speed up the development of your Kinect projects. Vitruvius supports both version 1 and version 2 sensors, so you can use it for any kind of Kinect project. Download it and give it a try.

PS 2: New Kinect book - 20% off [Scroll down to the bottom of the post]

[Click on through for the source and more]

Project Information URL: http://pterneas.com/2014/04/11/kinect-background-removal/

Project Download URL: https://github.com/Vangos/kinect-2-background-removal

Project Source URL: https://github.com/Vangos/kinect-2-background-removal 

Contact Information:

"Kinect to Midi"


Today's Kinect for Windows v1 project is one that lets you make music with your movements. Can you just imagine the uses for this? And the best part is that the source is available, so you can not only play with it, but also see how it was done.

Kinect to Midi

Summary

In general, this app receives Kinect data as input and sends MIDI signals as output. Using Kinect to Midi, you can build a collection of conditions and have MIDI signals sent when the skeleton joints' coordinates satisfy all of the specified conditions.

image

Conditions

Within a single block, you can add multiple conditions. Each condition can be one of two types:

· Skeleton point to coordinate

· Skeleton point to skeleton point

All coordinates are in meters.

With the skeleton point to coordinate condition type, you specify a cuboid and a skeleton joint whose coordinates are compared against that cuboid.

image

With the skeleton point to skeleton point condition type, you can compare two skeleton joints using a specified radius and a shift of the second point's center.

image
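The two condition types boil down to simple geometry. Here is a hedged sketch of that geometry in Python; the function names and signatures are mine, not from the Kinect to Midi source.

```python
import math

# Illustrative checks for the two condition types described above.
# Joints are (x, y, z) tuples in meters, matching the app's units.

def in_cuboid(joint, cuboid_min, cuboid_max):
    """Skeleton point to coordinate: is the joint inside the cuboid?"""
    return all(lo <= c <= hi
               for c, lo, hi in zip(joint, cuboid_min, cuboid_max))

def near_joint(joint_a, joint_b, radius, shift=(0.0, 0.0, 0.0)):
    """Skeleton point to skeleton point: is joint_a within `radius`
    of joint_b after shifting joint_b's center by `shift`?"""
    center = tuple(b + s for b, s in zip(joint_b, shift))
    return math.dist(joint_a, center) <= radius
```

A block would then simply AND together the results of all its conditions for the current skeleton frame.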

Midi

If all conditions in the block are met, the specified MIDI signals are sent. Each of these signals can be one of two types:

· CC (Control Change)

image

· MIDI Note

By specifying event type, you can define when the signal should be sent:

· In – the signal will be sent right after all conditions in the block are met.

· Over – the signal will be sent each time the program receives updated skeleton joint coordinates that satisfy the specified conditions.

· Out – the signal will be sent when all conditions were met a moment ago but are no longer met.
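The three event types are classic edge-trigger semantics: compare the condition result of the previous skeleton update with the current one. A minimal sketch of that logic (my naming, not the app's):

```python
# Decide which event types fire on a single skeleton-frame update,
# given whether the block's conditions were met before and are met now.

def events_for_update(was_met, is_met):
    """Return the list of event types that fire on this update."""
    events = []
    if is_met and not was_met:
        events.append("In")    # conditions just became true
    if is_met:
        events.append("Over")  # fires on every update while true
    if was_met and not is_met:
        events.append("Out")   # conditions just stopped being true
    return events
```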

Expressions

For the minimum and maximum values of a MIDI CC, instead of constants you can use simple mathematical expressions involving the X, Y and Z coordinates of a specified joint (for example: (2 + (X^10)*Y)/Z).
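To make the expression feature concrete, here is one way such an expression could be evaluated for a joint's coordinates. This is a sketch only; the app presumably has its own parser, and reusing Python's `eval` like this is only safe for expressions you wrote yourself.

```python
# Evaluate a CC expression such as "(2 + (X^10)*Y)/Z" for a joint.
# The app's `^` power operator is mapped to Python's `**`.

def eval_cc_expression(expr, x, y, z):
    python_expr = expr.replace("^", "**")
    # Restrict the namespace to just the joint coordinates.
    return eval(python_expr, {"__builtins__": {}}, {"X": x, "Y": y, "Z": z})

# The example expression from the text, with illustrative coordinates.
value = eval_cc_expression("(2 + (X^10)*Y)/Z", x=1.0, y=3.0, z=2.0)
```

The result would then be clamped to the 0–127 range that MIDI CC values allow.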

Other features

· Saving/loading all blocks and MIDI settings to/from a file.

· Possibility to specify the MIDI port that should be used with Kinect to MIDI.

· Possibility to open the Kinect depth/skeleton stream in a separate window. Press Enter when the window is focused to switch to full-screen mode, and Esc to return.

Prerequisites

  • Windows 7, Windows 8, Windows 8.1
  • Kinect for Windows
  • Kinect for Windows Runtime v1.8 or higher
  • .NET Framework 4.5

Project Information URL: https://kinecttomidi.codeplex.com/

Project Download URL: https://kinecttomidi.codeplex.com/releases/

Project Source URL: https://kinecttomidi.codeplex.com/SourceControl/latest

MyRobotLab Kinects

Home Automation, Kinect powered


Today's project is from Friend of the Gallery, Dan Thyer and how he's improving his home automation system with some help from the Kinect, the Kinect Point Cloud, Netduino and more.

Home Automation with Microsoft Kinect Point Cloud and Speech Recognition

I love using the Microsoft Kinect for my home automation projects. The Microsoft Kinect for Windows API is really amazing and can be used to track our movement in the physical world in unique and creative ways outside of the traditional game controller. Traditional computer vision systems are too slow to track normal human motion, but the Kinect can give you the coordinates of 20 joints 30 times a second. The Kinect simplifies the computer vision problem by creating what is called a Point Cloud out of infrared light. This infrared light is similar to visible light but has a longer wavelength than we can see. The Point Cloud can be seen with a special camera or night-vision goggles, as shown in the image below.

image

The Kinect has a special lens that projects a known pattern of spaced lines of infrared light. The light makes dots on the objects it strikes, creating a Point Cloud. The Kinect has a special camera for seeing the infrared dots. Its vision system measures the distance between the dots and analyzes the displacement in the pattern to determine how far away an object is. In the image below, note how nearby objects have dots spaced closer together, while distant objects have them spaced further apart. By analyzing the spacing of the infrared dots, the Kinect builds a depth map and can quickly pick out a human outline, because the human stands in front of other objects.

image
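The structured-light principle described above is triangulation: the dot pattern's shift (disparity) is inversely proportional to distance. A rough sketch of the relationship, with illustrative numbers for the focal length and the projector-camera baseline (the real calibration constants differ):

```python
# Depth from dot-pattern displacement, the core of structured-light
# sensing: depth = focal_length * baseline / disparity.
# focal_px and baseline_m below are illustrative values, not the
# Kinect's actual calibration.

def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Estimate distance in meters from the observed dot shift in pixels."""
    return focal_px * baseline_m / disparity_px
```

This is why nearby objects show large pattern shifts (tightly resolved depth) while far objects show small ones, and why depth precision degrades with distance.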

Create a Natural UI with the Kinect

There are some great user interfaces with the Kinect but most require you to be looking at a computer screen. I built a system that does not require you to look at a computer in order to select a device and turn it on or off. You can simply point to a device with one hand and raise your other hand above your head and wave one direction to turn on and the other direction to turn off. In addition to using gestures, I use the Kinect speech recognition engine to turn devices on or off.

...

Z-Wave

Z-Wave is a wireless communications protocol designed for home automation, and there are tons of commercially available Z-Wave devices. The LogicalLiving.ZWave project contains a class library to control Z-Wave devices through the Aeon Labs Z-Stick Z-Wave USB adapter. I purchased the USB adapter online for less than fifty dollars, and all of my Z-Wave devices were also each under fifty dollars. I installed the Z-Wave devices by turning off the power in the house and then replacing the standard light switches and power outlets with Z-Wave devices. I wrote the LogicalLiving.Zwave.DesktopMessenger project as a sample Windows Forms UI for the LogicalLiving.ZWave class library. The DesktopMessenger is also useful for figuring out the values of the Z-Wave DeviceNodes: each Z-Wave device has its own unique DeviceNode, which is required in any Z-Wave message that changes its state.

Netduino

Netduino is a wonderful open-source electronics prototyping platform based on the .NET Micro Framework. I use the Netduino Plus 2 and a custom circuit that I built to control many devices in my home, including turning on my fireplace, aiming a squirt gun at the pool, watering the garden and opening the garage door. I use the Kinect.Living gestures and audio commands for turning on the fireplace. We have a new kitten in the house who is very interested in the fireplace, and I quickly became concerned that she would crawl into it at the wrong time, while someone was doing the gesture or audio command to turn it on. For safety, I wired up a mesh screen curtain that she cannot get behind! Please read my previous articles on the Netduino and jQuery Mobile:

Summary

It is really fun to use the Microsoft Kinect for Windows API for home automation projects. This project presents a much more natural UI for controlling the devices in your house, and it is really nice to be able to control them without needing a remote control. In our house, the remote control is always lost somewhere in the sofa, but that's no longer a worry: with the Microsoft Kinect and this project, you are the remote control for the entire house.

The ideas in this article can reach far beyond home automation. There are many other useful applications for having a computer know what object you are pointing to. We are living in an exciting time where vision systems are packaged into inexpensive, readily available devices such as the Microsoft Kinect. The Kinect and the Kinect for Windows SDK enable us to build, with minimal effort, incredible applications that would have seemed like science fiction 10 years ago.

[Click through to the full post to see all the details, source and more]

Project Information URL: http://www.codeproject.com/Articles/715858/Home-Automation-with-Microsoft-Kinect-Point-Cloud

Project Download URL: http://www.codeproject.com/Articles/715858/Home-Automation-with-Microsoft-Kinect-Point-Cloud

Project Source URL: http://www.codeproject.com/Articles/715858/Home-Automation-with-Microsoft-Kinect-Point-Cloud

Contact Information:

"Kinect Client Server System" v0.2 and the 3D-TV app


Today we're covering an update to a project from Marc Drossaers that we originally highlighted a couple of years ago (wow, really, years? Yep, we've been at this since 2011...). "Kinect Client Server System" has been rev'd to v0.2 and gained a cool example app too.

Related: we also recently mentioned his Jitter Filter for the Kinect project.

Kinect Client Server System

This system shows 'Kinect for Windows' data in a WinRT application. Using the keyboard, you can navigate through the scene that is registered by the Kinect.

Version 0.2 has been released. Please find details in three blog posts:

The Kinect Client Server system consists of a desktop application and a WinRT application. The desktop application is a server that sends Kinect for Windows data over a WinSock2 socket. The WinRT application is a client that connects to the server to receive and show the Kinect data.

You can download the server binary from here and run it. You can also download a compiled version of the client (named 3D-TV) directly from here. You will need a developer license to install it on your Windows 8 PC. Only the x86 platform is currently supported. A Windows Store entry for the client will come in time (removing the requirement of a developer license).

You can also download the source code and build the system yourself. The client application requires the WinRT DirectX Bus to build, so you will have to download that too and use it as described. The project file assumes that the TheByteKitchenLibs directory (either the source code or the compiled version) is a sibling of the KinectClientServer directory.

Project Information URL: https://thebytekitchen.codeplex.com/

Project Download URL: https://thebytekitchen.codeplex.com/releases

Project Source URL: https://thebytekitchen.codeplex.com/SourceControl/latest

Kinect Client Server System V0.2

The Kinect Client Server System V0.2 extends V0.1 with the ability to watch Kinect Color and Depth data over a network, and to navigate the rendered 3D scene.

To support data transfer over TCP, the Kinect Client Server system (KCS system) contains a custom-built implementation of Run-Length Encoding (RLE) compression.

To both maximize compression and improve image quality, the KCS system uses a jitter filter.
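The KCS encoder itself isn't shown in the post, but the general RLE technique is simple: collapse each run of identical values into a (count, value) pair. This is a minimal sketch of that idea; a jitter filter helps precisely because it stabilizes depth values, producing longer runs.

```python
# Minimal run-length encoding sketch: each run of equal values
# becomes a (count, value) pair. Works well on depth rows whose
# noise has been suppressed by a jitter filter.

def rle_encode(values):
    runs = []
    for v in values:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, v])       # start a new run
    return [(count, v) for count, v in runs]

def rle_decode(runs):
    out = []
    for count, v in runs:
        out.extend([v] * count)
    return out
```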

Introduction

Version 0.1 of the KCS system allowed the display of Kinect data in a Windows Store app. This is a restricted scenario: for security reasons, a Windows Store app cannot make a network connection to the same PC it is running on, except in software development scenarios. Version 0.2 overcomes this restriction by:

1. Supporting the viewing of Kinect data from another PC.

2. Providing the 3D-TV viewer through the Windows Store (free of charge).

Of course, V0.2 is an open-source project; the code and binaries can be downloaded from The Byte Kitchen’s project at CodePlex.

Usage

The easiest way to start using the KCS system v0.2 is to download 3D-TV from the Windows Store, navigate to the Help-About screen (via the ‘Settings’ popup), click on the link to the online manual and follow the stepwise instructions.

The general usage schema is depicted below.

image

...

3D-TV Manual

3D-TV is a remote client for viewing integrated Kinect Color and Depth data. The intended use is to run 3D-TV on one PC (the client) and the KinectColorDepthServer software on another PC (the server) to which a Kinect is attached. The client and the server are to be connected by gigabit Ethernet.

Server requirements

In order to run the KinectColorDepthServer application on your server PC, the following steps are required:

  1. Download and install the Kinect runtime.
  2. Download and install the DirectX end-user runtime (June 2010), if required.
  3. Download the KinectColorDepthServer executable.
  4. Install the applicable VC++ redistributable (included in the KinectColorDepthServer package).
  5. Copy the server executable and DLL to a suitable location on your PC.
  6. Connect your Kinect for Windows.
  7. Configure a gigabit LAN adapter to use IPv4 address 192.168.0.20; 3D-TV is hard-coded to connect to that address. Typical network throughput is over 100 Mbit/s.
  8. Connect the cable to your client PC, directly or indirectly.
  9. Double-click KinectColorDepthServerApp.exe.

The KinectColorDepthServer application should now display a message stating that it is waiting for connections.

Client requirements

In order to run the 3D-TV application on your client PC, the following steps are required:

...

Navigation

Once the client is rendering Kinect frames, you can navigate through the scene.

  • Use the arrow buttons to retarget the virtual camera.
  • Alternatively, use the A and D keys to turn left and right.
  • Alternatively, use the R and F keys to turn up and down.
  • Use the W and S keys to move forward and backward.

Contact Information:


Real-time scanning with the Kinect v2 (and gnomes)


Friend of the Gallery ReconstructMe is back, this time showing off real-time 3D scanning with the coming Kinect for Windows v2 device.

Some of the other times we've mentioned ReconstructMe, directly or indirectly:

Real-time 3D scanning stuns the gnome world

Garden gnomes: they decorate our yards, take bizarre trips, and now can be scanned in 3D in real time by using readily available computer hardware, as can be seen in this video from ReconstructMe. The developers employed the preview version of the Kinect for Windows v2 sensor and SDK, taking advantage of the sensor’s enhanced color and depth streams. Instead of directly linking the input of the Kinect with ReconstructMe, they streamed the data over a network, which allowed them to decouple the reconstruction from the data acquisition.

Developer Christoph Heindl (he’s the one holding the gnome in the video) notes that the ReconstructMe team plans to update this 3D scanning technology when the Kinect for Windows v2 is officially released this summer, saying, “We’re eager to make this technology widely available upon the release of Kinect for Windows v2.”

Heindl adds that this real-time process has potential applications in 3D scanning, 3D modelling through gestures, and animation. Not to mention the ability to document gnomic travels in 3D!

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/04/30/real-time-3d-scanning-stuns-the-gnome-world.aspx

Contact Information:

The Forest Project, Unreal 4 and the Kinect, could help in Alzheimer's and dementia care


Today's article from Rich Edmonds highlights a crowdfunding project, A virtual forest for dementia, that provides a look at how care for Alzheimer's and dementia might be improved with help from the Kinect for Windows v2. It's days away from closing and might not hit its target, but having been personally touched by Alzheimer's (twice), I have to applaud their effort and hope they make it...

New Alzheimer's and dementia care project powered by Unreal Engine 4 and Microsoft's Kinect

image

There are two serious illnesses affecting the elderly around the world – Alzheimer's and dementia. While cures would be the perfect option for patients suffering from these cognitive diseases, until research and science combine to produce such results, we're left with technology attempting to make lives that bit more bearable.

The Forest Project utilizes the Unreal Engine 4, Smart-enabled TVs and Microsoft Kinect 2 to create an immersive experience in the form of virtual worlds. 

As well as offering patients temporary reprieves, the technology also aims to improve overall care by providing support teams with a simulation to better understand how their patients see the world. The developers are currently running a crowdfunding campaign to reach a goal of AU$90,000 and further improve the project with beach- or Christmas-themed environments.

Opaque Multimedia (www.opaquemultimedia.com) is working with Alzheimer's Australia Vic (www.fightdementia.org.au/victoria) on the Virtual Dementia Experience (VDE). Be sure to check out respective websites for more information.

Project Information URL: http://www.wpcentral.com/unreal-4-kinect-power-alzheimers-dementia-care, http://www.pozible.com/project/179761

Understanding the Kinect Coordinate Mapping, with a little help from Vangos Pterneas


Vangos Pterneas, Friend of the Gallery, has posted another great article, this time helping all of us understand the Kinect Coordinate Mapping for v1 AND v2...

Other recent Gallery posts for Vangos Pterneas:

Understanding Kinect Coordinate Mapping

This is another post I publish after getting some good feedback from my blog subscribers. It seems a lot of people have a problem in common when creating Kinect projects: how to properly project data on top of the color and depth streams.

As you probably know, Kinect integrates a few sensors into a single device:

  • An RGB color camera – 640×480 in version 1, 1920×1080 in version 2
  • A depth sensor – 320×240 in version 1, 512×424 in version 2
  • An infrared sensor – 512×424 in version 2

These sensors have different resolutions and are not perfectly aligned, so their view areas differ. It is obvious, for example, that the RGB camera covers a wider area than the depth and infrared cameras. Moreover, elements visible from one camera may not be visible from the others. Here’s how the same area can be viewed by the different sensors:

An example

Suppose we want to project the human body joints on top of the color image. Body tracking is performed using the depth sensor, so the coordinates (X, Y, Z) of the body points are correctly aligned with the depth frame only. If you try to project the same body joint coordinates on top of the color frame, you’ll find out that the skeleton is totally out of place:

image

CoordinateMapper

Of course, Microsoft is aware of this, so the SDK comes with a handy utility, named CoordinateMapper. CoordinateMapper’s job is to identify whether a point from the 3D space corresponds to a point in the color or depth 2D space – and vice-versa. CoordinateMapper is a property of the KinectSensor class, so it is tied to each Kinect sensor instance.

You can download a test project from GitHub and check how CoordinateMapper is used. To understand it more thoroughly, continue reading this tutorial.
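CoordinateMapper's internals aren't public, but a useful mental model of the camera-space-to-color-space step is a pinhole projection using the color camera's intrinsics. The sketch below uses illustrative focal-length and principal-point values; the real mapper also corrects for lens distortion and the physical offset between the depth and color cameras.

```python
# Rough mental model of mapping a 3D camera-space point (meters)
# to 2D color-pixel coordinates via a pinhole projection.
# fx, fy: focal lengths in pixels; cx, cy: principal point.
# These parameter names and values are illustrative only.

def project_to_color(point3d, fx, fy, cx, cy):
    """Project a camera-space point onto the color image plane."""
    x, y, z = point3d
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v
```

This also shows why skeleton joints drawn without mapping land in the wrong place: the depth and color cameras have different intrinsics, so the same 3D point projects to different 2D pixels in each stream.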

...

[Please click through for the rest of the article, code and information]

Project Information URL: http://pterneas.com/2014/05/06/understanding-kinect-coordinate-mapping/

Project Source URL: Kinect for Windows version 1, SDK 1.8, Kinect for Windows version 2, SDK 2.0

Contact Information:

Kinect for Windows gamifies rehabilitation, "Kinect-powered stroke rehab system gets FDA clearance"


Today's inspirational project comes to us from the Windows Kinect Blog and the team at Jintronix.

You all know I have a soft spot for seeing the Kinect used to help with PT...

Jintronix makes rehabilitation more convenient, fun, and affordable with Kinect for Windows

A stroke can be a devastating experience, leaving the patient with serious physical impairments and beset by concerns for the future. Today, that future is much brighter, as stroke rehabilitation has made enormous strides. Now, Jintronix offers a significant advance to help stroke patients restore their physical functions: an affordable motion-capture system for physical rehabilitation that uses Microsoft Kinect for Windows.

image

The folks at Montreal- and Seattle-based Jintronix are tackling three major issues related to rehabilitation. First, and most importantly, they are working to improve patients’ compliance with their rehabilitation regimen, since up to 65% of patients fail to adhere fully—or at all—with their programs.[1] In addition, they are addressing the lack of accessibility and the high cost associated with rehabilitation. If you have just had a stroke, even getting to the clinic is a challenge, and the cost of hiring a private physical therapist to come to your home is too high for most people.

Consider Jane, a 57-year-old patient. After experiencing a stroke eight months ago, she now has difficulty moving the entire right side of her body. Like most stroke victims, Jane faces one to three weekly therapy sessions for up to two years. Unable to drive, she depends on her daughter to get her to these sessions; unable to work, she worries about the $100 fee per visit, as she has exhausted her insurance coverage. If that weren’t enough, Jane also must exercise for hours daily just to maintain her mobility. Unfortunately, these exercises are very repetitive, and Jane finds it difficult to motivate herself to do them. 

Jintronix tackles all of these issues by providing patients with fun, “gamified” exercises that accelerate recovery and increase adherence. In addition, Jintronix gives patients immediate feedback, which ensures that they perform their movements correctly. This is critical when the patient is exercising at home.


Motion capture lies at the heart of Jintronix. The first-generation Kinect for Windows camera can track 20 points on the body with no need for the patient to wear physical sensors, enabling Jintronix to track the patient’s position in three-dimensional space at 30 frames per second. Behind the scenes, Jintronix uses the data captured by the sensor to track such metrics as the speed and fluidity of patients’ movement. It also records patients’ compensation patterns, such as leaning the trunk forward to reach an object instead of extending the arm normally.

Jintronix then uses this data to place patients in an interactive game environment that’s built around rehabilitation exercises. For example, in the game Fish Frenzy, the patient's hand controls the movement of an on-screen fish, moving it to capture food objects that are placed around the screen in a specific therapeutic pattern, like a rectangle or a figure eight.
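The metrics described above follow naturally from 30 fps joint positions. Jintronix's actual analytics are not public; this is just a sketch of the simplest such metric, average joint speed, computed from successive 3D positions:

```python
import math

# Average speed (m/s) of a tracked joint, from a sequence of 3D
# positions sampled at a fixed frame rate. Illustrative only; the
# real system would also assess fluidity and compensation patterns.

def average_speed(positions, fps=30.0):
    """Mean speed over consecutive (x, y, z) samples in meters."""
    if len(positions) < 2:
        return 0.0
    total = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    elapsed = (len(positions) - 1) / fps  # seconds covered by the samples
    return total / elapsed
```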

...

Jintronix is working to remove all the major barriers to physical rehabilitation by making a system that is fun, simple to use, and affordable. Jintronix demonstrates the potential of natural user interfaces (NUI) to make technology simpler and more effective—and the ability of Kinect for Windows to help high tech meet essential human needs.

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/01/22/jintronix-makes-rehabilitation-more-convenient-fun-and-affordable-with-kinect-for-windows.aspx

UPDATE: Kinect-powered stroke rehab system gets FDA clearance

...

Now, as Dr. Bill Crounse, senior director of Worldwide Health at Microsoft reports in his blog, Jintronix has received 510(k) clearance from the US Food and Drug Administration (FDA) for its rehab system. This marks an important milestone for Jintronix. "We're very excited about receiving FDA clearance, which paves the way for Jintronix to help in the rehabilitation of countless stroke victims," said CEO Shawn Errunza.

...

Contact Information:

MAGECA, market place gesture controlled apps, games and more


Looking for a new source of paid and free Kinect for Windows Apps? There's a market place just for you...

MAGECA

MAGECA is a market place for gesture controlled apps, games and more!

Our vision is to embrace the revolutionary change underway in the way people interact with all kinds of clever machines (PC, smartphone, tablet, TV, etc.), in order to make that interaction simple and natural... to make it more human.
We want to deliver useful tools that will help both developers and users to invent a better future.

MAGECA offers users the ability to review and rate applications that they have downloaded for free or purchased. This way, we aim to focus on and promote applications of high quality and standards to our community.

On the other hand, MAGECA offers developers the ability to upload and sell their applications. Developers can decide whether their application is free, paid or even open source, helping new developers begin experimenting with gestural APIs.

image

image

image

Project Information URL: http://mageca.com/apps?category=Applications&sensor=Kinect+for+win, http://mageca.com/apps?category=Games&sensor=Kinect+for+win, http://mageca.com/apps?category=More&sensor=Kinect+for+win


