Channel: Coding4Fun Kinect Projects (HD) - Channel 9

Target and Press Natural Selection Patent


Today's post shows off a Microsoft patent filing that suggests much better, more natural Kinect-based gesture and selection may be on the way...

Microsoft wants to make it easier to select and activate objects in a GUI using the Kinect

image

How many of you have tried to use the Kinect sensor to maneuver around your Xbox One home screen and found it to be a bit frustrating? It can be difficult to select and activate objects in a graphical user interface (GUI) using natural user input such as the Kinect. According to Microsoft, users naturally tend to perform a pressing gesture to select and activate an object on-screen. Sometimes this causes the wrong object to be selected and pressed. As we discovered in a patent filing today, Microsoft wants to improve this.

Microsoft suggests that the Kinect sensor be utilized to model the user via the Kinect's depth camera. Once this process is complete, the cursor on the GUI will move based on the position of a joint of the virtual skeleton created by the Kinect sensor. In other words, the user's physical movements can be interpreted as controls and can be used to operate a cursor. The user can also select and activate information presented in a pressable user interface.

"A cursor in a user interface is moved based on the position of a joint of the virtual skeleton. The user interface includes an object pressable in a pressing mode but not in a targeting mode. If a cursor position engages the object, and all immediately-previous cursor positions within a mode-testing period are located within a timing boundary centered around the cursor position, operation transitions to the pressing mode. If a cursor position engages the object but one or more immediately-previous cursor positions within the mode-testing period are located outside of the timing boundary, operation continues in the targeting mode," the patent application explains.

...

Project Information URL: http://www.winbeta.org/news/microsoft-wants-make-it-easier-select-and-activate-objects-gui-using-kinect

USPTO - TARGET AND PRESS NATURAL USER INPUT

A cursor is moved in a user interface based on a position of a joint of a virtual skeleton modeling a human subject. If a cursor position engages an object in the user interface, and all immediately-previous cursor positions within a mode-testing period are located within a timing boundary centered around the cursor position, operation in a pressing mode commences. If a cursor position remains within a constraining shape and exceeds a threshold z-distance while in the pressing mode, the object is activated.
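As a rough sketch of the logic in the abstract, the targeting/pressing decision might look like the following. This is only an illustration of the idea, not the patented implementation: the function names and thresholds are mine, and 2D Euclidean distance stands in for the patent's "timing boundary".

```python
def next_mode(cursor, recent_positions, engages_object, timing_radius):
    """Decide targeting vs. pressing mode per the abstract: transition to
    pressing only if the cursor engages the object AND every
    immediately-previous position in the mode-testing period lies inside
    the timing boundary centered on the current cursor position."""
    within_boundary = all(
        ((x - cursor[0]) ** 2 + (y - cursor[1]) ** 2) ** 0.5 <= timing_radius
        for (x, y) in recent_positions
    )
    if engages_object and within_boundary:
        return "pressing"
    return "targeting"


def is_activated(z_distance, z_threshold, inside_constraining_shape):
    """Per the abstract, while in pressing mode the object activates once
    the press exceeds a threshold z-distance while the cursor stays
    within a constraining shape."""
    return inside_constraining_shape and z_distance > z_threshold
```

The intent is that a cursor hovering steadily over an object (all recent positions clustered inside the boundary) signals a deliberate press, while a cursor still in motion stays in targeting mode so the wrong object doesn't get pressed.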

...

[Click through to see the entire patent]





Kinect for Windows v2 SDK and Unity 3D


The Kinect for Windows v2 SDK is almost out: it's just about ready for public beta, and the excitement is really starting to grow. A number of developers, including many of you, have been coding your fingers off in the private alpha/developer preview program.

Until the curtains go up on the public beta, here's a tease of what's coming from our awesome community (yes, you guys). Today I'm highlighting a couple of posts from the Unity 3D Armenia site, where Vardan Meliksetyan reached out to us about some recent posts on their new Kinect for Windows v2 SDK and Unity 3D plugin...

KinectV2 + Unity3D Plugin July Update Testing

Microsoft has posted a new June update to the K4W v2 SDK. Thanks, Microsoft!

This is preliminary software and/or hardware and APIs are preliminary and subject to change – Microsoft Message

This SDK and Unity3D plugin are private, available to only 500 developers around the world, so for now other developers can only watch. :)

I tested it and made a video so you can see that it works fine!

Project Information URL: http://unity3d.am/2014/07/03/kinectv2-unity3d-plugin-july-update-testing/

KinectV2 + Unity3D Plugin Hand Control Tutorial 1

Today I found a little time to make this tutorial (simple but useful). In it I will show how to do hand control with the Kinect v2 Unity3D plugin.
This is preliminary software and/or hardware and APIs are preliminary and subject to change – Microsoft Message

I am waiting for an answer from Microsoft; once they let me, I will publish the project and source on GitHub: https://github.com/Armenia-Unity-Users-Group

[Click through for the rest and to see the code snips]

Project Information URL: http://unity3d.am/2014/07/04/kinectv2-unity3d-plugin-hand-control-tutorial-1/

Contact Information:




SnowBall Effect now supports Kinect V2


Vardan Meliksetyan, whom you know from yesterday's post, Kinect for Windows v2 SDK and Unity 3D, also clued me in on this product announcement...

SnowBall Effect Game Supports Kinect V2

image

SnowBall Effect was released in the winter and has been hooking players ever since. Month after month, the game has added support for different platforms: iOS, Android, Windows Phone, Leap Motion, Kinect... Today, SnowBall Effect supports Kinect v2.

Most of you have probably already gotten acquainted with this game and are aware of its tricks. Anyway, let us describe the craze of the game for Kinect v2.

The little SnowBall wants to grow and needs your attentiveness and mobility to get bigger and bigger. You stand in front of the Kinect and, instead of using your fingertips or hands, move your whole body to lead the SnowBall through dangerous obstacles like falling rocks, crashing trees, and so on. The longer you keep the SnowBall moving and rolling, the bigger and heavier it gets.

The game is extremely absorbing; each time, you aim to score more and take on new challenges.

The colorful and vivid HD graphics, cool winter atmosphere, progressively growing snowball, twinkling and tempting coins, painted A-frame for the snowball, obstacles all along the way (stone vaults, rocks, and trees), and absorbing music won’t let you out of the game.

Project Information URL: http://x-tech.am/snowball-effect-game-supports-kinect-v2/




Raspi Kinect


Brian Benchoff of Hack-a-Day highlights a project that will gladden many a Kinect hardware hacker's heart: a non-Microsoft driver for the Kinect on the Raspberry Pi. As he says, let the race begin...

The Race Is On To Build A Raspi Kinect 3D Scanner

The old gen 1 Kinect has seen a fair bit of use in the field of making 3D scans out of real-world scenes. Now that Xbox 360 Kinects are winding up at yard sales and your local Goodwill, you might even have a chance to pick one up for pocket change. Until now, though, scanning objects in 3D has only been practical in a studio or workshop setting; for a mobile, portable scanner, you’d need to lug around a computer and a power supply, and that’s not really something you can fit in a backpack.

Now, finally, that may be changing. [xxorde] can now get depth data from a Kinect sensor with a Raspberry Pi, and with just about every other ARM board out there as well. It’s a kernel driver that’s small, fast...

...With a Raspi or BeagleBone Black, this driver has the beginnings of a very cheap 3D scanner that would be much more useful than the current commercial or DIY desktop scanners.

Project Information URL: http://hackaday.com/2014/06/03/the-race-is-on-to-build-a-raspi-kinect-3d-scanner/

Project Download URL: https://github.com/xxorde/librekinect

Project Source URL: https://github.com/xxorde/librekinect




It's Kinect for Windows v2 Day!


The Kinect for Windows v2 should be shipping today; what Kinect news could be more important than that?

Well, we'll see. Since this will go live early in the morning (at least for me), I'll update this post when/if more news, information and details become available...

Update - 8:30 AM Pacific: Just got my pre-ordered Kinect's "Microsoft Store - Shipment Confirmation." (Despite what you might have read elsewhere, it has been available for preorder since early June; see Pre-order your Kinect for Windows v2 now! The announced ship date was the real news... ;)

Update - 9:30 AM Pacific: The Kinect for Windows team announces the release of the public beta Kinect for Windows v2 SDK, The Kinect for Windows v2 sensor and free SDK 2.0 public preview are here

Today, we began shipping thousands of Kinect for Windows v2 sensors to developers worldwide. And more sensors will leave the warehouse in coming weeks, as we work to fill orders as quickly as possible.

Additionally, Microsoft publicly released a preview version of the Kinect for Windows SDK 2.0 this morning—meaning that developers everywhere can now take advantage of Kinect's latest enhancements and improved capabilities. The SDK is free of cost and there are no fees for runtime licenses of commercial applications developed with the SDK.

We will be releasing a final version of the SDK 2.0 in a few months, but with so many of you eagerly awaiting access, we wanted to make the SDK available as early as possible. For those of you who were unable to take part in our developer preview program, now you can roll up your sleeves and start developing. And for anyone else out there who has been waiting—well, the wait is over!

The new sensor's key features include:

  • Improved skeletal tracking: ...
  • Higher depth fidelity: ...
  • 1080p HD video: ...
  • New active infrared capabilities: ...
  • Wider/expanded field of view: ...

In addition to the new sensor's key features, the Kinect for Windows SDK 2.0 includes:

  • Improved skeletal, hand, and joint orientation: ...
  • Support for new development environments: ...
  • Powerful tooling: ...
  • Advanced face tracking: ...
  • Simultaneous multi-app support: ...

When the final version of the SDK is available, people will be able to start submitting their apps to the Windows Store, and companies will be able to make their v2 solutions available commercially. We look forward to seeing what everyone does with the new NUI.

Head over to the Kinect for Windows Development Center for more, including links to download the beta SDK and more




 

Kinect to suds (and soccer)


Today's inspirational post combines the World Cup craze, beer, and how the Kinect can provide a unique shopping experience...

Kinect with soccer and suds

What’s better than watching your team compete in the World Cup? Perhaps enjoying the game with a refreshing beer, especially if you purchased the brew at a nice discount. Thanks to a Kinect for Windows application, Argentine fans were able to do just that.

image

The application was created by Kimetric, an Argentine company that specializes in real-time customer analytics. It involved a special World Cup display, which featured a video of renowned Argentine footballer Oscar Ruggeri, placed in the beer aisle at several supermarkets in Buenos Aires. The Kinect for Windows sensor’s camera detected when customers engaged with the display, and at that point, Ruggeri’s video persona asked them if they were old enough to drink. If the customers answered yes, the system scanned them to see if they did indeed appear to be over the legal drinking age. Then the sensor looked to see if they were wearing an Argentine soccer jersey. The Kinect for Windows sensor could identify whether or not a customer was clad in Argentine soccer attire based on a machine-learning algorithm. More than 50,000 images of different Argentine soccer jerseys and other apparel were used to train the system.

So, what happened if the customer was wearing an Argentine jersey? He or she was rewarded with a discount coupon for the purchase of Quilmes Cristal, a popular Argentine beer. If the Kinect for Windows sensor detected just one jersey-clad customer, the discount was 10 percent. If the sensor detected two such customers, the discount rose to 15 percent, and if there were three or more customers posing in Argentine soccer apparel, the discount jumped to 25 percent. Bringing your soccer-crazy friends with you on a beer run really paid off.

If the customer wasn’t wearing an appropriate jersey, he or she still got a shot at scoring a beer bargain. In those cases, Ruggeri asked a couple of questions about Argentine soccer, and customers who answered correctly were rewarded with a 10 percent discount coupon.
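The discount tiers described above are simple enough to sketch in a few lines. The tier values come straight from the article; the function name and parameters are illustrative, not from Kimetric's actual code:

```python
def beer_discount(jersey_count, answered_quiz_correctly=False):
    """Return the discount percentage for Quilmes Cristal, following the
    tiers described in the article: 10% for one jersey-clad customer,
    15% for two, 25% for three or more; a correct trivia answer earns
    a 10% coupon when no jersey is detected."""
    if jersey_count >= 3:
        return 25
    if jersey_count == 2:
        return 15
    if jersey_count == 1:
        return 10
    return 10 if answered_quiz_correctly else 0
```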

The display was in stores from early May until the end of June 2014...

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/07/11/kinect-with-soccer-and-suds.aspx




"Hello, Kinect?" - Kinect v2 Speech Recognition example


Bruno Capuano, Friend of the Gallery, shows how fast the Kinect for Windows v2 SDK has been evolving, this time showing off the new Speech Recognition...

[#KINECTSDK] Speech recognition now available in SDK V2!

“This is preliminary software and/or hardware and APIs are preliminary and subject to change”

Hello!!!

We asked for it, and now we finally have weekly builds of the new Kinect v2 SDK. The interesting thing is that each release contains tons of interesting work to review. So today, a small review of something that already existed in Kinect SDK v1.8 and was needed in v2: speech recognition.

The basics are simple and probably familiar: create a grammar containing the words to recognize. The interesting part is that we associate the Kinect audio feed with the speech grammar (defined on line 5) on line 18.
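The pattern is the same in any language: a fixed grammar of command words, plus a confidence check on each recognition result. Here is a language-agnostic sketch of that idea; the real sample uses the C# Microsoft.Speech APIs with a Kinect audio source, and all names below are illustrative only:

```python
def build_grammar(words):
    """Stand-in for building a speech grammar from a word list
    (the C# sample builds a Choices/GrammarBuilder grammar)."""
    return set(words)


def on_speech_recognized(grammar, word, confidence, threshold=0.3):
    """Accept a recognized word only if it is in the grammar and the
    recognizer's confidence clears a threshold; otherwise ignore it.
    The 0.3 default threshold is an assumption, not from the post."""
    if word in grammar and confidence >= threshold:
        return word
    return None


grammar = build_grammar(["red", "green", "blue"])
```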

image

...

Project Information URL: http://elbruno.com/2014/07/03/kinectsdk-speech-recognition-now-available-in-sdk-v2/

Project Source URL: https://gist.github.com/elbruno/e4816d4d5a59a3b159eb#file-elbrunokw4v2speech

Contact Information:




Oh snap! Body Snap for the Kinect for Windows v2 Public Preview


With last week's shipping of the Kinect for Windows v2 devices and the release of the public preview of the SDK, I expect a flood of updates from those who have been involved in the previously closed dev preview process.

For example, I just came across this new project...

Body Snap

3D body models from the Kinect in a snap

image

image

Be a part of the 3D revolution. Help us to test this revolutionary ultra-portable scan-to-3D model solution!

With this app — now in beta release — you can use a single Microsoft Kinect to take full body scans anywhere and quickly convert these scans to 3D body models. These body models, of yourself or others, can be posed, animated, measured, and used in a variety of applications.

How does it work?
1. Body Modeling Made Simple

With a single Kinect, take 4 shots of your subject’s body in a static pose and 2 shots of the face. No turntable needed!

2. Receive Your Body Model

Once you take the scan, click “get model” to upload it to Body Hub, our body model web app. Within the same day (and this will get faster!), your 3D body model will be available for download.

What can you do with your body model?
  • Custom Clothing / Made-to-measure: With the body model, we can extract measurements and provide you with a highly accurate visualization of your client’s shape.
  • Character Creation: Using Mixamo Fuse, you can upload your body model and create and animate your own character. See how easy it is to put yourself in a game or film project here.
  • But Wait…There’s More!: Well…there will be soon. In the near future, all kinds of applications — from apparel and equipment design to fitness apps and recommendation engines — will be able to use your 3D body models.

Want to know how we do it? Check out our lab notes below

Body Snap uses a single Kinect sensor at a fixed position, which captures several views of the person from different angles, as they turn. This is in contrast with other systems, where an operator waves a single sensor around the person being scanned, or multiple Kinect sensors simultaneously capture data from different angles.

As a person moves, their body shape changes, so rigid 3D alignment across these views is impossible. To cope with variation in pose across views, we use a parameterized body model which factors 3D body shape and pose variations. This enables the estimation of a single, consistent shape across views, while allowing pose to vary.

Based on monocular, low-resolution image silhouettes and coarse range data, we estimate our parametric model to reconstruct shape. See these papers for more details:

Project Information URL: http://www.bodysnapapp.com/

Project Download URL: http://www.bodysnapapp.com/

scanning the body with Body Snap app using Kinect for Windows v2 for character creation in Mixamo/Fuse

Hey guys, we're beta testing a new application -- Body Snap -- that we built, which lets you scan anybody at home and create a 3D digital model of that body. These bodies can be used as a base shape for character creation in Mixamo, and can then be used in video games. We are currently working on improving our automatic alignments from the scanner input, and would love some (friendly!) feedback on our bodies, and some general UI/UX stuff. We just released it 2 days ago for a public beta, so it's not completely optimized yet, but we were hoping to get some people who have the Kinect SDK to play around with it and give us some feedback. Thanks! TL;DR: Make avatars with Kinect, tell us what you think.

Project Information URL: http://www.reddit.com/r/kinect/comments/2b2ziv/scanning_the_body_with_body_snap_app_using_kinect/





"Kinect Physical Therapy – Boat Driving"


Today's inspirational project comes from a new Friend of the Gallery, Unity 3D Armenia and Vardan Meliksetyan.

You all know what I think about how the Kinect is being used in medical practice, particularly for physical therapy (in short, I love seeing this kind of thing!). Today's project combines that with the very awesome Imagine Cup.

Kinect Physical Therapy – Boat Driving

image

About the Project


The game is called “Kinect Physical Therapy – Boat Driving”. There are 410 buoys in the ocean which need to be collected by the boat within a certain time limit. Each buoy is worth a point. The boat has no oars; instead of real oars, patients steer it with their hands, making the same movements as if rowing a real boat. The aim of the game is to move the imaginary oars correctly and grab as many buoys as possible. The game requires continuous, full arm motion. The sessions can be performed two or three times a day. The pursuit of a higher score invites patients to play again and again, each time doing better, which leads to progressive recovery.

People with impaired motor ability can try out this Kinect virtual therapy training game and gradually experience their progress toward wellness.

...

Project Information URL: https://www.imaginecup.com/Team/Index/34791#?fbid=LM7fKT_NRTA

Contact Information:




Capturing Facial Expressions with the Kinect for Windows v2


Today Tom Kerkhove, Kinect for Windows MVP, shows off one of the coolest things in the new Kinect for Windows v2 Device and SDK, how we now have built in facial expression support...

First look at Expressions – Displaying expressions for a tracked person

One of the biggest feature requests was the ability to track users' expressions. Today I’m happy to tell you that this is now available in the alpha [UPDATE (15/07/2014) – The sample is updated based on the public preview SDK.] SDK thanks to face tracking!

In this post I will walk you through the steps to display the expressions for one user, but this is possible for all tracked persons!

Template

I developed a small template that displays the camera feed so you can follow along; it is available here.

Tutorial

Setting up expression tracking is pretty easy – we just need to set up body tracking, assign a FaceFrameSource to it, and start processing the results. This requires us to add two references – Microsoft.Kinect for body tracking and Microsoft.Kinect.Face for face tracking.

As I mentioned in my basic overview we need to create a BodyFrameReader to start receiving BodyFrameReferences in the FrameArrived event.

...

Testing the application

When you give the application a spin, this is how it should look:

image

Conclusion

In this post I illustrated how easy it is to set up expression tracking for one person and what it enables, e.g., gauging user feedback when people see a new product at a conference.

Keep in mind that the sensor is able to track up to six persons and your algorithm should support this as well.

...

[Click through for all the code, the source links and more]

Project Information URL: http://www.kinectingforwindows.com/2014/07/13/first-look-at-expressions-displaying-expressions-for-a-tracked-person/

Project Source URL: https://github.com/KinectingForWindows?tab=repositories

Contact Information:

Other posts from Tom you might also find interesting;




Terminator, Sky Biometry and the Kinect


Today's project is from someone new to the Gallery, Jamie Dixon, but I'm sure we're going to see much more from him (he's got this awesome Eject-a-Bed project that I'm sure will appear in a number of places here on Coding4Fun, and that's just the start).

Today he's also showing off a service I've not seen before, Sky Biometry, and how he's integrated it into a Kinect app...

Terminator Program: Part 1

I am starting to work on a new Kinect application for TRINUG’s code camp.  I wanted to extend the facial recognition application I did using Sky Biometry and have the Kinect identify people in its field of view.  Then, I want to give the verbal command “Terminate XXX”, where XXX is the name of a recognized person.  That would activate a couple of servos via a Netduino and point a laser pointer at that person, and perhaps make a blaster sound.  The <ahem>architectural diagram</ahem> looks like this:

image

Not really worrying about how far I will get (the fun is in the process, no?), I picked up Rob Miles’s excellent book Start Here: Learn The Kinect API and plugged in my Kinect.

The first thing I did was see if I could get running video from the Kinect, which was very easy.  I created a new C#/WPF application and replaced the default markup with this:

...

With the ability to identify individuals, I then wanted to take individual photos of each person and feed them to Sky Biometry.  To that end, I added a method to draw a rectangle around each person and then (somehow) take a snapshot of the contents within the rectangle.  Drawing the rectangle was a straightforward WPF exercise:

...

image

Which is great, but now I am stuck.  I need a way of isolating the contents of that rectangle in the byte array that I am feeding to the bitmap encoder, and I don’t know how to trim the array.  Instead of trying to learn any more WPF and graphics programming, I decided to take a different tack and send the photograph in its entirety to Sky Biometry and let it figure out the people in the photograph.  How I did that is the subject of my next blog post…

[Click through for the code...]

Project Information URL: http://jamessdixon.wordpress.com/2014/07/08/terminator-program-part-1/

Terminator Program: Part 2

Following up on my last post, I decided to send the entire photograph to Sky Biometry and have it parse the photograph and identify individual people.  This ability is built right into their API.  For example, if you pass it this picture, you get the following JSON back.

image

I added the red highlight to show that Sky Biometry can recognize multiple people (it is an array of uids) and that each face tag has a center.x and center.y.  Reading the API documentation, this point is the center of the face tag, expressed as a percentage of the photo’s width.

So I need to translate the center point of the skeleton from the Kinect to the equivalent center point in the Sky Biometry recognition output, and then I should be able to identify individual people within the Kinect’s field of vision.  Going back to the Kinect code, I ditched the DrawBoxAroundHead method and altered the UpdateDisplay method like so...

...

Notice that there are two rectangles because I was not sure whether the Head.Position or the Skeleton.Position would match SkyBiometry.  It turns out that I want the Head.Position for SkyBiometry (besides, the Terminator would want head shots only).

image

...

The next step is to get the Kinect photo to Sky Biometry.  I decided to use Azure Blob Storage as my intermediate location.  I updated the architectural diagram like so:

image

...

And I am getting a result back from Sky Biometry.

image

Finally, I added in the SkyBiometry X and Y coordinates for the photo and compared to the calculated ones based on the Kinect Skeleton Tracking:

...

And the results are encouraging: it looks like I can use the X and Y to identify different people on the screen:

Match Value is: 53
Sky Biometry X: 10
Sky Biometry Y: 13.33

Kinect X: 47.5
Kinect Y: 39.79
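The translation the post describes (pixel coordinates to percent-of-photo, the format Sky Biometry reports) can be sketched as follows. The function names and the matching tolerance are mine, for illustration only, not from the post:

```python
def to_percent(x_px, y_px, width_px, height_px):
    """Convert a pixel coordinate to percent-of-image coordinates,
    the format in which Sky Biometry reports a face tag's center."""
    return 100.0 * x_px / width_px, 100.0 * y_px / height_px


def points_match(point_a, point_b, tolerance=10.0):
    """Treat two percent-space points as the same face if they fall
    within a distance tolerance of each other (tolerance is a guess)."""
    dx = point_a[0] - point_b[0]
    dy = point_a[1] - point_b[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance
```

For example, a head at pixel (304, 191) in a 640x480 frame maps to roughly (47.5, 39.79) in percent space, matching the Kinect X/Y values reported above.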

Up next will be pointing the laser at the target…

[Click through to see all the code and much more]

Project Information URL: http://jamessdixon.wordpress.com/2014/07/08/terminator-program-part-2/

Contact Information:




"Programming Kinect for Windows v2" Jumpstart On-Demand


Here's your On-Demand jump start to get going with the Kinect for Windows v2 device and SDK. If you're thinking about developing Kinect for Windows v2 apps or already are, this is a must view series...

Programming Kinect for Windows v2

Devs, are you looking forward to building apps with Kinect for Windows v2? In this Jump Start, explore the brand new beta Software Development Kit with experts from the Kinect engineering team. Learn about the new APIs and app model, see fascinating demos, and get samples (plus source code) for both desktop and Windows Store apps. Even if you don't have a Kinect device, you won't want to miss this entertaining event.

Full course outline:

Project Information URL: http://channel9.msdn.com/Series/Programming-Kinect-for-Windows-v2




Unboxing the Kinect for Windows v2 Device


Today's post is for those of you who are looking at the Kinect for Windows v2 device, but haven't pulled the trigger yet. James Ashley shares the unboxing of his just received device...

Kinect v2 Final Hardware

...

The final Kinect hardware arrived at my front door this morning.  I’ve been playing with preview hardware for the past half year – and working on a book on programming it as various versions of the SDK were dropped on a private list – but this did not dampen my excitement over seeing the final product.

image

The sensor itself looks pretty much the same as the preview hardware – and as far as I know the internal components are identical.  The cosmetic differences include an embossed metal “Kinect” on the top of the sensor and the absence of the razzmatazz stickers – which I believe were simply meant to cover up the Xbox One branding.

...

Project Information URL: http://www.imaginativeuniversal.com/blog/post/2014/07/25/Kinect-v2-Final-Hardware.aspx

Contact Information:




Kinect for Windows v2 on a Mac? Oh yeah...


I couldn't pass this article up. Kinect for Windows v2 and a Mac? Yep, you sure can... and Carmine Sirignano shows us how...

Developing with Kinect for Windows v2 on a Mac

With the launch of the Kinect for Windows v2 public preview, we want to ensure that developers have access to the SDK so that you can start writing Kinect-based applications. As you may be aware, the Kinect for Windows SDK 2.0 public preview will run only on Windows 8 and Windows 8.1 64-bit systems. If you have a Windows 8 PC that meets the minimum requirements, you’re ready to go.

For our Macintosh developers, this may be bittersweet news, but we’re here to help. There are two options available for developers who have an Intel-based Mac: (1) install Windows to the Mac’s hard drive, or (2) install Windows to an external USB 3.0 drive. Many Mac users are aware of the first option, but the second is less well known.

First, you need to ensure that your hardware meets the minimum requirements for Kinect for Windows v2.

Due to the requirements for full USB 3.0 bandwidth and GPU Shader Model 5 (DirectX 11), virtualization products such as VMWare Fusion, Parallels Desktop, or Oracle VirtualBox are not supported. If you’re not sure what hardware you have, you can find out on these Apple websites:

Installing Windows on the internal hard drive of your Intel-based Macintosh

...

Key links

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2014/07/28/developing-with-kinect-v2-on-a-mac.aspx




Kinect for Windows v2 Transformer Game Project


Today's inspirational project is from Friend of the Gallery, Tango Chen, who has gotten his hands on a Kinect for Windows v2 and has started to bend it to his will.

There's no source available yet, but this still shows off some of the cool things we're going to start seeing flow in from our Kinect for Windows community...

Kinect v2 Transformers Game Project

image

Inspired by the movie Transformers 4, I decided to make a Transformers-like game.

Imagine controlling the transformer, switching between robot and car, and hitting the enemies. That would be a lot of fun! So I did.

I bought the 3D models on the web. It’s not one of the official Transformers, though. (I wouldn’t be able to buy and use it if it were…)

...

Using Unity 4, all the work was done in 4 days.
To do it, the things I needed to solve were:

  • Controlling the 3D model
  • Walk/Run/Turn left/Turn right detecting
  • Two-hands-above-head gesture to switch to car/robot
  • Animations of switching to car/robot
  • Driving gesture detecting
  • Spawn enemies (helicopter/car) and make them follow the target (Transformer robot)
  • Enemies exploding when hit. “Hit Targets” +1

Project Information URL: http://tangochen.com/blog/?p=1846

Contact Information:





Kinetisense and Kinect app development the right way


Today, Kinect MVP Vangos Pterneas gives us a peek at the work behind a "real world" application being developed using the Kinect for Windows v2 device and SDK.

Kinetisense: developing a Kinect app the right way

Today, I would like to share one of my favorite projects my company has worked on. This project has been the result of a 5-month effort and is expected to launch publicly very soon.

Kinetisense is one of those startups that you have a feeling will succeed even before their product launches. When I was initially approached by its founders, David and Ryan, I was impressed by how focused their product idea was. They came to me with a real problem to solve, an idea for the solution, and valuable feedback for the whole development process. Throughout our extensive meetings, we set the goals and expectations for a revolutionary product that would serve a very specific purpose: change the game in the rehabilitation field.

Artificial intelligence, meet the consumers

If you’ve read any of my previous publications, you should already know that I’m a deeply technical person and I love programming for the sake of programming (just see my blog). However, when it comes to business, the most essential element of a new product is the market need it covers. Unless the product solves real problems and pains, it won’t succeed.

Kinetisense is different than any other competitor. It is inspired by founders with significant experience and impact in the medical field and it is tailored to fit their needs as much as possible. In this case, the creator is a customer as well (“Build an app that you’d buy”).

The problem

So, since a product needs to solve a pain, how exactly does Kinetisense succeed on this? First of all, it targets a niche market: rehabilitation professionals, practitioners, chiropractors. It is not just another fancy app for the average consumer. Instead, it is a solid platform for medical professionals who need a digital assistant to do their job better.

For years, practitioners have been using the same techniques to measure the range of motion of their patients: physical tools, such as the goniometer and the inclinometer, help them identify the angles formed by a patient’s joints. Technology has tried several times to replace these physical tools with wearable sensors or mobile apps. Guess what? Even the latest applications do not deliver much of the desired efficiency. Wearable sensors feel awkward, and they require a lot of time to set up. Mobile apps simply replace the physical goniometer with a digital one, so the end results are approximately the same.

You know you innovate when you change the way something is done, not the medium.

The innovation

Kinetisense makes a huge step forward by utilizing the power of the most accurate consumer 3D sensor: Microsoft’s Kinect for Windows version 2. ...
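Kinetisense's internals aren't published, but the core measurement a Kinect-based goniometer replacement performs is straightforward to sketch: given three tracked camera-space joint positions (say shoulder, elbow, and wrist), the range-of-motion angle at the middle joint falls out of a dot product. A minimal, hypothetical Python sketch (function name and coordinates invented for illustration):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c,
    where a, b, c are (x, y, z) camera-space positions in meters.
    Hypothetical helper, not Kinetisense's actual code."""
    v1 = tuple(p - q for p, q in zip(a, b))
    v2 = tuple(p - q for p, q in zip(c, b))
    dot = sum(p * q for p, q in zip(v1, v2))
    n1 = math.sqrt(sum(p * p for p in v1))
    n2 = math.sqrt(sum(p * p for p in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Shoulder, elbow, and wrist positions roughly forming a right angle:
shoulder = (0.0, 0.5, 2.0)
elbow    = (0.0, 0.2, 2.0)
wrist    = (0.3, 0.2, 2.0)
print(round(joint_angle(shoulder, elbow, wrist)))  # → 90
```

With real Kinect v2 data, `a`, `b`, and `c` would come from the body frame's joint positions rather than hard-coded tuples.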

image

...

Technical characteristics

Kinetisense is developed for Windows 8.1 (using the WinRT APIs) and follows the Modern UI design aesthetics. It relies on Kinect for Windows version 2 and will be published via the Windows Store. In my opinion, it’s the best fit for the new Surface 3. In short:

Kinetisense + Kinect 2 + Surface 3 = Magic

image

Usability ...

[Click through to read the entire post]

Project Information URL: http://pterneas.com/2014/07/24/kinetisense/

Project Information URL: http://www.kinetisense.com/


Break Out with Kinect-Breaker


Gordon Beeming emailed us recently, letting us know about today's pretty darn cool looking project, one that contains a whole number of our favorite things, Kinect, Windows Phone and SignalR and the source is available too...

Two of our Microsoft Mobile Devs (one being a Windows Development MVP Taylor Gibb https://twitter.com/taybgibb) made a Breakout type game (http://en.wikipedia.org/wiki/Breakout_(video_game)).

It’s controlled by a Kinect, and the game itself runs in the browser, getting its commands from a console app through SignalR, and it took them 4 hours to write.

kinect-breaker

image

kinect-breaker is actually a clone of the original Atari hit "Breakout," which ran on the Atari 2600. It was written in about 4 hours as part of a hackathon, but we thought it was good enough to publish. While we keep to the original rules, we added a few touches of our own. Most notably, you can play the game with a few different inputs:

  • Xbox Kinect
  • Windows Phone
  • Remote Web Browser
  • Local Web Browser

[Embedded video from Vimeo]

Update: Head Control too! 

Project Information URL: https://github.com/taylorgibb/kinect-breaker

Project Source URL: https://github.com/taylorgibb/kinect-breaker 





Kinect Controlled Creepy Cat Eyes


Today's project is a combined hardware/software project, using Arduino, simple-openni, a little arts and crafts and of course a Kinect to build a project that will help you creep out your friends... (found via Hack-a-Day's post, Creepy Cat Eyes with a Microsoft Kinect)

Tracking Cat Eyes via Kinect

image

This instructable was made as part of the CS graduate course "Tangible Interactive Computing" at the University of Maryland, College Park, taught by Professor Jon Froehlich. The course focused on exploring the materiality of interactive computing and, in the words of MIT Professor Hiroshi Ishii, sought to "seamlessly couple the dual worlds of bits and atoms." Please visit http://cmsc838f-s14.wikispaces.com/ for more details.

This project involved the use of Microsoft Kinect and servo motors. Although a simple idea, you are guaranteed to get some reactions! As you probably guessed from the title, the general idea behind this project was to use a Kinect to track movement, and then use output from the Kinect to make cat eyes follow people as they walk by.

Shopping List

  1. Creepy Poster (we suggest a cat poster)
  2. 2x 1 1/2" Wooden Balls
  3. Paint
  4. 2x Standard Servo TowerPro SG-5010 Motors
  5. 8xAA Batteries (battery case optional)
  6. Arduino Uno
  7. IC Breadboard
  8. Microsoft Kinect
  9. Hot Glue Gun

Now for the final step. Programming! On the Arduino side, a Servo library was used. This abstracts out most of the details required to understand how servos truly operate.

Processing was used to work with the Kinect due to its simplicity. Specifically, a Processing library called simple-openni was used to interface with the Microsoft Kinect. Although it does not currently support as many Kinect features as other options (e.g., C#, C++) and its documentation is somewhat lacking, it is a good choice here, since the only information we need is basic tracking data. There are also a sufficient number of code examples by the author to skim through.
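The project's full code aside, the heart of the Kinect-to-servo handoff is just mapping a tracked horizontal position to a servo angle. A hedged Python sketch of that mapping (the 30-150 degree sweep and the normalized input are assumptions; tune them to your poster and servo mounting):

```python
def servo_angle(x_norm, min_deg=30.0, max_deg=150.0):
    """Map a tracked user's horizontal position (0.0 = left edge of
    the Kinect's view, 1.0 = right edge) to a servo angle for the
    eyes. The 30-150 degree range is a guess; adjust for your rig."""
    x = max(0.0, min(1.0, x_norm))  # clamp out-of-range tracking data
    return min_deg + x * (max_deg - min_deg)

# A user standing dead center points the eyes straight ahead.
print(servo_angle(0.5))  # → 90.0
```

On the real hardware, the Processing sketch would write the resulting angle over serial for the Arduino's Servo library to apply.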

Feel free to use the attached code as a starting point.

Now go out and have some fun with people!

Project Information URL: http://www.instructables.com/id/Tracking-Cat-Eyes-via-Kinect/?ALLSTEPS

Project Source URL: http://www.instructables.com/id/Tracking-Cat-Eyes-via-Kinect/?ALLSTEPS




Hello (Kinect for Windows v2) World Series


Mike Taulty has been blogging away, sharing his recent Kinect for Windows v2 experiences and, best of all, code...

Kinect for Windows V2 SDK: Hello (Color) World

I’ve watched 2 videos from the Kinect development series on Channel 9 (Programming-Kinect-for-Windows-v2), and as a newcomer to Kinect I’m impressed by a few things:

  1. I like the design of the APIs. I’ve always been a big fan of APIs that emphasize a high level of consistency, and I like the approach the APIs take to:
    1. grabbing hold of a sensor on the Kinect (or possibly multiple ones)
    2. receiving/polling for frames of data from that sensor
    3. acquiring/releasing those frames of data
  2. I like that the APIs have been made as consistent as possible across Native, Managed and WinRT layers
  3. I like that WinRT means that potentially you can code this stuff up for Windows Store apps in C++, C# and JavaScript.

But even after a couple of videos I felt that I wanted to make some kind of “Hello World” just so that it seemed like I was dipping a bit of a toe into the water and trying things out.

First off, I thought I’d play with a WPF application and see if I could make something that used the Kinect as a web cam. I figured it wasn’t wise to attempt to be too ambitious and, in hindsight, I’m glad that I didn’t try to go too far too soon, as I needed to figure some things out.

...

image

... [Click through to see the code and more]

See the rest of his current posts in the series.


Tv2 - Terminator, Kinect for Windows v2 conversion...


Jamie Dixon is back!!!... He's taken his Kinect for Windows v1 Terminator program, which combines Sky Biometry and the Kinect, and updated it for the Kinect for Windows v2, sharing the process with us all....

Terminator Program: With The Kinect 2

I got my hands on a Kinect 2 last week, so I decided to re-write the Terminator program using the Kinect 2 API.  Microsoft made some major changes to the domain API (no more skeleton frame; it now uses a body), but the underlying logic is still the same.  Therefore, it was reasonably easy to port the code.  There are plenty of places in the V2 API that are not documented yet, but because I did some work with the V1 API, I could still get things done.  For example, the V2 API documentation and code samples use event handlers to work with each new frame that arrives from the Kinect.  This led to some pretty laggy code.  However, by using polling on a second thread, I was able to get the performance to where it needed to be.  Also, a minor annoyance is that you have to use Win8 with the Kinect 2.
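Jamie's polling code isn't shown here, but the general shape of polling on a second thread while keeping only the newest frame can be sketched in Python (a toy model with a fake sensor standing in for the Kinect API):

```python
import queue
import threading
import time

def poll_frames(get_latest_frame, out_queue, stop_event, interval=1 / 60):
    """Poll a sensor on a background thread at roughly 60 Hz instead of
    waiting for frame-arrived events, keeping only the newest frame so
    the UI thread never renders stale data."""
    while not stop_event.is_set():
        frame = get_latest_frame()
        if frame is not None:
            while not out_queue.empty():  # drop anything stale
                out_queue.get_nowait()
            out_queue.put(frame)
        time.sleep(interval)

# Fake sensor that yields three frames, then nothing.
frames = iter([0, 1, 2])
def fake_sensor():
    return next(frames, None)

q, stop = queue.Queue(), threading.Event()
worker = threading.Thread(target=poll_frames, args=(fake_sensor, q, stop))
worker.start()
time.sleep(0.2)  # give the poller time to drain the fake sensor
stop.set()
worker.join()
latest = q.get()
print(latest)  # → 2 (only the newest frame survives)
```

The single-producer design is what makes the drop-then-put step safe; with multiple pollers you would need a lock or a single-slot queue instead.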

So here is the Terminator application, Gen 2.  The UI is still just a series of UI controls:

...

This is pretty much like V1, where the video byte[] is written to a WriteableBitmap and the body is drawn on the canvas.  Note that, as in V1, the coordinates of the body need to be adjusted to the color frame.  The API has a series of overloads that make the translation easy.
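The SDK's coordinate mapper handles that translation with per-device calibration data. As a rough illustration of the underlying idea only, here is a pinhole-style projection from camera space onto a 1920x1080 color image (the intrinsic values are made-up placeholders, not real Kinect calibration):

```python
def camera_to_color(x, y, z, fx=1050.0, fy=1050.0, cx=960.0, cy=540.0):
    """Project a camera-space point (meters) onto a 1920x1080 color
    image with a simple pinhole model. The focal lengths and principal
    point here are invented; the real SDK mapper uses calibration data
    read from the device."""
    u = cx + fx * (x / z)
    v = cy - fy * (y / z)  # image y grows downward
    return (u, v)

# A point straight ahead of the camera lands at the image center.
print(camera_to_color(0.0, 0.0, 2.0))  # → (960.0, 540.0)
```

This is why a skeleton drawn with raw joint coordinates looks offset from the video: the joints live in camera space and must be projected into color-pixel space before drawing.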

With the display working, I added in taking the photo, sending it to Azure blob storage, and having Sky Biometry analyze the results.  This code is identical to V1 with the connection strings for Azure and Sky Biometry broken out into their own methods and the sensitive values placed into the app.config:

...

With the code in place, I can then run Terminator Gen 2:

image

I think I am doing the Sky Biometry recognition incorrectly so I will look at that later.  In any event, working with the Kinect V2 was fairly easy because it was close enough to the V1 that the concepts could translate.  I look forward to adding the targeting system this weekend!!!

Project Information URL: http://jamessdixon.wordpress.com/2014/07/29/terminator-program-with-the-kinect-2/


