Coding4Fun Kinect Projects (HD) - Channel 9

Kinect'ing to Wall Climbing


Today's inspirational project is another great example of how the Kinect for Windows is being used for much more than gaming, and in unique ways...

Kinect projector guides people up a climbing wall


Can a computer help you climb a rock face? Perttu Hämäläinen of Aalto University in Finland thinks so.

Hämäläinen is interested in using augmented reality in sports, particularly ones in which fluid body movement is key, such as contemporary dance, trampoline, martial arts and rock climbing.

In a system that he developed with his colleague Raine Kajastila, a projector, a depth-sensing camera and computer software combine to track the movement of rock climbers on a training wall, and provide feedback and suggestions in real time.

Training walls, found in many gyms around the world, are covered in arrays of differently shaped and textured holds. Typically, climbers will set routes of varying difficulty by colour-coding the holds. Kajastila and Hämäläinen's system projects routes onto the wall as a glowing line, and tracks the movement of climbers as they follow it, suggesting the next move as they go.

...

The team has also added a game feature, with a massive chainsaw projected onto the wall, which climbers need to avoid.

Javi Sánchez, a climbing instructor at Espacio Acción in Madrid, Spain, and director of Onsight Boulder Holds, based in Madrid, is impressed. He says the feedback system should help climbers resolve sequences faster because any mistakes a climber makes can be immediately corrected. "This is of great importance as it allows climbers to evaluate their own movements more efficiently," says Sánchez.

Kajastila says the team is working to get a prototype up and running in a real gym this autumn.

Project Information URL: http://www.newscientist.com/article/dn25940






KinectingForWindows GitHub Repos


"3D Scanning with Kinect V2"


Today's post isn't about Kinect coding or development, but it's pretty darn cool... Come on, what's cooler than using the Kinect to do 3D scanning and 3D printing...

3D Builder Tutorial Part 5: 3D Scanning with Kinect V2

In this video, Alex Blondin and Gavin Gear demonstrate how you can plug a Kinect V2 sensor into your PC and turn it into a portable 3D scanning setup using the latest version of 3D Builder.

In this video a custom "turntable" was used so that the person being scanned is rotated smoothly, but you can also use an office chair, or move the Kinect sensor and keep the subject stationary. An Alienware 17 laptop was used for scanning because the powerful onboard GPU can process scan data at a high rate, which means you can achieve a high level of detail and fidelity when scanning.


...

Trust me, with a scanning setup like this, you'll have hours of fun scanning people and things. I'm even wondering how I can get my cat to stand still for 30 seconds (too bad that won't happen).

Here's what you'll need to get a setup like this up and running:

  • A PC running Windows 8 or later
  • Kinect for Xbox V2 sensor or Kinect for Windows V2 sensor
  • (optional) Revolving platform/support

Your PC hardware must also meet some basic requirements:

  • 64-bit (x64) processor
  • Dual-core 3.1-GHz (2 logical cores per physical) or faster processor 
  • 4 GB RAM
  • Graphics card that supports DirectX 11
  • A compatible USB 3.0 port (Intel or Renesas)

...

Project Information URL: http://channel9.msdn.com/Blogs/3D-Printing/3D-Builder-Tutorial-Part-5-3D-Scanning-with-Kinect-V2




Holographic Window using Kinect V2


I found today's project via Clemente Giorio's tweet and once I watched the video, there was no question about highlighting it here as soon as possible... While HoloLens might make this look like child's play, we don't have those yet and we DO have Kinects... :)

Real life Portal; a holographic window using Kinect

Real life portals without trickery

The Kinect V2 is a sensor that can be used to record a 3D view of the world in real-time. It can also track users and can see what their body pose is. This can be used to perform head tracking and reconstruct a camera view into a 3D world as if looking through a virtual window. By using one Kinect for head tracking and another Kinect for reconstructing a 3D view, the virtual window effect of a portal can be created in reality. By using both Kinects for 3D world reconstruction and head tracking a two way portal effect can be achieved.
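The "virtual window" effect described here rests on well-known off-axis projection math (Kooima's generalized perspective projection). Below is a rough sketch, not code from this project, of computing the asymmetric view frustum from a tracked eye position and the window's corner positions:

// A rough sketch (after Kooima's generalized perspective projection), not
// code from this project: given the window's corner positions and the
// tracked eye position, compute the asymmetric view frustum extents.
using System.Numerics;

static class WindowProjection
{
    // pa = lower-left, pb = lower-right, pc = upper-left corner of the
    // window; pe = eye position; n = near clip distance. All in meters.
    public static (float left, float right, float bottom, float top) Frustum(
        Vector3 pa, Vector3 pb, Vector3 pc, Vector3 pe, float n)
    {
        Vector3 vr = Vector3.Normalize(pb - pa);                // window right axis
        Vector3 vu = Vector3.Normalize(pc - pa);                // window up axis
        Vector3 vn = Vector3.Normalize(Vector3.Cross(vr, vu));  // window normal

        Vector3 va = pa - pe;   // eye to lower-left corner
        Vector3 vb = pb - pe;   // eye to lower-right corner
        Vector3 vc = pc - pe;   // eye to upper-left corner
        float d = -Vector3.Dot(va, vn);                         // eye-to-window distance

        // Frustum extents on the near plane, scaled down from the window plane.
        float left   = Vector3.Dot(vr, va) * n / d;
        float right  = Vector3.Dot(vr, vb) * n / d;
        float bottom = Vector3.Dot(vu, va) * n / d;
        float top    = Vector3.Dot(vu, vc) * n / d;
        return (left, right, bottom, top);
    }
}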

...

Hardware setup

In the setup used for recording the video, two Kinect V2 sensors were used. My laptop was connected to a projector that projected onto the wall. The PC displayed the other, much smaller portal. Smaller portals hide more of the other side of the scene and allow for larger head movements. With a bigger portal you will run into the limitations of the field of view of the Kinect much earlier.

Instead of transferring the recorded 3D scene, I swapped the Kinects and only transferred the recorded body frames through a network connection to minimize latency. This limits the maximum range that the portals can be placed from each other (about 7 meters when using USB3 extension cables).

A portal opens as soon as a user is detected by the Kinect so proper head tracking can be done.  For the video I used the right hand joint for controlling the camera so the viewer would experience what it would look like when head tracking is applied.
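The head (or hand) position driving the virtual camera comes straight from the Kinect body stream. The portal's source isn't reproduced in the post, but a minimal Kinect SDK 2.0 sketch of reading a joint position looks roughly like this:

// A minimal sketch, assuming the Kinect for Windows SDK 2.0: open the
// sensor, read body frames, and print the head joint position in meters.
using System;
using System.Linq;
using Microsoft.Kinect;

class HeadTracker
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);

                Body body = bodies.FirstOrDefault(b => b.IsTracked);
                if (body == null) return;

                // Camera-space position in meters; this is what would drive the
                // virtual camera (swap in JointType.HandRight, as in the video).
                CameraSpacePoint head = body.Joints[JointType.Head].Position;
                Console.WriteLine($"Head: {head.X:F2}, {head.Y:F2}, {head.Z:F2}");
            }
        };

        Console.ReadLine();   // keep pumping frames until Enter is pressed
        reader.Dispose();
        sensor.Close();
    }
}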

Holographic window

Holography is quickly becoming the buzzword of 2015. It’s getting harder to keep a clear understanding of what holographic actually means. (and yes I’ve abused the term too) I like the view that Oliver Kreylos has on the term holography. (See: What is holographic, and what isn’t?)

Since both worlds are already rendered in 3D it is a small step to add stereo rendering. For instance with a Microsoft HoloLens. This brings us closer to a holographic window.
Here’s the checklist that Oliver Kreylos uses in his article:...


...

Project Information URL: http://smeenk.com/real-life-portal/





Kinect Controlled Unity Avatars with RUIS Toolkit


I had to search the blog here a couple times before posting this. I just couldn't believe I've not highlighted this previously!

RUIS Toolkit for Kinect controlled Avatars in Unity

I've seen several topics about Kinect controlled Avatars in Unity, so I'll bring to your attention our RUIS toolkit which is intended for creating virtual reality applications and can be used to animate 3D avatars:
http://blog.ruisystem.net/download/

In 2013 we used our toolkit and Kinect v1 to animate the character in a demo video.

For Kinect v2 avatars our toolkit includes the following useful features:
- Bones can automatically obtain rotation and scale from Kinect, so that avatar joint positions match what Kinect sees (the sketch after this list shows the bare idea)
- Bone rotations can be filtered (currently joints between torso and hands only)
- Angular velocity for bone rotations can be capped, making the avatar more stable but less responsive
- Fingers can be set to curl when Kinect v2 detects that you are making a fist
- Avatar root position can be scaled, so that your movement is amplified and covers a larger area
- The avatar can also be controlled with keyboard/gamepad, in which case its leg pose is blended with walking animation
- You can create your own scripts to blend custom Mecanim animations into the Kinect animated avatar's individual limbs
- You can use fist gesture to grab objects
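To be clear, what follows is not RUIS source code. The first feature above, bones obtaining rotation from Kinect, boils down to copying joint orientations onto bones every frame; here is a bare sketch assuming the Microsoft Kinect plugin for Unity and a Body object supplied by your own body-source manager:

// Not RUIS code: a bare sketch of driving one avatar bone from a Kinect v2
// joint orientation, assuming the Microsoft Kinect plugin for Unity
// (Windows.Kinect namespace) and a tracked Body acquired elsewhere.
using UnityEngine;
using Kinect = Windows.Kinect;

public class BoneFromKinect : MonoBehaviour
{
    public Transform elbowBone;     // the avatar bone to drive
    public Kinect.Body body;        // set each frame by your body-source manager

    void Update()
    {
        if (body == null || !body.IsTracked) return;

        Kinect.Vector4 o = body.JointOrientations[Kinect.JointType.ElbowLeft].Orientation;

        // Kinect and Unity differ in handedness, and a real rig (as RUIS does)
        // needs per-bone correction offsets; this is only the bare mapping.
        elbowBone.rotation = new Quaternion(o.X, o.Y, o.Z, o.W);
    }
}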

If you download the RUIS toolkit, you can quickly get started by opening the example scene in \RUISunity\Assets\RUIS\Examples\KinectTwoPlayers

The above example includes the Constructor model from Unity standard assets. If you want to replace that model with your own, you need to parent your rig under the MecanimBlendedCharacter gameObject, move all the scripts and components from the Constructor gameObject to your rig, and relink the joint transforms. For details, see the last paragraph of the "Oculus Rift with Kinect, PS Move, and Razer Hydra" section of our readme:
http://blog.ruisystem.net/readme.pdf

Please note that currently the main "modules" of RUIS (RUISInputManager and RUISDisplayManager) are coupled, and if you animate your Kinect avatars with RUIS, you also need to use the display management of RUIS, replacing your scene camera with RUISCamera prefab that you link to RUISDisplay. See the readme for details.

Project Information URL: https://social.msdn.microsoft.com/Forums/en-US/213d0e1f-acea-4aca-b194-79ad23bce1af/ruis-toolkit-for-kinect-controlled-avatars-in-unity?forum=kinectv2sdk

Reality-based User Interface System

Reality-based User Interface System (RUIS) is an open source toolkit for creating the next generation of virtual reality applications. The idea of RUIS is to give hobbyists easy access to state-of-the-art interaction devices, so that they can bring their innovations into the field of virtual reality.

RUIS is available for Unity3D and Processing, both of which are widely used development environments that are favored by many game developers, artists, designers, and architects. RUIS enables development of virtual reality applications where devices like Kinect, Razer Hydra, and PlayStation Move are used together with immersive display devices such as Oculus Rift. The simultaneous use of these devices allows creation of novel virtual reality applications and 3D user interfaces.

RUIS includes a versatile display manager for handling several display devices simultaneously, features stereo 3D rendering and head-tracking, and supports the use of Kinect (1&2), Oculus Rift DK2, and PlayStation Move together in the same coordinate system. Developers can implement and test their own motion controlled applications even with just a mouse and keyboard, which are emulated as 3D input devices. RUIS has been used to teach virtual reality concepts and application implementation for five consecutive years in Aalto University’s virtual reality course.

RUIS project was initiated by researchers Tuukka Takala and Roberto Pugliese in Aalto University’s Department of Media Technology, Finland. Other important contributors are Mikael Matveinen and Yu Shen.

Project Information URL: http://blog.ruisystem.net

Project Download URL: http://blog.ruisystem.net/download/





Is Skeletal Tracking in the Kinect for Windows v2 Really Better? Yep!


Josh Blake has shared a video that makes it pretty darn clear just how much better the skeletal tracking is with the Kinect for Windows v2...

No bones about it: Kinect for Windows v2 skeletal tracking vastly better

You can read about the improvements that Kinect for Windows v2 offers over its predecessor, but seeing the differences with your own eyes is really, well, eye-opening—which is why we’re so pleased by this YouTube video posted by Microsoft MVP Josh Blake of InfoStrat. In it, Blake not only describes the improvements in skeletal tracking provided by the v2 sensor and the preview SDK 2.0 (full release of SDK 2.0 now available), he actually demonstrates the differences by showing side-by-side comparisons of himself and others being tracked simultaneously with the original sensor and the more robust v2 sensor.

As Blake shows, the v2 sensor tracks more joints, with greater anatomical precision, than the original sensor. His video also highlights the major improvements in hand tracking that the v2 sensor and SDK 2.0 provide, and, with the help of two colleagues, he demonstrates how Kinect for Windows v2 can track more bodies than was possible with the original sensor and prior releases of the SDK.
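Blake's demo code isn't published with the post, but the features he shows map directly onto the SDK 2.0 body API. Here's a hedged console sketch of reading them (six bodies, 25 joints each, per-hand state and confidence):

// A hedged sketch (not Blake's demo code) of the v2 features shown in the
// video: up to six tracked bodies, 25 joints each, and per-hand open/closed
// state with a confidence value. Kinect for Windows SDK 2.0.
using System;
using Microsoft.Kinect;

class TrackingDemo
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];   // 6 on v2
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);

                foreach (Body body in bodies)
                {
                    if (!body.IsTracked) continue;

                    // 25 joints per body on v2 (v1 had 20, with no thumbs or hand tips).
                    int tracked = 0;
                    foreach (Joint joint in body.Joints.Values)
                        if (joint.TrackingState == TrackingState.Tracked) tracked++;

                    Console.WriteLine(
                        $"Body {body.TrackingId}: {tracked}/25 joints tracked, " +
                        $"left hand {body.HandLeftState} ({body.HandLeftConfidence}), " +
                        $"right hand {body.HandRightState} ({body.HandRightConfidence})");
                }
            }
        };

        Console.ReadLine();
        reader.Dispose();
        sensor.Close();
    }
}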

When asked how the improved skeletal-tracking capabilities can be utilized, Blake responded,...

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/02/27/no-bones-about-it-kinect-for-windows-v2-skeletal-tracking-vastly-better.aspx





Unity 5 and the Kinect...


Today we've got two posts about the new Unity 5 and the Kinect for Windows v2 and how "it just works"... (I love those words ;)

Try to use Kinect with Unity 5

During the Game Developers Conference there were many announcements, like Unity 5 being available for free. Of course, Unity has been offering a free version of the editor for many years, but it was an editor without many important features. For example, it was not possible to use external libraries in the free version of the editor. That was the main blocker for many educational projects, like projects related to Kinect. On one side we have a very cool solution like Kinect, which researchers can use in many interesting projects, but on the other side it's very hard to develop something using DirectX directly. Unity might save the situation, but the Kinect SDK worked with the Pro version only.

But Unity decided to change its business model and announced that, starting with Unity 5, there is a Personal Edition which contains all features.


Of course, I decided to return to my projects with Kinect and tried to rebuild some of them using Unity 5 Personal. And there is great news – everything works fine! So the last blocker is gone, and today you can use Kinect and Unity for free.

If you are interested in the topic, I would recommend my previous article about Kinect as a guide on how to get started.
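For the impatient, here's a minimal sketch of what "it just works" amounts to, assuming Unity 5 Personal with the Kinect for Windows v2 Unity plugin imported:

// A minimal smoke test, assuming Unity 5 Personal with the Kinect for
// Windows v2 Unity plugin imported: open the sensor, poll body frames,
// and log how many bodies are tracked.
using UnityEngine;
using Windows.Kinect;

public class KinectSmokeTest : MonoBehaviour
{
    private KinectSensor _sensor;
    private BodyFrameReader _reader;
    private Body[] _bodies;

    void Start()
    {
        _sensor = KinectSensor.GetDefault();
        _reader = _sensor.BodyFrameSource.OpenReader();
        _bodies = new Body[_sensor.BodyFrameSource.BodyCount];
        if (!_sensor.IsOpen) _sensor.Open();
    }

    void Update()
    {
        using (BodyFrame frame = _reader.AcquireLatestFrame())   // poll, no events
        {
            if (frame == null) return;
            frame.GetAndRefreshBodyData(_bodies);
        }

        int tracked = 0;
        foreach (Body b in _bodies)
            if (b != null && b.IsTracked) tracked++;
        Debug.Log("Tracked bodies: " + tracked);
    }

    void OnApplicationQuit()
    {
        if (_reader != null) _reader.Dispose();
        if (_sensor != null && _sensor.IsOpen) _sensor.Close();
    }
}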

Project Information URL: http://blogs.msdn.com/b/cdndevs/archive/2015/03/11/try-to-use-kinect-with-unity-5.aspx


Kinect 4 Windows V2 – Unity3D 5

With the recent announcements at GDC2015 from Unity about their new licensing model I thought I would update my Kinect and Unity3d post as now a pro Unity license is no longer a requirement. I installed Unity 5 and chose the ‘Personal Edition’ license – more details here http://unity3d.com/get-unity – this is a free download in which all features of the engine are available.

I ran through the steps from my previous post and everything just worked!


Project Information URL: http://peted.azurewebsites.net/kinect-4-windows-v2-unity3d-5/





More on the openFrameworks and the Kinect


I guess it's Peter Daukintis week. :) Today he's showing off openFrameworks.

Need a reminder about openFrameworks?

openFrameworks is designed to work as a general purpose glue, and wraps together several commonly used libraries.

The code is written to be massively cross-compatible. Right now we support five operating systems (Windows, OSX, Linux, iOS, Android) and four IDEs (Xcode, Code::Blocks, Visual Studio, and Eclipse). The API is designed to be minimal and easy to grasp.

openFrameworks is distributed under the MIT License. This gives everyone the freedoms to use openFrameworks in any context: commercial or non-commercial, public or private, open or closed source. While many openFrameworks users give their work back to the community in a similarly free way, there is no obligation to contribute.

Simply put, openFrameworks is a tool that makes it much easier to make things with code. We find it super useful, and we hope you do too.

Here's a related post:

OPENFRAMEWORKS + KINECT 4 WINDOWS V2 (II)


In my previous post I showed how to get an environment set up with openFrameworks and Kinect V2.0 for Windows Store. I’d like to elaborate on that a little and run through a demo app I recently created. Since these posts are based on the MSOpenTech fork of oFx which works with Windows Store, we are restricted to using OpenGL ES, as this is what is currently supported by the ANGLE project. What this means in practice is that if you find an oFx addon that you want to use, you need to make sure that it doesn’t call methods unsupported by OpenGL ES. This includes things like glBegin..glEnd immediate-mode rendering syntax – this is replaced with vertex buffer rendering. If an addon has used this style then it can’t be used with Windows Store currently. Another option here would be to create an oFx desktop app with Kinect integration – more on that in a subsequent post…

Anyway, whilst leafing through the interesting addons I could use, I found Box2D, a 2D physics engine which I thought might work well in a demo.

In the build of the oFx project generator that I am using (from the universal branch here https://github.com/MSOpenTech/openFrameworks) the addons feature doesn’t seem to work. To work around this I used a different fork of oFx from https://github.com/liquidzym/openFrameworks/tree/VS2013, which has support for Visual Studio 2013; I added the addon there and then copied the entries from the Visual Studio project files across into my Windows Store projects.

Using that method I got Box2D into my app. From there I could define a Box2D world with physical properties and boundaries, and introduce rigid bodies into it. I chose to populate the world with differently-coloured circles, set the boundaries to be the edges of the app, and used the Kinect body data to make my hands attraction points for the circles.


...

It is mildly amusing to make the circles chase you as you swing your hands around but not difficult to let your imagination run riot thinking about the possibilities here. My demo was put together with a few lines of code – that’s the power of OpenFrameworks and other similar frameworks – they include a host of creative elements without the need to reinvent everything.

Project Information URL: http://peted.azurewebsites.net/openframeworks-kinect-4-windows-v2-ii/

Project Source URL: https://github.com/peted70/kinect-ofx-box2d-circles






Brekel Pro Face 2 Released


While I don't usually highlight commercial products, this product and others from Brekel deserve an exception...

Brekel Pro Face 2

For Kinect for Windows v2 & Kinect for Xbox One sensors

Brekel Pro Face v2 is a Windows application that enables 3D animators to record and stream 3D face tracking of up to 6 people from your living room or office using a Kinect sensor.

  • multi-person face tracking (1-6 people simultaneously)
  • track head position/rotation
  • track 20 different face shapes (including left/right asymmetry)
  • rewritten to fully utilize all the data of the Kinect v2 sensor
  • can run simultaneously with Pro Body 2 sharing data from the same sensor
  • works in realtime, no offline processing required
  • no calibration required, just stand in front of the sensor and record
  • supports FBX formats v6, v7, Ascii and Binary (other file formats coming)
  • export as mesh with blendshapes/morphs that can be used to drive your own characters
  • record pointcloud data in sync for use in Pro PointCloud 2
  • record audio in sync from Kinect’s microphone or any other audio source
  • adjust smoothing/filtering
  • adjustable symmetry constraint
  • adjustable scale/offset per animation unit
  • build face mesh resembling actor
  • visualizes Color, InfraRed, Depth, 3D PointCloud and Face Mesh
  • ability to resample output data from 30fps to custom frame rates
  • ability to sync recording between multiple Brekel apps on the same machine or different machines on the same network
  • Optionally stream and record directly to Autodesk MotionBuilder using included plugins for the 2009–2015, 32- and 64-bit versions

System requirements:

  • Windows 8 / 8.1 / 10 (USB stack of Windows 7 or below can’t handle bandwidth requirements of v2 sensor and is NOT supported)
  • USB 3.0 port, Intel and Renesas chipsets only! (other brands may or may not work)
  • DirectX 11 capable GPU (Intel HD4000, AMD Radeon HD6470M / HD6570, NVIDIA Geforce 610M or above)
  • 4 GB or more RAM
  • Dual Core 3.1 GHz i7/i5 CPU or equivalent (slower should work but may drop frames)
  • 1280×1024 screen (recommended: 1920×1080 or higher)


Project Information URL: http://brekel.com/brekel-pro-face-2/




Kinect to Windows Store Apps


I've mentioned before that with the Kinect for Windows v2 SDK you can now create Windows Store apps. Recently the Kinect Team highlighted three real-world examples of this...

Windows Store provides new market for Kinect apps

In case you hadn't noticed, the Windows Store added something really special to its line-up not too long ago: its first Kinect applications. The ability to create Windows Store applications had been a longstanding request from the Kinect for Windows developer community, so we were very pleased to deliver this capability through the latest Kinect sensor and the public release of the Kinect for Windows software development kit (SDK) 2.0.

The ability to sell Kinect solutions through the Windows Store means that developers can reach a broad and heretofore untapped market of businesses and consumers, including those with an existing Kinect for Xbox One sensor and the Kinect Adapter for Windows. Here is a look at three of the first developers to have released Kinect apps to the Windows Store.

Nayi Disha – getting kids moving and learning

You wouldn’t think that Nayi Disha needs to broaden its market—the company’s innovative, Kinect-powered early education software is already in dozens of preschools and elementary schools in India and the United States. But Nayi Disha co-founder Kartik Aneja is a man on a mission: to bring Nayi Disha’s educational software to as many young learners as possible. “The Windows Store gives us an opportunity to reach beyond the institutional market and into the home market. What parent doesn’t want to help their child learn?” asks Aneja, somewhat rhetorically. In addition, deployment in the Windows Store could help Nayi Disha reach schools and daycare centers beyond those in the United States and India.

...

YAKiT: bringing animation to the masses

It doesn’t take much to get Kyle Kesterson yakking about YAKiT—the co-founder and CEO of the Seattle-based Freak’n Genius is justifiably proud of what his company has accomplished in fewer than three years. “We started with the idea of enabling anybody to create animated cartoons,” he explains. But then reality set in. “We had smart, creative, funny people,” he says, “but we didn’t have the technology that would allow an untrained person to make a fully animated cartoon. We came up with a really neat first product, which let users animate the mouth of a still photo, but it wasn’t the full-blown animation we had set our sights on.”

Then something wonderful happened. Freak’n Genius was accepted into a startup incubation program funded by Microsoft’s Kinect for Windows group, and the funny, creative people at YAKiT began working with the developer preview version of the Kinect v2 sensor.

Now, Freak’n Genius is poised to achieve its founders’ original mission: bringing the magic of full animation to just about anyone. Its Kinect-based technology takes what has been highly technical, time consuming, and expensive and makes it instant, free, and fun. The user simply chooses an on-screen character and animates it by standing in front of the Kinect v2 sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character. What’s more, with the ability to create Windows Store apps, Kinect v2 stands to bring Freak’n Genius’s full animation applications to countless new customers.

...

3D Builder: commoditizing 3D printing

As any tech-savvy person knows, 3D printing holds enormous potential—from industry (think small-batch manufacturing) to medicine (imagine “bio-printing” of body parts) to agriculture (consider bio-printed beef). Not to mention its rapid emergence as a source of home entertainment and amusement, as in the printing of 3D toys, gadgets, and gimcracks. It was with these capabilities in mind that, last year, Microsoft introduced the 3D Builder app, which allows users to make 3D prints easily from a Windows 8.1 PC.

Now, 3D Builder has taken things to the next level with the incorporation of the Kinect v2 sensor. “The v2 sensor generates gorgeous 3D meshes from the world around you,” says Kris Iverson, a principal software engineer in the Windows 3D Printing group. “It not only provides precise depth information, it captures full-color images of people, pets, and even entire rooms. And it scans in real scale, which can then be adjusted for output on a 3D printer.”

Nayi Disha, YAKiT, and 3D Builder represent just a thin slice of the potential for Kinect apps in the Windows Store. Whether the apps are educational, entertainment, or tools, as in these three vignettes, or intended for healthcare, manufacturing, retailing, or other purposes, Kinect v2 and the Windows Store offer a new world of opportunity for both developers and users.


Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/03/18/windows-store-provides-new-market-for-kinect-apps.aspx





Kinect 2 Hands On Labs - Labs as in 14!


I'm not sure how I came across this, but I am glad I did as this is a great new series of labs (and source) to get you started building great Kinect for Windows v2 apps. This is now on the top of my Kinect Resources list... :)

Kinect 2 Hands On Labs

Welcome to the Kinect 2 Hands on Labs!

This series will show you how to build a Windows 8.1 store app which uses almost every feature of the Kinect 2. The lessons in this series work best when completed in order.

You can download a master copy of the complete app and all labs and referenced libraries through the github links on the left.

Or if you know a bit about development with the Kinect 2 already, you can skip to a particular lab by navigating to it at the top of the page. Each lab page links to the running codebase at the bottom, complete and runnable as if you had just finished that lab.

If you have any suggestions or would like to report any bugs, please leave some feedback on the Kinect Tutorial GitHub Issues page.

Enjoy the labs and have fun!


...

Project Information URL: http://kinect.github.io/tutorial/

Project Source URL: https://github.com/Kinect/tutorial




Kinect for Windows v2 Speech Recognition Sample


If you've been following this blog for any length of time, you know how much I like the Kinect voice recognition and its potential. Every single time I use my Xbox One, it's just so natural to "Xbox Pause" or "Xbox Turn Off"...

Zubair Ahmed has just shared a simple sample, simple in the sense that it's easy to understand, learn from, and build on...

Kinect v2 Speech Recognition sample–open sourced!

If you are new to Kinect for Windows v2 development, I posted my Speech Recognition Sample code on GitHub.

The sample demonstrates Kinect for Windows v2 speech recognition capabilities. It shows how to set up the Kinect speech recognition initializers, add a grammar, and perform an action when speech is recognized.
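For flavor, here's a hedged sketch of the pieces such a sample wires together, following the SDK's SpeechBasics pattern rather than Zubair's exact code. Note that KinectAudioStream is the helper class from the SDK sample that converts the sensor's 32-bit float audio into the 16-bit PCM the speech engine expects:

// A hedged sketch following the SDK's SpeechBasics pattern (not Zubair's
// exact code). KinectAudioStream is the SDK sample's helper that converts
// the sensor's 32-bit float audio into 16-bit PCM for the speech engine.
using System;
using System.Linq;
using Microsoft.Kinect;
using Microsoft.Speech.AudioFormat;
using Microsoft.Speech.Recognition;

class SpeechSketch
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        // The Kinect-enabled acoustic model registers itself as a recognizer.
        RecognizerInfo recognizer = SpeechRecognitionEngine.InstalledRecognizers()
            .FirstOrDefault(r => r.AdditionalInfo.ContainsKey("Kinect"));
        if (recognizer == null) { Console.WriteLine("No Kinect recognizer found."); return; }

        var engine = new SpeechRecognitionEngine(recognizer.Id);

        // The grammar: the set of phrases we want the engine to listen for.
        var builder = new GrammarBuilder { Culture = recognizer.Culture };
        builder.Append(new Choices("red", "green", "blue"));
        engine.LoadGrammar(new Grammar(builder));

        engine.SpeechRecognized += (s, e) =>
        {
            if (e.Result.Confidence > 0.6)
                Console.WriteLine($"Heard \"{e.Result.Text}\" ({e.Result.Confidence:F2})");
        };

        // Audio beam -> float-to-PCM converter -> speech engine input.
        var kinectStream = new KinectAudioStream(
            sensor.AudioSource.AudioBeams[0].OpenInputStream());
        kinectStream.SpeechActive = true;
        engine.SetInputToAudioStream(kinectStream,
            new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
        engine.RecognizeAsync(RecognizeMode.Multiple);

        Console.ReadLine();
        sensor.Close();
    }
}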

Project Information URL: http://www.zubairahmed.net/?p=2031

Project Source URL: https://github.com/zubairdotnet/KinectSpeechColorWPF





Small Basic gets Kinect


It's not often we get a new development environment that gets Kinect Dev support, so when we do, and when it's focused on getting to coding quickly, well we have to highlight it!

Kinect for Small Basic

1. Download Small Basic 1.1 http://www.microsoft.com/en-us/download/details.aspx?id=46392

2. Download the Kinect for Small Basic installer here http://bit.ly/Kinect4SmallBasic

3. See Small Basic tutorials and samples here: http://smallbasic.com/

4. Get a Kinect for Windows Sensor: http://www.microsoft.com/en-us/kinectforwindows/purchase/default.aspx

Programmers Guide

Kinect for Small Basic is a set of extension objects for Small Basic which allows anyone to program with the Microsoft Kinect Sensor and the information that it captures.  Here are examples of what you can do with Kinect for Small Basic:

  • Show the color, infrared, depth, body index, and body sensor data
  • Capture images from the color, infrared, depth, and body index sensors
  • Replace the background behind people in the foreground with another image.  This is similar to chroma key compositing or “green screen” processing.
  • Get the position and orientation of 26 different “joints” in up to 6 human bodies in both 3D space and on the screen
  • Get the open/closed state of the hands of up to 6 humans in front of the sensor
  • Get the lean angle of up to 6 humans in front of the sensor
  • Get the position and orientation of the faces of up to 6 humans in front of the sensor

Programmers Reference

You will notice that three new objects now appear in the IntelliSense object list: KinectBodyList, KinectFaceList, and KinectWindow.  All of the Kinect capabilities available in Small Basic are accessed through these objects.  Some capabilities of the Kinect sensor are not available in Kinect for Small Basic at this time; they remain available to developers who use Visual Studio and the full Kinect for Windows SDK.


Project Information URL: http://bit.ly/Kinect4SmallBasic

Project Download URL: http://bit.ly/Kinect4SmallBasic





There can be only one...


Last week the Kinect for Windows team made an important announcement, one that is actually great news.

Since the first Kinect device came out there's been a great deal of confusion about which device, Xbox or Windows, does what, and which one can be "officially" used, etc. With this step, the confusion will hopefully fade away...

Microsoft to consolidate the Kinect for Windows experience around a single sensor

At Microsoft, we are committed to providing more personal computing experiences. To support this, we recently extended Kinect’s value and announced the Kinect Adapter for Windows, enabling anyone with a Kinect for Xbox One to use it with their PCs and tablets. In an effort to simplify and create consistency for developers, we are focusing on that experience and, starting today, we will no longer be producing Kinect for Windows v2 sensors.

Over the past several months, we have seen unprecedented demand from the developer community for Kinect sensors and have experienced difficulty keeping up with requests in some markets. At the same time, we have seen the developer community respond positively to being able to use the Kinect for Xbox One sensor for Kinect for Windows app development, and we are happy to report that Kinect for Xbox One sensors and Kinect Adapter for Windows units are now readily available in most markets. You can purchase the Kinect for Xbox One sensor and Kinect Adapter for Windows in the Microsoft Store.

The Kinect Adapter enables you to connect a Kinect for Xbox One sensor to Windows 8.0 and 8.1 PCs and tablets in the same way as you would a Kinect for Windows v2 sensor. And because both Kinect for Xbox One and Kinect for Windows v2 sensors are functionally identical, our Kinect for Windows SDK 2.0 works exactly the same with either.

Microsoft remains committed to Kinect as a development platform on both Xbox and Windows. So while we are no longer producing the Kinect for Windows v2 sensor, we want to assure developers who are currently using it that our support for the Kinect for Windows v2 sensor remains unchanged and that they can continue to use their sensor.

We are excited to continue working with the developer community to create and deploy applications ...

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/04/02/microsoft-to-consolidate-the-kinect-for-windows-experience-around-a-single-sensor.aspx




PyKinect2


Vladimir Kolesnikov, Microsoft employee and part of the This Week on Channel 9 host team, has released a new wrapper that will make Python Kinect 2 devs smile... :)

PyKinect2

Enables writing Kinect applications, games, and experiences using Python. Inspired by the original PyKinect project on CodePlex.

Only color, depth, body and body index frames are supported in this version. PyKinectBodyGame is a sample game. It demonstrates how to use Kinect color and body frames.


Project Information URL: https://github.com/kinect/PyKinect2

Project Source URL: https://github.com/kinect/PyKinect2





One Kinect v2, Multiple Devices


Tango Chen, Friend of the Gallery, provides an amazing, inspirational post... I don't think I've seen anything like this!

Motion Server – One Kinect v2 on Multiple Devices

One day I felt that using one Kinect on one PC wasn't enough. There could be more interesting things to do with multiple screens.

So I considered sharing the Kinect data with multiple devices. You may know that I created an app called Kv2 Viewer that can access Kinect data on the phone. With similar technology, I successfully sent Kinect data to multiple devices.

First I wrote a server program to turn a Win 8.1 PC into a server. Then all other clients that want the Kinect data just need to connect to this PC.

Here are the simple demos I made in the video:

Body views

...

They all get the joint positions and joint states first and generate the body views on screen, rather than receiving body view images from the server.
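Tango Chen hasn't published the server source, so the following is only a guess at its shape: pack each tracked body's joint data into a small binary message and broadcast it over the network, letting clients rebuild the body view locally:

// Tango Chen's server isn't published; this is only a guess at its shape.
// Pack each tracked body's joints into a binary message and broadcast it
// over UDP so clients can rebuild the body view locally.
using System.IO;
using System.Net;
using System.Net.Sockets;
using Microsoft.Kinect;

class BodyBroadcaster
{
    private readonly UdpClient _udp = new UdpClient();
    private readonly IPEndPoint _clients = new IPEndPoint(IPAddress.Broadcast, 9000);

    // Call this with the refreshed body array on every BodyFrame.
    public void Send(Body[] bodies)
    {
        using (var stream = new MemoryStream())
        using (var writer = new BinaryWriter(stream))
        {
            foreach (Body body in bodies)
            {
                if (body == null || !body.IsTracked) continue;
                writer.Write(body.TrackingId);
                writer.Write(body.Joints.Count);
                foreach (var pair in body.Joints)
                {
                    writer.Write((int)pair.Key);                  // joint type
                    writer.Write((int)pair.Value.TrackingState);  // joint state
                    writer.Write(pair.Value.Position.X);          // camera space, meters
                    writer.Write(pair.Value.Position.Y);
                    writer.Write(pair.Value.Position.Z);
                }
            }
            byte[] packet = stream.ToArray();
            _udp.Send(packet, packet.Length, _clients);   // ~3 KB for six bodies
        }
    }
}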

Girls staring at target


The girls would stare at you from different directions (according to your head position). And if you act like you're pointing at something, they would stare at that (according to your hand position).

Project Information URL: http://tangochen.com/blog/?p=2073






GesturePak v2


Kinect MVP Carl Franklin has done it again. He's not only released GesturePak v2, he's also released the source for it.

Carl is a long-time Friend of the Gallery, and recently the Kinect Team highlighted GesturePak.

GesturePak v2 simplifies creation of gesture-controlled apps

What do you do after you’ve built a great app? You make it even better. That’s exactly what Carl Franklin, a Microsoft Most Valuable Professional (MVP), did with GesturePak. Actually, GesturePak is both a WPF app that lets you create your own gestures (movements) and store them as XML files, and a .NET API that can recognize when a user has performed one or more of your predefined gestures. It enables you to create gesture-controlled applications, which are perfect for situations where the user is not physically seated at the computer keyboard.

GesturePak v2 simplifies the creation of gesture-controlled apps. This image shows the app in edit mode.

Franklin’s first version of GesturePak was developed with the original Kinect for Windows sensor. For GesturePak v2, he utilized the Kinect for Windows v2 sensor and its related SDK 2.0 public preview, and as he did, he rethought and greatly simplified the whole process of creating and editing gestures. To create a gesture in the original GesturePak, you had to break the movement down into a series of poses, then hold each pose and say the word “snapshot,” during which a frame of skeleton data was recorded. This process continued until you captured each pose in the gesture, which could then be tested and used in your own apps.
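GesturePak's actual API isn't shown in the post, but the pose-sequence idea it describes is easy to picture. Here's a toy sketch (all names hypothetical, nothing here is GesturePak code): a gesture is an ordered list of poses, and a pose matches when every recorded joint is within a tolerance of the live skeleton:

// A toy illustration only; none of these names come from GesturePak.
// A gesture is an ordered list of poses (joint-position snapshots), and a
// pose matches when every recorded joint is within a distance tolerance.
using System;
using System.Collections.Generic;
using Microsoft.Kinect;

class PoseSequenceMatcher
{
    private readonly List<Dictionary<JointType, CameraSpacePoint>> _poses;
    private readonly float _tolerance;    // meters, per tracked joint
    private int _nextPose;

    public PoseSequenceMatcher(
        List<Dictionary<JointType, CameraSpacePoint>> poses, float tolerance = 0.15f)
    {
        _poses = poses;
        _tolerance = tolerance;
    }

    // Feed every body frame; returns true once all poses are hit in order.
    // (A real matcher, as GesturePak's feature list notes, would also limit
    // the time allowed to complete the sequence.)
    public bool Update(Body body)
    {
        Dictionary<JointType, CameraSpacePoint> target = _poses[_nextPose];
        foreach (var pair in target)
        {
            CameraSpacePoint live = body.Joints[pair.Key].Position;
            float dx = live.X - pair.Value.X;
            float dy = live.Y - pair.Value.Y;
            float dz = live.Z - pair.Value.Z;
            if (Math.Sqrt(dx * dx + dy * dy + dz * dz) > _tolerance)
                return false;             // current pose not matched yet
        }

        _nextPose++;                      // pose hit, move to the next one
        if (_nextPose < _poses.Count) return false;
        _nextPose = 0;                    // reset for the next performance
        return true;                      // whole gesture matched
    }
}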

...

Another big change is the code itself. GesturePak v1 is written in VB.NET. GesturePak v2 was re-written in C#. (Speaking of coding, see the green box above for Franklin’s advice to devs who are writing WPF apps.)

Franklin was surprised by how easy it was to adapt GesturePak to Kinect for Windows v2. He acknowledges there were some changes to deal with—for instance, “Skeleton” is now “Body” and there are new JointType additions—but he expected that level of change. “Change is the price we pay for innovation, and I don't mind modifying my code in order to embrace the future,” Franklin says.

He finds the Kinect for Windows v2 sensor improved in all categories. “The fidelity is amazing. It can...

Carl Franklin offered these words of technical advice for devs who are writing WPF apps:

  • If you want to capture video, use SharpAVI (http://sharpavi.codeplex.com/)
  • If you want to convert the AVI to other formats, use FFmpeg (http://ffmpeg.org/)
  • When building an app with multiple windows/pages/user controls that use the Kinect sensor, only instantiate one instance of a sensor and reader, then bind to the different windows
  • Initialize the Kinect sensor object and all readers in the Form Loaded event handler of a WPF window, not the constructor (see the sketch below)
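Here's a hedged sketch of those last two tips in practice, assuming a plain WPF window and the SDK 2.0 API:

// A hedged sketch of the last two tips: a single shared sensor and reader,
// initialized in the Loaded handler rather than the window's constructor.
using System.Windows;
using Microsoft.Kinect;

public partial class MainWindow : Window
{
    // One instance for the whole app; other windows/pages bind to these
    // instead of opening their own.
    public static KinectSensor Sensor { get; private set; }
    public static BodyFrameReader BodyReader { get; private set; }

    public MainWindow()
    {
        InitializeComponent();
        Loaded += OnLoaded;   // defer Kinect setup until the window is up
        Closing += (s, e) =>
        {
            if (BodyReader != null) BodyReader.Dispose();
            if (Sensor != null) Sensor.Close();
        };
    }

    private void OnLoaded(object sender, RoutedEventArgs e)
    {
        Sensor = KinectSensor.GetDefault();
        BodyReader = Sensor.BodyFrameSource.OpenReader();
        Sensor.Open();
    }
}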

Project Information URL: http://blogs.msdn.com/b/kinectforwindows/archive/2015/04/01/gesturepak-v2-simplifies-creation-of-gesture-controlled-apps.aspx

GesturePak v2

Gesture Recording and Recognition Toolkit

For Kinect for Windows v2.0

GesturePak is both an app that records you making gestures and an SDK for WPF (.NET 4.5) that determines when a user has made those gestures.


Features:

  • Easily record yourself or someone else performing a complex gesture.
  • GesturePak saves gestures as xml files.
  • Allows you to edit and tweak your gestures.
  • Options to track each of 25 joints.
  • Options to track X, Y, and/or Z axis.
  • Limit the required time in which to perform the gesture.
  • Increase or decrease accuracy required.

Documentation:

GesturePak Recorder Documentation
The GesturePak Recorder app lets you record and edit gestures.

GesturePak API Documentation
The GesturePak API lets you recognize gestures in your .NET applications.

Project Information URL: http://www.franklins.net/gesturepak2.aspx

Project Download URL: http://www.franklins.net/gesturepak2.aspx

Project Source URL: https://github.com/carlfranklin/GesturePak2V1





"Coding for Kinect with Scratch" MVA Course


It's been a long time, too long, since we've covered Kinect2Scratch from Stephen Howell.

You've all heard of Scratch?

Scratch is a programming language that makes it easy to create your own interactive stories, animations, games, music, and art -- and share your creations on the web.

As young people create and share Scratch projects, they learn important mathematical and computational ideas, while also learning to think creatively, reason systematically, and work collaboratively.

You'll remember the times we've highlighted Kinect2Scratch here before.

Now what if I were to tell you Stephen has created and shared a full, free course on it? Woot!

Coding for Kinect with Scratch

Would you like to know how to build natural user interface (NUI) programs using Microsoft Kinect and the Scratch programming language? Check out this course, and explore how NUI applications can respond to users' movements and gestures and how the Kinect motion-sensing camera enables you to build cool body-tracking software.

Even if you can't program in an advanced language yet, you can use Scratch, the programming environment (from MIT) for beginners. Learn how to set up your computer to use Scratch and Kinect, and then see how to build NUI applications with ease. Start with tracking a single point, like a user's hand, and end by building motion-sensitive multiplayer games for Kinect using Scratch. Don't miss it!

(NOTE: To follow along with the course, you should have Kinect v1 or Kinect v2, along with Windows 7, Windows 8, or Windows 8.1, and associated SDKs, plus Scratch.)

Instructors | Stephen Howell - Microsoft Ireland Academic Engagement Manager


Project Information URL: http://scratch.saorog.com/, http://www.microsoftvirtualacademy.com/training-courses/coding-for-kinect-with-scratch





Handpose - Look Ma, No Keyboard!

$
0
0

Today's inspirational post shows something that could become awesome... We all gesture at our PCs; it would be awesome if our PCs understood them (then again, maybe not... lol :)

All hands, no keyboard: New technology can track detailed hand motion

Or, let’s say you speak sign language and are trying to communicate with someone who doesn’t. Imagine a world in which a computer could track your hand motions to such a detailed degree that it could translate your sign language into the spoken word, breaking down a substantial communication barrier.

Researchers at Microsoft have developed a system that can track – in real time – all the sophisticated and nuanced hand motions that people make in their everyday lives.

The Handpose system could eventually be used by everyone from law enforcement officials directing robots into dangerous situations to office workers who want to sort through e-mail or read documents with a few flips of the wrist instead of taps on a keyboard.

It also opens up vast possibilities for the world of virtual reality video gaming, said Lucas Bordeaux, a senior research software development engineer with Microsoft Research, which developed Handpose. For one thing, it stands to resolve the disorienting feeling people get when they’re exploring virtual reality and stick their own hand in the frame, but see nothing.

Microsoft researchers will present the Handpose paper at this year’s CHI conference on human-computer interaction in Seoul, where it has received a Best of CHI Honorable Mention Award.

Handpose uses a camera to track a person’s hand movements. The system is different from previous hand-tracking technology in that it has been designed to accommodate much more flexible setups. That lets the user do things like get up and move around a room while the camera follows everything from zig-zag motions to thumbs-up signs, in real time.

The system can use a basic Kinect system, just like many people have on their own Xbox game console at home. But unlike the current home model, which tracks whole body movements, this system is designed to recognize the smaller and more subtle movements of the hand and fingers.

It turns out, it’s a lot more difficult for the computer to figure out what a hand is doing than to follow the whole body.

...

In the long run, the ability for computers to understand hand motions also will have important implications for the future of artificial intelligence, said Jamie Shotton, a principal researcher in computer vision who worked on the project.

That’s because it provides another step toward helping computers interpret our body language, including everything from what kind of mood we are in to what we want them to do when we point at something.

In addition, the ability for computers to understand more nuanced hand motions could make it easier for us to teach robots how to do certain things, like open a jar.

“The whole artificial intelligence space gets lit up by this,” Shotton said.

Project Information URL: http://blogs.microsoft.com/next/2015/04/17/all-hands-no-keyboard-new-technology-can-track-detailed-hand-motion/

Accurate, Robust, and Flexible Real-time Hand Tracking

Abstract

We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.


Project Information URL: http://research.microsoft.com/apps/pubs/default.aspx?id=238453



Kinect 2 Unity 5


Kinect and Unity, Peanut Butter and Chocolate (or you'd think so, given how often we cover them here... ;).

Now James Ashley has put together something just as yummy with his Unity 5 and Kinect 2 tutorial...

Unity 5 and Kinect 2 Integration


Until just this month one of the best Kinect 2 integration tools was hidden, like Rappaccini’s daughter, inside a walled garden. Microsoft released a Unity3D plugin for the Kinect 2 in 2014. Unfortunately, Unity 4 only supported plugins (bridges to non-Unity technology) if you owned a Unity Pro license, which typically cost over a thousand dollars per year.

On March 3rd, Unity released Unity 5, which includes plugin support in their free Personal edition – making it suddenly very easy to start building complex experiences like point cloud simulations that would otherwise require a decent knowledge of C++. In this post, I’ll show you how to get started with the plugin and start running a Kinect 2 application in about 15 minutes.

(As an aside, I always have trouble keeping this straight: Unity has plugins, openFrameworks has add-ons, while Cinder has bricks. Visual Studio has extensions and add-ins as well as NuGet packages after a confusing few years of rebranding efforts. There may be a difference between them but I can’t tell.)

1. First you are going to need a Kinect 2 and the Unity 5 software. If you already have a Kinect 2 attached to your XBox One, then this part is easy. You’ll just need to buy a Kinect Adapter Kit from the Microsoft store. This will allow you to plug your XBox One Kinect into your PC. The Kinect for Windows 2 SDK is available from the K4W2 website, though everything you need should automatically install when you first plug your Kinect into your computer. You don’t even need Visual Studio for this. Finally, you can download Unity 5 from the Unity website.

...


9. To build the app, select File | Build & Run from the top menu. Select Windows as your target platform in the next dialog and click the Build & Run button at the lower right corner. Another dialog appears asking you to select a location for your executable and a name. After selecting an executable name, click on Save in order to reach the final dialog window. Just accept the default configuration options for now and click on “Play!”. Congratulations. You’ve just built your first Kinect-enabled Unity 5 application!
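As an aside, the point cloud simulations mentioned above start from the raw depth buffer. Here's a hedged sketch, assuming the same Kinect v2 Unity plugin, of grabbing the 512x424 depth frame on each Update:

// A hedged sketch, assuming the Kinect v2 Unity plugin: pull the raw
// 512x424 depth buffer every frame, the raw material for a point cloud.
using UnityEngine;
using Windows.Kinect;

public class DepthGrabber : MonoBehaviour
{
    private KinectSensor _sensor;
    private DepthFrameReader _reader;
    private ushort[] _depth;   // one depth value per pixel, in millimeters

    void Start()
    {
        _sensor = KinectSensor.GetDefault();
        _reader = _sensor.DepthFrameSource.OpenReader();
        var desc = _sensor.DepthFrameSource.FrameDescription;   // 512 x 424
        _depth = new ushort[desc.LengthInPixels];
        if (!_sensor.IsOpen) _sensor.Open();
    }

    void Update()
    {
        using (DepthFrame frame = _reader.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.CopyFrameDataToArray(_depth);
            // _depth now holds the latest depth image; feed it to a mesh or
            // particle system to visualize it as a point cloud.
        }
    }

    void OnApplicationQuit()
    {
        if (_reader != null) _reader.Dispose();
        if (_sensor != null && _sensor.IsOpen) _sensor.Close();
    }
}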

Project Information URL: http://www.imaginativeuniversal.com/blog/post/2015/03/27/Unity-5-and-Kinect-2-Integration.aspx




