Recently, I’ve been messing around with Microsoft’s Kinect sensor (v2) within Unity3D to see what interesting results I can achieve. I’ve only really done large-scale projects with the previous version of their sensor. But I must say, this new sensor is a huge improvement – and its tracking is on point!
For such a complex device, the developer API is crystal clear and well supported. The only downside is that the SDK is Windows-only (so I couldn’t develop on my Mac) – which isn’t the end of the world, just an inconvenience.
Depth & Infrared Point Cloud
I wanted to experiment with the depth and infrared sensors to get a point cloud representation of my room within Unity3D.
I achieved this by accessing both the depth and infrared sensors and creating a reader for each. A reader is what actually delivers the data from a sensor – think of it as a ‘feed’ of data.
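Roughly, the setup looks like this – a minimal sketch using the `Windows.Kinect` namespace from Microsoft’s Kinect for Unity plugin. The class and field names here are my own; only the `KinectSensor`, frame source, and reader types come from the SDK.

```csharp
using UnityEngine;
using Windows.Kinect; // Kinect for Unity plugin (Kinect for Windows SDK 2.0)

public class KinectPointCloud : MonoBehaviour
{
    private KinectSensor sensor;
    private DepthFrameReader depthReader;
    private InfraredFrameReader infraredReader;
    private ushort[] depthData;
    private ushort[] infraredData;

    void Start()
    {
        sensor = KinectSensor.GetDefault();
        if (sensor == null) return;

        // Open a reader – the 'feed' – for each source we care about
        depthReader = sensor.DepthFrameSource.OpenReader();
        infraredReader = sensor.InfraredFrameSource.OpenReader();

        // Buffers sized to the frame (512 x 424 for the v2 depth/IR sensors)
        var desc = sensor.DepthFrameSource.FrameDescription;
        depthData = new ushort[desc.LengthInPixels];
        infraredData = new ushort[desc.LengthInPixels];

        if (!sensor.IsOpen) sensor.Open();
    }
}
```

This only runs inside Unity on Windows with the sensor plugged in, of course – there’s no way to sketch around that dependency.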
Once I had the data coming into Unity, I needed to actually do something with it. I dynamically built a 65,000-vertex Mesh with a MeshTopology of Points – to give me the point cloud mesh. Each point represents a pixel from the depth sensor, where each pixel value is the distance from the sensor in millimetres. This gave me a very accurate representation of my room in 3D via the point cloud.
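Building that mesh might look something like the sketch below. One detail worth noting: the full 512 × 424 depth frame is ~217,000 pixels, which blows past Unity’s 65,535-vertex-per-mesh limit, so a grid like 256 × 212 (sampling every other pixel) keeps it under the cap. The grid dimensions here are my assumption, not from the original post.

```csharp
// Assumed downsampled grid: 256 x 212 (~54k points) stays under
// Unity's 65,535-vertex limit for a single Mesh.
const int Width = 256, Height = 212;

Mesh CreatePointMesh()
{
    var mesh = new Mesh();
    var vertices = new Vector3[Width * Height];
    var indices = new int[Width * Height];

    for (int y = 0; y < Height; y++)
    {
        for (int x = 0; x < Width; x++)
        {
            int i = y * Width + x;
            vertices[i] = new Vector3(x, -y, 0); // depth fills in Z each frame
            indices[i] = i;                      // one index per point
        }
    }

    mesh.vertices = vertices;
    // MeshTopology.Points renders each vertex as a single point
    mesh.SetIndices(indices, MeshTopology.Points, 0);
    return mesh;
}
```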
I took this one step further by colouring each point with the pixel data given from the IR sensor.
The Mesh is then recalculated every frame (Update) to give me a real time scan of the room I was sat in.
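The per-frame step might be sketched like this: pull the latest depth and IR frames, push the depth values (millimetres, so divided by 1000 for Unity metres) into each vertex’s Z, and grey-scale each point from the IR intensity. The `src` index maps the downsampled grid back to the full 512-wide frame; again, the exact mapping and scaling here are my assumptions.

```csharp
void Update()
{
    // Grab the newest frame from each reader; frames must be disposed
    using (var depthFrame = depthReader.AcquireLatestFrame())
    {
        if (depthFrame != null) depthFrame.CopyFrameDataToArray(depthData);
    }
    using (var irFrame = infraredReader.AcquireLatestFrame())
    {
        if (irFrame != null) irFrame.CopyFrameDataToArray(infraredData);
    }

    var vertices = mesh.vertices;
    var colors = new Color[vertices.Length];

    for (int y = 0; y < Height; y++)
    {
        for (int x = 0; x < Width; x++)
        {
            int i = y * Width + x;
            int src = (y * 2) * 512 + (x * 2); // back to the 512 x 424 frame

            // Depth pixel is distance in millimetres; Unity works in metres
            vertices[i].z = depthData[src] / 1000f;

            // IR pixel is a 16-bit intensity; normalise to a grey colour
            float ir = infraredData[src] / 65535f;
            colors[i] = new Color(ir, ir, ir);
        }
    }

    mesh.vertices = vertices;
    mesh.colors = colors;
    mesh.RecalculateBounds();
}
```

One gotcha: the standard shaders ignore vertex colours, so the points need a simple vertex-colour shader to actually show the IR shading.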
In case you can’t make out what’s going on, what you’re seeing is my room – 3D scanned into Unity in real time. You might be able to make out me, sitting on my chair in the middle. And as you can see, each point has its own colour value given by the IR sensor.
This is a front isometric view – it almost looks like an image, but it’s not! It’s just all the points lining up perfectly to give the impression of a 2D flat image.