For those of you dismissing the Kinect as a toy, make sure you watch this week's video blog. Tinkerers from Microsoft Research Cambridge, along with some UK and Canadian university types, demonstrated some pretty amazing real-time 3D modeling capabilities last week at Siggraph. Do I know exactly how this might be applied usefully in the future? No. Is it patently obvious to me that someone will figure it out? Absolutely.
My full thoughts on these developments, plus some other notes from the show in Vancouver, are in this week's video blog:
Reference materials for this week's video blog
First, here's the full video presentation from Siggraph. No, there isn't any sound:
Here's the Engadget write-up with their thoughts on the Kinect presentation.
And here's Engadget's write-up of the face scanning done with Kinect.
You can see a prototype of AutoCAD integrated with Kinect, including real-time point clouds and creating "pipes" as if you were playing a Kinect game like football or tennis: http://through-the-interface.typepad.com/through_the_interface/kinect/

Where is this going? I don't know, but it's been quite a bit of fun seeing what is possible with cheap hardware and just a few hours of software development.
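The "real-time point clouds" in demos like these come from back-projecting each depth pixel through the camera's pinhole model. Here's a minimal sketch of that step; the intrinsic values (focal length, principal point) are illustrative approximations for a 640x480 Kinect-class depth sensor, not numbers from the article:

```python
import numpy as np

# Hypothetical Kinect-style intrinsics (illustrative only):
# focal lengths and principal point for a 640x480 depth image.
FX, FY = 525.0, 525.0
CX, CY = 319.5, 239.5

def depth_to_point_cloud(depth):
    """Back-project a depth image (meters) into an N x 3 point cloud
    using the standard pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a flat wall 2 m in front of the sensor
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth)
```

Fusing clouds like this from frame to frame into one consistent surface model is the hard part the Siggraph demo showed running in real time.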
Sam: Excellent video! One comment, though: what many clients usually want out of the point cloud are 3D models of the individual elements in the building. In other words, they want a 3D model of the pipes, the beams, the columns, the pumps, the pressure vessels, etc. Those parts need to be segmented somehow from the point cloud before you can model them. What the Kinect video shows is a continuous surface model (in 3D) of the environment. This may be good for things like volume determination or if you're imaging individual parts, but not as useful when you want to know where the centerline of a 6" pipe is in relation to the floor or ceiling.

Also, you mention the geo-referencing issue, so that you can walk through a building and when you exit you have a complete model. You've probably already seen this, but just in case, check out http://www.mantis-vision.com/ They have developed a camera, very similar to the Kinect in concept (albeit with much better performance and a higher
Hi Kamel - I completely get what you're saying vis-à-vis the individual elements. Clearly, the Kinect example is not there yet. But how long until you can extract elements in real time just by highlighting them with your finger or something? I think it's only a matter of time.

I have seen Mantis Vision as well. Here are my thoughts on them: http://www.sparpointgroup.com/Blogs/Head-in-the-Point-Clouds/Mantis-Vision-starts-to-make-some-noise/

It's my impression, though, that they're not geo-referenced. Maybe I'm wrong?
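The centerline question raised in the exchange above can be made concrete. Once the points belonging to a single pipe have been isolated (the segmentation step itself, which is the hard part, is assumed here), estimating the centerline is a straightforward principal-axis fit. This is a sketch, not any tool's actual method; the synthetic pipe dimensions below are illustrative:

```python
import numpy as np

def centerline(points):
    """Estimate a pipe's axis from points already segmented as
    belonging to that pipe: the centroid gives a point on the
    centerline, and the first principal direction approximates
    the axis."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

# Synthetic 6" (0.1524 m diameter) pipe running along the x-axis,
# 3.0 m above the floor (illustrative values)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 500)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
r = 0.0762  # pipe radius in meters
pts = np.stack([t, r * np.cos(theta), 3.0 + r * np.sin(theta)], axis=1)

c, axis = centerline(pts)
# axis is close to (1, 0, 0); c[2] recovers the ~3.0 m height
# of the centerline above the floor
```

A real pipeline would use something more robust (e.g. RANSAC cylinder fitting) to handle outliers and partial scans, but the idea is the same: segment first, then fit a parametric element to answer questions like "where is the centerline relative to the floor?"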
range, n— the distance, in units of length, between a point in space and an origin fixed to the 3D imaging system that is measuring that point. DISCUSSION— (1) In general, the origin corresponds to the instrument origin.