April 20, 2014

Head in the Point Clouds

Sam Pfeifle

Sam Pfeifle was Editor of SPAR Point Group from September 2010 - February 2013.

The coolest use of scanning for art ever

I've written about some cool 3D-data-based art, from Geodigital's Lidar as Art competition and real-time point cloud movies to an art exhibition using lidar to remember 9/11, but this work I've come across by Jonty Hurwitz trumps everything to date.

Just look at it:


That's a piece called "Rejuvenation," and it's representative of a slew of his works that combine these chrome tubes with distorted sculptures to produce images that are only "normal" when reflected. This combination of precision and seeming chaos, this incorporation of the viewer as a vital piece of the art itself, just floors me. Not only are the pieces aesthetically gorgeous, they have an innate quality that speaks to the analytical side of the brain that many works of art just don't engage.

Nor is that an accident. 

As outlined by the This Is Colossal blog here, Hurwitz says he has "always been torn between art and physics," and so was particularly struck by the anamorphic movement in art-making, which goes back more than 500 years. For these types of works, he describes his workflow this way:

For the anamorphic pieces it's an algorithmic thing, distorting the original sculptures in 3D space using 2πr or πr³. Much of it is mathematical, relying on processing power. There is also a lot of hand manipulation to make it all work properly too, as spatial transformations have a subtle sweet spot which can only be found by eye. Generally I will 3D scan my subject in a lab and then work the model using Mathematica or a range of 3D software tools. I think the π factor is really important in these pieces. We all know about this irrational number, but the anamorphic pieces really are a distortion of a “normal” sculpture onto an imaginary sphere with its centre at the heart of the cylinder.
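
Out of curiosity about what that kind of spatial transformation involves, here's a minimal sketch - emphatically not Hurwitz's actual algorithm, with the mirror radius and the toy "sculpture" both made up - that distorts a scanned point cloud by inverting it through a sphere centered on the cylinder's axis, in the spirit of the "imaginary sphere with its centre at the heart of the cylinder" he describes:

```python
import numpy as np

def invert_through_sphere(points, center, radius):
    """Map each point p to center + (radius**2 / |p - center|**2) * (p - center).

    Inversion through a sphere is its own inverse: applying it twice returns the
    original points. It's only a stand-in here for the kind of anamorphic
    distortion described above, not the artist's actual method.
    """
    offsets = points - center
    dist_sq = np.sum(offsets ** 2, axis=1, keepdims=True)
    return center + (radius ** 2 / dist_sq) * offsets

# A made-up stand-in for a scanned sculpture: 10,000 points hovering above the origin.
rng = np.random.default_rng(0)
sculpture = rng.normal(loc=[0.0, 0.0, 0.5], scale=0.1, size=(10_000, 3))

# Assume the chrome cylinder stands on the origin with a 0.15 m radius.
distorted = invert_through_sphere(sculpture, center=np.zeros(3), radius=0.15)
```

The hand-tuned "sweet spot" he mentions would come down to nudging parameters like that radius and re-checking the reflection by eye.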

I like the cut of this guy's jib. Of course, he's also a pretty visionary internet entrepreneur, creating micro-loan sites and all sorts of other brilliant uses of the Web, so his use of laser scanning is probably something he doesn't even think about much. But he stands to be a really nice evangelist for the technology, since he seems able to use it to extract beauty from the world while making people think.

This painting ought to look familiar to those of you converting point clouds to meshes: 

 

Yes, a painting. Acrylic on canvas, a meter square. Pretty crazy. But none of this is even the most mind-blowing stuff he's produced. My absolute favorites are the ones that only come together when you look at them from a very specific perspective. Check out this one, which he now calls "Co-Founder," after originally calling it "The Thinker":

 

There's something there that really speaks to me of problem solving and how 3D data can find solutions to problems people didn't even realize they had. Sometimes, everything just seems disjointed, impossible, until someone comes along and gets you to look at the problem from just the right angle, and then everything comes together.

Maybe you're not making art, but, if you're doing it right, I bet you're using 3D to get people to look at something from just the right angle. 


Permanent link

Sense and sensibility

One of the great selling points for 3D data acquisition is that it allows people to know their environments more intimately. To have an exact digital replica of your plant, or your crime scene, or your construction site is to know its every millimeter in a new and different way. How big is that space? How far away is that shell casing? Those are questions to which you can know the answer in seconds.
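
In practice, those answers really are a couple of lines of code once the scan is loaded. A minimal sketch, assuming the cloud has been exported as a plain-text XYZ file in meters (the file name and the two picked point indices are hypothetical):

```python
import numpy as np

# Assume a scan exported as a plain XYZ text file, one "x y z" row per point, in meters.
cloud = np.loadtxt("crime_scene.xyz")

# How big is that space? Axis-aligned extents of the whole cloud.
extents = cloud.max(axis=0) - cloud.min(axis=0)
print("room is roughly {:.2f} x {:.2f} x {:.2f} m".format(*extents))

# How far away is that shell casing? Distance between two points picked in a viewer.
doorway, casing = cloud[1200], cloud[98431]   # hypothetical picked indices
print("casing is {:.3f} m from the doorway".format(np.linalg.norm(casing - doorway)))
```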

But we don't often consider that 3D data capture can also allow your digital devices, the machines and buildings around you, to know YOU more intimately.

Such is the line of thinking triggered by a great blog post Robert Scoble put together following CES (I recommend following him on Twitter, as well). As Rackspace's startup liaison officer, his job is basically to be constantly seeking out the new and exciting. He's completely geeked-up about the new PrimeSense sensor, which is more accurate than the one the company licensed to Microsoft for the Kinect, yet far smaller and able to be embedded in just about any electronic device anywhere. He called it a "world changing 3D sensor."

"World changing" is big talk. Here's a video where he gets pretty enthusiastic:

 

And, no, it's not just for consumer applications. Get a load of this:

At CES I had dinner with execs from GM and Ford and they are thinking about how to use these sensors in cars. Both to personalize the car (with a sensor like this they could tell you are sitting in the driver's seat) but also to do things like wake-up alarms if you are falling asleep while driving. Also, hand gestures will be more efficient in many ways than voice systems, particularly for moving around user interfaces.

What about an alarm that goes off if someone is about to turn the wrong valve out in the field? What about a sensor system that has the design as its database and is constantly checking what's being built against the design and alerts upon deviation? What about a sensor on every vehicle in the yard that alerts upon proximity to a human-shaped object? 

Many of these new applications may also rely on having an accurate picture of reality from which to start. The active sensors might need to rely on that digital representation of truth in order to notice when something's going wrong. And the data they collect can be stored and interrogated in any number of ways. 
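
To make the deviation idea concrete, here's a minimal sketch of the kind of check such a system might run - assuming the as-designed model and today's scan are already in the same coordinate system and stored as N x 3 arrays (the file names and the 25 mm tolerance are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_report(design_points, scan_points, tolerance=0.025):
    """Flag scanned points farther than `tolerance` meters from the as-designed model."""
    tree = cKDTree(design_points)
    distances, _ = tree.query(scan_points)          # nearest design point per scanned point
    return distances, scan_points[distances > tolerance]

design = np.load("as_designed.npy")                 # hypothetical (N, 3) arrays, in meters
scan = np.load("site_scan_today.npy")

distances, deviating = deviation_report(design, scan)
print(f"{len(deviating)} of {len(scan)} scanned points are more than 25 mm off the design")
print(f"worst deviation: {distances.max() * 1000:.1f} mm")
```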

Scoble puts it this way: "The sensual, contextual age of consumer electronics is here, ready or not." But why would it stop at consumer electronics?


Permanent link

Live on stage: 3D projection mapping

There was no shortage of 3D-related stories coming out of CES, and I've covered a few of them (see my most recent entries on the blog), but none of them caught the ear the way the band Love in the Circus did. They found themselves front and center at CES as guinea pigs for the use of 3D projection mapping as part of the live band experience, thanks to a little donation from Sony.

While I've written about 3D projection mapping a few times before, and have already mentioned the opportunity here for laser scanning professionals, this is the first time I've seen it incorporated alongside a live performance, and it got my wheels turning as the subject of this week's SPARVlog. Take a look:

 

Additional resources:

Billboard article about Love in the Circus

Video made by the band about how they did the projection mapping


Permanent link

Lidar: All up in your grill

In my first CES round-up, I pointed out the buzz Lexus was getting for its self-driving vehicle, which features the well-known Velodyne scanner spinning around on the roof. How I missed Audi's much-sleeker entry into the lidar-based auto-driving market I'm not sure. It's pretty rad.

Car and Driver does a great job of summing up the user experience in their blog entry here detailing the test drive:

Traffic-jam assist combines the sensory information from Audi’s existing radar-based adaptive cruise control system and the car’s lane-keeping assist camera (which monitors lane markers) with a compact, bumper-mounted LIDAR (Light Detection and Ranging) laser. Audi is particularly proud of this LIDAR unit, not the least because it packs all of the sensory ability of the giant, roof-mounted LIDAR towers seen on Lexus’s autonomous LS, Google’s fleet of self-driving Priuses, and the Pikes Peak autonomous Audi TT into a brick-sized piece of hardware. Software allows the system to steer the car; if any significant torque is applied to the steering wheel, control is relinquished back to the driver.

Well, I don't know about "giant, roof-mounted lidar towers." That seems a bit overblown. But there's no arguing that Audi's version is far sleeker:


(For those still not understanding my headline, go here. But only if you have a sense of humor.)

I'd consider this a pretty impressive development. The lidar sensor gives the car 145 degrees of visibility, so it neither smacks into the car ahead nor misses cars trying to cut in front of it in traffic. All of that while not looking ugly. Well done, Audi.
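
For a sense of the decision that bumper-mounted sensor is making many times a second, here's a back-of-the-envelope sketch - the 145-degree figure comes from the article, but the data layout, the threshold, and the fake sweep are all invented for illustration:

```python
import numpy as np

FOV_DEG = 145.0        # field of view reported for the bumper-mounted lidar
ALERT_RANGE_M = 20.0   # assumed following-distance threshold for a traffic-jam crawl

def closest_obstacle(angles_deg, ranges_m):
    """Return the nearest return inside the field of view (0 degrees = straight ahead)."""
    in_view = np.abs(angles_deg) <= FOV_DEG / 2.0
    visible = ranges_m[in_view]
    return visible.min() if visible.size else None

# One fake sweep: a car cutting in about 12 m ahead, slightly to the left.
angles = np.array([-60.0, -8.0, 0.0, 35.0, 80.0])
ranges = np.array([45.0, 12.0, 30.0, 55.0, 9.0])

nearest = closest_obstacle(angles, ranges)
if nearest is not None and nearest < ALERT_RANGE_M:
    print(f"obstacle at {nearest:.1f} m - slow down or hand back to the driver")
```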

Engadget got a nice interview from CES in the Audi booth, which was by all descriptions very impressive. It outlines Audi's vision for the "piloted car" and how the company thinks people will use the functionality:

 

Am I the only one slightly amused by the constant assurances that the piloted function doesn't work on the highway because driving an Audi on the highway is so much fun that no one would want to give it up? I think they're just not quite ready to promote it at highway speeds. Try my commute to work in the Hyundai - I don't think an Audi would make 25 miles on the Maine Turnpike into a laugh riot.

Regardless, all of this activity shows that carmakers are making a real commitment to the capture of 3D data to inform the performance of their vehicles and I think it's only a matter of time before lidar is built into vehicles of all sorts on construction sites and in industrial facilities for safety purposes. 

But think about what else could be done here. What if, while these vehicles were scanning for obstructions, they were also constantly replenishing your as-built point cloud documentation? Like a Viametris unit in the front grill of your bucket loader. Couldn't you just do a data dump every night and then let it register overnight and, voila, updated point cloud?
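
Setting aside the overnight registration step, the "fold tonight's drive-by into the master cloud" half of that idea is simple to sketch. Assuming the scans are already registered into the same site coordinate system, a toy nightly pass might just de-duplicate the merged cloud on a 5 cm voxel grid (the file names and voxel size are made up):

```python
import numpy as np

VOXEL_M = 0.05  # assumed 5 cm grid for thinning the merged cloud

def merge_scans(master, new_scan, voxel=VOXEL_M):
    """Append a new (already registered) scan to the master cloud, keeping one point per voxel."""
    combined = np.vstack([master, new_scan])
    voxel_ids = np.floor(combined / voxel).astype(np.int64)
    _, keep = np.unique(voxel_ids, axis=0, return_index=True)   # first point per occupied voxel
    return combined[np.sort(keep)]

# Hypothetical nightly run on the yard's as-built documentation.
master = np.load("asbuilt_master.npy")
today = np.load("loader_grill_scan.npy")
np.save("asbuilt_master.npy", merge_scans(master, today))
```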

Something to consider while you count your pennies in advance of the release of this new A6.


Permanent link

3D wanted dead or alive at CES

Coverage of 3D's presence at the Consumer Electronics Show in Las Vegas this week has been pretty schizophrenic: On one hand, the mass media is all over autonomous car news with lidar as a key component; on the other, people are declaring 3D "dead." What's going on?

Well, I think it's certainly true, as the above-linked article declares, that the latest round of buzz surrounding 3D display has burned itself out. On one hand, 3D display is no longer all that impressive or futuristic. On the other, the consumer experience still pretty much sucks. So it's not surprising that "every big TV maker at CES has waved a clear white flag on trying to sell 3D TV as an important feature."

But that's good news for us! If it's true that "The 3D TV won its tortured, protracted war — you can buy a 3D TV anywhere and at any time — and nobody could care less," then it's likely the 3D display will no longer be premium priced and those of you who actually enjoy looking at your 3D models and point clouds with a 3D display should be able to get one more affordably. If it's soon just another feature, like the SAP button, that would be a great thing for the 3D data industry.

As for the autonomous cars, well, people are pretty captivated by them. I was listening to sports talk radio last night and the hosts were going on and on about the news that Lexus has their own lidar-equipped prototype (particularly appealing is the fact that you can have the car drive you home from the bar, apparently - and, actually, I can get with that). Pretty sure this is a Velodyne on the roof here:

 

A world where lots of people are rocking Velodyne-enhanced driverless cars is still far, far away, but it seems likely that even the 2017 or 2018 version of my crappy Hyundai Elantra (not actually crappy at all) will have some kind of real-world sensor (lidar or otherwise) that will assist with driving to improve safety.

But for 3D data acquisition, the company probably making the biggest impact at CES is a company that's largely been out of the limelight, despite having technology in many of your hands: PrimeSense, developer of the technology behind the Kinect. 

They've got themselves a big "World of 3D Sensing" booth at the event and they've released a futuristic video to demonstrate some of the solutions they think are coming down the pipeline thanks to their 3D sensing device. You kinda have to watch it. It's alternately really cool and really cringe-inducing, but it's definitely well done:

 

I know, I know: Where's the point cloud?!?! Obviously, it's in the back end somewhere, with software churning away to process the data being collected. We've often heard from some quarters about the point cloud taking the place of the model, or about attaching more intelligence to the point cloud, but what if, as is happening in the close-range scanning field, the point cloud starts disappearing as the data processing happens in the middle to produce the desired deliverable (as with Creaform's handheld scanners, etc.)?

One of the most prominently featured companies in the PrimeSense pavilion is Matterport, whom we've written about previously and who do that very thing: They ingest the data from the Kinect-like collection device and immediately begin to produce a mesh of the environment as you scan.

Just print it out!

Finally, 3D printing continues to make an impact with the mainstream at CES. Business Insider was pretty impressed by MakerBot's new Replicator 2X, which can now print in two colors and is faster. Plus, it looks like MakerBot's founder has a book I'm going to have to review alongside Ping Fu's. As for 3D Systems, they're not making the hardware impact they made last year, when they really wowed people with the Cubify, but the CubeX is getting some love this year, as it allows for bigger objects to be printed faster (and it comes in pretty colors).

Rather, the big news is the announcement that 3D Systems has unveiled the beta version of what we've all been waiting for since the buy of Hypr3D (and, sort of, Rapidform and Geomagic): Cubify Capture, a service that allows you to upload photos and video that are turned immediately into 3D models that are suitable for printing.

Yes, it's scan-to-print for the consumer market!

Details from the press release:

The company plans to expand the services of its Cubify Capture portal to include a full suite of thematic scan-to-print web and mobile apps. Users can capture on the go and upload pictures or video to Cubify.com where a 3D model is generated automatically and saved in the user's Cubify account. These 3D models can be used for further modeling, customizing or fusing with other elements and readied for printing at home or through Cubify cloud printing, in monochrome, durable plastic or full color.

The company plans to develop a series of Cubify Capture apps starting with Cubify Capture: Faces, designed specifically to capture facial features and seamlessly turn them into customized 3D printable memorabilia. Cubify Capture: Faces for mobile will also be demoed at CES.

"We're thrilled to invite users and educators to explore and experiment with the beta release of Cubify Capture, the first true real-world-to-print capture tool," said Cathy Lewis, CMO, 3D Systems. "We are excited to see what our growing Cubify community will capture and print."

I'm not entirely sure what they are yet, but I'm certain there are commercial applications for this kind of service and we'll be hearing about them in short order.


Permanent link

Should you listen to the VCs in 2013?

One thing I know about news like 3D Systems' buy of Geomagic (and Rapidform...), or the many other acquisitions that have rippled through the 3D data acquisition space in the last couple of years, is that it's attracting the attention of folks with money to invest. When start-ups start cashing out, that smells like opportunity to people who know how to make money work.

That should be a good thing for our market. Money breeds innovation and is likely to spur more technologists to consider start-ups that work with 3D data, hardware or software (but probably software), and that should lead to better products that will help lots of people do their jobs better - which will lead to more money flowing in... You get the idea. 

And where does that money come from? Often, the venture capitalists (the "VCs," for those of you not big in the investment marketplace). There are other ways to raise capital, of course: "angel investors," like friends and family who happen to have a few million extra bucks lying around; grants; even traditional loans. But you don't see that much in the technology space, since a lot of these new ideas are risky and don't make a lot of sense to conservative banker types who need to make sure they get their money back (especially in this lending environment). 

So, I thought this article on Venture Beat was pretty interesting and possibly instructive for how people should be approaching growth in our market this year. Essentially, VB got a bunch of VCs to talk about what they see as "hot" for 2013, and some of it relates to our little corner of the business universe.

Take Mike Maples, founding partner at Floodgate Capital, which got in early on Twitter:

In 2013, people will expect to be hyperconnected on the hyperweb: They will want to manage content on any device. We will see user experiences that are no longer assumed to be windows on a computer screen or a smart phone. Some of my favorite examples are Google’s self-driving car or Nest‘s thermostat, which learns the temperature you like and turns it down when you’re away. 

Sure, the self-driving car has some lidar going on, but think about the big picture here. Fundamentally, what you're doing with laser scanning and photogrammetry is capturing the real world and making it digital; you're allowing for interaction with things and places that may otherwise be completely out of reach for most people - rare objects and artifacts, remote places (or at least places that are expensive to travel to), and, yes, offshore platforms that are a pain to get to. How can the user experience of 3D data, the way that people interact with a point cloud or 3D information, be improved so that people can't get enough of it? Or so that early adopters can quickly and easily convey the utility of the information to those who are a little more skeptical?

Think about what Ross Fubini, partner in Canaan Partners, is talking about when he's talking about the potential in "big data":

“Data has been a big deal and a big market for years — SQL, hello! But ‘big data’ is a big deal because of the sheer volume of trackable data and because it’s cheaper than ever before to build an application to make that data valuable. By the end of the year, we will see some big winners emerge leading up to some splashy 2014 and 2015 IPOs.”

No one can create more data faster than a guy with a laser scanner. So much data can be captured that the possibilities are really just starting to be explored (and what Bipul Sinha, investor at Lightspeed Venture Partners, says about storage getting sexier bodes well for managing and moving all that data), but it's those "application[s] to make that data valuable" that are the crux of the situation. People want to interrogate their world more than ever before. 3D data allows them to do that in very new ways. How does that become a business?

Maybe in a way that's new and different. Look at the way Leap Motion is entering the market: by giving away their device to 10,000 developers and letting them create applications on their own before the device is even available to the public, so that when it hits, there will already be a suite of apps people can use to take advantage of the new gesture-control system. Should Faro's app store have been launched two years BEFORE the Focus3D? Should Leica's P20 have hit the market alongside 10 new custom-oriented software packages that targeted 10 specific verticals so that the P20's advantages could immediately and obviously be capitalized on? Heck, should Leica just make Cyclone free and charge for the plug-ins and for access to the SDK? I'm making things up, but the distribution of developers, and the network of people looking to use their coding ability in new markets, is something 3D data acquisition needs to be taking advantage of more.

Heck, Google thinks indoor location is the next big thing, and what can contribute to that better than laser scanning and new products like the Zebedee/Viametris/TIMMS/Allpoint solutions? 

“Indoor location will be bigger than GPS, which only works outdoors. We spend 90 percent of our time indoors, whether it’s in shopping malls, offices, schools, restaurants, and so on, where GPS doesn’t work or is inaccurate. In 2013, you’ll use your smartphone to find the exact store aisle location for every item on your shopping list.”

So says Don Dodge, developer advocate at Google. (You might want to check out Venture Beat's round-up of indoor location companies, too.)

“With indoor location, you can find people, products, or services plotted exactly on a floor plan with walking directions to get there. You could receive coupons, advertisements, or free offers for products based on where you are in a store. Imagine playing indoor location games like capture the flag, tower defense, or other games based on real-life indoor locations. There are thousands of applications in many different market segments that will be built using accurate indoor positioning technology.”

Can you be a part of rapidly acquiring accurate indoor data and serving that back up to folks who'd like to navigate it? Seems like folks like Matterport, etc., could get in on that, no? What are the trade-offs between precision and speed, between density and usability?

Of course, Brian Singerman, partner at Founders Fund, has the most level-headed outlook (even if it's solidly cliche):

We are always highly suspicious of trends. The best investments are often in companies and industries that others do not consider hot or trendy. Therefore, a theme for 2013 will be to not invest in trends, but rather long-lasting value. Trends come and go, but the best companies will be the ones that buck the trends and don’t look like all the others, companies that don’t appear to have much competition. 

Talk about opportunity! I can't tell you how many times someone has asked me, "Who's their competition?" about a company in our space, and I've sat there and thought about it and then said, "Well, no one really does EXACTLY what they do." Laser scanning and high-end photogrammetry are right there on the cutting edge, creating new possibilities to deliver things that people will value once they find out about them.

How are you spreading the word and who might be able to help you along the way? 

Actually, though, I'm reminded about a question I get from bands all the time because of my music writing: "How do we get an agent?" My answer? "Just be awesome, and the agent will find you."

If you're looking for funding, the answer is probably pretty similar: "Just be awesome."

That shouldn't be so hard.


Permanent link

3D leads the top tech breakthroughs of 2012

Last year I thought it was a big deal when Faro's Focus3D landed on the list in PopSci's "100 Best Innovations of the Year" issue, but Popular Mechanics has an even more exclusive list than that, and 3D data capture takes up a good 30 percent of the "Top 10 Tech Breakthroughs of 2012" (with a little printing thrown in).

• First up is the Lytro camera, which is really a 3D data capture device, in that it captures the direction of incoming light as well as its intensity. I haven't yet seen someone harness this power for a major advancement in photogrammetry or some other jump forward, but it's only a matter of time, in my opinion. This ability to refocus after the fact has to have an application, and there are smarter guys than me working on it, surely.

• Second up is 123D Catch, and that's got to be a nice validation for Autodesk. I 100 percent agree that this is a major step forward for the field of 3D data capture. I mean, this is FREE. Anyone can start creating 3D models from the world around them. This is Star Trek stuff come to life, people. I know many of us take this kind of technology for granted, since we work with it and talk about it all day, but to the world at large this is mind-blowing. I've seen minds blown just from what I show people on my phone. Good on Popular Mechanics for noticing.

• Cubify also gets a nod, and for good reason. I'm not personally sure the price point is low enough to create the DIY impact that many think the device will cause, but it's a pretty good start. 3D printing in the home will absolutely drive the desire for 3D data. I'm sure of it. I'm just not sure a $1,300 printer gets into the home very often. But maybe I'm wrong.

• Finally, there is Leap Motion, and I think this is going to be the next big game changer in our little community. Just as the Kinect brought 3D data to a whole new world of developers who hacked the device and created all manner of solutions (including Matterport's handheld device and MIT's new indoor mapper), the new Leap stands to create even more amazing advances. 

Why? Well, it's 200 times more accurate than the Kinect and it costs even less - about $70. Essentially it creates a working area of about eight cubic feet where you can have a live point cloud accurate to about 0.01 mm (well, that's what they say, anyway).

Here's a video that might get your mouth watering:

 

Pretty amazing, right? Sure, as a company they're primarily interested in gesture control as a way to operate your computer, but you don't garner $12.75 million in Series A funding from Highland Capital Partners because you might make the mouse obsolete (well, ACTUALLY, that's probably a pretty good reason to get $12.75 million in funding all by itself, now that I think about it). You get that money because people see a brand-new platform that developers can crack open and take a whack at. Already, even though they've only shipped 30 units to NDA-protected developers, they've had 26,000 applications from developers looking for the SDK and a free developer unit.

That's amazing. Check out what potential developers have already pitched for ideas:

Leap applications are full of potential, and software developers are eager to push Leap’s technology towards new and exciting directions. Here is a list of the popular application categories Leap software developers would like to create for:

Games – 14%

Music and video – 12%

Art and design – 11%

Science and medicine – 8%

Robotics – 6%

Web and social media – 6%

Education – 4%

Other popular ideas for the Leap include computer-aided software design, translating sign language, using the Leap to drive a car or plane, supporting physical rehabilitation and physical disabilities and special needs, manipulating photos and videos, and creating new types of art.

And I know what you're thinking: "Great, eight cubic feet..." But you can daisy chain these things! They're $70 and they're smaller than my iPod Classic. Someone is going to come up with something that even 3D data capture pros are clamoring for.

 

 

 


Permanent link

Giving 123D Catch a test spin

Well, 123D Catch is finally available on a platform that I can test. The release late last week of the free app for the iPhone took the product from news sensation to actual toy for me, and I can now give you some preliminary results.

Previously, it was available for my iPad as well, but for some reason I could never successfully download and test it (it may have had something to do with me not upgrading to iOS 5 on the iPad before I tried it - it's unclear). And there was a desktop version for PC, but that really wasn't happening. I don't even know how to work a PC anymore.

The app for the iPhone, though? Yeah, that's pretty easy to work.

First, you simply search the App Store, which delivers exactly one result. You click on that and get the free download installed to the iPhone within about 30 seconds if you're in a wifi environment (and, really, I'd recommend being in a wifi environment whenever you're doing anything with 123D Catch, though it does work okay on 3G if you've got a strong signal).

Then, when you fire it up, there's an immediate tutorial you have to watch before you can even play with it. And, each step of the way, you're forced to either watch a short video or scroll through a slide show explaining how to accomplish the next step.

Here's the basic gist of it, though:

1. Click the "New Capture" button.

2. Take up to 40 pictures of any object from a variety of different angles, making sure to have images that overlap each other at least a little (this is not hard).

3. Review the pictures you've taken and eliminate any that are especially bad (this can happen because the shutter button isn't always entirely responsive).

4. Upload the photos to the Autodesk 123D cloud. This takes about five minutes, depending on whether you used up all 40 photos or not.

5. Check out the model you've created.

At this point you can check out your model on your phone, playing around with it, taking screen grabs and emailing the model to your friends (who will need to also have the app on their phones to actually look at it).

But, and this is more fun and interesting, you can also click "share to community" and then go check out your model at your free account (mine is linked to my SPAR_Editor Twitter account) at 123dapp.com in "My Corner." There, you can make your model public and let other people play around with it, and they can even download an .stl file for "fabrication" (none of my models is even close to watertight), a "mesh package file" (with an .obj, an .mtl, a .jpg, and a .png), or a zipped folder of all the photos you used to create the model in the first place.
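
If you're curious just how far from watertight one of those downloads is, the .obj in the mesh package makes it easy to check. Here's a minimal sketch that counts edges not shared by exactly two triangles (a watertight mesh has zero of them); the file name is hypothetical:

```python
from collections import Counter

def open_edge_count(obj_path):
    """Count mesh edges not shared by exactly two faces in a Wavefront .obj file."""
    edges = Counter()
    with open(obj_path) as f:
        for line in f:
            if line.startswith("f "):
                # Face lines look like: f v1/vt1/vn1 v2/vt2/vn2 v3/vt3/vn3
                verts = [int(tok.split("/")[0]) for tok in line.split()[1:]]
                for a, b in zip(verts, verts[1:] + verts[:1]):
                    edges[tuple(sorted((a, b)))] += 1
    return sum(1 for n in edges.values() if n != 2)

print(open_edge_count("lady_victory.obj"), "problem edges")
```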

This is my 123dapp profile if you want to check out my sample models. I'll talk about how I got my results down below.

Playing with the models on the phone is a blast. As you tip the phone, there's a "gyroscope" mode so that the model tips with your phone. If you find that annoying, you can turn it off and just manipulate the model with your fingers, making it bigger and smaller, rotating it, etc. Great way to impress friends, really, and the kids love it.

Further, you can do all of that online as well, on that 123dapp site, and you can play with other people's models in the same way (as long as you've got a modern browser - Firefox, Safari, and Chrome all worked for me). And having that export option actually makes them useful - you can bring them into something like Blender or what have you and have a starting point, whether you're trying to make a 3D animation or a printable object. The workflow is very simple.

The only disappointment with the 123dapp site is that you're not given an embed code to place your models on other sites.

However, you can download that .stl file and then upload it to Sketchfab and, voila, you've got something you can embed. Unfortunately, you don't get the imagery layered on, so you just have a surface mesh, no texture. Still, it's pretty cool, and you can do a pretty good job of evaluating how good the app is by looking at them.

So, let's look at my initial testing and the results I got.

As soon as I downloaded the app, I headed to downtown Portland and Monument Square (mostly because I was hungry, but that's another matter). In the middle of the square is the Lady Victory statue, one of my favorites in the city. Here's how the capture came out:
 


I was pretty pleased. But that's the good side I'm showing you. Here's the mesh for you to play around with, and you'll see the app's (and photogrammetry's) limitations:

 

Notice how one side looks great, but the other is all lumpy and formless? That's because on one side I was shooting photos with the sun at my back, and the images came out crystal clear, while on the other side the sun was in my face and everything got washed out. That's the peril of shooting outside. (It should also be noted that I'm using an iPhone 4. I believe the 4S would get better results, as its camera is far superior - my wife has one, and her photos destroy mine side by side.)
 

This app is not going to replace archaeological documentation workflows in any significant way, that's for sure, unless it's always cloudy and they get much better results from their iPhone cameras than I do.
 

Next up, I thought I'd try something I could get the top of, to see if a more contained object would show up better. So, I tried one of the local free newspaper boxes. Unfortunately, for a reason I can't quite figure out, it came out upside down. Like so:
 


Looks pretty sharp, actually, though the mesh file was a little disappointing, with a huge hole in one side. Still, the bricks look great!

 

Finally, I thought I'd try a discrete object with lots of variation on a surface with an identifiable grid - namely, the '67 Mustang toy I keep in my office, placed on a yoga mat. It worked out pretty cool. Here's the export from my phone (it's lower res than the other screen captures above, but I wanted to show you what it looked like):
 


It definitely came out pretty nicely, but we lost a lot of the windshield, and I was disappointed with the way the back driver's side corner got all caught up with the yoga mat. Probably the most fun of the models to play with, though:

 

I'm not sure why all of those mountains in the mat popped up. It was lying flat. Maybe something about the grid confused the algorithm?

Regardless, I'm sure with some practice I can get some better results, but the initial feedback is probably pretty accurate to the app's limitations. This is not in any way a commercial tool. Nor, of course, was it intended to be. But, as a way to get the imagination racing with 3D modeling, it's pretty great. Most people I've shown this to, even when I've shown them what's on Sketchfab, have been pretty blown away that it's even possible.

That's worth something. To know that the limits of possibility have been pushed is an important thing. If that's possible, what else might we be able to do? The answer to that question is the seed of valuable innovation.


Permanent link

McKayla is not impressed with your 3D data

It's Friday, and sometimes when your job is to work on the Internet, you can get, well, a little distracted by the silliness of the day. Today, it's a little trend going around the Internet whereby people superimpose U.S. Olympic champion gymnast McKayla Maroney (looking slightly put out by her silver in the vault during the medal ceremony) onto images to show that "McKayla is not impressed." Armed with a bunch of 3D data images and pixlr.com, I couldn't resist.

Really, McKayla is not impressed with this point cloud Stantec put together for an arts center in Saskatoon:


Nor is McKayla impressed with Historic Scotland's point cloud that includes thermal imagery, for that matter:


Heck, she's not even impressed with DARPA's initiative to push the envelope in automatic feature recognition:


Oh, you've got a portable laser scanning system, Sam Billingsley? Well, McKayla is not impressed:


 Did SPAR conference programmer Linda McLaughlin attend the Esri User Conference? McKayla says, "whoopedy-doo!":


And, finally, McKayla is certainly not impressed with SPARVlog:


Like I said, I couldn't help myself. Should you have a few minutes to kill, I recommend the McKayla Is Not Impressed tumblr site highly. If you scroll down far enough, you'll see where to get your own McKayla cut out. Put her in a good 3D data image and I'll post yours, too.


Permanent link

Why the Kinect matters

Yesterday, I met my new neighbor for the first time. His name is Kevin. He's maybe 22 years old. Just got married and moved in next door in our little portion of the great middle of nowhere that is rural Maine with his new wife. He wants to be a preacher. (Or maybe a pastor. I can't remember the difference.) He goes to Bible school. He obviously knows nothing about 3D data capture. 

When I start to explain to him what I do - talking about lasers that bounce off of stuff and create something called a point cloud, blah, blah - he stops me and says, "Oh, you mean like what the Kinect does?"

EXACTLY.

For those of you who slander the Kinect as a "toy" and wonder why anyone would care about the data it collects, since the accuracy ain't exactly going to get that bridge built, this is why anyone would care about it. The Kinect is an ambassador of 3D data capture. It provides an incredibly low-priced entry point to the science of gathering information about the world around us, digitizing it, and then making some use of it. 

And everyone knows what it does. It introduces into the minds of kids and young adults all over the world the possibility that 3D data can be cheaply captured and made to power systems. I move my arm, stuff happens on my television screen, thanks to the Kinect. What else can moving my arm make happen? 

For just one incredibly timely example of how students are using Kinect technology (even though it was posted in 2011), check out this amazing video made by the folks at UC Davis (if you've got 3D glasses, put them on):

 

You know that the Curiosity rover just landed in Gale Crater, right? Well, UC Davis' Dawn Summer (what a great name) was co-chair of the Landing Site Working Group, and you can see in this video why Gale was chosen. But just look at the video! To quote from the information provided: "This video was filmed using a Virtual Globe program called Crusta written by Tony Bernardin at UCDavis, which runs on the VR library Vrui written by Oliver Kreylos, both of UCDavis' KeckCAVES (http://keckcaves.org). Oliver used two Kinects to capture Dawn as she described the Gale site in front of a 3D TV system with head and wiimote tracking with an optical tracking system. Oliver then re-rendered Dawn's interaction with Crusta and the Kinect reconstruction of Dawn together into one movie, including the sound track as well. The result is the merging of Dawn and Mars into a virtual world. (See http://youtube/okreylos for more on Kinect wiimote hacking.)"

The Kinect allows for inspired creativity using 3D data. It's far less likely people are just going to play around with a $100k laser scanner. Creativity leads to technological advances, even if those advances eventually lead to $100k devices. That's why the Kinect matters.


Permanent link