July 24, 2014

Head in the Point Clouds

Sam Pfeifle

Sam Pfeifle was Editor of SPAR Point Group from September 2010 - February 2013.

Even board games are using 3D data capture now?

Anybody used to play Stratego? I loved that game. Basically, I'd always just put the flag in the corner surrounded by bombs and the lowest-numbered guys I could spare and play a totally defensive game, hoping the other player would get impatient (but, wait, why am I telling you that? So, dumb...). Anyway, you may be interested to know that, nowadays, even Stratego is using 3D data capture.

No, not for hyper-realistic board pieces (though that would be cool), but rather as part of a marketing campaign that toymaker Spin Master commissioned this year. As part of the 140-second animation that Brain Zoo Studios created to hype the game, depicting a futuristic battle between the red and blue armies, the studio's own artists actually scanned their faces with a NextEngine scanner to create hyper-realistic facial textures and expressions:

The trailer features a new addition to our repertoire – Next Engine's 3D scanning technology – which captures a wide range of facial expressions to enable the on-screen characters to speak and interact with each other in a realistic and emotional way. We held internal auditions among our own artists and TDs to decide who should ‘appear’ in the trailer (with a special guest appearance from one of Spin Master’s own producers as the scout).

I assume at this point I'm not the only one thinking about how cool it would be to have my face on the scout... Brain Zoo provides some pretty interesting detail about their workflow, actually:

In addition to compiling multiple scans into a series of blendshapes, the process involved capturing multiple HDR images to record skin textures. Our textures were captured at 2,048 x 2,048 resolution, with 10 HDRI references created per model to ensure our modelers and riggers had much more accurate reference material than normal to work with. Our ultimate goal was to give our characters a more human quality, with imperfections in their skin textures and facial structure.

After capturing the raw data, the software bundled with the Next Engine scanner easily imported the data into Maya for clean up. One of the biggest problems we faced was the sensitivity of the camera, since even breathing or small vibrations altered the resulting data when brought into Maya, and needed to be fixed on the spot.

The characters have bone-based facial rigs, supplemented with corrective blendshapes. Variants were created of key models, further enabling us to refine the data: for example, the Marshal was retextured to increase and decrease the apparent age of the original facial scan, enabling the team to create a keyframable ‘age’ blendshape slider.
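For anyone curious what that scan-to-blendshape hookup looks like on the scripting side, here's a rough sketch in Maya's Python interface. To be clear, the mesh and node names here are hypothetical and this is just a generic illustration of the idea, not Brain Zoo's actual pipeline:

```python
# Minimal, hypothetical sketch of wiring cleaned-up face scans into a
# blendshape setup in Maya. Assumes the scans were already imported and
# cleaned so their topology matches the neutral base head; names are made up.
import maya.cmds as cmds

base = 'marshal_neutral'                      # cleaned neutral scan
targets = ['marshal_smile', 'marshal_frown',  # expression scans
           'marshal_aged']                    # retextured "older" variant

# Create a blendShape node with one weight per expression target.
blend_node = cmds.blendShape(*(targets + [base]), name='marshal_faceShapes')[0]

# Expose the aged variant as a keyframable 'age' slider, animated here
# from young (0.0) to old (1.0) over about two seconds at 24 fps.
cmds.setKeyframe(blend_node, attribute='marshal_aged', time=1, value=0.0)
cmds.setKeyframe(blend_node, attribute='marshal_aged', time=48, value=1.0)
```

The bone-based rig and corrective shapes the studio describes would sit on top of something like that, but that's the gist of the 'age' slider trick.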

Here's the resulting piece of animation. Pretty impressive, if you ask me. 

[Embedded video: the Stratego trailer]

Okay, so the voice-over is a little corny. But how about those faces, right? 

In the same vein, that John Carter movie might have totally bombed at the box office (I haven't seen it - but I really wanted it to be good, so I'm holding out hope that it's just over the heads of too many moviegoers or something), but it gets points here at SPAR for its use of Spheron's HDR capture device. The Disney production employed visual effects house Double Negative, which gave a pretty glowing endorsement of what Spheron can do:

"It addresses many onsite FX data gathering issues that had previously slowed down productions. The time it takes to survey sets and locations as well as shoot reference lighting data is always an imposition on the live-action crew, as well as a less precise practice. By using our Spheron Cameras, this process became a much more efficient and a more accurate all-round solution," explains Ryan Cook ... "The camera is also excellent at recording spatial data, using its 3D immersive measurement technology; it enables CG objects to be placed into any location - bridging the gap between the real world and computer graphics environments."

Ken McGaugh said, "On set we captured both HDR light fields as well as making use of the camera's photogrammetric technology. This photogrammetry element, which also records 3D points, has now been adapted into Double Negative's workflow, by the in-house development of a unique 3D photogrammetric pipeline, using both the 32-bit image data and measurement features, so assisting further in the virtual set reconstruction process."
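The HDR piece of that is conceptually simple, even if a dedicated rig like Spheron's handles the capture itself: shoot the same view at several bracketed exposures and merge them into a single 32-bit image that preserves the full range of light on set. Here's a toy sketch using OpenCV's Python bindings - the file names and shutter times are invented, and this is not Double Negative's or Spheron's actual workflow:

```python
# Toy sketch: merge bracketed exposures into a 32-bit HDR lighting reference.
# File names and shutter times are hypothetical; a Spheron-style rig captures
# this sort of data far more automatically than hand-merging stills.
import cv2
import numpy as np

files = ['set_under.jpg', 'set_normal.jpg', 'set_over.jpg']
times = np.array([1/250.0, 1/60.0, 1/15.0], dtype=np.float32)  # seconds

images = [cv2.imread(f) for f in files]

# Estimate the camera response curve, then merge to a floating-point image.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Radiance .hdr keeps the full dynamic range for lighting CG elements later.
cv2.imwrite('set_lighting_reference.hdr', hdr)
```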

See if you can catch the Spheron in action in this behind-the-scenes video:

[Embedded video: John Carter behind-the-scenes featurette]

Did ya see it? There's a glimpse of it at about 3:20...

Regardless, it's further proof that 3D data capture is infiltrating the world of Hollywood productions. Even if the movie is (uh, how did that guy at the BBC put it? Like baking your brain into a loaf of bread?) not the best, at least we know they were employing the latest technology in giving it a go.



So that IS a Velodyne lidar unit on that Google car

We here at SPAR have obviously been covering driverless cars (or autonomous vehicles) for some time now. We had a presentation titled "Autonomous Vehicles from NIST's Intelligent Systems Division" at SPAR 2005. It was pretty cool for the time, using SICK lasers, flash lidar, and multi-camera systems to create some working prototypes and at least figure out what the issues were.

By 2007, though, things had moved along quite a bit. SPAR was on the scene for the final round of the DARPA Urban Challenge, which pitted 11 finalist teams against one another to figure out who had the best-performing autonomous vehicle navigating city streets with intersections and human drivers in the mix. Carnegie Mellon scored the top $2 million prize, followed by the teams from Stanford and Virginia Tech. Just three years prior to that, in the first Challenge event, 15 vehicles started the race and not a single one finished. And that was a desert course without any traffic.

It's likely some of the ideas developed in those challenges wound up in Lockheed Martin's Squad Mission Support System, which we profiled last year and which is out in the field right now, following soldiers around and acting as a mule.

Pretty clearly, the technology is moving forward at a rapid pace. Still, I was a little surprised last fall when Sergey Brin went public in a major way about Google's driverless car initiative, and I was similarly surprised this week to read the MAMMOTH article in Wired about driverless car technology and its looming commercial availability.

Or should I say current commercial availability. Even being in the business, I guess I wasn't aware of just how far the use of lidar and 3D data acquisition has come in the motor vehicle industry. 

Maybe it's because I'm not often in the market for Mercedes and BMWs, but their high-end cars are already making it so you can't veer out of your lane or hit a pedestrian or have to cut off your cruise control when you roll up on some grandma going 50 on the highway. And those of you living in the San Francisco area have apparently been dealing with driverless cars pretty frequently lately - Google cars had logged some 140,000 miles already by 2010. It's certainly well into the millions now, and Tom Vanderbilt, author of the Wired article, does a fantastic job of projecting the wonder and desirability of these lidar-shooting driverless vehicles.

Vanderbilt had been in Stanford's car in 2008, doing 25 mph on closed-off roads. Now:

“This car can do 75 mph,” Urmson says. “It can track pedestrians and cyclists. It understands traffic lights. It can merge at highway speeds.” In short, after almost a hundred years in which driving has remained essentially unchanged, it has been completely transformed in just the past half decade.

It really is amazing. Just this video alone kind of blows my mind:

[Embedded video]

Pretty awesome for a 15-second video, right?

But perhaps the clearest indication to me, as a technology writer for the past seven years, is that this stuff is starting to become less and less secretive and more and more matter-of-fact. The first time I interviewed Velodyne president Bruce Hall, in January of 2011, he would only hint that the giant spinning soda cans on top of the Google cars were his. Now, we get this matter-of-fact statement from the Wired article:

Google employs Velodyne’s rooftop Light Detection and Ranging system, which uses 64 lasers, spinning at upwards of 900 rpm, to generate a point cloud that gives the car a 360-degree view.
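If you're wondering what that translates to, each laser return is essentially an angle pair plus a range, and turning those into a 360-degree point cloud is straightforward trigonometry. Here's a back-of-the-envelope Python sketch - the beam layout, sampling rate, and ranges are invented for illustration and have nothing to do with Velodyne's actual packet format or calibration:

```python
# Illustrative sketch: convert spinning-lidar returns (azimuth, elevation,
# range) into Cartesian points. The beam layout and sample data are invented
# and do not reflect Velodyne's real packet format or calibration.
import math

def return_to_xyz(azimuth_deg, elevation_deg, range_m):
    """One laser return -> (x, y, z) in the sensor frame, in meters."""
    az = math.radians(azimuth_deg)    # rotation of the spinning head
    el = math.radians(elevation_deg)  # fixed vertical angle of this laser
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# Pretend sweep: 64 beams spread across a ~27-degree vertical field of view,
# sampled every 0.2 degrees of rotation, with every return at 30 m.
beams = [-25.0 + i * (27.0 / 63.0) for i in range(64)]
cloud = [return_to_xyz(step * 0.2, el, 30.0)
         for step in range(1800) for el in beams]
print(len(cloud), "points in a single revolution")  # 115200
```

Do that many times a second while the car moves and you start to see why the thing can track pedestrians and cyclists in every direction at once.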

I certainly couldn't put that kind of detail in my article at the time. But then maybe if I'd been writing for Wired... Just kidding. This happens all the time. Usually, the big company doesn't want the little company bragging that the big company just bought a bunch of its stuff. And that's usually because the big company isn't sure the little company's stuff is going to work exactly right and might want to scrap the whole project. At this point, with Brin doing his crowing, this project isn't going anywhere.

Nor is it clear, though, that lidar is going to be the data capture method of choice for autonomous cars. Check out what Mercedes is working on:

That’s why Mercedes has been working on a system beyond radar: a “6-D” stereo-vision system, soon to be standard in the company’s top models ... As we start to drive, a screen mounted in the center console depicts a heat map of the street in front of us, as if the Predator were striding through Silicon Valley sprawl. The colors, largely red and green, depict distance, calculations made not via radar or laser but by an intricate stereo camera system that mimics human depth perception. “It’s based on the displacement of certain points between the left and the right image, and we know the geometry of the relative position of the camera,” Barth says. “So based on these images, we can triangulate a 3-D point and estimate the scene depth.” As we drive, the processing software is extracting “feature points”—a constellation of dots that outline each object—then tracking them in real time. This helps the car identify something that’s moving at the moment and also helps predict, as Krehl notes, “where that object should be in the next second.”

That's videogrammetry at work, folks, and it may very well be that the cameras and software are cheaper to use, or more effective, than the lidar. Maybe the lidar is creating too much information. Only the real guys in the trenches know the answer to that. 
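For what it's worth, the triangulation Barth describes boils down to a very old surveying idea: with two cameras a known distance apart, the horizontal shift (the disparity) of a matched feature between the left and right images tells you its depth. A toy sketch, with camera numbers I made up that have nothing to do with Mercedes' actual rig:

```python
# Toy stereo-depth sketch: distance from the disparity of a feature matched
# between the left and right images. Focal length and baseline are invented
# for illustration; this is not Mercedes' (or anyone's) production system.

FOCAL_LENGTH_PX = 800.0   # focal length expressed in pixels
BASELINE_M = 0.30         # distance between the two cameras, in meters

def depth_from_disparity(x_left_px, x_right_px):
    """Estimate distance to a feature seen at these columns (same image row)."""
    disparity = x_left_px - x_right_px   # bigger shift = closer object
    if disparity <= 0:
        return float('inf')              # too distant (or a bad match) to resolve
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

# A feature point shifted 20 pixels between the two views works out to:
print(depth_from_disparity(640.0, 620.0), "meters")  # -> 12.0 meters
```

Track a few thousand of those feature points per frame and you've got the constellation of dots Krehl and Barth are talking about.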

Things will get really cool when all the cars are also rocking wifi and can communicate with each other in real time, so they can give each other virtual eye contact and virtual waves to let their autonomous counterparts go ahead. It's really ancillary to the 3D question, but I had to pass along this vision of the intersection of the future that was brought to us by Atlantic Cities, riffing on Wired's article:

[Embedded animation: the intersection of the future]

Pretty hard not to just stare at that forever, right?

As for whether people actually WANT autonomous cars, I'll leave that to the writers of the articles linked above. I have my own ideas, but they're probably not worth a whole lot. I will say, however, that Vanderbilt did get himself a great quote from Anthony Levandowski, business head of Google's self-driving car initiative:

“The fact that you’re still driving is a bug,” Levandowski says, “not a feature.”

