July 22, 2014

Head in the Point Clouds

Sam Pfeifle

Sam Pfeifle was Editor of SPAR Point Group from September 2010 - February 2013.

Odds and ends from the first Riegl show

As both Faro (I'm going to go with lowercase Faro from now on - that's AP style and I'm sticking to it) and Riegl launched user events a week apart from one another in the same city, it's hard not to do a little comparing and contrasting, but I'll keep that part of this short. I tweeted yesterday that Faro was like attending a rock show, while Riegl was like attending a string quartet, and I think that metaphor basically holds.

First and foremost, Faro's was simply a larger event, with a bigger and more elaborate stage for presentations, more intricate A/V, and a go at an exhibit hall. With the different companies following one another up on stage, and the exhibit hall in between, it was a little like going to Coachella, actually, where band after band comes up to rock the stage and in between you're meant to go buy merchandise and trinkets. Some of the bands are more impressive than others, but generally the curation is pretty good. 

Riegl's event was a little more than a third the size of Faro's, attendance-wise, and it focused only on the company's airborne and mobile technology. Terrestrial scanning was for another day (even if more than one attendee was using terrestrial scanning technology to jerry-rig mobile scanning on the backs of things like four-wheelers and boats). The presenters were considerably more subdued, but also much more intricate. To listen to Riegl CTO Andreas Ullrich outline the mathematics behind Riegl's echo-digitizing lidar technology, and the inner workings of waveform processing and multi-target returns, was to listen to an intricate piece by Mozart (he's Austrian, after all), where at least a good portion of it was sailing right over my head. 
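For those who, like me, only caught the broad strokes: full-waveform systems digitize the entire returned echo and then pick apart its individual peaks, which is how a single outgoing pulse can register several targets (a wire, a branch, the ground). Here's a toy sketch of that peak-picking idea on a synthetic waveform using SciPy - purely my own illustration, not Riegl's actual processing, and the thresholds are made up.

```python
import numpy as np
from scipy.signal import find_peaks

# Toy "digitized echo": three Gaussian returns (say, canopy, understory,
# ground) plus noise. One sample = 1 ns of two-way travel time.
t = np.arange(200)

def pulse(center, amp, width=4.0):
    return amp * np.exp(-((t - center) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
waveform = pulse(60, 1.0) + pulse(95, 0.4) + pulse(140, 0.8)
waveform += rng.normal(0.0, 0.02, t.size)

# Multi-target extraction: keep peaks that clear a noise threshold and are
# separated by a minimum spacing, then convert sample index to range.
peaks, props = find_peaks(waveform, height=0.1, distance=10)
ranges_m = peaks * 1e-9 * 3e8 / 2.0  # time-of-flight * c / 2

for p, r, a in zip(peaks, ranges_m, props["peak_heights"]):
    print(f"return at sample {p:3d} -> range {r:5.1f} m, amplitude {a:.2f}")
```

The real thing involves calibrated pulse shapes, deconvolution, and Gaussian decomposition, but the core trick - turning one echo into several discrete, ranged returns - is the same.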

Riegl made the most effort I've seen at an industry event to assure its users of the quality of its data and its attention to accuracy (and I saw Leica's presentation last year at Hexagon on their calibration efforts). I can't count how many times I heard the word "ruggedness." Bobby Tuck, of Tuck Mapping, made a special point during his presentation to say that he's been flying Riegl scanners on helicopters for more than 10 years and has never had an equipment failure. 

Clearly, they were making their play to those who care most about getting within hundredths of an inch of accuracy. This was very much pitched to a surveying crowd. There were no looky-loos or folks just entering the scanning business from an ancillary field. 

However, there were airborne people with a background in orthophotos curious about diving into lidar. I can't imagine they were anything but assured and excited by the technology (if not the investment necessary to get in the door - these systems ain't cheap). 

There was a modest display area where people could check out the equipment, as seen here:

  

There were also a couple of companies with table tops in the entrance way to the event, as you can see here:

 

Like I said, modest, and I took the picture after most people had headed off to lunch, so don't take this as being indicative of attendance. If anything, the 120 people at the event were incredibly attentive. The crowd stayed for the entirety of the long day of presentations and remained remarkably engaged. While at the Faro event there was literally zero time for questions and answers after each presenter, at Riegl there was likely to be a highly technical question after just about every presenter. Some were definitely of the "gotcha" variety, but most were honestly looking to move the conversation forward. 

The user presentations, too, were highly technical, always concerned with very precise display of accuracy achieved and methods undertaken. 

In the Faro recap, I said that I didn't think the majority of attendees were Faro loyalists, and that most were either coming new to scanning or had other scanners in their inventory. Here at Riegl, these were definitely Riegl devotees. They LOVED Riegl, almost to a person. Maybe there was a grumble here or there about price, but in general they really liked the product and wanted to spread the gospel. 

I'm not surprised by that, considering the new people Faro is bringing to the market with its marketing and unit price, but it definitely made for a different conference experience and set of conversations. 



Are your scans worth money?

Well, of course your scans are worth money. They help you create your deliverables for your clients, they create efficiencies, they solve problems. Or you use them to better understand your facility/organization/what have you. But, I mean, could you sell them to other people?

That's the premise behind the new 3DScanHub.com, which is an online store based in Oakland where you can actually download, for a price, scans of all sorts of things: sculptures, objects, body parts, fossils, etc. It was created by artist Andrew Werby, who was an early 3D and scanning enthusiast. 

From the About Us page: 

As enthusiasts and early adopters of 3D scanning technology ourselves, we noticed that while there were many sites selling 3D scanners and offering to make custom scans for clients, there wasn't anyplace for someone to purchase them ready-made, like stock photographs, or to sell scans they may have produced. We hope these sites will go some way toward remedying this deficiency. 

Well, Arius3D is sort of doing that already, aren't they? I guess the difference might be that Arius3D seems to just be creating a way for you to display your own objects, or someone else's, for a fee. 3DScanHub is actually allowing you to download an STL or OBJ file that would be suitable for 3D printing, in addition to display of whatever kind you can figure out. (Plus, 3DScanHub has the ability to create a community of users uploading scans for sale and for those people to, in turn, get paid.)

Except this is where I get a little confused. No simple scan is going to be print-quality. No one's printing out a point cloud in 3D. You're always going to have to do some kind of processing to get from scan to printable model. So why isn't this 3DModelHub.com? I guess because everything on here started out as a real thing, and there are already plenty of places like 3D ContentCentral and SolidComponents where you can get 2D and 3D model files for all of your CAD needs. This is pretty clearly different from that. 
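To put a finer point on that scan-to-printable gap: getting from a raw point cloud to something a printer will accept generally means estimating normals, reconstructing a watertight surface, and exporting to STL or OBJ. Here's a minimal sketch of that pipeline using the open-source Open3D library - the file names are made up, and this is just one common route, not whatever workflow 3DScanHub's contributors actually use.

```python
import open3d as o3d

# Load a raw scan (hypothetical file name) as a point cloud.
pcd = o3d.io.read_point_cloud("fossil_scan.ply")
pcd = pcd.voxel_down_sample(voxel_size=0.002)  # thin out redundant points

# Surface reconstruction needs consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=30)

# Poisson reconstruction tends to produce a closed surface, which is
# what slicers for 3D printing want.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
mesh.compute_vertex_normals()

print("watertight:", mesh.is_watertight())
o3d.io.write_triangle_mesh("fossil_print.stl", mesh)  # ready for a slicer, in theory
```

Even then there's usually manual cleanup (holes, non-manifold edges, scaling) before a $50 download is truly print-ready - which is exactly the processing gap I'm talking about.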

Nor can you just take these "scans" and use them however you'd like. There's a license agreement that lets you print out for personal use, blah, blah. I'm sure the EULA is solid. 

And here's the basic description of how to do business with these folks if you're interested:

We'll take care of the business side: running the site, publicizing it, and collecting and distributing the proceeds from selling the scans you provide. You allow us to sell licenses to your scans to people anywhere in the world and agree that they can do pretty much anything legal they want with them, including making and selling derivative works with them or altering or distorting them. You will also let us use your content for marketing purposes. But don’t worry, you'll still own the copyright to your scans. All you're selling is a limited, non-exclusive license to use them for certain purposes. 

Right now there are only a few scans up for sale, five in each category. It's more of a concept than an operating store. Does it have potential? Well, places like Cubify and Shapeways certainly think there's a market for people to download/upload designs for 3D printing and sale, but almost all of those things are either jewelry or utilitarian, and designed from scratch. 

It seems somewhat impractical for people to be laser scanning objects like fossils, or statues they've made or own the copyrights to, then processing them into watertight, print-quality models, then uploading them for sale at a price like $50. Nor does it seem like there are yet many people capable of, and/or interested in, downloading these objects of various sizes and displaying them in any way or printing them.

There's clearly a market, but the size of that market isn't apparent to me. A couple thousand people in the U.S. who might be interested? It's hard to say. 

Regardless, it's something to think about. Could your scanner be put to a different use? Does it have revenue potential you didn't consider? 3DScanHub thinks so. 



Odds and ends from the first FARO show

Well, that's not entirely accurate, actually. FARO did put on a 3D Documentation Conference last year, in Europe, just like the one held here in Orlando this week, but this is their first large-scale event here in North America and the first one I was able to attend (for a wrap-up of last year's Euro-event, see Miguel's write-up here).

Frankly, I was pretty impressed. FARO is reporting roughly 275 attendees and it seems to have caught them a little by surprise, judging by the lack of sufficient chairs for the keynote speeches in the morning and the very full lunch and dinner tables yesterday. The rooms for the break-out hands-on workshops (which weren't actually overly hands-on) were often standing room only, the networking events were pretty lively, and the exhibit hall wasn't totally barren (yes, that's a back-handed compliment, but this was pretty obviously a FARO user-group meeting - and people were still interested in speaking with a few of the other vendors). 

Here are a few shots of the exhibits taken during the first exhibit break-out opportunity:


Really, not bad at all. And the content was quite good. Sure, it was vendor-heavy on the first day, with presentations by representatives from Autodesk, 3D Systems, Innovmetric, Rapidform, and Geomagic, but it wasn't too sales-pitchy, and the talks by Geomagic and by Innovmetric head Marc Soucy in particular were substantive, with take-home points that weren't overly related to their own products. 

And the workshops definitely delivered the training that these FARO users were likely looking for most. The how-to-scan workshop that I attended definitely left me feeling like the scan process had a pretty simple workflow and I was ready to dive in and start playing with the new FARO Scene 5.0 software that made its debut at the show.

Here's a shot of me looking serious as I learn how to scan (note the SRO crowd behind me):


Not bad for an awkwardly held iPhone, right?

Anyway, here are some of the major discussion points and takeaways that dominated discussion in the halls and over meals:

1. Democratization of 3D data is real, and it's coming. The reduction in price of the FARO Focus3D is just the beginning. 123D Catch is just the beginning. There is going to be a significant push by FARO, the likes of printer maker 3D Systems, Autodesk, and many others to really get the masses acquiring and using 3D data. FARO CEO Jay Freeland openly wondered why the entirety of the world hasn't been captured in 3D, and he said it was essentially his mission to see that it is. Will all of it be survey-grade data? Unlikely. But it will be more and more common for people to capture reality and use it to make decisions or create new products and services.

2. The Focus3D is STILL really new. I spoke with more than one surveyor who owned a Focus but hadn't yet really put it through its paces. What with the six-month delay in delivery for many people (that's not a problem anymore, more than one FARO rep took the time to let me know), and the fact that business is picking up, it's been difficult for many Focus owners to really explore the unit's capabilities. 

3. Small-stuff scanning and big-stuff scanning are clearly coming together. What were Rapidform and Geomagic and Polyworks doing at a show that's basically for people who use laser scanners with 120-meter range? They're not just in the reverse-engineering and metrology space anymore, and much of that is thanks to the fact that long-range scanners are being used more and more often to scan discrete objects and do resultant modeling. The scan-to-surfaces workflow for small items that has been used in manufacturing and other spaces for years is coming quickly to the AEC field, forensics, and other verticals that are using long-range scanners.

4. People like that FARO is coming out guns blazing. The attendees here weren't necessarily FARO sycophants by any stretch. Most attendees had scanners made by other manufacturers as well and were the well-known industry-types whose faces I often see at SPAR International. They liked that FARO was being aggressive, though, and was forcing more competition into the marketplace and working to get the market moving forward. I didn't get the impression that these folks were FARO do-or-die types, but rather were interested in learning more about a company that's still somewhat new to many who've been in the space a long time. Are they serious? Do they have substance? Is this a company that's in it for the long haul or just looking to be acquired? I think this event likely assuaged any doubts people might have had about a company that had originally been focused on the fine metrology marketplace. They're certainly serious and making a commitment to the marketplace as a whole. 



3D scanning and printing hits the world of comedy

Before today, I had never heard of Jeff Dunham. You might think this strange, considering that by many accounts he's the most popular comedian alive, but I'm also not much of a fan of comedy in general, so that whole industry is decidedly off my radar (no - I don't like comedies. I find them off-putting, in large part - it's strange, get over it). But when a comedian starts to dabble in laser scanning and 3D printing, my ears perk up. 

And that's just what Jeff Dunham is doing. Today I was alerted that during Jeff's two-hour Biography Channel special, which is apparently the highest-rated biography they've ever aired (this makes no sense to me, but perhaps every other biography has been about Grover Cleveland or something), he goes into pretty great detail about how he makes his puppets. Yes, he's a ventriloquist (but you knew that), and he uses 3D scanning, modeling, and printing to make his props. Further, he has scanned old puppets and then turned them into CAD models so that he can remake them when they inevitably break. 

Brilliant. That's some serious creativity.

The Stratasys corporate blog noticed, too, and here's a write-up from Full Access that explains a bit:

And he delights in taking his craft where no comedian has ever gone before. So to open his fourth standup special, with a wow, Dunham went to his workshop, where his advanced NextEngine 3D scanner and Dimension 3D printer came in handy as the shooting deadline loomed. "Controlled Chaos" makes an unprecedented multi-territory debut on September 25, and is sure to set TV ratings records and add even more millions to his DVD sales.

The challenge? Convert a custom hot rod into the Achmed Mobile by topping its engine with a giant animated skull of Dunham's best-known character, Achmed the Dead Terrorist. Craftsman and car buff Dunham has made his own characters by hand since his youth and builds and flies his own helicopters. Always at the digital cutting edge as well, he fashioned his largest Achmed skull yet by scanning in a full 3D image of the character's head from one of his many best-selling merchandise items. The printer then output a 3D plastic mold onto which Dunham sculpted and painted the vivid skull face known to many millions worldwide.

Okay, so the comma usage and general writing there isn't the best, but you get the idea. Laser scanning is cool! The world's most-famous comic does it!

For evidence, here's how I found out about all of this. One of the women here in the office who works for our commercial marine department was watching the Dunham bio-pic at home with her husband, and as she's watching she says to him, "Oh my god! That's what SPAR does!" Then she came in this morning and told me all about it. Before too long, I'm showing her Cubify and Shapeways and she's saying, "I want one!"

The days of a 3D printer in every home are coming. They're 2018's microwave oven/DVD player/smartphone. And with 3D printers comes 3D data capture, hand in hand. 

Finally, since it's Friday, how about a bit featuring Achmed Junior, who is apparently the character scanned and printed in the Biography piece (apologies in advance for the squeamish among you - if you don't like off-color humor, do not watch this):

 



Hey, this Treemetrics thing might be working out

In the very last session of SPAR Europe 2010, we heard from a guy named Enda Keane, who had some great ideas about how to use laser scanners to massively improve the documentation of forest resources. He was a forester who'd been turned onto 3D imaging back in 2005 and he was taking the technology and running with it. 

IBM paid attention. Woodlot owners began to pay attention. Now, a year and a half later, some big technology names are starting to pay attention. 

 
If a laser scanner falls in the woods...

Last week, The Irish Times wrote up Dylan Collins' new involvement in Treemetrics; he has joined the company as chairman (which, we'll all assume, means he made a significant investment). Collins is the guy who founded and then sold Demonware and Jolt Online Gaming, so he's a technology guy who knows how to monetize these kinds of things. 

You know, like trees.

Seriously, though, good investment is about recognizing opportunities. The Internet made massively social online gaming possible, and anyone who'd ever met a teenager knew that might make some money along the way. Similarly, 3D laser scanning makes possible the rapid, very accurate counting and assessment of tree stands, and anyone who's ever heard a scientist talk about carbon sinks or an environmentalist talk about deforestation knows there might be some money to be made in updating the way a $75 billion global industry does business.

Or, as Keane puts it:

“People talk about the big data industry, well this is literally some of the biggest data in the world,” said Mr Keane. “The world’s forests are critical to our well-being; they keep us alive (by absorbing carbon dioxide) and they pay for pensions (through government ownership), yet we lose over €10 billion in revenue each year because most measurement is still done using 19th century technology.” 

Currently, Treemetrics is at 12 employees. But don't be surprised to see that rapidly grow. Saving customers $30 million in a single year is likely to attract more business.

We talk here at SPAR about new applications for laser scanning being discovered every day. You ever think you'd hear about a laser scanner being used to count trees? Me neither. What will someone think of next?



Getting deeper for the National Parks and coral reefs with bathymetric lidar

One of my favorite shows to watch lately with the kids is Blue Planet, which has to be the best television production ever to focus on the world's underwater life. To say I learn something with every show is an understatement. Last week, we watched their episode on coral reefs, and I was more than a little fascinated (did you know coral actually attacks other coral that infringes on its territory, using this crazy acid attack?). But also disheartened. Rising sea levels and water temperatures are causing widespread bleaching events, and as much as 16 percent of all the coral on the planet can be lost in a single year. 

So, I'm glad the USGS is working with the National Park Service to keep track of these sorts of things and work to slow coral loss (among other environmental efforts). One of the problems with mapping shallow water is that it's often too deep for traditional land-based surveying and too shallow for vessels with multi-beam sonar to navigate safely (safe for the coral and the vessels, both). 

Over the past decade, however, the USGS Marine Geology Program and NASA have been collaborating on ways to solve this problem. An article posted not long ago documents those efforts. Essentially, with the Experimental Advanced Airborne Research Lidar (EAARL) system, plus some innovative processing techniques, researchers are able to use lidar to gather both normal terrestrial data and submerged terrain data in the same flyover, making the data acquisition much more efficient. 

Operating in the blue-green portion of the electromagnetic spectrum, the EAARL is specifically designed to measure submerged topography and adjacent coastal land elevations seamlessly in a single scan of transmitted laser pulses.

Further, they're able to get deeper than you might think - up to 20 meters deep. This allows the National Parks to quickly inventory and observe big swaths of coral reefs:

One objective of the USGS-NPS collaborative research is to create techniques to survey coral reefs for the purposes of habitat mapping, ecological monitoring, change detection, and event assessment (for example: bleaching, hurricanes, and disease outbreaks).

Plus, they can map beach erosion, vegetation growth and encroachment, and all kinds of other things that need monitoring on the land side. It's pretty easy to see how this data would be hugely valuable in efforts to maintain some of the United States' most precious and spectacular environments. 

It's not easy, though. It's not like you can just hand over the lidar data to researchers. Currently, the process involves taking the lidar data and running it through a specially designed conversion process called ALPS to create GIS-compatible map products. That means exporting as ASCII, then converting using GEOID03, and then creating 2 km x 2 km GeoTIFF map tiles. This seems laborious and processor-intensive. 
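For a sense of what that last step looks like in practice, here's a minimal sketch of gridding an ASCII point export into a GeoTIFF tile with NumPy and rasterio. This is my own illustration of the general idea, not ALPS itself; the file names, tile origin, and coordinate system are assumptions, and the GEOID03 vertical-datum conversion is assumed to have already been applied upstream.

```python
import numpy as np
import rasterio
from rasterio.transform import from_origin

# Hypothetical ASCII export: one "easting northing elevation" triplet per line,
# already in a projected CRS with orthometric heights.
x, y, z = np.loadtxt("eaarl_points.xyz", unpack=True)

# Grid a 2 km x 2 km tile at 1 m resolution by binning points into cells
# and averaging elevations (a crude DEM; real pipelines interpolate more carefully).
tile_xmin, tile_ymax, res, size = 500000.0, 3000000.0, 1.0, 2000
cols = ((x - tile_xmin) / res).astype(int)
rows = ((tile_ymax - y) / res).astype(int)
inside = (cols >= 0) & (cols < size) & (rows >= 0) & (rows < size)

sums = np.zeros((size, size))
counts = np.zeros((size, size))
np.add.at(sums, (rows[inside], cols[inside]), z[inside])
np.add.at(counts, (rows[inside], cols[inside]), 1)
dem = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan).astype("float32")

# Write the tile as a GeoTIFF; EPSG:26917 (UTM zone 17N) is a placeholder CRS.
transform = from_origin(tile_xmin, tile_ymax, res, res)
with rasterio.open("tile_500000_3000000.tif", "w", driver="GTiff",
                   height=size, width=size, count=1, dtype="float32",
                   crs="EPSG:26917", transform=transform, nodata=np.nan) as dst:
    dst.write(dem, 1)
```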

I think it's incumbent on us, going forward, to create quicker paths from lidar to useful information that the managers of these habitats can use to make better care-taking decisions. That won't be easy, either, but I've seen the coral reefs in person as well, and it's certainly worth the effort.

 



So you love BIM - How about some as-built data?

On the face of it, this Railway-Technology.com interview with Neil Sharpe of design studio Weston Williamson is great. Neil says things like, "I think 3D is a must" and "however many 2D drawings you get, there's still a fairly good chance that you might miss something."

Exactly. The benefits of BIM and 3D are nearly countless. 

But. 

I'm frankly shocked that there's exactly zero mention of real-world data capture in the interview (this is a recurring BIM theme, though, I suppose). At this point, it's like people are being willfully obtuse about it. Check out this Q&A:

CL: How does 3D BIM improve a project in the construction phase?

NS: Well there's the co-ordination aspect. However many 2D drawings you get, there's still a fairly good chance you might miss something. We do clash checks on the model at each design stage, so we pick up when there are clashes within the model. There's a further use of the model which contracts up to now haven't demanded from the models that we've produced, but with our own in-house BIM development we're pushing it a lot more - to get automatic scheduling out and go on into construction programming using the BIM model.

Well, that's not exactly the "construction phase," is it? Sure, you do clash detection at each design stage, but how about doing 3D data capture at each construction stage to see where there are clashes between the real world and the design? Wouldn't that help the construction phase? I'm sure BIM helps with the construction programming quite a bit, but think how laser scanning could make sure the programming was being adhered to in practice. 

Later, Sharpe talks about the use of tablets to take BIM out into the field. Good idea:

I think we're also starting to see people break away from the office with the use of tablets to track models, particularly services and facilities maintenance. If you've got a tablet with your model and it knows where you are you can see one particular area needs maintaining every three years or whatever. Taking it out on-site and in the completed building would be useful. But the basic framework is pretty simple and we can add all sorts of stuff on to it.

But wouldn't it be nice to scan the building periodically to see if there's increasing deviation from the model? Is it sinking? Listing? Warping? Sure, you can input notes and such to document things that need improvement over time, but how much better to input real-world as-built data to show what you're talking about? 
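That kind of scan-versus-model deviation check isn't exotic, either. As a rough illustration (my own sketch, not any particular vendor's workflow), here's how you might measure how far a registered as-built scan has drifted from the design mesh using the open-source trimesh library; the file names and tolerance are made up.

```python
import numpy as np
import trimesh

# Design-intent geometry exported from the BIM model (hypothetical file),
# and an as-built scan already registered into the same coordinate system.
design = trimesh.load("design_model.obj", force="mesh")
scan = trimesh.load("asbuilt_scan.ply")          # loads as a point cloud
points = np.asarray(scan.vertices)

# Distance from each scanned point to the nearest spot on the design surface.
closest, distances, triangle_id = trimesh.proximity.closest_point(design, points)

tolerance = 0.01  # 10 mm - a made-up threshold for flagging deviation
out_of_tolerance = distances > tolerance

print(f"mean deviation: {distances.mean() * 1000:.1f} mm")
print(f"95th percentile: {np.percentile(distances, 95) * 1000:.1f} mm")
print(f"points beyond {tolerance * 1000:.0f} mm: "
      f"{out_of_tolerance.sum()} of {len(points)}")
```

Run something like that after each construction stage, or each year of the building's life, and you have a trend line for sinking, listing, or warping instead of an anecdote.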

For whatever reason, it doesn't seem like the circle is getting closed here. If 3D design data is better, wouldn't it make sense that having 3D real-world data to compare it to would be ideal? There's still a lot of education to be done out there (as Kevin Brederson at Pepper can attest).



Using 3D to help ease pain

It's hard to be my usual full-of-levity self when I come across a story like Capt. Sam Brown's. His profile in GQ, a tale of how he was burned terribly after an IED attack in Afghanistan and his grueling recovery stateside, is an absolute must-read. Moving through its five pages, I found myself alternately incredibly sad, writhing with empathy, and highly encouraged.

No, there isn't a great reason why he had to sacrifice so much. Yes, his story will rip your heart out. But the way that a group of researchers is using 3D technology to help him battle the excruciating pain is a testament to the ways in which human ingenuity really can make a difference in people's lives. 

And, really, the solution here isn't that complex. In order to help Capt. Brown and others battle the residual pain of burn injuries, researchers at the University of Washington devised Snow World, an immersive virtual-reality video game in which soldiers blast snowmen and penguins with snowballs while they're undergoing physical therapy, dropping their experience of pain significantly.

This video from ScienCenter does a great job of explaining how it works:

 

And the applications don't stop there. They're using virtual spiders to help arachnophobes; virtual bars to help alcoholics; virtual casinos to help gamblers - the applications are just beginning to be explored. At the moment, most of these virtual environments appear to be created from whole cloth. But just imagine the power these VR "games" could have if created using real-world data. 

In the story they mention the possibility of allowing rehabbing soldiers to continue the fight by virtually piloting drones and performing surveillance and bombing runs. To train for that, they'd need real-world 3D data to help them pilot into those real areas of Afghanistan and Iraq where the enemy is hiding out. 

Or if you're treating a fear of heights - how could any data be better than a real cliff at the edge of a real drop-off? It only stands to reason that the more realistic the environment in which the patient is immersed, the more real, and more effective, the treatment might be. 

Of course, it's impossible to know. Many video game creators have discovered that real-world scenarios are sometimes less interesting to players than imaginary worlds into which they're looking to escape. It's apparently more fun to kill aliens on other planets than in your own city streets. But that hasn't stopped iRacing from making their tracks as realistic as possible. 

Obviously, the training market demands 3D that's as realistic as possible, and you don't have to tell VRContext that. Their Walkinside product allows for completely immersive training that incorporates a real-life, say, plant environment. Take a look:

 

Some have long mocked virtual reality for never living up to its hype (and, really, what ever does live up to the hype of today's 24/7 media?), but maybe some of the delay in adoption stemmed from a lack of great reality capture tools. Now that those tools are starting to become more and more available, will we see a renewed VR interest? I'm thinking so.

Especially if the American military gets behind it.



New free way to do auto feature extraction in Revit

There was a fair amount of buzz when Autodesk announced during last year's SPAR International that you'd be able to work with point clouds in Revit 2012. Then there was a fair amount of disappointment when working with point clouds wasn't all peaches and whipped cream. Then there was some more excitement when Leica announced a Revit plug-in.

So, are we in for some disappointment now?

Almost certainly not. I'm guessing that most Revit users would be excited about the news that Autodesk Labs released last week a free "technology preview" called Point Cloud Feature Extraction for Autodesk Revit, which "allows you to work with point clouds more easily in Revit" and "automatically extracts useful geometry features from point clouds of buildings and creates basic Revit elements to aid the building modeling in Revit."

(Actually, they used the singular "point cloud" in both of those quotes originally, but I changed it because "work with point cloud more easily" just doesn't roll off the tongue for me, personally.)

So, what kind of features are we talking about extracting here? Autodesk provides some demos. Here's auto wall extraction:

 

Here's how you extract both a level and an orthogonal grid:

 

Here's wall extraction:

 

You get the idea. No, there's no door extraction or light fixture extraction or window extraction as of yet, but it would seem like this is a pretty good first step for the scan-to-BIM marketplace. Will this technology preview make it into Revit 2013? That's still to be determined. I'm guessing the feedback they receive on the value of this kind of technology will have a pretty big impact on that. So download it. Try it out. Tell them if it stinks. It certainly can't hurt (assuming you've got the time on your hands).
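For the curious, the kind of geometry extraction the preview promises typically starts with fitting planes to the cloud and keeping the vertical ones as wall candidates. Autodesk hasn't published how its preview works, so here's a generic, hedged sketch of that idea using RANSAC plane segmentation in the open-source Open3D library; the file name and thresholds are assumptions.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("building_interior.ply")  # hypothetical scan
remaining = pcd
wall_candidates = []

# Peel off the largest planes one at a time with RANSAC; keep the
# near-vertical ones (normal roughly horizontal) as wall candidates.
for _ in range(10):
    if len(remaining.points) < 5000:
        break
    plane, inliers = remaining.segment_plane(
        distance_threshold=0.02, ransac_n=3, num_iterations=1000)
    a, b, c, d = plane
    segment = remaining.select_by_index(inliers)
    remaining = remaining.select_by_index(inliers, invert=True)
    if abs(c) < 0.2:  # small z-component of the normal -> vertical plane -> wall
        wall_candidates.append((plane, segment))

for i, (plane, segment) in enumerate(wall_candidates):
    print(f"wall {i}: {len(segment.points)} points, plane {np.round(plane, 3)}")
```

Turning segments like those into actual Revit wall elements, with types, thicknesses, and joins, is the hard part - which is presumably why Autodesk is calling this a technology preview and asking for feedback.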

