April 17, 2014

Continental View

Justin Toland

Continental View, written by Justin Toland, covers the European 3D data capture and imaging marketplace, looking at the many ways organizations are using 3D data to make better decisions about their businesses, create efficiencies, and reduce risk. Justin is a long-time business journalist, who speaks Dutch, French, and English. You can find him at justin@justintoland.com.

Finding your point in the products and applications cloud

Scratch the surface of the 3D data acquisition industry and you soon realise it's not just one surface; rather, it's a mesh of related industries as dense as a point cloud. The sheer diversity of new 3D products and applications on show earlier this month at SPAR Europe illustrates this well. All are part of the same sector, yet each targets a specific niche: from laser scanning in the harsh conditions of a gassy mine to providing an immersive 3D environment for geologists to plan fieldwork, the variety is almost endless.

Thus, when I caught the Tuesday afternoon session on New 3D Technologies and Applications, it ran the gamut from the photogrammetric 'gigatextures' of xRez Studio to Mantis Vision's handheld scanners, via the point cloud CAD add-ons of Lithuania's InfoEra.

 
InfoEra's Undet point cloud processing software.

As co-founder Egidijus Zilinskas explained, the latter company has developed point cloud processing software called Undet, which works with AutoCAD to increase 2D drafting and 3D modelling productivity at an entry-level price.

A significantly higher price tag (a figure of 60,000 euros was mentioned) is attached to the handheld scanners developed by Israel's Mantis Vision 3D Technology. VP Shabtay Negry spoke about the company's F5 3D handheld camera at SPAR Europe 2010, describing it as a breakthrough technology designed for field use in dynamic scenes by non-experts, whose active triangulation provides sub-mm accuracy at a range of less than 1 m. Use and acceptance of the product is growing: "We have customers in Sweden, the Netherlands, Japan and Australia," noted Negry. Further developments are also in the pipeline: "We are working on bringing to market a wide-angle field-of-view product," said Negry. This could be used for mapping large environments, such as crime scenes and other specified areas. "In six to seven months we will have a short-range system with embedded color," he added.

Multiple camera synchronisation has been successfully demonstrated. Mantis Vision is also working on increasing scanning range, from 4.5 m at present to "maybe 10-15 m," said Negry. Another aim is "capturing static and dynamic data to process together." 

According to Negry, what makes Mantis Vision's active triangulation unique is the flexibility of the coding, which means you can extend the number of characters and therefore the resolution. "You can have 50,000 coordinates on a single plane - we would like to go up to 500,000 … We are going to increase the resolution by a factor of five to 10 more," he added.
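To make the depth side of this concrete, here is a minimal sketch of the general principle behind active triangulation with structured light. It is the textbook geometry, not Mantis Vision's proprietary coding: once a projected code identifies which projector stripe lit a camera pixel, depth follows from the disparity across a known baseline. The numbers used are illustrative assumptions.

```python
# Minimal sketch of depth recovery by active (structured-light) triangulation.
# The projected code's only job is to establish correspondence; depth then
# follows from simple stereo geometry: z = baseline * focal_length / disparity.

def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Depth in metres for one matched point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# Illustrative values: 7.5 cm baseline, 800 px focal length, 60 px disparity.
z = depth_from_disparity(0.075, 800.0, 60.0)
print(round(z, 3))  # 1.0 (metres)
```

The resolution claim maps onto this directly: the denser and more distinguishable the code, the more independently matched points (and hence depth samples) fit on a single projected plane.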

That would make an already impressive piece of technology even more so, but when it comes to high-resolution 3D, the photogrammetric gigapixel textures of Greg Downing and his company xRez Studio really take the cake (fittingly, the company name is short for 'extreme resolution'). Downing, who has a background in visual effects for feature films, started xRez with a partner about six years ago. The company has come to specialize in so-called 'gigatextures': gigapixel photographs overlaid onto 3D digital terrain models. "We do a lot of work in natural science and cultural heritage," explained Downing.

He then told the fascinating story of how xRez was asked to produce a gigapixel image of Yosemite National Park in the US to help the park's geologist work out where the most dangerous places for rockfalls are. Unfortunately, the park had no budget for this, so xRez came up with the idea of getting their friends to help: "The only way we could do something this ambitious was to crowdsource it," said Downing. So, with support from Microsoft Live Labs and Google and the help of 20 photographic teams with Gigapans (shooting 10,000 images/hr), they were able to photograph the entire park from 20 locations and generate a final image approximately 300,000 pixels wide.

xRez was also able to use a 1 m resolution aerial lidar model (captured thanks to the one daily hang glider flight allowed by the park authorities) and overlay the gigapixel photo onto the 3D digital terrain model. As well as providing a beautiful, easily searchable visual record of the park, the gigapixel image enabled the park's geologist to assess the possibility of rockfalls near Curry Village, leading to the relocation of 300 campsites away from danger. "The park's search and rescue team is also excited by the possibilities of the gigapixel photo," added Downing. (You can see the complete 45-gigapixel image online and read more about the Yosemite Extreme Panoramic Imaging Project here).
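The core of the "gigatexture" idea — draping an aligned image over a terrain model — is simple to sketch. The following is a hypothetical, heavily simplified illustration (real pipelines also handle map projection, occlusion and level-of-detail streaming, none of which appear here, and the function name `drape` is my own): each DEM cell becomes a 3D vertex carrying the colour of the corresponding photo pixel.

```python
import numpy as np

def drape(dem, photo, cell_size_m=1.0):
    """Drape an aligned image over a terrain grid.

    dem:   (H, W) array of elevations in metres (e.g. from aerial lidar).
    photo: (H, W, 3) RGB image already registered to the same grid.
    Returns (H*W, 3) vertex positions and (H*W, 3) per-vertex colours.
    """
    h, w = dem.shape
    ys, xs = np.mgrid[0:h, 0:w]                      # grid row/col indices
    verts = np.column_stack([xs.ravel() * cell_size_m,
                             ys.ravel() * cell_size_m,
                             dem.ravel()])           # (x, y, z) per cell
    colours = photo.reshape(-1, 3)                   # matching pixel colours
    return verts, colours

# Tiny 2x2 example grid at 1 m resolution.
dem = np.array([[10.0, 11.0], [12.0, 13.0]])
photo = np.zeros((2, 2, 3), dtype=np.uint8)
verts, cols = drape(dem, photo)
print(verts.shape, cols.shape)  # (4, 3) (4, 3)
```

The interesting engineering in a project like Yosemite's is everything this sketch omits: registering a 300,000-pixel-wide mosaic to the lidar surface, and serving it at interactive rates.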

Here's another example of the firm's work, this time with Glacier Works, a non-profit organization that uses art, science and adventure to raise public awareness and encourage mitigation of the consequences of climate change:

 

Khumbu Glacier Approach from xRez Studio on Vimeo.

I guess what this cornucopia of applications and services shows is that it's not about the point cloud so much as it's about the point. (Or, in other words, what are you using the data for?). As Dr. Ivo Milev reminded everyone during the Mobile Scanning for Railways session: "A point cloud is a measurement, not a product." Amen to that.



SPAR Europe demos bring 3D to life

 
Firms were asked to present information that would help in removing a piece of equipment from the boiler room of the World Forum.

As a relative newbie to the industry, one of the most eagerly awaited parts of SPAR Europe for me was the live demonstrations taking place at a variety of locations in and around the World Forum. This was the first chance I had had to see how the experts conduct scan work in the field and it was highly educational. I was able to check out three of the demos as they took place (Faro at the Murder Scene in the exhibit space, Leica in the Boiler Room and Riegl at the Roadway Accident Scene – the original scene replaced by an old Jeep Wagoneer, after a bit of miscommunication about how shot-up cars might be received in a UN zone…).

Something that really stood out for me was the speed of the scanning process in all three cases. With the latest technology – such as the Faro Focus 3D, Leica HDS7000 and Riegl VMX-250 mobile laser scanning system – scan time has been significantly reduced, a development that is definitely helping to drive the growth of the industry.

Unfortunately, the demands of editorial coverage and session moderation meant I wasn't able to see the results of any of those three live demos (look for coverage in coming weeks from Sam Pfeifle). However, I was able to catch the results from the scans of the world-renowned Gemeentemuseum, conducted by Faro's Wolfgang Hesse, Erich Buttner of Z+F, Andrew Evans of Topcon and Steven Ramsey of Leica.

 
Investigators are relying more often on scans to help them notice clues they may have overlooked in real time.

All four companies achieved excellent results, but the most interesting thing from my point of view was the different ways they went about the task, for which each team was allotted half an hour. Hesse, for instance, made four scans without targets because of the need for speed. Despite not using target spheres, he was able to achieve an accuracy of between 6 mm and 1.8 cm, with a standard deviation of 1.25 cm using a FARO Focus 3D. “It would be less than 1 cm if I took a little longer, and 4 mm overall using spheres,” he said.

By contrast, Z+F's Buttner took scans from eight positions (high resolution and normal quality, including one scan with color information), whilst Andrew Evans from Topcon scanned from three positions (15 minutes per scan, using up slightly more time than was allotted) and then used tie points to marry the scans. Leica’s Steven Ramsey only scanned from one point as attendance was light for his demonstration, but made up for this by taking scans at a variety of resolutions. “The Leica C10 has the ability to do multiple scans from the same location at different resolutions for different needs – that flexibility is very important,” he said. According to Ramsey, “a complete architectural scan of the front of the building would take around 2 hours, then 10 minutes processing in the office and it is ready to draw.” Impressive stuff.
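"Using tie points to marry the scans," as Evans did, comes down to a rigid-body fit: given the same tie points measured in two scan coordinate frames, solve for the rotation and translation that map one frame onto the other. Below is a sketch of the standard SVD-based (Kabsch) solution to that problem — the textbook algorithm, not any vendor's registration software, and the synthetic check data is invented for illustration.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform: src, dst are (N, 3) corresponding
    tie points. Returns R (3x3) and t (3,) such that R @ src_i + t ~ dst_i."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate and translate known points, then recover the move.
rng = np.random.default_rng(0)
a = rng.random((5, 3))                       # tie points in scan A's frame
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
b = a @ R_true.T + np.array([1.0, 2.0, 3.0])  # same points in scan B's frame
R, t = fit_rigid(a, b)
print(np.allclose(a @ R.T + t, b))  # True
```

Three well-spread, non-collinear tie points are the minimum for a unique solution, which is why registration workflows ask for targets (or tie points) distributed around the scene rather than clustered together.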

 
Riegl waits to cross the driveway to the accident scene.
 
Topcon displays the results of their scan of the Gemeentemuseum.



3D headset has the wow factor, but data capture promises to keep heads turning

This week sees the launch in Japan of Sony's Tron-inspired 3D headset, the HMZ-T1. Now, as well as looking hella cool, with its separate high-definition OLED display panels (doing away with 'crosstalk' between left and right images) and "Virtualphones" surround sound, early reviews suggest it also packs enough technological punch to become a much sought-after new way of watching movies and playing video games.

A 3D headset that makes you look like Geordi from Star Trek certainly has the wow factor, but, at the end of the day, it is just another playback technology. What I think is much more exciting is the possibility of using consumer technology to capture 3D data and create new applications in entertainment, shopping, security and even medicine. That is what third-party developers are doing with the Microsoft Kinect controller-free interface for Xbox 360, using the Kinect for Windows SDK beta programming toolkit, creating everything from a virtual piano (just tap your fingers on an empty desk to play) to a virtual way of trying on dresses in a store, to 3D videoconferencing and video surveillance tools, to applications for manipulating MRI scans to guide cancer surgery.

Kenji Takeda, Solutions Architect and Technical Manager at Microsoft Research Connections EMEA (UK), will be highlighting a whole heap of 3D scanning and Natural User Interface applications for Microsoft Kinect at SPAR Europe this week.

One of the most exciting ones I have seen is demonstrated by this new music video from the Canadian duo New Look:

 

Created by Tim & Joe, the video for "Nap on the Bow" takes place in a wonderful underwater 3D world made entirely from data captured using Kinect. To see exactly how they made the video, click here.

Of course, 3D data capture isn't only useful for building virtual aquatic worlds, it can also be used for as-built measurement of industrial equipment and installations below the waves in real life. Marc Daeffler and Arnauld Dumont's presentation on Underwater Photogrammetry at SPAR Europe promises to be highly instructive in this regard, particularly concerning developments in image and data processing and underwater camera calibration. One not to be missed.

