July 28, 2014

Head in the Point Clouds

Sam Pfeifle

Sam Pfeifle was Editor of SPAR Point Group from September 2010 - February 2013.

And the Sandy help happens

I'm not one to tell you "I told you so," so I'll just say I'm happy to see that Woolpert has been profiled with a nice story in the Dayton paper about how their mobile lidar technology is helping with the Sandy clean-up effort. As I predicted would happen.

Sorry. I can't help myself sometimes. 

Anyway, Woolpert is "taking part in a Rutgers University project to collect 3D visual data of hurricane-ravaged areas using geospatial mapping technology ... [and that data] will be used as a resource to identify damaged objects and plan the reconstruction."

Could you just drive through the actual affected areas and identify damaged objects? Obviously. But I can see how it would be much more efficient to identify them all in the comfort of an office, which would allow you to database them much more quickly. 

However, I might quibble with this statement: 

“This is the first time anywhere that this technology has been used for disaster recovery,” said Jeff Lovin, Woolpert’s director of geospatial services.

I mean: Ahem (2006, Katrina). And, even if you allow he only means mobile lidar technology, well, further ahem (2011, Japan). 

But let's not quibble. Woolpert is to be commended for doing the scanning at cost and helping with the clean-up and restoration effort. There's no doubt that valuable information can be gleaned from contrasting where structures wound up vs. where they used to be. There ought to be all manner of engineering lessons learned there that can inform future building and architecture efforts. 

Further, kudos to their PR team. The article is a great education effort for the industry as a whole, with a thorough description of the technology for the layman and a description of its overall practical applications. 

And there are plenty of people still hurting down there in the New York/New Jersey area. Let's hope this helps them get back on their feet a little quicker.



More wind in the lidar sails

We've written previously about lidar's use in the wind-energy sector, as it's been fairly extensively used to determine where to place wind turbines and to assess an area's general potential for generating wind power.

Now comes news that the U.S. Department of Energy will be goosing researchers' ability to study how best to use lidar to determine wind-farm feasibility. By all accounts, the Chesapeake Light Tower, a former Coast Guard lighthouse about 13 miles off the coast of Virginia Beach, will be converted into the Reference Facility for Offshore Renewable Energy (read: not oil), which will be organized by Pacific Northwest National Laboratory, with renovation help from the National Renewable Energy Laboratory. The facility should be operational by 2015.

Germane to this space is that they'll be testing lidar devices quite a bit. To wit:

With these instruments, researchers measure wind strength and direction by emitting light and then observing when and how some of that light is reflected off of tiny bits of dust, sea spray or other particles blowing in the breeze. Lidar devices for offshore wind measurement would be placed on buoys in the ocean. However, ocean waves move buoys up and down, which would also send the device's light beams in multiple directions. So scientists have developed methods to account for a buoy's frequently changing position to collect the wind data they need.
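To make the geometry concrete, here's a minimal sketch of that kind of motion compensation - assuming a roll/pitch/yaw attitude from an IMU and a GPS-derived buoy velocity, which are my illustrative inputs, not a description of any vendor's actual pipeline:

```python
import numpy as np

def correct_radial_wind(v_radial, beam_dir_body, roll, pitch, yaw, buoy_vel):
    """Correct one lidar line-of-sight wind reading for buoy motion (sketch).

    v_radial: speed measured along the beam (m/s)
    beam_dir_body: unit vector of the beam in the buoy's body frame
    roll, pitch, yaw: buoy attitude from an IMU (radians)
    buoy_vel: buoy velocity in the earth frame (m/s), e.g. from GPS/INS
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Body-to-earth rotation (Z-Y-X convention) recovers the beam's true
    # pointing direction despite the buoy's tilt.
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    beam_dir_earth = R @ np.asarray(beam_dir_body)
    # The raw reading includes the buoy's own motion along the beam; remove it.
    v_wind = v_radial - np.asarray(buoy_vel) @ beam_dir_earth
    return beam_dir_earth, v_wind
```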

That's where the reference facility comes in. Mathematically corrected data from buoy-based lidar is a new ballgame for the wind energy industry. To prove that the data they collect are both reliable and accurate, wind assessment lidar devices would be placed both on buoys floating near the facility and on the facility itself. Wind data would be collected from both sources and evaluated to determine the buoy-based technology's accuracy.

It wouldn't surprise me if they were using the wind-lidar buoys developed by Fugro, as announced this past summer. The Fugro buoy was quite the advancement, with R&D minds from Norwegian universities, Statoil, and Fugro Oceanor all chipping in together to figure the technology out. As you can see in the accompanying picture, it's a fairly compact device, which should be relatively easy to deploy and maintain, though it may be a bit difficult to see in bad weather conditions. 

And the results they got in testing are pretty amazing: 

Validation of the Wind Lidar Buoy took place at an exposed location off the coast of Norway. The tests were designed to compare wind data collected by the buoy to data from a similar lidar located on land and from a fixed met tower. Wind velocities up to 20 m/s and wave heights up to 5 metres were recorded. The average deviation in wind speed measurements between the Wind Lidar Buoy and the reference stations was less than 2 percent.

One wonders how else that research might be applicable. Could similar software allow for better mobile scanning operations? Maybe even hurricane and tornado warning systems? It will be interesting to see what comes of the research done at the new facility.



Lidar: All up in your grill

In my first CES round-up, I pointed out the buzz Lexus was getting for its self-driving vehicle, which features the well-known Velodyne scanner spinning around on the roof. How I missed Audi's much-sleeker entry into the lidar-based auto-driving market I'm not sure. It's pretty rad.

Car and Driver does a great job of summing up the user experience in their blog entry here detailing the test drive:

Traffic-jam assist combines the sensory information from Audi’s existing radar-based adaptive cruise control system and the car’s lane-keeping assist camera (which monitors lane markers) with a compact, bumper-mounted LIDAR (Light Detection and Ranging) laser. Audi is particularly proud of this LIDAR unit, not the least because it packs all of the sensory ability of the giant, roof-mounted LIDAR towers seen on Lexus’s autonomous LS, Google’s fleet of self-driving Priuses, and the Pikes Peak autonomous Audi TT into a brick-sized piece of hardware. Software allows the system to steer the car; if any significant torque is applied to the steering wheel, control is relinquished back to the driver.

Well, I don't know about "giant, roof-mounted lidar towers." That seems a bit overblown. But there's no arguing that Audi's version is far sleeker:

[Image: Audi's compact, bumper-mounted lidar unit]

(For those still not understanding my headline, go here. But only if you have a sense of humor.)

I'd consider this a pretty impressive development. The lidar sensor provides 145 degrees of visibility, so the car neither smacks into the vehicle ahead nor misses cars trying to cut in front of you in traffic. All of that while not looking ugly. Well done, Audi.

Engadget got a nice interview at CES in the Audi booth, which was by all accounts very impressive. It outlines Audi's vision for the "piloted car" and how the company thinks people will use the functionality:

[Embedded video]

Am I the only one slightly amused by the constant assurances that the piloted function doesn't work on the highway because of how fun it is to drive an Audi on the highway and no one would want to give that up? I think they're just not quite ready to promote it at highway speeds. Try my commute to work in the Hyundai - I don't think an Audi would make 25 miles on the Maine Turnpike into a laugh riot. 

Regardless, all of this activity shows that carmakers are making a real commitment to the capture of 3D data to inform the performance of their vehicles, and I think it's only a matter of time before lidar is built into vehicles of all sorts on construction sites and in industrial facilities for safety purposes. 

But think about what else could be done here. What if, while these vehicles were scanning for obstructions, they were also constantly replenishing your as-built point cloud documentation? Like a Viametris unit in the front grill of your bucket loader. Couldn't you just do a data dump every night and then let it register overnight and, voila, updated point cloud?
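For the curious, here's a rough sketch of what that nightly dump-and-register loop might look like, using Open3D's ICP as a stand-in for whatever registration pipeline a site would actually run - the paths, threshold, and merge strategy are all made up for illustration:

```python
import open3d as o3d

def nightly_update(asbuilt_path, todays_scan_path, threshold=0.05):
    """Align today's drive-by scan to the as-built cloud and merge (sketch)."""
    asbuilt = o3d.io.read_point_cloud(asbuilt_path)
    scan = o3d.io.read_point_cloud(todays_scan_path)
    # Register the new scan against the existing as-built documentation.
    result = o3d.pipelines.registration.registration_icp(
        scan, asbuilt, threshold,
        estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())
    scan.transform(result.transformation)
    # Merge and write back: the as-built model grows a little every night.
    o3d.io.write_point_cloud(asbuilt_path, asbuilt + scan)
```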

Something to consider while you count your pennies in advance of the release of this new A6.



Should you listen to the VCs in 2013?

One thing I know about news like 3D Systems' buy of Geomagic (and Rapidform...), or the many other acquisitions that have rippled through the 3D data acquisition space in the last couple of years, is that it's attracting the attention of folks with money to invest. When start-ups start cashing out, that smells like opportunity to people who know how to make money work. 

That should be a good thing for our market. Money breeds innovation and is likely to spur more technologists to consider start-ups that work with 3D data, hardware or software (but probably software), and that should lead to better products that will help lots of people do their jobs better - which will lead to more money flowing in... You get the idea. 

And where does that money come from? Often, the venture capitalists (the "VCs," for those of you not big in the investment marketplace). There are other ways to raise capital, of course: "angel investors," like friends and family who happen to have a few million extra bucks lying around; grants; even traditional loans. But you don't see that much in the technology space, since a lot of these new ideas are risky and don't make a lot of sense to conservative banker types who need to make sure they get their money back (especially in this lending environment). 

So, I thought this article on Venture Beat was pretty interesting and possibly instructive for how people should be approaching growth in our market this year. Essentially, VB got a bunch of VCs to talk about what they see as "hot" for 2013, and some of it relates to our little corner of the business universe.

Take Mike Maples, founding partner at Floodgate Capital, which got in early on Twitter:

In 2013, people will expect to be hyperconnected on the hyperweb: They will want to manage content on any device. We will see user experiences that are no longer assumed to be windows on a computer screen or a smart phone. Some of my favorite examples are Google’s self-driving car or Nest‘s thermostat, which learns the temperature you like and turns it down when you’re away. 

Sure, the self-driving car has some lidar going on, but think about the big picture here. Fundamentally, what you're doing with laser scanning and photogrammetry is capturing the real world and making it digital; you're allowing for interaction with things and places that may be otherwise completely out of reach for most people - rare objects and artifacts, remote places (or at least places that are expensive to travel to), and, yes, offshore platforms that are a pain to get to. How can the user experience of 3D data, the way that people interact with a point cloud or 3D information, be improved so that people can't get enough of it? Or so that early adopters can quickly and easily convey the utility of the information to those who are a little more skeptical?

Think about what Ross Fubini, partner in Canaan Partners, is talking about when he's talking about the potential in "big data":

“Data has been a big deal and a big market for years — SQL, hello! But ‘big data’ is a big deal because of the sheer volume of trackable data and because it’s cheaper than ever before to build an application to make that data valuable. By the end of the year, we will see some big winners emerge leading up to some splashy 2014 and 2015 IPOs.” 

No one can create more data faster than a guy with a laser scanner. So much data can be captured that the possibilities are really just starting to be explored (and what Bipul Sinha, investor at Lightspeed Venture Partners, says about storage getting sexier bodes well for managing and moving all that data), but it's those "application[s] to make that data valuable" that are the crux of the situation. People want to interrogate their world more than ever before. 3D data allows them to do that in very new ways. How does that become a business?

Maybe in a way that's new and different. Look at the way that LeapMotion is entering the market: By giving away their device to 10,000 developers and letting them create applications on their own before the device is even available to the public so that when it hits, there will already be a suite of apps that people can use to take advantage of the new gesture-control system. Should Faro's app store have been launched two years BEFORE the Focus3D? Should Leica's P20 have hit the market alongside 10 new custom-oriented software packages that targeted 10 specific verticals so that the P20's advantages could immediately and obviously be capitalized on? Heck, should Leica just make Cyclone free and charge for the plug-ins and for access to the SDK? I'm making things up, but the distribution of developers, and the network of people looking to use their coding ability in new markets, is something 3D data acquisition needs to be taking advantage of more.

Heck, Google thinks indoor location is the next big thing, and what can contribute to that better than laser scanning and new products like the Zebedee/Viametris/TIMMS/Allpoint solutions? 

“Indoor location will be bigger than GPS, which only works outdoors. We spend 90 percent of our time indoors, whether it’s in shopping malls, offices, schools, restaurants, and so on, where GPS doesn’t work or is inaccurate. In 2013, you’ll use your smartphone to find the exact store aisle location for every item on your shopping list.

So says Don Dodge, developer advocate at Google. (You might want to check out Venture Beat's round-up of indoor location companies, too.)

“With indoor location, you can find people, products, or services plotted exactly on a floor plan with walking directions to get there. You could receive coupons, advertisements, or free offers for products based on where you are in a store. Imagine playing indoor location games like capture the flag, tower defense, or other games based on real-life indoor locations. There are thousands of applications in many different market segments that will be built using accurate indoor positioning technology.”

Can you be a part of rapidly acquiring accurate indoor data and serving that back up to folks who'd like to navigate it? Seems like folks like Matterport, etc., could get in on that, no? What are the trade-offs between precision and speed, between density and usability?

Of course, Brian Singerman, partner at Founders Fund, has the most level-headed outlook (even if it's solidly cliche):

We are always highly suspicious of trends. The best investments are often in companies and industries that others do not consider hot or trendy. Therefore, a theme for 2013 will be to not invest in trends, but rather long-lasting value. Trends come and go, but the best companies will be the ones that buck the trends and don’t look like all the others, companies that don’t appear to have much competition. 

Talk about opportunity! I can't tell you how many times someone has asked me, "Who's their competition?" about a company in our space, and I've sat there and thought about it and then said, "Well, no one really does EXACTLY what they do." Laser scanning and high-end photogrammetry are right there on the cutting edge, creating new possibilities to deliver things that people will value once they find out about them.

How are you spreading the word and who might be able to help you along the way? 

Actually, though, I'm reminded of a question I get from bands all the time because of my music writing: "How do we get an agent?" My answer? "Just be awesome, and the agent will find you."

If you're looking for funding, the answer is probably pretty similar: "Just be awesome."

That shouldn't be so hard.



From prototype to product in six months

While I do my best not to just be a SPAR cheerleader, it's hard not to be pleased when it really seems like what we're doing at our conferences actually makes sense. For instance, when someone shows up at a show with new technology, then shows up at the next show with a commercial product. That's the way it's supposed to work, right?

I could make this point about Allpoint Systems, actually, which went from a point-to-point terrestrial laser scanning robot at SPAR International in April to a more lightweight and portable terrestrial scanning tripod and software package last week at SPAR Europe. But the point of this post is to make the point about CSIRO, the Australian research agency that showed up at SPAR International in April with a weird, floppy indoor scanning device, gathered a bunch of folks around various lunch tables to show it off, and then showed up at SPAR Europe with a brand-new product announcement alongside 3D Laser Mapping. That's how to move from point A to point B (yikes, that's as many "points" as your average laser scanner produces...). 

They were certainly one of the more buzz-inducing products at SPAR Europe, and I made sure to grab a video interview with Elliot Duff from CSIRO and Jon Chicken from 3D Laser Mapping to talk about the technology and how 3D Laser Mapping plans to bring it to market. You can catch it in the following piece, where I make sure to ask Elliot about the floppy thing he has in his hand. How polite of me:

[Embedded video]

Did I then ask him why it's better to swing your floppy thing around? Why yes I did. Perhaps I could have been a little more articulate, but hopefully you get the general idea. I'll be on the lookout for a good real-world case study using the product in the near future.



How lidar can help in the Sandy clean-up

There is a big job in front of those who have to clean up after Hurricane Sandy. The damage is massive and continuing as flood waters recede to reveal just how powerful Mother Nature can be. I can barely imagine what the people of New York and New Jersey, especially, are going through. It sounds trite to even say, but it was just a couple months back that I was enjoying a concert on an old Atlantic City airfield that is now completely underwater and walking a boardwalk that is now almost completely washed away.

Of course, it's my job to look at the world's news through a 3D data lens, and it's unquestionable that lidar, both mobile and airborne, has a real role to play in the clean-up, as do the many other forms of 3D data capture.

Certainly, it's good news that as recently as 2010 NYC got a full airborne lidar treatment. Ostensibly, it was so solar maps could be created as part of a city-wide environmental push, but that data can no doubt help establish where the water is most likely to flow and which areas of the city will be underwater the longest. Already, in March of 2010, New York State agencies were talking about how best to use lidar and other elevation data, and the tracking of storm surges was one of the mentioned uses. It's interesting to note, though, that the Department of Environmental Conservation mentioned that "LiDAR itself is not enough for drainage analysis – must enforce the hydrography and carve it into the DEM," but they're also using lidar for "Dam emergency management, state forest management (using LiDAR with and without vegetation); Impacts of sea level rise; Infrastructure risk with flooding."
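That "enforce the hydrography" comment refers to what GIS folks often call stream burning: carving known stream lines into the elevation model so that modeled water actually follows mapped channels. A toy sketch, with a made-up burn depth and cell list, just to show the idea:

```python
import numpy as np

def burn_streams(dem, stream_cells, burn_depth=2.0):
    """Lower DEM cells under mapped hydrography so drainage analysis
    routes water along known streams (illustrative, not a production tool).

    dem: 2D elevation array; stream_cells: iterable of (row, col) indices.
    """
    burned = dem.copy()
    for r, c in stream_cells:
        burned[r, c] -= burn_depth  # carve the channel into the surface
    return burned
```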

Hopefully, this means the city is already armed with much of the data it needs to respond appropriately to Sandy, and it definitely seems in the initial days after the storm that the city had a good plan of attack for evacuation and response. Yes, the damage has been immense, but there has been relatively little loss of life for such a large and destructive storm - especially when you consider the number of lives lost in Hurricane Katrina (over 1,800) as a comparison.

Now, as the clean-up commences, perhaps organizers can learn from those who worked in New Orleans after Katrina and in Japan after the 2011 tsunami. You can see a summary of the post-Katrina lidar work here and we recounted some of the lessons learned in Japan by the USGS, who presented at SPAR 2012. It's clear that mobile and terrestrial lidar can make damage assessment safer and provide vital information as the rebuilding begins. Asia Air Survey and StreetMapper provided this very kind of data not long after that March earthquake. Which structures will need to be completely demolished? Which can be spared and rebuilt? 3D data can certainly help with these kinds of decisions.

It may even be that they should laser scan the New York subways for posterity before they're renovated, as it's possible some of the infrastructure will have to be gutted in a way that makes them almost unrecognizable, and there are certainly historic engineering feats in those tunnels under the city.

The recovery effort will most certainly be, as we say here in Maine, a tough row to hoe, but here's hoping that the use of 3D data can make it more efficient and get people back to normal as quickly as possible.

If anyone out there is doing any lidar and laser scanning work in the Sandy-impacted areas, make sure to drop me a line so I can spread the word.



Three ways to inject 3D into the infrastructure debate

Here in the United States, I think it's safe to agree we're all immersed in political season, with debates and political discourse top of mind for many. That means it's a great time to inject 3D data capture into the conversation and get it in front of lawmakers (and wannabe lawmakers) as they seek your vote.

For me, the slam dunk is infrastructure. It's crumbling here in the United States, and how to fund and fix much of it is a talking point for both sides of our political aisle. So, in this week's SPARVlog, I offer up three ways in which those of you in the 3D data capture community can inject yourselves into the debate, for the good of the country, the industry, and for the good of your individual businesses and operations.

[Embedded video]



Are you sure that Space Shuttle will fit?

Leave it to NASA to demonstrate another perfect application for the use of laser scanning - let's call it clash detection on a very grand scale. You've probably all heard about how the retired Space Shuttle Endeavour was flown on the back of a jumbo jet from Florida's Kennedy Space Center to LAX. Sure, that took some planning. But that was the easy part. In mid-October comes the very difficult part: Getting the Space Shuttle through 12 miles of LA-area streets to its final resting place at the California Science Center.

See, as outlined in this story from NPR, the Space Shuttle is enormous: a wingspan of 78 feet, a tail piece that reaches five stories into the air, and a weight of some 170,000 pounds. There are only so many paths from LAX to the Science Center that can accommodate that. And to decide on just the right path, they laser scanned the routes they had in mind and then virtually took the shuttle through the streets to make sure it would fit - or not:

The route was selected after a team of engineers donated hundreds of hours to figuring out the best way to get the intact shuttle from the airport to the science center. They had to find streets that were wide enough, not too steep, and able to bear the weight of the 170,000-pound spacecraft. They used computer simulations and lasers to precisely measure distances to possible obstructions, like buildings and traffic lights. A lot of stuff has to be moved.

"For almost seven months now, we've been elevating power lines so that the tail can clear the power lines," notes Phillips.

Some 500 trees had to be sacrificed so that the wings wouldn't get caught up on them.

Even so, they're only planning to drive 1 mph. At top speed. The whole trip is going to take two days. There will be times when the tips of the wings are just inches from buildings. Talk about a nerve-wracking drive.
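Mechanically, the simulation boils down to clash detection: sweep the shuttle's envelope along the scanned corridor and flag any scan points that intrude. Here's a toy sketch - the wingspan and tail height come from the article, while the fuselage length, path format, and clearance are my illustrative assumptions:

```python
import numpy as np

WINGSPAN_M = 78 * 0.3048      # 78-foot wingspan, per the article
TAIL_HEIGHT_M = 5 * 3.3       # "five stories," roughly
FUSELAGE_HALF_LEN_M = 20.0    # assumed, for the bounding box

def find_clashes(points, path, clearance=0.1):
    """points: (N, 3) street scan; path: list of (x, y, heading) poses."""
    clashes = []
    for x, y, heading in path:
        # Express scan points in the shuttle's local frame at this pose.
        c, s = np.cos(-heading), np.sin(-heading)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        local = (points - np.array([x, y, 0.0])) @ rot.T
        inside = (
            (np.abs(local[:, 0]) < FUSELAGE_HALF_LEN_M)
            & (np.abs(local[:, 1]) < WINGSPAN_M / 2 + clearance)
            & (local[:, 2] < TAIL_HEIGHT_M + clearance)
        )
        if inside.any():
            clashes.append(((x, y), points[inside]))
    return clashes
```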

By comparison, the flight out was pretty simple - and definitely delivered some amazing video. Until we get a look at the trip through LA, we'll just have to settle for a look at the trip over LA. Talk about cool:

[Embedded video]

Anyone out there actually help out with the scanning? Drop me a line. I've got some calls out in the meantime.



How MIT hacks a Kinect

Considering all the ways the Kinect has been hacked so far, it was probably only a matter of time before the minds at MIT put the toy to good use. And their first shot out of the gate is pretty impressive.

Like a number of organizations, they've attacked the challenge of indoor mobile mapping (in a GPS-denied environment). While others have used quadcopters and carts, MIT has gone with the human-carried platform, as outlined in this paper, prepared for the International Conference on Intelligent Robots and Systems 2012, coming up next month in Portugal. They developed the system with money from the Air Force and Navy, so it's pitched as being good for search and rescue and other security/safety operations. Worn over the torso, the package includes:

A Microsoft Kinect RGB-D sensor, Microstrain 3DM-GX3-25 IMU, and Hokuyo UTM-30LX LIDAR. The electronics backpack includes a laptop, 12V battery (for the Kinect and Hokuyo) and a barometric pressure sensor. The rig is naturally constrained to be within about 10 degrees of horizontal at all times, which is important for successful LIDAR-based mapping (see Section V-B). An event pushbutton, currently a typical computer mouse, allows the user to ‘tag’ interesting locations on the map. 

The Hokuyo is more of a rangefinder than a "laser scanner." It's got a range of 30 meters, though, and a field of view of 270 degrees, so, like the Sick scanners, people are using them for 3D data capture even though the manufacturers might not market them that way (note, I love the disclaimer at the bottom of the UTM-30LX page: "Hokuyo products are not developed and manufactured for use in weapons, equipment, or related technologies intended for destroying human lives or creating mass destruction." That's comforting).

The MIT team also uses SLAM technology to piece all the information together. However, unlike the Viametris solution, for example, there are no wheels that could provide odometry, so that can't be used to figure out where the wearer of the little backpack is (the IMU is being used mostly to account for pitch and roll, from what I can tell). Also, their solution was tasked with dealing with multiple floors and the fact that the wearer might tilt his or her body all the time, making the pose less than fixed. 

How did they solve these problems? "Our system achieves robust performance in these situations by exploiting state-of-the-art techniques for robust pose graph optimization and loop closure detection."

Well, obviously. Seriously, though, you'll have to read the paper if you want the gory details. They're all there, and I'll readily admit some of it's over my head. One of the cool things, though, is how they're using the Kinect (which you might be wondering about, since they've also got the lidar on board). Basically, they're using the Kinect to drive a feature-recognition system. The Kinect builds a database of images that the system then checks against all the time so that it knows when a user is traversing the same terrain over again. If so, the map is updated with more accurate information. In this way, the maps get better with repeated travels.
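In spirit, that image-database check is appearance-based loop closure. A toy version using ORB features and brute-force matching - my illustrative stand-ins, not necessarily the descriptors MIT used - looks something like this:

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
keyframe_db = []  # (pose_node_id, descriptors) for places seen before

def check_loop_closure(frame_gray, node_id, min_matches=60):
    """Return the id of a previously visited place, or None (sketch)."""
    _, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    for old_id, old_desc in keyframe_db:
        if len(matcher.match(desc, old_desc)) >= min_matches:
            # Strong match to an old keyframe: add a loop-closure edge
            # between node_id and old_id, then re-optimize the pose graph
            # to correct accumulated drift and sharpen the map.
            return old_id
    keyframe_db.append((node_id, desc))
    return None
```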

And what about that barometer? Well, since stairwells are relatively featureless and users tend to whip around corners, the system uses the barometer to tell it when the user has traveled to another floor.

I haven't seen that before.
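It's a simple trick, though. A toy sketch of barometric floor tracking - the storey height, pressure gradient, and threshold are illustrative assumptions, not values from the paper:

```python
FLOOR_HEIGHT_M = 3.5        # assumed typical storey height
METERS_PER_PASCAL = 0.084   # near sea level, pressure drops ~12 Pa per meter

class FloorTracker:
    """Infer floor changes from a barometer when stairwells defeat lidar."""

    def __init__(self, initial_pressure_pa):
        self.reference_pressure = initial_pressure_pa
        self.floor = 0

    def update(self, pressure_pa):
        # Climbing lowers the pressure relative to this floor's reference.
        dz = (self.reference_pressure - pressure_pa) * METERS_PER_PASCAL
        if abs(dz) > 0.75 * FLOOR_HEIGHT_M:
            floors_moved = round(dz / FLOOR_HEIGHT_M)
            self.floor += floors_moved
            # Re-anchor the reference at the new floor's pressure.
            self.reference_pressure -= (
                floors_moved * FLOOR_HEIGHT_M / METERS_PER_PASCAL)
        return self.floor
```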

Want to see the data in action? Here's a brief video MIT put together to tout the system. You'll get the general idea:

[Embedded video]

Not bad, right? It's more of a 2D map they're creating, but there are other images in the paper of exports that have multiple floors and look more 3D when you navigate them. This kind of multi-sensor integration is the wave of the future, in my opinion. By combining more (though individually less expensive) sensors, these systems can improve the data while bringing the cost down, hopefully making the technology more accessible.



Looking back at the Esri User Conference

With a couple of weeks to decompress, I thought it would be a good idea to try to round up all of the many and varied news pieces related to lidar and 3D data capture that came out of the Esri User Conference this year in San Diego. While I didn't attend myself, we did have our conference programmer, Linda McLaughlin, in attendance, scouting for great speakers who can bridge the gap between GIS and robust 3D data capture, realizing the promise of what's being called 4D GIS in some circles. 

By all accounts, there was more 3D at Esri than ever before. More sessions with 3D in the title, more attendees coming up to the booth to talk about how the end users of the data are looking for 3D deliverables, and more releases from partner software firms touting their abilities with 3D data. 

Linda mentioned that the topics most resonating with the people at the event were airborne lidar (obviously, I guess) - especially data that could be gathered via UAVs - and, somewhat surprisingly, the Kinect, if only on a personal-interest level.

With all of that said, here are some stories and reports that came out of the event that may interest you:

• Maybe most interesting is this video report from Glen Lethem at gisuser.com where he interviews Optech's North American sales head Jim Green. In the video, Green announces a new partnership with Esri to integrate airborne and mobile mapping systems with the new ArcGIS 10.1 software. I can't find any other report of this partnership, so Green and Lethem would appear to be breaking news here. Take a look:

[Embedded video]

• Safe Software released an update to its ArcGIS data interoperability extension that will help take advantage of 10.1's new support of lidar. GeoPlace has a great interview with COO Dale Lutz on how lidar can be leveraged with the new software capabilities.

• For those of you who were interested in the Survey Summit, which acted as ACSM's annual meeting, Eric Gakstatter has a great write-up over at Geospatial Solutions. He also picks up on the UAV interest, along with the opportunities potentially offered by 3D rendering technology like that provided by Esri acquisition Procedural. And, yes, the cloud. I personally think bandwidth and file-size issues will hold back huge adoption of 3D GIS in the cloud for the foreseeable future, but that's obviously where everything is headed at some point. 

• Directions Magazine's Adena Schutzberg has some good takeaways from Esri UC 2012, but she doesn't seem to have been overwhelmed by 3D, as she makes near-zero mention of data capture or 3D in her write-up.

• Perhaps the best perspective is provided by Michael Frecks, who, as CEO of Terrametrix, is obviously intimately involved in mobile and terrestrial scanning. He notes that GIS and surveying seem to be miles apart, and as scanning is still largely seen as a surveyor's tool, perhaps it shouldn't be surprising that lidar and laser scanning have yet to really penetrate the GIS world beyond early adopters. We keep hearing about how mobile scanning should be great for populating GIS, but hard evidence of that actually happening is slim. Perhaps this will change as more and more users upgrade to 10.1 and have native support for lidar built into the software they're using every day. 

• Finally, coinciding with our article this week on volumetric display, Infinite Z made some buzz at the User Conference with their demonstration of zSpace, a "virtual-holographic" digital platform. Here's the press release about what they were up to. Most importantly, they teamed with partners in the 3D GIS space to show how you might go about using the technology. This certainly sounds promising: "In its Zephyr simulator, TerraEchos partners with Infinite Z to demonstrate the ability to visualize streaming complex objects moving in three dimensional space without the normal performance and rendering limitations associated with true 3D-geospatial visualization." As does this:

"Sanborn's GEOINT Intelligence Programs believes zSpace will enhance the analytic experience for mission visualization of high resolution city 3D modeling, planning tactics and preparedness," said Jessica King, vice president of intelligence programs at the Sanborn Mapping Company.

I'll keep looking around for more on-the-ground coverage of the Esri event, and I'm sure there are more relevant product releases, but, all in all, it seems like significant steps toward integration with 3D data capture are being made - though there are still big ones left to take to bring the two communities together.

