RAPPLS data samples

 

This page shows an overview of how data gathered by the RAPPLS system is put to use. It will be updated over time.


RADAR

From time to time the RAPPLS package has flown a frequency-modulated continuous wave (FMCW) RADAR system, aimed at measuring snow depth coincident with airborne LiDAR and imagery. This poster (PDF) describes the system and shows some initial results.
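For readers unfamiliar with FMCW ranging: the radar transmits a continuous frequency sweep and mixes the echo with the transmitted signal, so each reflecting surface (the snow surface, the snow/ice interface) appears as a beat frequency proportional to its range. The sketch below illustrates only that generic relationship – the sweep parameters are assumed, not the RAPPLS radar settings, and a real snow-depth retrieval also has to account for the slower wave speed inside the snowpack.

    # Generic FMCW beat-frequency-to-range illustration (assumed parameters,
    # not the RAPPLS radar settings). For a linear sweep of bandwidth B over
    # time T, a reflector at range R produces a beat frequency f_b = 2*B*R/(c*T).
    C = 3.0e8                      # propagation speed in air, m/s

    def beat_to_range(f_beat_hz, bandwidth_hz, sweep_time_s):
        """Range implied by a measured beat frequency for a linear FMCW sweep."""
        return C * f_beat_hz * sweep_time_s / (2.0 * bandwidth_hz)

    # Example: a 6 GHz sweep over 1 ms; a 40 kHz beat corresponds to ~1 m of range.
    print(beat_to_range(40e3, 6.0e9, 1.0e-3))    # -> 1.0

    # Range resolution (the ability to separate the snow surface from the
    # snow/ice interface) is set by the sweep bandwidth: dR = c / (2 * B).
    print(C / (2.0 * 6.0e9))                     # -> 0.025 m in air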

Aerial photos

1. Digital reconstruction of sea ice from aerial photos

Since 2007, aerial photography captured by RAPPLS has been flown with enough overlap for terrain reconstruction from aerial imagery. In 2008 we looked at using Structure-from-Motion techniques to reconstruct sea ice and potentially help constrain camera positions. In 2010–11 our work was overtaken by a rush of commercial geospatial applications for generating 3D models from unorganised image collections. We were given access to Agisoft PhotoScan through links with the University of Tasmania’s School of Geography and Environmental Science, and here are some results. We made the models on a dual-Xeon HP Z800 with 128 GB of RAM and an Nvidia Quadro 2000 GPU. This is plenty of processing power for our work, but the GPU has trouble displaying large, dense point clouds.

A sort of ‘first cut’ is shown below, from imagery collected on the SIPEX 2007 cruise. There is a lot of overlap between images, but the imagery is comparatively low resolution (5 MP, giving a ground sample distance (GSD) of 15–20 cm for the images used here). The surface model is still quite good – the sea is flat, ridges are bumpy, and there are not too many artifacts.


3D reconstruction of sea ice from aerial imagery captured in 2007, Nikon D1X, ~15-20cm/pixel
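As an aside, the ground sample distance quoted above follows directly from the camera geometry: GSD is roughly the pixel pitch times flying height divided by focal length. The pixel pitch, lens and flying height below are assumed values, included only to show that 15–20 cm/pixel is the right ballpark for this camera.

    # Rough GSD check for a nadir-pointing frame camera (assumed numbers only).
    def ground_sample_distance(pixel_pitch_m, focal_length_m, flying_height_m):
        """Ground footprint of a single pixel: pixel_pitch * height / focal_length."""
        return pixel_pitch_m * flying_height_m / focal_length_m

    # e.g. ~12 micron pixels, a 50 mm lens, flying ~700 m above the ice
    print(ground_sample_distance(12e-6, 0.050, 700.0))   # ~0.17 m per pixel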

Fast-forward to 2012. On the SIPEX II voyage the helicopter was equipped with a Hasselblad H3DII-50, a 50 MP medium-format digital camera. With 6 µm pixels it has plenty of light-gathering ability, and its entire processing flow is in 16-bit colour. Access to this camera was made possible by collaboration with the AAD mapping group.

The next image shows a point cloud (not a textured model or photograph) of an ice station from SIPEX II, modelled using 50 MP imagery at roughly 7 cm GSD. It is ridiculously dense, and amazing in its capture of snow dunes, ridges, and other ice features. The cloud is registered to local coordinates using the two GPS/total station reference sites shown as little flags; aside from that, the geometry comes only from image matching and estimation of camera parameters. We are coregistering this cloud with airborne LiDAR and terrestrial LiDAR for a comparative study, to be presented in early 2014.


3D reconstruction of sea ice from SIPEX II (2012), Hasselblad H3DII-50, ~7 cm/pixel
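For the coregistration step, one common approach – and only a sketch of how it could be done, not a description of our final workflow – is to bring both clouds into the same local frame via the surveyed reference marks and then refine the alignment with iterative closest point (ICP). A minimal version using the open-source Open3D library, with placeholder file names, looks like this:

    # Minimal ICP coregistration sketch using Open3D; file names are placeholders
    # and this is not necessarily the workflow used for the comparative study.
    import numpy as np
    import open3d as o3d

    photo_cloud = o3d.io.read_point_cloud("photogrammetry_cloud.ply")   # hypothetical
    lidar_cloud = o3d.io.read_point_cloud("airborne_lidar_strip.ply")   # hypothetical

    # Both clouds are assumed to be in the same local frame already (via the
    # GPS/total-station reference marks), so the identity is a fair initial guess.
    result = o3d.pipelines.registration.registration_icp(
        photo_cloud, lidar_cloud,
        max_correspondence_distance=0.5,   # metres; tune to the expected misalignment
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

    print(result.fitness, result.inlier_rmse)
    print(result.transformation)           # 4x4 rigid transform: photo cloud -> LiDAR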

The next image shows an elevation difference map between overflights roughly 24 hours apart over the site pictured above. In general the differences should be close to zero, since no snowfall occurred between flights. We can see the helicopter appearing on the rear deck of the ship (lower left corner), the progress of a snow trench for drill-hole measurements, and the development of an igloo built by the snow-trenching team. Other things in the picture are interesting but not real – the mysterious circular features seen right of centre are an artifact of the modelling process, currently in the ‘to be resolved’ box. There is also a cloud-to-cloud slope difference from left to right. This, again, can be introduced during processing depending on how the ground control parameters are set up; in this case it is most likely due to the minimal control set used (three control points per flight, so plenty of room for movement). Unfortunately, the ridge systems didn’t move between flights.


Elevation differences between two 3D models from flights roughly 24 hours apart. Some differences are clear and real, others are not.
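Mechanically, the difference map is simple once both flights have been gridded onto the same surface grid: subtract one from the other and look at the statistics. The sketch below uses hypothetical file names and assumes the two grids are already co-aligned; a median difference near zero with a small robust spread is what we hope to see when no snow has fallen between flights.

    # Differencing two gridded elevation models of the same site (hypothetical
    # file names; assumes both flights are gridded onto an identical grid).
    import numpy as np

    dem_flight13 = np.load("flight13_dem.npy")    # elevations in metres
    dem_flight14 = np.load("flight14_dem.npy")

    diff = dem_flight14 - dem_flight13
    valid = np.isfinite(diff)

    # With no snowfall between flights the median difference should sit near zero;
    # a tilt or offset points to ground-control or processing issues, not real change.
    median = np.median(diff[valid])
    nmad = 1.4826 * np.median(np.abs(diff[valid] - median))   # robust spread estimate
    print("median difference (m):", median)
    print("NMAD (m):", nmad)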

 The overhead view below gives an idea of what we’re looking at (roll over to switch between flights).

Flight 13 overhead shot / flight 14 overhead shot

Moving away from sea ice for a moment, the next example shows the worst mismatch between surveyed building corners and an orthophoto made with no camera positioning data and only three ground control points. The orthophoto is overlaid with survey points shot with a total station (orange dots). We made this orthophoto to test how PhotoScan behaves in a minimally constrained environment, and the results are encouraging.


Worst mismatch between a PhotoScan-derived orthophoto and building corners at Davis Station, Antarctica. The orthophoto was made under tough conditions – no camera positions and only three ground control points – to simulate an ad-hoc survey in cases where camera positioning is not feasible.
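The check behind that figure is straightforward arithmetic: compare the total-station coordinates of well-defined features (here, building corners) with the same features digitised off the orthophoto, and report per-point, worst-case and RMS offsets. The coordinates below are invented purely to show the calculation, not the Davis Station survey values.

    # Comparing surveyed corner coordinates with the same corners digitised from
    # an orthophoto. All coordinate values here are made up for illustration.
    import numpy as np

    surveyed = np.array([[482101.32, 2312045.18],     # hypothetical easting/northing, metres
                         [482133.87, 2312052.64],
                         [482140.02, 2312011.95]])
    digitised = np.array([[482101.45, 2312045.03],
                          [482134.10, 2312052.80],
                          [482139.88, 2312012.20]])

    offsets = np.linalg.norm(digitised - surveyed, axis=1)   # horizontal mismatch per corner
    print("per-point mismatch (m):", offsets)
    print("worst mismatch (m):", offsets.max())
    print("RMS mismatch (m):", np.sqrt(np.mean(offsets ** 2)))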

There’s plenty more exploring to do with this method. Insights gained so far include:

  • Use the best camera you have available.
  • If you don’t have a great GPS to fly with, or can’t trust your camera positions, a well-organised sparse set of ground control points will help a lot. In fact, to match reality some kind of scale needs to be included – you can also add control lines if you know the size of an object in an image (see the sketch after this list).
  • Strip mapping using a single camera is not well suited to this method, even with plenty of fore-lap (overlapping along-track images). But it can be done… if you have some form of ground control.
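To make the scale point above concrete, here is a toy example of recovering model scale from a single known distance: measure the same two points in the unscaled model and in the real world, and their ratio gives a model-units-to-metres factor. All numbers are invented.

    # Toy example: fixing model scale from one known distance (invented numbers).
    import numpy as np

    # Two points picked in the unscaled photogrammetric model (arbitrary model units)
    p1_model = np.array([10.42, 3.87, 1.05])
    p2_model = np.array([14.96, 4.02, 1.11])

    known_length_m = 25.0                        # e.g. a tape-measured baseline on the ice

    scale = known_length_m / np.linalg.norm(p2_model - p1_model)
    print("model units -> metres:", scale)

    # Applying the scale to any model coordinate gives metric coordinates
    # (up to the remaining rotation/translation ambiguity).
    some_point = np.array([12.0, 5.5, 1.3])
    print(some_point * scale)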

Work underway includes a comparative study of aerial image reconstruction and LiDAR, and adding dense point clouds to ridged regions of sea ice to fill in detail where LiDAR is too sparse. We hope it works out! Our journey into this technology has been brief so far, but promising.

2. Ice properties from aerial image classification

The AAD/ACE CRC Nikon D1X aerial camera has actually been around since the ARISE cruise in 2003. Some of the imagery from that cruise was used to estimate the relative areas of rough and smooth ice, using a texture-based (local binary pattern) quadtree segmentation algorithm.
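To give a flavour of what a texture-based rough/smooth separation involves, the sketch below computes local binary pattern (LBP) histograms over fixed image tiles with scikit-image and flags tiles whose histogram is spread across many codes as rough. It is a simplified stand-in, not the published quadtree algorithm, and the file name, tile size and threshold are arbitrary.

    # Simplified LBP texture sketch: per-tile histograms of uniform LBP codes,
    # with a crude rough/smooth decision. Not the published quadtree method.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from skimage.io import imread

    image = imread("aerial_frame.jpg", as_gray=True)     # hypothetical aerial frame
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")

    tile = 128                                           # tile size in pixels (arbitrary)
    n_bins = 10                                          # uniform LBP with P=8 has 10 codes
    for r in range(0, lbp.shape[0] - tile, tile):
        for c in range(0, lbp.shape[1] - tile, tile):
            hist, _ = np.histogram(lbp[r:r + tile, c:c + tile],
                                   bins=n_bins, range=(0, n_bins), density=True)
            # Rough ice spreads energy across more LBP codes than smooth ice,
            # so a tile with no single dominant code is flagged as rough.
            label = "rough" if hist.max() < 0.5 else "smooth"
            print(r, c, label)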

The camera was also flown on the Ice Station Polarstern (ISPOL) campaign in 2004, and the data were used to look at changes in ice floe size distribution using a custom-made routine employing intensity thresholding and dilation/erosion to separate ice floes. Imagery from ISPOL was also used to identify changes in the relative area of different ice types in conjunction with ice kinematics – this time using an object-oriented classifier that includes texture and band-ratio information when deciding which regions of an image belong to a given ice class.
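In the same spirit, the core of an intensity-threshold-plus-morphology floe separation can be sketched in a few lines. The threshold, the number of erosion/dilation iterations and the file name are placeholders, not the values used in the published routine.

    # Illustrative floe separation: threshold, erode/dilate to break thin bridges
    # between touching floes, then label connected components and measure areas.
    import numpy as np
    from scipy import ndimage
    from skimage.io import imread

    image = imread("ispol_frame.jpg", as_gray=True)    # hypothetical ISPOL aerial frame

    ice = image > 0.6                                  # bright pixels -> ice, dark -> water
    ice = ndimage.binary_erosion(ice, iterations=3)    # break thin bridges between floes
    ice = ndimage.binary_dilation(ice, iterations=3)   # restore approximate floe outlines

    labels, n_floes = ndimage.label(ice)
    areas_px = ndimage.sum(ice, labels, index=np.arange(1, n_floes + 1))
    print(n_floes, "floes; largest =", areas_px.max(), "pixels")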

What of the Hasselblad imagery? It’s promising, and on the way…

Airborne LiDAR

For sea ice, LiDAR is a very efficient method for mapping very large areas quickly. Its errors are also easier to control – the error sources are more straightforward, and there is relatively little guesswork. And it is still amazingly good at picking up very detailed information about sea ice. While LiDAR systems are relatively complex and expensive, their place in close-range remote sensing is firmly cemented – especially with recent advances in point density and spectral analysis of returned waveforms. In our view LiDAR and photogrammetry are complementary rather than competing technologies. For science outcomes, both are required!

Below are some of the first LiDAR images generated from RAPPLS on the SIPEX 2007 voyage…

jll_slide1 / jll_slide2 / jll_slide3

Again, with LiDAR, we face the problem of turning amazing images into science. The NASA IceBridge program has led the way in making science out of LiDAR over sea ice, and we borrow heavily from their work.

But we have our own set of advantages and disadvantages. Firstly, we collect many more points – roughly 2 m x 2 m point spacing at the limits of laser range. If the aircraft is flown lower, point density increases. This is great for sea ice mapping but hard on compute time. The near-infrared laser used by RAPPLS is fairly unambiguous in its range over ice – it does not penetrate the snow cover beyond its engineered noise limits. However, it is almost completely absorbed by water (as seen in the last image above) – making our task of finding a reference surface substantially more challenging!
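The altitude dependence is easy to see with a back-of-envelope calculation: mean point spacing for a scanning LiDAR follows from the pulse rate, the swath width (which grows with altitude) and the ground speed. Every parameter below is an assumed value for illustration, not an actual RAPPLS scanner setting.

    # Back-of-envelope mean point spacing for a scanning LiDAR (assumed parameters).
    import math

    def mean_point_spacing(pulse_rate_hz, half_fov_deg, altitude_m, ground_speed_ms):
        swath_width_m = 2.0 * altitude_m * math.tan(math.radians(half_fov_deg))
        points_per_m2 = pulse_rate_hz / (swath_width_m * ground_speed_ms)
        return 1.0 / math.sqrt(points_per_m2)

    # Flying lower shrinks the swath, so density rises and spacing falls
    # (spacing scales with the square root of altitude in this simple model).
    for altitude in (300.0, 600.0, 900.0):
        spacing = mean_point_spacing(10000.0, 30.0, altitude, 30.0)
        print(altitude, "m altitude ->", round(spacing, 2), "m mean spacing")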

For an airborne LiDAR, estimating the accuracy of each laser shot is also very important, especially when the LiDAR shot is one part of an equation system for modelling sea ice thickness. For RAPPLS, we can do exactly that. Below is a sample map showing the 3D error estimate for each LiDAR shot in the point cloud (scale in metres). Based on this work we can set well-founded error margins around the geophysical products derived from the LiDAR instrument.


3D error estimate for RAPPLS LiDAR shots. The turn-around is shown to point out that the experimental code used to make this map is robust. Processing, programming and visualisation by Adam Steer.

This map shows a predictable outcome – uncertainty about where points really are in space is greatest near the swath edges, where laser range is greatest and small angular errors have a larger impact. However, elevation error is the smallest component – which is great for sea ice! A more complete picture can be found in this PDF document, showing the map above with the error components separated out. This leads directly to rigorous error estimates for sea ice thickness at each and every LiDAR point you see in the image, obtained by propagating a time- and space-dependent altimetry error through the sea ice hydrostatic equation.
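The propagation step itself is conceptually simple. Using the standard hydrostatic relation between total (snow) freeboard, snow depth and ice thickness – the textbook form, not necessarily our exact formulation – a freeboard error is amplified by the ratio of sea-water density to the water-ice density difference, roughly a factor of eight to ten depending on the densities assumed. The densities and uncertainties in the sketch below are assumed values for illustration only.

    # Hedged sketch: propagating a per-shot elevation (freeboard) error through the
    # standard hydrostatic equation for sea ice thickness. Densities and error
    # values are assumed for illustration, not the figures used in our processing.
    import math

    RHO_W = 1024.0   # sea water density, kg/m^3 (assumed)
    RHO_I = 900.0    # sea ice density, kg/m^3 (assumed)
    RHO_S = 330.0    # snow density, kg/m^3 (assumed)

    def ice_thickness(total_freeboard_m, snow_depth_m):
        """Hydrostatic ice thickness from total (snow surface) freeboard and snow depth."""
        return (RHO_W * total_freeboard_m - (RHO_W - RHO_S) * snow_depth_m) / (RHO_W - RHO_I)

    def ice_thickness_sigma(sigma_freeboard_m, sigma_snow_m):
        """First-order error propagation, treating the densities as exact."""
        dT_dF = RHO_W / (RHO_W - RHO_I)                 # ~8.3 with these densities
        dT_dZs = (RHO_W - RHO_S) / (RHO_W - RHO_I)      # ~5.6
        return math.hypot(dT_dF * sigma_freeboard_m, dT_dZs * sigma_snow_m)

    print(ice_thickness(0.30, 0.15))           # ~1.6 m of ice for these inputs
    print(ice_thickness_sigma(0.05, 0.03))     # ~0.45 m thickness uncertainty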

Along with customised programming to look at errors and noise mitigation, we employ Terrasolid to determine small calibration offsets for the RAPPLS LiDAR on a season-by-season basis. Developing the ability to rigorously determine and account for the RAPPLS LiDAR error budget has been a long process, and we’re now looking forward to publishing the results.