
GUEST POST: A Very Brief History of Optical High Resolution Satellite Imaging

The history of optical high resolution satellite imaging begins with the classified military satellite systems of the United States, which captured the Earth's surface from 1960 to 1972. These images were declassified by Executive Order 12951 in 1995 and made publicly available (they are now freely distributed through the USGS EarthExplorer data platform under the declassified data category). From 1999 onward, commercial multispectral and panchromatic datasets have been available to the public. The launch of Keyhole EarthViewer in 2001, renamed Google Earth in 2005, opened a new avenue for the layman to visualize Earth features through optical high resolution satellite images.

A comparison of a declassified Corona image (1974) with a GeoEye-1 image (2014). Image credits: EarthExplorer (Corona) and Google Earth (GeoEye-1).

In the current era, most high resolution satellite images are commercially available and are used as a substitute for aerial photographs. Satellites such as SPOT, IKONOS, QuickBird, OrbView, GeoEye, WorldView, and KOMPSAT offer fine-resolution data in digital format, making map production simpler, more cost effective, and more efficient in terms of mathematical modeling. A number of meaningful products are derived from high resolution datasets, e.g., high resolution Digital Elevation Models (DEMs) with 3D building models, detailed change assessments of land cover and land use, habitat suitability, biophysical parameters of trees, and detailed assessments of pre- and post-disaster conditions, among others.

Both aerial photographs and high resolution satellite images are subject to weather conditions, but satellites offer the advantage of repeatedly capturing the same areas on a reliable, on-demand basis, without the restrictions of borders and logistics that constrain aerial surveys.

Pansharpening (resolution merge) provides improved visualization and can also make certain features easier to detect. It is a fusion process that combines co-georegistered panchromatic (high resolution) and multispectral (comparatively lower resolution) satellite data to produce a high-resolution color multispectral image. The spectral resolution of high resolution satellite data continues to increase, and more sensors with enhanced spectral sensitivity are planned for the future.
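To make the fusion idea concrete, here is a minimal sketch of one classic pansharpening scheme, the Brovey transform. This is a generic textbook method shown purely for illustration, not any specific sensor's or vendor's processing chain, and it assumes the multispectral bands have already been co-registered and resampled to the panchromatic grid.

```python
# Minimal Brovey-transform pansharpening sketch (illustrative only).
# Assumes numpy; ms bands are co-registered and resampled to the pan grid.
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (bands, H, W) multispectral array; pan: (H, W) panchromatic array."""
    intensity = ms.mean(axis=0)          # crude intensity image from the MS bands
    ratio = pan / (intensity + eps)      # per-pixel injection gain from the pan band
    return ms * ratio                    # rescale each band to inject pan detail

# Toy example with random data: 3 MS bands on a 400x400 pan grid
ms = np.random.rand(3, 400, 400).astype(np.float32)
pan = np.random.rand(400, 400).astype(np.float32)
sharp = brovey_pansharpen(ms, pan)
print(sharp.shape)                       # (3, 400, 400): high-resolution color product
```

In essence, each multispectral band is rescaled by the ratio of the panchromatic band to an intensity image synthesized from the multispectral bands, which injects the pan band's spatial detail into the color bands.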

List of Spaceborne Sensors with <5 m Spatial Resolution

| Sensor | Agency/Country | Launch Date | Platform altitude (km) | GSD Pan/MSS (m) | Pointing capability (°) | Swath width at nadir (km) |
|---|---|---|---|---|---|---|
| IKONOS-2 | GeoEye Inc./USA | 1999 | 681 | 0.82/3.2 | Free view | 11.3 |
| EROS A1 | ImageSat Int./Cyprus (Israel) | 2000 | 480 | 1.8/- | Free view | 12.6 |
| QuickBird | DigitalGlobe/USA | 2001 | 450 | 0.61/2.44 | Free view (Pan and MSS alternately) | 16.5 |
| HRS (SPOT-5) | SPOT Image/France | 2002 | 830 | 5×10 | Forward/aft +20/-20 | 120 |
| HRG (SPOT-5) | SPOT Image/France | 2002 | 830 | 5 (2.5)×10 | Sideways up to ±27 | 60 |
| OrbView-3 | GeoEye Inc./USA | 2003 | 470 | 1/4 | Free view | 8 |
| FORMOSAT-2 | NSPO/Taiwan | 2004 | 890 | 2/8 | Free view | 24 |
| PAN (Cartosat-1) | ISRO/India | 2005 | 613 | 2.5/- | Forward/aft +26/-5; free view to side up to ±23 | 27 |
| TopSat Telescope | BNSC/UK | 2005 | 686 | 2.8/5.6 | Free view | 15/10 |
| PRISM | JAXA/Japan | 2005 | 699 | 2.5/- | Forward/nadir/aft -24/0/+24; free view to side | 70; 35 (triplet stereo) |
| PAN (BJ-1) | NRSCC (CAST)/China | 2005 | 686 | 4/32 | Free view | 24/640 |
| EROS B | ImageSat Int./Cyprus (Israel) | 2006 | 508 | 0.7/- | Free view | 7 |
| Geoton-L1 (Resurs-DK1) | Roscosmos/Russia | 2006 | 330-585 | 1/3 (at h = 330 km) | Free view | 30 (at h = 330 km) |
| KOMPSAT-2 | KARI/South Korea | 2006 | 685 | 1/4 | Sideways up to ±30 | 15 |
| CBERS-2B | CNSA/INPE (China/Brazil) | 2007 | 778 | 2.4/20 | Free view | 27/113 |
| WorldView-1 | DigitalGlobe/USA | 2007 | 494 | 0.45/- | Free view | 17.6 |
| THEOS | GISTDA/Thailand | 2008 | 822 | 2/15 | Free view | 22/90 |
| AlSat-2 | Algeria | 2008 | 680 | 2.5/- | Up to ±30 cross-track; free view | 17.5 |
| GeoEye-1 | GeoEye Inc./USA | 2008 | 681 | 0.41/1.65 | Free view | 15.2 |
| WorldView-2 | DigitalGlobe/USA | 2009 | 770 | 0.45/1.8 | Free view | 16.4 |
| PAN (Cartosat-2/2A/2B) | ISRO/India | 2007 (2), 2008 (2A), 2010 (2B) | 631 | 0.82/- | Free view | 9.6 |
| KOMPSAT-3 | KARI/South Korea | 2012 | 685 | 0.7/2.8 | ±45 in any direction (cross- or along-track) | 15 |
| WorldView-3 | DigitalGlobe/USA | 2014 | 617 | 0.3/1.24 (3.7 SWIR, 30 CAVIS) | Free view | 13.1 |

Conflicts of Interest: The findings reported here are the scientific observations of the author and do not necessarily reflect the views of the author's organizations.

About this post: This is a guest post by Hammad Gilani. Learn more about this blog's authors here.

How to Export SAR Images with Geocoding in ESA SNAP

I've been using the ESA Sentinel-1 Toolbox (S1TBX) and SNAP (Sentinel Application Platform) for a long time for SAR processing, since back when it was known as NEST. One issue I've faced is that when SAR intensity data is exported from SNAP to another format, e.g. ENVI or GeoTIFF, the coordinates are not exported. I found the solution on the ESA Science Toolbox Exploitation Platform (STEP) Forum and want to share it with others.

What's actually going on is that S1TBX, being a toolset focused on SAR, automatically interprets the geocoding information in the SAR metadata, while the data itself is mostly not projected. However, SNAP does not carry this geocoding interpretation forward to the export function. Therefore, we need to tell SNAP to attach / embed the geocoding information before exporting.

I've worked with Envisat ASAR and ALOS-1 / 2 PALSAR intensity data, and here are the solutions for both (thanks to the ESA STEP Forum):

  • For Envisat ASAR Image Mode (IMG / IMP) .N1 intensity data, the simple way to make sure coordinates are exported is to apply the Radar > Ellipsoid Correction > Geolocation-Grid function. Export the output in ENVI or GeoTIFF format, and opening it in any external software will now show the data with coordinates.
  • For ALOS-1 / 2 PALSAR Level 1.5 CEOS data, things are more complicated. For some reason, S1TBX / SNAP does not fully support this data. The Geolocation-Grid function does not work; it reports "Source product should not be map projected." Using the Radar > Geometric > Update Geo Reference function instead, which requires a DEM, does generate an exported output, but the coordinates are not carried over. I also tried the Raster > Geometric Operations > Reprojection tool, but that doesn't work either. The trick that solves the issue was found in an ESA STEP Forum post: use Radar > Geometric > SAR-Mosaic with only one image as input, and the output will be geolocated correctly (a scripted version is sketched below the list).
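For batch work, the same single-image SAR-Mosaic trick can be scripted. Below is a rough sketch using SNAP's Python bindings (snappy), following the common snappy pattern; treat the file path as hypothetical, and verify the operator name and its parameters against your own SNAP installation (e.g. with gpt -h) before relying on this.

```python
# Rough sketch: script the single-image SAR-Mosaic trick with snappy.
# Path is hypothetical; operator name/parameters are assumptions to verify.
from snappy import ProductIO, GPF, HashMap

product = ProductIO.readProduct('path/to/ALOS_PALSAR_L1.5_product')  # CEOS volume
params = HashMap()  # default operator parameters

# Radar > Geometric > SAR-Mosaic with a single source product: the output
# comes out geolocated, so the export keeps its coordinates.
geocoded = GPF.createProduct('SAR-Mosaic', params, [product])
ProductIO.writeProduct(geocoded, 'palsar_geocoded', 'GeoTIFF')
```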

I suspect the geolocation issue for intensity images from other SAR sensors, such as TerraSAR-X and Radarsat, is much the same. Once again, thanks to the ESA STEP Forum and its members for providing the solution.

For details and background discussion, see these ESA STEP Forum posts:

http://forum.step.esa.int/t/coordinates-disappear-when-exporting-sar-data/3788

http://forum.step.esa.int/t/alos-1-l1-5-terrain-correction/2927

 

Using Synthetic Aperture Radar (SAR) Imagery to Look Beneath Dry Soil Surfaces

One of the unique characteristics of Synthetic Aperture Radar (SAR) satellite remote sensing is that at lower frequencies, the SAR signal can penetrate sand under dry conditions. How far an electromagnetic (EM) wave penetrates into soil depends on a parameter called the relative permittivity, which is a complex quantity with real and imaginary parts: the real part is called the "dielectric constant" and the imaginary part the "loss factor." The study of EM wave penetration in materials rests on some mathematics and physics that we will not discuss here (relax!); these theoretical foundations are covered in most undergraduate / graduate courses and books on EM waves.

Anyway, to cut a long story short, scientists define the penetration depth of an EM wave in a material as (beware, scientific jargon coming up): "the distance at which the power density of the electromagnetic wave drops to 1/e of its value at the immediate sub-surface." Here, e is the base of the natural logarithm, approximately 2.72, so 1/e is about 0.37. In layman's terms: if the incoming EM wave has a power density of 1 unit just below the surface, the depth at which it is reduced to 0.37 units is the penetration depth.

Under certain approximations, such as uniform material properties with depth, the penetration depth d can be defined mathematically as:

$$ d = \frac{\lambda \sqrt{\varepsilon'}}{2 \pi \, \varepsilon''} $$

where $\lambda$ is the free-space wavelength, $\varepsilon'$ is the dielectric constant (real part), and $\varepsilon''$ is the loss factor (imaginary part); this low-loss form assumes $\varepsilon'' \ll \varepsilon'$.

This equation is very interesting; a quick analysis shows us the following:

  • A larger wavelength (lower frequency) means more EM wave penetration
  • EM wave penetration increases as the dielectric constant increases
  • EM wave penetration decreases as the loss factor increases

So, to penetrate a material, the frequency should be low, the dielectric constant should be large, and the loss factor should be small. As the moisture content of an object increases, the loss factor generally increases; therefore, penetration depth decreases with increasing moisture content: more water molecules cause more EM wave absorption at microwave frequencies. Incidentally, this is the same principle on which the microwave oven works.
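To put numbers on the equation above, here is a quick sanity check in Python. The permittivity values are illustrative assumptions for dry and moist sand (exact values vary between sources), not measurements.

```python
# Quick numerical check of the penetration-depth equation (low-loss form).
# Permittivity values below are illustrative assumptions, not measurements.
import math

def penetration_depth(wavelength_m, eps_real, eps_imag):
    """d = lambda * sqrt(eps') / (2 * pi * eps'')."""
    return wavelength_m * math.sqrt(eps_real) / (2 * math.pi * eps_imag)

# L-band (wavelength ~23.5 cm), assumed dry sand: eps' ~ 2.5, eps'' ~ 0.01
print(penetration_depth(0.235, 2.5, 0.01))   # ~5.9 m: several meters
# Assumed moist sand: the loss factor grows, and penetration collapses
print(penetration_depth(0.235, 5.0, 0.5))    # ~0.17 m
```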

Summarizing the above in the context of soil surfaces: low-frequency SAR signals can penetrate dry soil. In very dry and arid regions, e.g. the Sahara Desert, low-frequency SAR signals can penetrate sand down to a depth of a few meters. The figure below shows a simulation of penetration depth as a function of volumetric moisture content in sand at L-band frequencies.


Simulation for SAR penetration depth in sand as a function of moisture content, at L-band wavelength of 23.5 cm. Taken from Richards (2009) – Remote Sensing with Imaging Radar.

I hope this blog post serves as a good introduction to the material penetration properties of SAR, which matter not only for soil but also for forest and snow / ice studies, among others.

In my next post, I will describe a research study we have conducted to detect a paleochannel in the Cholistan Desert in Pakistan using both SAR and optical remote sensing data.

Above-Ground Biomass & Remote Sensing

Nice blog post giving a good introduction to AGB estimation using remote sensing. Wish it had also mentioned the upcoming ESA BIOMASS radar mission.

GEOSPATIAL CLUB

Forest and Greenhouse Effect – Sink or Source?

Forests play an important role in maintaining the global carbon balance, as they are the primary source of biomass, which in turn contains a vast reserve of carbon dioxide, an important greenhouse gas. Of all terrestrial ecosystems, forests contain the largest store of carbon and have a large biomass per unit area. The main carbon pools in forests are plant biomass (above- and below-ground), coarse woody debris, litter, and soil containing organic and inorganic carbon (Nizami et al., 2009). The ability of forests to both sequester and emit greenhouse gases, coupled with ongoing widespread deforestation, has brought forests and land-use change into sharp focus.

Since we were kids, we were all told in biology class that vegetation absorbs carbon dioxide from the atmosphere, stores it as organic matter, and releases oxygen through photosynthesis. In fact, there are other processes that we are less familiar with. The carbon which forest…


DG Launches SpaceNet, Opening Access to Hi-Res Satellite Imagery for Deep Learning Research

DigitalGlobe has recently launched SpaceNet, an online repository of satellite imagery and associated training data for users experimenting with machine learning and deep learning algorithms. SpaceNet has been launched as a collaboration between DigitalGlobe, CosmiQ Works, and NVIDIA, and is available as a public dataset on Amazon Web Services (AWS). As a first step, SpaceNet contains DigitalGlobe high resolution multispectral imagery from the premier WorldView-2 satellite, at its industry-leading full 8-band spectral resolution, along with over 200,000 curated building footprints across Rio de Janeiro, Brazil. This is unprecedented: never before has 50 cm resolution satellite imagery been released publicly with building annotations. The released dataset contains over 7,000 images over Rio de Janeiro. The satellite imagery is delivered in GeoTIFF format, while the building footprints are in GeoJSON format.


True color WV-2 high resolution imagery sample from the SpaceNet repository, along with corresponding building footprints. Source: NVIDIA

According to SpaceNet:

This dataset is being made public to advance the development of algorithms to automatically extract geometric features such as roads, building footprints, and points of interest using satellite imagery.

Scripts for manipulating and using the SpaceNet satellite imagery are already cropping up on GitHub: see code examples from Development Seed here and from CosmiQ Works here. NVIDIA has also released a detailed case study of SpaceNet data analysis using their Deep Learning GPU Training System (DIGITS) platform, demonstrating the power of GPU-based deep learning algorithms applied to high resolution satellite imagery. Application examples include object detection, finding each building as a separate object and determining a bounding box around it, and semantic segmentation, partitioning the image into regions of pixels that share a common label such as "building", "forest", "road", or "water".
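As a starting point for experimenting yourself, here is a minimal sketch (not taken from the SpaceNet or NVIDIA code) of pairing an image chip with its building footprints. The filenames are hypothetical, and it assumes the rasterio, geopandas, and matplotlib packages are installed.

```python
# Minimal sketch: overlay GeoJSON building footprints on a GeoTIFF image chip.
# Filenames are hypothetical placeholders, not actual SpaceNet file names.
import rasterio
import rasterio.plot
import geopandas as gpd
import matplotlib.pyplot as plt

with rasterio.open("rio_chip_0001.tif") as src:      # GeoTIFF image chip
    fig, ax = plt.subplots(figsize=(8, 8))
    rasterio.plot.show(src, ax=ax)                   # plot the imagery
    crs = src.crs                                    # remember the image CRS

# Footprints ship as GeoJSON; reproject to the image CRS and draw outlines
buildings = gpd.read_file("rio_chip_0001_buildings.geojson").to_crs(crs)
buildings.boundary.plot(ax=ax, color="red", linewidth=1)
plt.show()
```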

SpaceNet plans a massive increase in both the images and the labeled features available on the platform in the future. Incidentally, the name SpaceNet is inspired by ImageNet, a similar database of images created to help spur early advancements in computer vision.

To read more about the launch of SpaceNet, see coverage on GISCafe, TechCrunch, MIT Technology Review, and Popular Science.

SpaceNet datasets can be accessed on AWS here.

 

MODIStsp: R Package for Analysing MODIS Time Series Data

An R package for automating the processing and analysis of MODIS Land Products raster datasets has recently been released by Lorenzo Busetto and Luigi Ranghetti at the Institute for Electromagnetic Sensing of the Environment, National Research Council of Italy (IREA-CNR). The package, called MODIStsp, is available for download on GitHub. It provides a user-friendly GUI, batch processing utilities, and access to the source code for user modification and customisation. MODIStsp can perform several preprocessing steps (e.g. download, mosaicking, reprojection and resizing) on MODIS products, as well as on-the-fly computation of spectral index time series.

For more details and package release information, please visit the Spatial Processing in R blog and see the paper published in the journal Computers & Geosciences.

 

High-Resolution Population Density Mapping by Facebook and DigitalGlobe

A few months back, we all read the news that Facebook has utilized satellite imagery to generate estimates of population density over different regions of the Earth. This task was accomplished by the Facebook Connectivity Lab, with the goal of identifying possible connectivity options for areas of high population density (urban areas) and low population density (rural areas). These connectivity options range from Wi-Fi and cellular networks to satellite communication and even laser communication via drones.

Facebook Connectivity Lab found that existing census-based population density estimates are insufficient for this planning purpose, and resolved to make their own high spatial resolution population density estimates from satellite data. They took the computer vision techniques developed for face recognition and photo tagging suggestions and applied the same algorithms to high-resolution satellite imagery (50 cm pixel size) from DigitalGlobe. DigitalGlobe's Geospatial Big Data platform was made available to Facebook, along with their algorithms for mosaicking and atmospheric correction. The technical methodology employed by DigitalGlobe and the Facebook Connectivity Lab is detailed in this white paper by Facebook. DigitalGlobe's high resolution satellite data from roughly the past five years (imagery from the high-resolution WorldView and GeoEye satellites) was utilized, restricted to cloud-free visible RGB bands; for cloudy imagery, third-party population data was used to fill in the gaps. On this big geospatial dataset, the Facebook team analyzed 20 countries, 21.6 million square kilometers, and 350 TB of imagery using convolutional neural networks. Their final dataset has 5 m resolution, focuses particularly on rural and remote areas, and improves over previous countrywide population density estimates by multiple orders of magnitude.
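For intuition only, below is a toy sketch of the kind of binary "building / no building" convolutional classifier described above, applied to small RGB chips cut from satellite imagery. This is in no way Facebook's actual model, architecture, or scale; it simply illustrates the technique, assuming PyTorch is available.

```python
# Toy CNN sketch: classify small RGB satellite chips as building / no building.
# Illustrative only; not Facebook's model.
import torch
import torch.nn as nn

class ChipClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),   # sized for 64x64 input chips
        )

    def forward(self, x):                 # x: (batch, 3, 64, 64) RGB chips
        return self.head(self.features(x))  # logit; > 0 means "building"

model = ChipClassifier()
chips = torch.rand(8, 3, 64, 64)          # dummy batch of image chips
print(torch.sigmoid(model(chips)))        # per-chip building probabilities
```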


Augmented Reality Sandboxes

Scientists have put the Microsoft Xbox Kinect sensor to great use over the years. One of these uses is the simulation of physical effects for geography and mapping, ranging from topography to water flow. By now, many of these augmented reality interactive sandboxes are in action.

One of the most popular of these sandboxes is the aptly-named Augmented Reality Sandbox, built by the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences for an NSF-funded project on informal science education. Keep up with the latest developments on the project pages here and here. This sandbox uses a Kinect 3D camera and a projector to project a real-time colored topographic map with contour lines onto the sand surface, while GPU-based mathematical simulations govern the virtual water flow over that surface. The sandbox is already an interactive display at the University of California Davis Tahoe Environmental Research Center (TERC) and the Lawrence Hall of Science at the University of California, Berkeley, among many other places. There are some cool demo videos of this sandbox, depicting real-time water flow simulation over the topography and a virtual dam failure simulation, among others.

See a detailed article on Wired about this sandbox here.

A company from the Czech Republic offers the SandyStation augmented reality sandbox. Two good lists describing other augmented reality sandboxes around the world are available here and here.