The devastating earthquake that recently hit Nepal is personally relevant to me: I have been to Kathmandu twice, and it is horrifying to see familiar parts of the city reduced to rubble. Furthermore, I have a close friend and many professional contacts in Nepal who have spent their nights out in the open. We hope to see Kathmandu and Nepal rebuild and rise again soon.
As relief efforts are in full swing on the ground, here are some significant volunteer contributions we can make as remote sensing and mapping experts. If you know of any further resources or near-real-time applications for such skills, please mention them in the comments.
OpenStreetMap (OSM) and the Humanitarian OpenStreetMap Team (HOTOSM) are leading an effort to identify and map the earthquake damage. They have divided the enormous task into several sub-tasks, explained clearly on their webpage. Furthermore, HOTOSM has formally requested that anyone with FOSS (Free and Open Source Software) experience in automatic image classification and feature extraction contact HOTOSM directly by email. Links below:
DigitalGlobe (the company that owns the QuickBird, GeoEye, and WorldView series of satellites) has its own crowdsourcing image-analysis platform, which is activated in the event of disasters. You can identify and map damaged locations and buildings using this platform, which draws on DigitalGlobe’s imagery:
DigitalGlobe Imagery:
DigitalGlobe has made imagery of the Nepal earthquake publicly available for analysis. Get the imagery here:
In a recently published paper in Forest Ecosystems, we evaluated the sustainability of two forests (one state-owned, one community/private-owned) using quantitative methods applied to SPOT-5 remotely sensed images from the years 2005 and 2011. The study was conducted in a sub-watershed covering 468 km2, of which 201 km2 is managed by the state and 267 km2 is under community/private ownership, in the Murree Galliat region of Punjab province, Pakistan.
The results show that between 2005 and 2011, a total of 55 km2 (24 km2 in the state-owned forest and 31 km2 in the community/private forest) was converted from forest to non-forest. The study concludes that state-owned forests are better conserved and managed than community/private forests. The findings of this paper may help mobilize community awareness and identify effective initiatives for improved management of community/private forest land in other regions of Pakistan.
Article web link: http://www.forestecosyst.com/content/2/1/7
Conflicts of Interest: The findings reported here are the scientific observations of the author and do not necessarily reflect the views of the author’s organizations.
About this post: This is a guest post by Hammad Gilani. Learn more about this blog’s authors here.
Of late, I have been reading scientific articles and opinion pieces about one of the biggest fears that I, and I am sure other scientists in the field, often face: that bugs may lurk somewhere in the code we depend on so dearly to analyse and interpret datasets and to draw scientific conclusions. Here is a recent article in Nature which should force us all to re-think and re-evaluate our coding practices.
In my view, and as I tell my students, scientists should try to adhere to the following basic guidelines when writing their code:
– Record the name of the coder and the date, along with any versioning
– If the code is based on or derived from other code, mention that too
– Use extensive commenting, for your own and others’ sake
– Check and double-check your code
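As a minimal sketch of what these guidelines can look like in practice (shown in Python, though they apply to any language; the file name, author name, and `detrend` function are invented purely for illustration):

```python
"""detrend.py -- remove a linear trend from a data series.

Author:   J. Scientist (hypothetical, for illustration)
Date:     2015-05-01
Version:  1.1 (v1.0 used an unstable normal-equation solve)
Based on: adapted from the least-squares example in the NumPy docs
"""
import numpy as np

def detrend(y):
    """Return y with its best-fit straight line removed."""
    y = np.asarray(y, dtype=float)
    x = np.arange(y.size)
    # Fit y = a*x + b by least squares, then subtract the fitted line.
    a, b = np.polyfit(x, y, 1)
    return y - (a * x + b)

# Check and double-check: a pure straight line must detrend to (near) zero.
residual = detrend(3.0 * np.arange(10) + 2.0)
assert np.allclose(residual, 0.0)
```

The header records authorship, date, version history, and provenance; the comments explain intent; and the trailing assertion is a built-in double-check that runs every time the file does.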
Recently, there has been a lot of advocacy for making code public; however, I do think only well-documented code should be published, because undocumented or badly documented code can actually cause more confusion. Here is an old article in Nature which argues to the contrary. Let us know your coding practices and recommendations in the comments.
Note: This post is inspired by this fascinating article in Nature, which I encourage you all to read: Computational science: …Error
The Central Limit Theorem (CLT) is a fundamental theorem in probability and statistics. It tells us that the sampling distribution of the mean is asymptotically Gaussian as long as the sample size is sufficiently large, no matter what distribution the population follows. The sampling distribution of the mean has a mean equal to the population mean (μ) and variance σ2/N, where σ2 is the population variance and N is the sample size. Generally, a sample is considered sufficiently large when N ≥ 30. Thus the variance of the sampling distribution of the mean shrinks by a factor of N as the sample size grows.
The ab initio proof of the CLT is rather involved and requires a strong grounding in probability theory1. However, the CLT can be explored and understood empirically, through simulation. Here is a MATLAB code I wrote to explore the CLT in a graduate class I am teaching on Data Analysis for the Earth Sciences.
MATLAB code for exploring the CLT
Population distribution (Rayleigh)
Sampling distribution of the mean for various sample sizes. The population distribution is Rayleigh.
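The original demonstration is in MATLAB (linked above); as an illustrative alternative, here is a self-contained Python/NumPy sketch of the same experiment, drawing many sample means from a Rayleigh population and comparing them with the CLT predictions μ and σ2/N (the scale parameter, seed, and sample sizes are arbitrary choices of mine, not the values used in class):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0                       # Rayleigh scale parameter (arbitrary)
n_means = 10_000                  # number of sample means to draw

# Exact mean and variance of a Rayleigh(sigma) population
pop_mean = sigma * np.sqrt(np.pi / 2)
pop_var = (4 - np.pi) / 2 * sigma**2

for N in (2, 5, 30, 100):         # sample sizes, from small to "large"
    samples = rng.rayleigh(sigma, size=(n_means, N))
    means = samples.mean(axis=1)  # empirical sampling distribution of the mean
    print(f"N={N:4d}: mean of means={means.mean():.3f} "
          f"(CLT: {pop_mean:.3f}), var of means={means.var():.4f} "
          f"(CLT: {pop_var / N:.4f})")
```

As N grows, the variance of `means` tracks σ2/N and its histogram approaches a Gaussian even though the Rayleigh population is skewed, which is exactly what the figures above illustrate.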
1 Stark, H. & Woods, J. W. (2001). Probability and Random Processes with Applications to Signal Processing (3rd ed.).