Everything on the planet has a unique spectral signature, reflected or emitted by the chemical bonds holding its atoms together. Human eyeballs see some of this signature, which we perceive as color. But visible light is a tiny part of the electromagnetic spectrum, and from a sensing perspective, it tells scientists very little about an object. Scooping up huge swaths of the electromagnetic spectrum requires things called hyperspectral sensors.
Mounted on satellites or aircraft, these sensors have the potential to collect a running inventory of the state of the earth’s surface. But hyperspectral data has been difficult to tame computationally without the help of our awesome, pattern-detecting brains. The graphic above is from a study published last week in the Journal of Photogrammetry and Remote Sensing that describes an algorithm that can classify land cover types with minimal nudging from humans.
In single-band data, each pixel has a single value (typically, its color). Hyperspectral sensors collect such a wide range of frequencies that each pixel has many values; stacked on top of one another, the pile of spectral bands is usually referred to as a data cube. (Image: Arbeck/Wikipedia)

The problem, from a computational standpoint, is that hyperspectral sensors are too good at their jobs. Where most visual data assigns a single value (like color) to each pixel, hyperspectral data pixels each have hundreds, even thousands, of values (see the image to the left). Statistically, this makes each pixel seem unique to the computers tasked with classification: with only a handful of training samples spread across so many dimensions, classifiers struggle to tell which differences matter. This is known as the Hughes effect, and it’s a huge problem because it cripples the potential of using hyperspectral data to rapidly update our knowledge about the condition of the earth’s surface.
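To make the data cube idea concrete, here is a toy sketch in Python; the image dimensions and band count are invented for illustration, not taken from the study:

```python
import numpy as np

# One value per pixel vs. a whole spectrum per pixel.
single_band = np.zeros((610, 340))       # grayscale image: 1 value per pixel
data_cube = np.zeros((610, 340, 200))    # hyperspectral cube: 200 bands

spectrum = data_cube[100, 50, :]         # spectral signature of one pixel
print(single_band[100, 50])              # a single number
print(spectrum.shape)                    # (200,) -- hundreds of values
```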
Even if they can’t label the land cover types, hyperspectral imaging algorithms are usually able to put like pixels into groups based mostly on their proximity to one another. In the new study, the authors combined this clustering method with another technique that uses a small number of training samples to label each group of pixels.
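The paper’s actual method is more sophisticated, but a minimal sketch of the two-stage idea, clustering pixels and then labeling each cluster from a handful of training samples, might look like this. The function name and parameters are my own inventions, and scikit-learn’s KMeans, which groups pixels by spectral similarity, stands in for the study’s clustering step:

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_cube(cube, labeled_idx, labeled_classes, n_clusters=50, seed=0):
    """cube: (rows, cols, bands) array; labeled_idx: flat indices of the
    few training pixels; labeled_classes: their class ids (0..K-1)."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)

    # Stage 1: unsupervised clustering groups "like" pixels by spectrum.
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=seed).fit_predict(pixels)

    # Stage 2: each cluster takes the majority class among the training
    # samples that land inside it; clusters with none stay unlabeled (-1).
    cluster_to_class = np.full(n_clusters, -1)
    for c in range(n_clusters):
        hits = labeled_classes[clusters[labeled_idx] == c]
        if hits.size:
            cluster_to_class[c] = np.bincount(hits).argmax()

    return cluster_to_class[clusters].reshape(rows, cols)
```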
In the middle image of the graphic at the top, you can see the mosaic the algorithm from the current study created from imagery of the University of Pavia in Italy. At this stage, the algorithm treats each tiny blob in that image as a unique land cover type. To help it sort them into nine categories, the researchers fed the algorithm five to 15 samples of each land cover type.
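Continuing the hypothetical sketch above, feeding it a handful of samples per class might look like this. The dimensions match the standard Pavia University benchmark scene (610 by 340 pixels, 103 bands, nine classes), but the data here is random stand-in noise, not the real imagery:

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((610, 340, 103))     # stand-in for the 103-band scene
truth = rng.integers(0, 9, 610 * 340)  # stand-in ground truth, nine classes

# draw ten training samples per class, mirroring the 5-15 used in the study
labeled_idx = np.concatenate(
    [rng.choice(np.where(truth == k)[0], 10, replace=False) for k in range(9)])
labeled_classes = truth[labeled_idx]

label_map = classify_cube(cube, labeled_idx, labeled_classes)
```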
The difference between having no training samples and having some is pretty dramatic: after training, the algorithm was able to successfully classify about 50 to 80 percent of the land cover types. Where a result fell in that range depended on how many samples of each land cover type the researchers used to train the algorithm. Of course, that might not seem super impressive in the example above, given that the algorithm successfully labeled less than half of the topmost graphic (the rightmost image shows the successfully labeled data).
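One simple way to score such an output, using the hypothetical variables from the snippet above (this is my scoring convention, not the paper’s metric), is to measure accuracy only on the pixels the sketch managed to label:

```python
# Clusters that contained no training sample stay marked -1 (unlabeled).
pred = label_map.ravel()
mask = pred >= 0
print(f"labeled {mask.mean():.0%} of pixels, "
      f"accuracy on those: {(pred[mask] == truth[mask]).mean():.0%}")
```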
However, the number of land cover types on earth is finite, and given enough images and enough time, the amount of human nudging would progressively decrease. Because land features change over time, semi-automated hyperspectral monitoring could help everyone from building engineers to conservationists keep tabs on the state of the earth’s surface.
Below is the second image the researchers used in their study, taken in 1992 over Indian Pines in northwestern Indiana. The agrarian landscape has a much more diverse catalog of land cover classes.
(Image: Kun Tan et al./Journal of Photogrammetry and Remote Sensing)