Volume 8, Issue 1 (9-2018)
JGST 2018, 8(1): 221-237
Classification of Aerial Visible-Thermal Data Based on Deep Learning Models
Gh. Abdi*, F. Samadzadegan
Abstract:   (3605 Views)
Multi-sensor data fusion is one of the most common and popular topics in remote sensing data classification, as it provides a robust and complete description of the objects of interest. Deep feature extraction has also recently attracted significant interest and become a hot research topic in the geoscience and remote sensing community. In this paper, a deep learning decision fusion approach is presented for multi-sensor urban remote sensing data classification, comprising deep feature extraction, a logistic regression classifier, decision-level classifier fusion, and context-aware object-based post-processing. A deep architecture is designed to progressively learn invariant and abstract feature representations of the input data, which consist of spectral features, spatial features, or their joint use. For the spectral features, the raw spectral data are used directly to take advantage of the available contiguous spectral bands. The spatial feature descriptors are computed over a local window, on the assumption that neighboring pixels usually belong to the same class. A hybrid set of joint spectral-spatial features is then formed by stacking the two descriptors. A logistic regression classifier is applied to train the high-level features at the top layer, and the deep learning framework is optimized by gradient descent from the current parameter setting to minimize the training error on the labeled samples. Finally, an enhanced classification map is estimated by integrating multiple classifier outcomes, followed by context-aware object-based post-processing that refines the pixel-based land cover classification map.
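The spectral-spatial feature construction described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the window size, the use of a simple local mean as the spatial descriptor, and all function names are assumptions.

```python
import numpy as np

def spatial_descriptor(cube, window=3):
    """Per-band mean over a local window -- a simple stand-in for the
    neighborhood-based spatial features described in the abstract
    (the exact descriptor is an assumption)."""
    h, w, b = cube.shape
    pad = window // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros((h, w, b), dtype=float)
    for dy in range(window):
        for dx in range(window):
            out += padded[dy:dy + h, dx:dx + w, :]
    return out / (window * window)

def joint_features(cube, window=3):
    """Stack per-pixel spectral vectors with their spatial descriptors,
    giving the hybrid spectral-spatial representation."""
    spectral = cube.reshape(-1, cube.shape[2]).astype(float)
    spatial = spatial_descriptor(cube, window).reshape(-1, cube.shape[2])
    return np.hstack([spectral, spatial])

# Toy example: an 8x8-pixel image with 5 spectral bands.
cube = np.random.rand(8, 8, 5)
X = joint_features(cube)
print(X.shape)  # (64, 10) -- 5 spectral + 5 spatial features per pixel
```

The stacked matrix `X` would then feed the deep architecture, with a logistic regression classifier on the top-layer features.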
For the post-processing, a multiresolution image segmentation method splits the image into multiple spatially non-overlapping regions, and the final label of each segmented region is obtained by majority voting, taking the relationships among spatial objects into account. A series of comparative experiments is conducted on the widely used dataset of the 2014 IEEE GRSS Data Fusion Contest, which offers a challenging multi-resolution and multi-sensor image analysis and data fusion opportunity: the visible data consist of a series of color images acquired along different strips, and the thermal hyperspectral image was acquired by the 84-band Hyper-Cam airborne sensor over Thetford Mines in Québec, Canada, with 874×751 pixels (1-m spatial resolution) and a seven-class labeled ground truth map. One hundred training samples per ground truth class are randomly selected, and the remaining samples are used for testing. The experiments quantitatively compare the proposed classification framework with conventional classifiers, namely decision tree, discriminant analysis, naïve Bayes, k-nearest neighbor and support vector machine. The results illustrate the considerable advantage of the proposed deep learning decision fusion over the traditional classifiers: it provides 3.91, 6.65, 2.81 and 5.52 percent improvements in OA/Kappa for the visible, thermal, combined and context-aware object-based post-processed data, respectively. Moreover, the joint use of the two imaging systems improves classification performance by up to 7.57 and 22.22 percent with respect to the visible and thermal hyperspectral data alone, respectively.
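The decision-level fusion and object-based refinement can be illustrated with a minimal sketch. A per-pixel majority vote across classifier label maps and a segment-wise majority vote are assumed as the simplest form of the described procedure; all function names are hypothetical, and the segmentation itself (multiresolution in the paper) is taken as given.

```python
import numpy as np
from collections import Counter

def fuse_decisions(label_maps):
    """Decision-level fusion: per-pixel majority vote across the
    label maps produced by several classifiers."""
    stacked = np.stack(label_maps)              # (n_classifiers, n_pixels)
    fused = np.empty(stacked.shape[1], dtype=int)
    for i in range(stacked.shape[1]):
        fused[i] = Counter(stacked[:, i]).most_common(1)[0][0]
    return fused

def object_based_refine(labels, segments):
    """Object-based post-processing: assign every pixel in a segment
    the segment-wise majority label."""
    refined = labels.copy()
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        refined[mask] = Counter(labels[mask]).most_common(1)[0][0]
    return refined

# Toy example: three classifiers labeling four pixels.
maps = [np.array([0, 1, 1, 2]), np.array([0, 1, 2, 2]), np.array([1, 1, 1, 2])]
fused = fuse_decisions(maps)
print(fused)  # [0 1 1 2]

# Two segments: pixels 0-2 form one object, pixel 3 another.
segments = np.array([0, 0, 0, 1])
print(object_based_refine(fused, segments))  # [1 1 1 2]
```

In the paper, the segments would come from multiresolution segmentation rather than being given by hand, but the voting logic is the same.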
Keywords: Convolutional Neural Network, Decision-Level Fusion, Deep Learning, Stacked Sparse Autoencoder, Thermal Hyperspectral
Full-Text [PDF 1621 kb]   (1674 Downloads)    
Type of Study: Research | Subject: Photo&RS
Received: 2017/12/13
Abdi G, Samadzadegan F. Classification of Aerial Visible-Thermal Data Based on Deep Learning Models. JGST 2018; 8 (1) :221-237
URL: http://jgst.issgeac.ir/article-1-709-en.html


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Journal of Geomatics Science and Technology