:: Volume 8, Issue 1 (9-2018) ::
JGST 2018, 8(1): 71-84
Object Extraction from Urban Area based on Simultaneously Use of RADAR, Multispectral and LiDAR Data
B. Bigdeli *
Abstract:   (3757 Views)
Recently, Synthetic Aperture Radar (SAR) data have attracted considerable interest for different remote sensing applications, especially land cover classification. SAR imaging is independent of solar illumination and weather conditions; it is not affected by rain, fog, hail, smoke, or, most importantly, clouds. It can even penetrate some of the Earth's surface materials to return information about subsurface features. However, SAR images are difficult to interpret because of their special characteristics: the geometry and spectral range of SAR differ from those of optical imagery, and the presence of speckle makes SAR images visually difficult to interpret. Consequently, optical data can be fused with SAR data to improve land cover classification. On the other hand, Light Detection and Ranging (LiDAR) data provide accurate height information for objects on the Earth's surface, which has made LiDAR increasingly popular in terrain and land surveying. Given the limitations and benefits of these three remote sensing sensors, their fusion improves land cover classification. For this purpose, it is necessary to apply data fusion techniques. In recent years, significant attention has focused on multisensor data fusion for remote sensing applications and, more specifically, for land cover mapping. Data fusion techniques combine information from multiple sources, offering potential advantages over a single sensor in terms of classification accuracy; in most cases, data fusion provides higher accuracy than single sensors. Furthermore, fusion of sensors with inherent differences, such as SAR, optical and LiDAR data, requires higher-level fusion strategies. The ability to fuse different types of data from different sensors, independence from errors in the data registration step, and accurate fusion results can be mentioned as benefits of decision-level fusion methods over the other fusion levels (pixel- and feature-level fusion methods).
This paper presents a method based on the simultaneous use of RADAR, multispectral and LiDAR data for classification of urban areas. First, different feature extraction strategies are applied to all three data sets; then a feature selection method based on Ant Colony Optimization (ACO) is used to select an optimized feature set. Maximum Likelihood (ML), Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) classifiers are applied to the optimized feature space. Finally, a decision fusion method based on Weighted Majority Voting (WMV) is applied to produce the final decision. A co-registered TerraSAR-X, WorldView-2 and LiDAR data set from San Francisco, USA, was available to examine the effectiveness of the proposed method. The results show that the proposed method, based on the simultaneous use of radar, optical and LiDAR data, increases the accuracy of some classes more than others. Moreover, the fusion results provide an improvement over the results of classifying each data source individually. Overall, the results show that the use of multisensor imagery is worthwhile and that classification accuracy is significantly increased with such data sets. Several practical issues remain to be considered in future studies. Note that only a decision ensemble system was explored here; additional research is needed on more powerful feature spaces for each data source, further processing of the LiDAR DSM, and novel fusion strategies such as rule-based and object-based classification strategies.
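A minimal sketch of the decision-fusion step described above, not the authors' implementation: three classifiers are trained on an already-selected feature space and their hard labels are combined by Weighted Majority Voting. Synthetic data stands in for the stacked SAR/optical/LiDAR features, the Gaussian maximum-likelihood classifier is approximated by quadratic discriminant analysis, and the per-classifier weights are illustrative values (in practice they would come from validation accuracy).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the stacked SAR/optical/LiDAR features left after
# feature selection (the real San Francisco data set is not reproduced here).
X, y = make_classification(n_samples=600, n_features=20, n_informative=12,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

def weighted_majority_vote(label_matrix, weights, n_classes):
    """Fuse hard label predictions: each classifier casts a weighted vote."""
    votes = np.zeros((label_matrix.shape[1], n_classes))
    for labels, w in zip(label_matrix, weights):
        for i, lab in enumerate(labels):
            votes[i, lab] += w
    return votes.argmax(axis=1)

# Gaussian maximum-likelihood classification is approximated here by QDA.
classifiers = [QuadraticDiscriminantAnalysis(),
               SVC(kernel="rbf", gamma="scale"),
               KNeighborsClassifier(n_neighbors=5)]

# Illustrative per-classifier weights (assumed, e.g. validation accuracies).
weights = [0.80, 0.88, 0.84]

preds = np.array([clf.fit(X_train, y_train).predict(X_test)
                  for clf in classifiers])
fused = weighted_majority_vote(preds, weights, n_classes=len(np.unique(y)))
print("fused accuracy:", (fused == y_test).mean())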
Keywords: RADAR Data, Multispectral Data, LiDAR, Data Fusion, Object Classification
Full-Text [PDF 1465 kb]   (1441 Downloads)    
Type of Study: Research | Subject: Photo&RS
Received: 2016/02/7
Bigdeli B. Object Extraction from Urban Area based on Simultaneously Use of RADAR, Multispectral and LiDAR Data. JGST 2018; 8 (1) :71-84
URL: http://jgst.issgeac.ir/article-1-437-en.html


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.