:: Search published articles ::
Showing 36 results for Sharif

M. Mohammadzade Shadmehri, M. A. Sharifi, V. Ebrahimzade Ardestani, A. Safari, A. Baghani,
Volume 0, Issue 0 (Special Issue for Geomatics 94 2017)
Abstract

Abrupt changes in a gravity anomaly map correspond to edges. Edge detection methods applied to gravity data can locate the position and boundaries of the causative mass body. The purpose of this study is to improve the uniqueness of the solution and to avoid premature convergence in the inverse modeling of gravity data by using the edge (boundary) position information as a constraint. To evaluate the proposed method, gravity data over a synthetic model with an irregular, stepped geometry were first inverted without edge information. In the second phase, the edge information was incorporated into the inversion of the same synthetic data. The results showed that these constraints limit the search space, so the ant colony optimization algorithm converges faster and reaches a reliable solution in less time. Finally, the method was also applied to data from the Gotvand area.
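
As a rough illustration of the edge-detection step described above, the sketch below computes the total horizontal gradient of a gridded gravity anomaly, a standard edge detector whose maxima trace the boundaries of the causative body. The grid, spacing and synthetic anomaly are purely illustrative; this is not the authors' implementation.

```python
import numpy as np

def total_horizontal_gradient(grav, dx, dy):
    """Total horizontal gradient of a gridded gravity anomaly.

    Maxima of the result roughly trace the edges of the causative body.
    grav : 2-D array of anomaly values (e.g. mGal)
    dx, dy : grid spacing along x and y (e.g. metres)
    """
    gy, gx = np.gradient(grav, dy, dx)   # numerical derivatives along rows/columns
    return np.hypot(gx, gy)              # sqrt(gx**2 + gy**2)

# Illustrative synthetic anomaly on a regular grid
x = np.linspace(-500.0, 500.0, 101)
y = np.linspace(-500.0, 500.0, 101)
X, Y = np.meshgrid(x, y)
anomaly = np.exp(-((X / 150.0) ** 2 + (Y / 100.0) ** 2))
edges = total_horizontal_gradient(anomaly, dx=10.0, dy=10.0)
print(edges.shape, edges.max())
```
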


S. Zaminpardaz, M. A. Sharifi, A. R. Amiri-Simkooei,
Volume 2, Issue 4 (5-2013)
Abstract

Our purpose in this contribution is to compare different outlier detection methods for time series. Three methods, namely wavelet analysis, Baarda data snooping and thresholding, are investigated. To make a reasonable comparison of the performance of these three methods in detecting outliers, we used a 4-month synthesized time series based on real tidal data. When the functional model of the observations is known, Baarda data snooping yields the best results compared with the other two methods, since its outlier detection success rate and failure rate are almost 100% and 0.64%, respectively, regardless of the type of outliers. When the functional model of the observations is unknown, wavelet analysis performs better than thresholding.
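
A minimal sketch of Baarda's w-test, the statistic behind the data snooping method compared here, is given below. The matrix names and the common 3.29 critical value are generic textbook choices, not values taken from the paper.

```python
import numpy as np

def baarda_w_tests(A, y, Qy):
    """Baarda w-test statistics for each observation of the model y = A x + e.

    A  : m x n design matrix,  y : m-vector of observations,
    Qy : m x m covariance matrix of the observations.
    |w_i| > 3.29 (two-sided, alpha ~ 0.001) flags observation i as a potential outlier.
    """
    W = np.linalg.inv(Qy)
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    e_hat = y - A @ x_hat                         # least-squares residuals
    Qe = Qy - A @ np.linalg.solve(N, A.T)         # covariance matrix of the residuals
    M = W @ Qe @ W
    w = np.empty(len(y))
    for i in range(len(y)):
        w[i] = (W[i] @ e_hat) / np.sqrt(M[i, i])  # test one observation at a time
    return w
```
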
S. Hosseini, A. Azizi, A. Bahroudi, M. A. Sharifi,
Volume 2, Issue 4 (5-2013)
Abstract

Landslides, one of the major geohazards, kill hundreds of people every year, damage property and block communication links. To mitigate the hazard of mass failure, the first step is the production of a landslide susceptibility map, and landslide inventory maps play an important role in preparing such susceptibility maps. Field observations for preparing these maps are very costly and time consuming. Aerial photographs cannot be acquired regularly and do not provide global coverage, and older satellite images had low spatial resolution and required ground control points to correct geometric errors; for this reason, landslide displacements could at that time only be approximately recognized by spectral analysis. Recent improvements in high-resolution satellite imaging technology make it possible to perform geometric correction with rational polynomial coefficients and to georeference images with just one ground control point, using the GPS/INS/star-tracker instruments installed on the satellite. With high-resolution satellite images, ground displacements, the key parameter for landslide prediction models, can be detected. In this paper, four epochs of high-resolution IRS-P5 satellite images over Damdol village, Ardebil Province, Iran, are used to detect landslide ground displacements. Comparison of our results with GPS data acquired by the Istasanj consulting engineers shows that the calculated discrepancies are less than half a metre.
M. Mohammad Zamani, A. R. Amiri-Simkooei, M. A. Sharifi,
Volume 3, Issue 2 (11-2013)
Abstract

To estimate the unknown parameters in a linear model in which the observations are linear functions of the unknowns, one of the conventional methods is least-squares estimation. The best linear unbiased estimator (BLUE) is obtained when the inverse of the variance-covariance matrix of the observables is used as the weight matrix in the estimation process, so a realistic assessment of the precision of the observations is an important issue. One of the methods to reach this goal is least-squares variance component estimation (LS-VCE). This method, however, may produce negative variance estimates, which are not acceptable from a statistical point of view. In this paper, numerical methods such as the genetic algorithm, as well as iterative methods based on LS-VCE, are presented for the non-negative estimation of variance components. Non-negative variance component estimation not only guarantees non-negative components but also allows different noise components to be incorporated into the stochastic model; components that are not likely present are automatically estimated as zero. Using the above-mentioned methods, we assess the noise characteristics of the position time series of permanent GPS stations. The data used in this research are the coordinates of the IGS station at Mehrabad, Tehran, and of two other stations in Ahvaz and Mashhad (2005-2010). For this amount of data, the iterative methods are superior to numerical methods such as the genetic algorithm. The results indicate that the noise of GPS position time series is a combination of white noise and flicker noise, in some cases combined with random walk noise.
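
A minimal sketch of an iterative LS-VCE step for a stochastic model Qy = sum_k sigma_k^2 Q_k is shown below. Clamping negative estimates to zero is only one crude way to enforce non-negativity and is not necessarily the scheme used by the authors; all names are illustrative.

```python
import numpy as np

def ls_vce(A, y, Q_list, n_iter=20):
    """Iterative LS-VCE for Qy = sum_k sigma_k^2 * Q_k (illustrative sketch).

    A : m x n design matrix, y : m-vector, Q_list : list of m x m cofactor matrices.
    Negative variance components are clamped to zero at each iteration.
    """
    m = A.shape[0]
    p = len(Q_list)
    sig = np.ones(p)                                     # initial variance factors
    for _ in range(n_iter):
        Qy = sum(s * Q for s, Q in zip(sig, Q_list))
        W = np.linalg.inv(Qy)
        Ninv = np.linalg.inv(A.T @ W @ A)
        P = np.eye(m) - A @ Ninv @ A.T @ W               # orthogonal projector
        e = P @ y                                        # least-squares residuals
        WP = W @ P
        Nmat = np.empty((p, p))
        rhs = np.empty(p)
        for k in range(p):
            rhs[k] = 0.5 * e @ W @ Q_list[k] @ W @ e
            for l in range(p):
                Nmat[k, l] = 0.5 * np.trace(WP @ Q_list[k] @ WP @ Q_list[l])
        sig = np.maximum(np.linalg.solve(Nmat, rhs), 0.0)  # clamp negative components
    return sig
```
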


S. M. Razeghi, A. R. Amiri-Simkooei, M. A. Sharifi,
Volume 3, Issue 2 (11-2013)
Abstract

In most applications of GPS time series, pure white noise is assumed in the covariance matrix, whereas for GPS time series spanning over 20 years white noise alone cannot account for all unwanted signals; on the other hand, white noise is an unavoidable part of any realistic noise model. Selecting an appropriate noise model ensures that all unwanted signals affecting the GPS time series are taken into account. To compare the two cases, white noise only and a realistic noise model, we considered one of the most important applications of GPS time series, namely strain analysis. In addition to the noise model, and to allow a better comparison, we introduce normalized strains, which facilitate the comparison between the two states, realistic noise and white noise. The California network, with more than 10 years of continuous GPS data and adequate station density, is an appropriate test case for assessing the effect of time series duration and of uninterrupted GPS data on the final results. Besides, the Azerbaijan network is one of the most regular and densest of the Iranian networks, and its estimated noise model can be regarded as representative of the Iranian geodynamic network. By comparing the results of these two networks, some useful suggestions are made to improve the future outcomes of the Iranian geodynamic network.
M. Akbari, M. Abbasi, M. A. Sharifi,
Volume 3, Issue 2 (11-2013)
Abstract

To investigate Caspian Sea level changes, sea surface height (SSH) data obtained from the satellite altimeters TOPEX/Poseidon, Jason-1 and Jason-2 between 1992 and 2011 were used. The data between 1992 and 2002 were obtained from TOPEX/Poseidon, between 2002 and 2008 from Jason-1, and between 2008 and 2011 from Jason-2. In total, over almost 19 years, 9 passes with on average 680 cycles of data per pass were observed. Since these satellites flew the same orbit over the mentioned years, time series could be produced for points along the passes. Time series at points along the passes and at pass crossovers were generated, and least-squares spectral analysis was performed on them. For further comparison and investigation of Caspian Sea level changes, tide-gauge records at the Neka and Anzali stations were used. The correlation coefficient between the Neka tide-gauge time series and the altimetry time series of one point on the Caspian Sea was 0.6663, and between the Anzali tide-gauge time series and the altimetry point it was 0.8198; between Neka and Anzali the correlation coefficient was 0.7919. Spectral analysis of these time series revealed a one-year period in all of them. Some periods, such as 6.7 years, were observed at Neka and in the altimetry but not at Anzali, and there is no physical justification for them. Between 1992 and 2011 the Caspian Sea level dropped by about 2.5 cm per year.
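
The least-squares spectral analysis applied to the altimetry and tide-gauge series can be sketched as follows; the frequency grid and variable names are illustrative only, and this is a simplified stand-in for the full analysis.

```python
import numpy as np

def lssa_spectrum(t, y, freqs):
    """Least-squares spectral analysis of a (possibly unevenly sampled) time series.

    For each trial frequency, a sine/cosine pair is fitted by least squares and
    the fraction of variance it explains is reported as spectral power.
    t : sample epochs, y : values, freqs : trial frequencies (cycles per unit of t).
    """
    y = y - y.mean()                          # remove the mean before the scan
    rss0 = np.sum(y ** 2)
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef) ** 2)
        power[i] = 1.0 - rss / rss0           # fraction of variance explained
    return power

# Hypothetical use: sea-level anomalies sampled at epochs t (in years)
# power = lssa_spectrum(t, ssh_anomaly, freqs=np.linspace(0.05, 3.0, 300))
```
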
A. R. Safari, M. A. Sharifi, H. Amin, I. Foroughi,
Volume 3, Issue 2 (11-2013)
Abstract

With the advent of satellite altimetry in 1973, a new window was opened on oceanography, marine sciences and Earth-related studies. Advances in sensor technology and the different satellite altimetry missions of recent years have led to a great evolution in geodesy and gravity field modeling. Satellite altimetry provides a huge source of information for geoid determination with high accuracy and spatial resolution, and is a suitable alternative to marine gravity data for high-frequency modeling of the Earth's gravity field in marine areas. Marine gravity observations always carry a high noise level due to environmental effects; moreover, it is not possible to model the high frequencies of the Earth's gravity field on a global scale using these observations. The gravitational gradient tensor, the second-order spatial derivatives of the gravitational potential, provides more information on the Earth's gravity field than other measurements such as the gravity anomaly. In this paper, a new approach is introduced for the determination of the gravitational gradient tensor at sea level from satellite altimetry, using two modeling techniques, namely radial basis functions and harmonic splines. As a case study, the gravitational gradient tensor is determined in the Persian Gulf from satellite altimetry data, and the results are presented. Investigation of the results shows that modeling the Earth's gravity field with radial basis functions gives better results than modeling based on harmonic splines.
M. R. Seif , M. A. Sharifi,
Volume 4, Issue 1 (8-2014)
Abstract

Various methods have been published for satellite orbit propagation, and numerical integration is one of the most common. Numerical integrators can be divided into two main classes, single-step and multi-step methods. In this paper, the results of the Runge-Kutta method, the most common single-step method, and of a predictor-corrector scheme, the most common multi-step method, are compared. It is shown that if error-controlled integration is used, the difference between single-step and multi-step methods is at the sub-millimetre level. The Lagrange method is also introduced as a semi-analytical method, and the Lagrange coefficients are extended from the central field to the full gravitational field of the Earth. With this extension, the Lagrange method can be used for LEO satellite orbit determination. The results show that the accuracy of the Lagrange method for LEO satellite orbit determination is about 5 millimetres over one day.
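
A minimal single-step (classical Runge-Kutta) propagation of a LEO state vector in a central gravity field might look like the sketch below. The initial state, step size and the omission of higher-degree gravity terms are simplifications for illustration, not the configuration used in the paper.

```python
import numpy as np

GM = 3.986004418e14            # Earth's gravitational parameter [m^3/s^2]

def two_body_rates(state):
    """Time derivative of the state [x, y, z, vx, vy, vz] in a central field."""
    r = state[:3]
    a = -GM * r / np.linalg.norm(r) ** 3
    return np.concatenate([state[3:], a])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta (single-step) integration step."""
    k1 = two_body_rates(state)
    k2 = two_body_rates(state + 0.5 * dt * k1)
    k3 = two_body_rates(state + 0.5 * dt * k2)
    k4 = two_body_rates(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Hypothetical LEO initial state: ~700 km circular orbit
state = np.array([7.078e6, 0.0, 0.0, 0.0, 7504.0, 0.0])
for _ in range(86400 // 10):   # propagate one day with 10 s steps
    state = rk4_step(state, 10.0)
print(state[:3])
```
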


R. Shamshiri, M. Motagh, M. A. Sharifi,
Volume 4, Issue 1 (8-2014)
Abstract

To connect the embankments of the West and East Azerbaijan provinces, a bridge ~1300 m long and ~30 m wide was constructed in late 2009. This bridge plays an important role in the development of tourism, transportation and trade in the area. A difference between the deformation rates of the embankments on the two sides of the bridge may seriously damage the bridge itself, so it is very important to monitor them accurately in space and time in order to assess the state of the bridge with respect to deformation. Interferometric Synthetic Aperture Radar (InSAR) is a powerful geodetic technique for precise deformation monitoring in space and time, owing to its extensive area coverage, high spatial (1-20 m) and temporal (11-46 days) resolution and acceptable accuracy (cm to mm level). Advanced interferometric time-series techniques such as the Small BAseline Subset (SBAS) approach are valuable tools for structural monitoring and provide deformation maps with millimetre accuracy. In this study, this technique has been applied to a dataset of 58 SAR images acquired by the ENVISAT, ALOS and TerraSAR-X (TSX) satellites from 2003 to 2013 to monitor the spatio-temporal evolution of ground deformation on the embankments. The InSAR results show that subsidence occurs on both embankments of the bridge, with a peak amplitude of ~50 mm/year in the line-of-sight direction.


A. Azizi, A. Hadiloo, M. A. Sharifi,
Volume 4, Issue 1 (8-2014)
Abstract

In this paper a quasi-rigorous approach is proposed for the two-dimensional relative orientation of two images acquired during satellite revisits. Unlike conventional relative orientation methods in frame photography, the proposed approach utilizes both the x and y parallaxes for the solution of the relative orientation. Since revisit images have a similar viewing geometry, the ill-conditioning problem arising from height undulations in the x-parallaxes does not occur. The proposed algorithm is tested on two Cartosat (IRS-P5) revisit images acquired over highly mountainous terrain. The accuracy figures indicate a sub-pixel precision of 0.30 and 0.47 pixels for the residual x and y parallaxes, respectively. The proposed algorithm is suitable for landslide applications in which highly accurate image co-registration is required.
A. R. Safari, M. A. Sharifi, A. Bahroudi, S. Parang,
Volume 4, Issue 2 (11-2014)
Abstract

This study aims at estimating the Moho depth, and hence the crustal thickness, in the Zagros zone using the Euler deconvolution method. Euler deconvolution is an automatic method for estimating the depth, shape and location of magnetic and gravity sources, based on using the field derivatives in Euler's homogeneity equation. In applying Euler deconvolution it is important to choose the structural index and the window size of the estimator, since these parameters strongly affect the final results. The study first uses the spherical harmonic coefficients of the EIGEN-GL04C geopotential model to compute the free-air gravity anomaly in the region φ = 29.25°-34.75°, λ = 48.25°-53.75°; the Moho depth is then estimated from the free-air gravity anomaly by Euler deconvolution for various structural indices and window sizes. The best structural index and window size were found by comparing the Euler deconvolution Moho depths with those of the receiver function method (based on seismic studies) at 14 seismic stations in the region, and finally the results were compared with the CRUST 2.0 model. The results indicate that a structural index of 0.5 and a window size of 40-45 km give the best Moho depth estimates for the region.
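
The window-wise least-squares solve at the core of Euler deconvolution can be sketched as follows. The arrays and the structural index are generic inputs; the gridding, windowing and solution-clustering steps of the study are omitted here.

```python
import numpy as np

def euler_window_solve(x, y, z, f, fx, fy, fz, struct_index):
    """Solve Euler's homogeneity equation in one window by least squares.

    x, y, z : coordinates of the window points, f : field values (anomaly),
    fx, fy, fz : field derivatives, struct_index : structural index N.
    Returns the estimated source position (x0, y0, z0) and background level B,
    from  (x - x0) fx + (y - y0) fy + (z - z0) fz = -N (f - B).
    """
    A = np.column_stack([fx, fy, fz, struct_index * np.ones_like(f)])
    b = x * fx + y * fy + z * fz + struct_index * f
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B
```
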
M. M. Shadmehri, M. A. Sharifi, V. Ebrahimzade Ardestani, A. R. Safari, A. Baghani,
Volume 4, Issue 4 (5-2015)
Abstract

In the field of optimization, the ant colony algorithm has been implemented successfully on a wide variety of problems. This algorithm is inspired by the behaviour of real ants in finding the shortest path from the nest to a food source. This behaviour is closely analogous to inverse problems in geophysics, which try to find the best solution for the unknowns of an observation model, so the idea is applied here to solving linear inverse problems. The aim of this article is the inversion of gravity data in linear form, i.e. with fixed geometric parameters the physical parameters (density contrasts) are modelled. To examine the performance of the algorithm, it was first tested on synthetic complex T- and L-shaped models, with and without noise. The outcomes show that inversion with the ant colony algorithm does not require the separation of interfering anomalies and can handle a combination of density contrasts. Finally, the proposed method was applied to regional gravity measurements from the Gotvand Dam area in Khuzestan Province. The inversion of these data revealed large-diameter cavities at depth in the region which, given the dam construction in the study area and the regional geological information, could cause serious environmental problems.
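
A much simplified ant-colony-style search for the density contrast of each cell in a linear forward model d = G m is sketched below. The pheromone update rule, parameters and discrete density options are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def aco_linear_inversion(G, d_obs, density_options, n_ants=30, n_iter=200, rho=0.1, rng=None):
    """Ant-colony-style search for cell density contrasts (illustrative sketch).

    G : (n_data, n_cells) forward operator, d_obs : observed gravity data,
    density_options : 1-D array of candidate density contrast values per cell.
    Pheromone trails bias the discrete choices of the ants towards low-misfit models.
    """
    rng = rng or np.random.default_rng(0)
    n_cells = G.shape[1]
    n_opts = len(density_options)
    tau = np.ones((n_cells, n_opts))                     # pheromone table
    best_model, best_misfit = None, np.inf
    for _ in range(n_iter):
        for _ in range(n_ants):
            probs = tau / tau.sum(axis=1, keepdims=True)
            choice = np.array([rng.choice(n_opts, p=probs[c]) for c in range(n_cells)])
            m = density_options[choice]
            misfit = np.linalg.norm(G @ m - d_obs)
            if misfit < best_misfit:
                best_model, best_misfit = m.copy(), misfit
            # deposit pheromone on the chosen options, more for better models
            tau[np.arange(n_cells), choice] += 1.0 / (1.0 + misfit)
        tau *= (1.0 - rho)                               # pheromone evaporation
    return best_model, best_misfit
```
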
A. Habibi, M. Motagh, M. A. Sharifi,
Volume 5, Issue 2 (11-2015)
Abstract

Using ascending and descending Envisat satellite images together with Multiple-Aperture Interferometry (MAI) and conventional interferometry techniques, we calculated the co-seismic deformation in the satellite line-of-sight and along-track (azimuth) directions for the 2003 Bam earthquake, and obtained the three orthogonal components of the displacement field from these geodetic measurements. To determine the fault geometry and the slip distribution on the fault plane, we then inverted the components using a genetic optimization method and the analytical Okada elastic half-space model. The maximum slip was about 2.5 metres along the roughly 30-km-long Bam fault, at an approximate depth of 4-5 km. From the inversion we estimated a seismic moment M0 of 7.6 × 10^18 N·m, corresponding to an earthquake of moment magnitude Mw 6.5. Using the bootstrap statistical method, we also estimated the 68% confidence intervals of the model parameters.
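
For reference, the quoted seismic moment converts to the stated magnitude via the standard moment-magnitude relation; the short check below is illustrative only.

```python
import math

def moment_magnitude(M0):
    """Moment magnitude from seismic moment M0 in N·m (IASPEI standard relation)."""
    return (math.log10(M0) - 9.1) / 1.5

print(round(moment_magnitude(7.6e18), 1))   # ~6.5, consistent with the abstract
```
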


A. R. Amiri-Simkooei, H. Ansari, M. A. Sharifi,
Volume 5, Issue 2 (11-2015)
Abstract

In many geodetic applications a large number of observations are measured to estimate the unknown parameters. The unbiasedness of the estimated parameters is only ensured if there is no bias (e.g. a systematic effect) or falsifying observations, also known as outliers. One of the most important steps towards obtaining a coherent analysis for the parameter estimation is the detection and elimination of outliers, which may appear to be inconsistent with the remainder of the observations or the model. Outlier detection is thus a primary step in many geodetic applications. There are various methods for handling outlying observations, among which a sequential data snooping procedure, known as the Detection, Identification and Adaptation (DIA) algorithm, is employed in the present contribution. An efficient data snooping procedure is based on Baarda's theory, in which blunders are detected element-wise and the model is adapted in an iterative manner. This method may become computationally expensive when there is a large number of blunders in the observations. An attempt is made to optimize this commonly used method for outlier detection. The optimization is performed to improve the computational time and complexity of the conventional method. An equivalent formulation is thus presented in order to simplify the elimination of outliers from an estimation set-up in a linear model. The method becomes more efficient when a large number of model parameters is involved in the inversion: in the conventional method this leads to a large normal matrix that has to be inverted repeatedly. Based on the recursive least-squares method, the normal matrix inversion is avoided in the presented algorithm. The accuracy and performance of the proposed formulation are validated on two real data sets. The application of this formulation has no numerical impact on the final result; it is identical to the conventional outlier elimination. The method is also tested in a simulation study to investigate the accuracy of the outlier detection in critical cases when a large fraction of the data is contaminated. In the application considered, it is shown that the proposed algorithm is faster than the conventional method by at least a factor of 3, and it becomes faster still as the number of observations and parameters increases.
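
One way to remove an observation from a least-squares solution without re-inverting the normal matrix, in the spirit of the recursive formulation described above, is a Sherman-Morrison downdate. The sketch below is an illustrative reading, not the paper's exact algorithm, and the variable names are assumptions.

```python
import numpy as np

def remove_observation(Ninv, u, a_i, y_i, w_i=1.0):
    """Downdate a least-squares solution after deleting one observation.

    N = A^T W A (normal matrix), u = A^T W y, current estimate x_hat = Ninv @ u.
    Removing observation i changes N by -w_i * a_i a_i^T; the Sherman-Morrison
    identity updates Ninv without a new matrix inversion.
    a_i : design-matrix row of the removed observation, y_i : its value, w_i : its weight.
    """
    Na = Ninv @ a_i
    gamma = 1.0 - w_i * (a_i @ Na)           # must stay > 0 (redundancy remains)
    Ninv_new = Ninv + w_i * np.outer(Na, Na) / gamma
    u_new = u - w_i * y_i * a_i              # right-hand side loses this observation
    x_new = Ninv_new @ u_new
    return Ninv_new, u_new, x_new
```
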


A. Ashrafzadeh Afshar, Gh. R. Joodaki, M. A. Sharifi,
Volume 5, Issue 4 (6-2016)
Abstract

The Gravity Recovery and Climate Experiment (GRACE) satellite mission provides a powerful tool for evaluating groundwater resources. In many cases, groundwater resources are nonrenewable, and monitoring the rates at which they are utilized is important for planning purposes. In this study, we have used GRACE Level 2 Release 05 data to evaluate groundwater resources across southern Iran (south of 34° N) from August 2002 to December 2010. We estimate monthly changes in total water storage (groundwater plus soil moisture plus surface water and snow) across this region using GRACE Level 2 Release 05 data from the Center for Space Research (CSR) at the University of Texas (data available at http://podaac.jpl.nasa.gov). We replace the GRACE degree-one spherical harmonic coefficients, which correspond to geocentre motion due to the Earth's mass redistribution, with those computed as described by Swenson et al. [2008]. We also replace the lowest-degree zonal harmonic coefficient, C20, which is related to the Earth's flattening, with values obtained from Satellite Laser Ranging (SLR). The effects of Glacial Isostatic Adjustment (GIA) are small in this region, but are nevertheless corrected with a GIA model. Striping effects in the GRACE data, caused by the nature of the GRACE measurement technique and the mission geometry, are reduced by applying a Gaussian smoothing function with a 350 km radius. The results show a large negative trend in total water storage centred over western and southern Iran. GRACE data have no vertical resolution, in the sense that it is impossible to use the GRACE data alone to determine how much of the mass variability comes from surface water or snow, how much comes from water stored in the soil, and how much comes from water in the subsoil layers (i.e., from groundwater). Because our goal is to isolate the changes in groundwater storage, it is necessary to first remove estimates of the other water storage components. Using output from a land surface model, a version of the Community Land Model (CLM4.5), to remove the contributions from soil moisture, snow, canopy storage and river storage, we conclude that most of the long-term water loss in southern Iran is due to a decline in groundwater storage. Our estimates show that the groundwater loss during this period proceeds at an average rate of 45 km³/yr. We compare our GRACE estimates over southern Iran with Iranian groundwater estimates obtained from 330 active observation wells used to monitor the level and quality of groundwater across this region. The results show that the conclusion of significant Iranian groundwater loss is further supported by the in situ well data. These estimates represent the combined effects of natural climate variability (e.g., drought) and human activities. Because CLM4.5 also includes unconfined aquifer storage, we can estimate anthropogenic groundwater trends by subtracting the CLM4.5 predictions of naturally occurring groundwater change from our total groundwater change estimates. These results indicate that 2.99 ± 1 km³/yr of the groundwater loss in southern Iran may be attributed to human withdrawals.
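
The 350 km Gaussian smoothing mentioned above is usually applied as per-degree weights on the Stokes coefficients. The sketch below uses the commonly quoted Jekeli recursion for those weights; the function and variable names are illustrative, and the recursion is only a simplified stand-in for the full GRACE post-processing chain.

```python
import numpy as np

def gaussian_degree_weights(radius_km, lmax=60, a_km=6371.0):
    """Per-degree weights of a Gaussian averaging kernel (Jekeli-type recursion).

    radius_km : smoothing radius (350 km in the study), lmax : maximum degree.
    The weights are normalised so that W[0] = 1; multiply the degree-l Stokes
    coefficients C_lm, S_lm by W[l] before synthesising the smoothed field.
    """
    b = np.log(2.0) / (1.0 - np.cos(radius_km / a_km))
    W = np.zeros(lmax + 1)
    W[0] = 1.0
    W[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for l in range(1, lmax):
        W[l + 1] = -(2 * l + 1) / b * W[l] + W[l - 1]
    return W

w = gaussian_degree_weights(350.0)
print(w[:5], w[60])
```
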


M. Sharif, A. A. Alesheikh,
Volume 5, Issue 4 (6-2016)
Abstract

The movement of objects takes place in geographical contexts. Context directly or indirectly influences the movement process and causes moving objects to react differently. Considering context in movement studies and in the development of movement models is therefore of vital importance. In particular, incorporating context can play a crucial role in measuring the similarity of object movements and their corresponding trajectories. Trajectories of moving point objects, besides their spatial and temporal dimensions, have another aspect, the contextual dimension. This dimension, however, has received little attention so far, and only a few studies in the trajectory analysis domain have investigated it. To this end, this research develops a method based on the Euclidean distance in which the individual spatial, temporal and contextual dimensions, as well as their combination, can be explored in the process of trajectory similarity measurement. Besides its simplicity, the method is developed so that every small change in each dimension is taken into account. To validate the proposed method and examine the role of contextual data in the similarity measurement of trajectories, three experiments are performed on a commercial airplane dataset. The geographical coordinates and altitude of the airplane serve as the spatial dimension, travel time as the temporal dimension, and airplane speed, wind speed and wind direction as the contextual dimension in these experiments.

The first experiment measures the correspondence of trajectories in the different dimensions. It also explores the role of the dimension weights, individually and in combination, in the similarity measurement process. The results demonstrate that the weights strongly affect the similarity values, while being entirely application dependent. They also confirm that context may increase or decrease trajectory similarity values; this effect can be seen in the average relative similarity of the commercial airplane trajectories in the spatial (0.60), spatio-temporal (0.51) and spatio-temporal-contextual (0.46) dimensions. Context can both enhance and restrict movement. To illustrate this, the second experiment explores how movement and geographical contexts interact in the similarity measurement process, comparing four sample trajectories with respect to different dimensions. For one pair of trajectories, the relative similarity in the spatial dimension is 0.04; incorporating time increases this to 0.30 in the spatio-temporal dimension, and given the high similarity of these two trajectories in wind direction, wind speed and airplane speed (0.85), their final similarity becomes 0.48. In contrast, for another pair of trajectories the spatial and spatio-temporal similarity values are 0.85 and 0.91, respectively; considering their similarity in wind direction, wind speed and airplane speed (0.37), the final relative similarity becomes 0.73. The third experiment explores the role of the motivation context in the similarity measurement process. Although such context is very difficult to capture and in many applications will remain inaccessible, we consider the pilots' decisions in handling the airplanes during the approach and landing phases (i.e., continuous descent final approach versus dive-and-drive) as the motivation context in this application. Choosing either of these techniques strongly affects the shape of the trajectories, and this can be quantified by measuring trajectory similarity in the spatial and spatio-temporal dimensions. All in all, the results of the above experiments demonstrate the robustness of the proposed method for trajectory similarity measurement as well as its sensitivity to slight changes in the dimensions.
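
A minimal sketch of the kind of weighted, dimension-wise Euclidean similarity described above is given below. The column layout, weighting scheme and the distance-to-similarity mapping are illustrative assumptions rather than the exact measure of the paper.

```python
import numpy as np

def weighted_trajectory_similarity(traj_a, traj_b, weights):
    """Relative similarity of two trajectories with equal numbers of points.

    traj_a, traj_b : (n_points, n_dims) arrays whose columns mix spatial,
    temporal and contextual dimensions (each column normalised beforehand).
    weights : per-dimension weights reflecting their importance.
    Returns a value in (0, 1]; 1 means identical trajectories.
    """
    diff = traj_a - traj_b
    mean_dist = np.sqrt((weights * diff ** 2).sum(axis=1)).mean()  # mean weighted distance
    return 1.0 / (1.0 + mean_dist)                                 # map distance to similarity

# Hypothetical columns: x, y, altitude, time, airplane speed, wind speed, wind direction
w_spatial          = np.array([1, 1, 1, 0, 0, 0, 0])
w_spatio_temporal  = np.array([1, 1, 1, 1, 0, 0, 0])
w_full_contextual  = np.array([1, 1, 1, 1, 1, 1, 1])
```
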


M. Malekpour Golsefidi, F. Karimipour, M. A. Sharifi,
Volume 5, Issue 4 (6-2016)
Abstract

Nowadays, considering the importance of marine commerce, monitoring marine navigation and routing ships are important issues. Moreover, accounting for the weather conditions of the marine environment is vital in order to minimize damage to vessels and risk to crews and cargo. Hence, weather routing is absolutely crucial. In addition, because voyages are costly, voyage duration is one of the essential parameters of weather routing.

The aim of this research is to minimize the voyage duration with respect to the weather conditions. The marine environment is represented by a grid of weather data with a resolution of 0.25° that is updated every 6 hours; the data are downloaded from the European Centre for Medium-Range Weather Forecasts (ECMWF). The weight of each edge is then calculated with respect to the time at which the vessel passes the edge. Travel time depends on the impact of waves, wind and sea depth on the vessel's speed, which is computed based on the Kwon method and Lackenby's formula. Finally, the Dijkstra algorithm is applied to calculate the optimum route.

The study area is located in the northern Indian Ocean, covering the Persian Gulf, the Oman Sea and the Arabian Sea. The model is implemented in two different weather conditions (calm and rough) to calculate the minimum-time route between Pipavav port in India and Bushehr port. The results indicate that although the voyage distance increases in this model, the voyage duration decreases, so the cost of the voyage drops noticeably. In addition, in calm weather the depth of the marine environment determines the route, because there are no high seas or storms in front of the ship. In rough weather the weather parameters (wind speed and direction, wave height and direction) have more effect than the depth parameter, in order to keep the ship out of high seas in which its speed would drop dramatically. Moreover, the results show that in bounded seas like the Persian Gulf, whose area is small relative to the spatial resolution of the weather data and where the weather conditions of neighbouring cells do not differ appreciably, the depth parameter is the critical factor determining the journey's path.
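
The routing step can be sketched as a time-dependent Dijkstra search in which each edge cost is the sailing time under the forecast weather. In the sketch below, the graph representation and the travel_time callback are placeholders for the Kwon/Lackenby speed-loss model and the 6-hourly weather grid described above.

```python
import heapq

def min_time_route(graph, source, target, start_time, travel_time):
    """Dijkstra search on a grid graph whose edge cost is the voyage time.

    graph : dict mapping a node (e.g. an (i, j) grid cell) to its neighbours.
    travel_time(u, v, t) : hours needed to sail edge (u, v) when departing at
        time t; speed loss due to wave, wind and depth would be applied inside it.
    Returns (arrival_time_at_target, route as a list of nodes).
    """
    best = {source: start_time}
    prev = {}
    heap = [(start_time, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            break
        if t > best.get(u, float("inf")):
            continue                          # stale heap entry
        for v in graph[u]:
            t_arr = t + travel_time(u, v, t)  # time-dependent edge weight
            if t_arr < best.get(v, float("inf")):
                best[v] = t_arr
                prev[v] = u
                heapq.heappush(heap, (t_arr, v))
    route, node = [target], target
    while node != source:
        node = prev[node]
        route.append(node)
    return best[target], route[::-1]
```
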


Sh. Sharifi Hashjin, A. Darvishi Boloorani, S. Khazai,
Volume 6, Issue 1 (10-2016)
Abstract

A hyperspectral image contains hundreds of narrow and contiguous spectral bands and therefore provides valuable information about Earth-surface objects. Target detection (TD) is a fast-growing research field in the processing of hyperspectral images, and developing TD algorithms has received growing interest in recent years. The aim of TD algorithms is to find specific targets with known spectral signatures. Nevertheless, the enormous amount of information provided by hyperspectral images increases the computational burden as well as the correlation among spectral bands. Besides, even the best TD algorithms exhibit a large number of false alarms due to spectral similarity between target and background, especially at the sub-pixel level, where the target of interest is smaller than the ground pixel size of the image. Thus, dimensionality reduction is often carried out as one of the most important steps before TD, both to maximize detection performance and to minimize the computational burden. However, in hyperspectral image analysis few studies have addressed dimension reduction or band selection for target detection, in comparison with the hyperspectral image classification field. Band selection nevertheless has a great impact on remote sensing processing, because selecting an optimal subset of bands reduces the dimensionality and the computational burden of hyperspectral image processing.

This paper presents a simple method to improve the efficiency of sub-pixel TD algorithms by removing bad bands in a supervised manner. The idea behind the proposed method is to compare the field and laboratory spectra of the desired target in order to detect bad bands. Since the laboratory spectrum of a target is measured under standard conditions with minimal noise and atmospheric effects, it can be considered an ideal spectrum. The recorded field reflectance spectrum, on the other hand, is affected by surrounding objects such as vegetation cover and by atmospheric effects, especially water-vapour absorption. The spectrum also becomes progressively noisier at longer wavelengths owing to the reduced radiance of the illumination source, i.e., the sun. In this way, bad bands can be identified in the field spectrum by comparing it with the laboratory spectrum of the target of interest. By fitting a normal distribution to the laboratory-field spectral differences of all corresponding bands, the best bands are selected and passed to the target detection methods.

For evaluation, the proposed method is compared with six popular band selection methods implemented in PRTools, and the false alarm rate is used for validation. The comparison is carried out with two well-known sub-pixel TD algorithms, the adaptive coherence estimator (ACE) and constrained energy minimization (CEM), on the target detection blind test dataset. This dataset includes two HyMap radiance and reflectance images of Cooke City, Montana, USA, acquired by an airborne HyMap sensor with 126 spectral bands and a ground sample distance of 3 m, and contains 10 sub-pixel targets located in an open grass region. Experimental results show that the proposed method improves the efficiency of ACE and CEM compared with the other band selection methods used; in only 12 percent of the target detection experiments did the results degrade. Moreover, high speed, simplicity, low computational burden and short processing time are advantages of the proposed method.
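
A compact sketch of the two ingredients described above, flagging bad bands from the laboratory-field difference and running a CEM detector on the retained bands, is given below. The keep_sigma rule is an illustrative reading of the normal-distribution fit, not the authors' exact criterion.

```python
import numpy as np

def select_good_bands(lab_spectrum, field_spectrum, keep_sigma=1.0):
    """Flag 'good' bands by comparing field and laboratory target spectra.

    A normal distribution is fitted to the band-wise differences; bands whose
    difference lies within keep_sigma standard deviations of the mean are kept,
    the rest are treated as bad (noisy / water-vapour) bands.
    """
    diff = field_spectrum - lab_spectrum
    mu, sigma = diff.mean(), diff.std()
    return np.abs(diff - mu) <= keep_sigma * sigma   # boolean mask of kept bands

def cem_detector(X, target):
    """Constrained energy minimization (CEM) sub-pixel detector.

    X : (n_pixels, n_bands) image matrix (kept bands only),
    target : (n_bands,) target signature for the same bands.
    """
    R = X.T @ X / X.shape[0]                 # sample correlation matrix
    Rinv_t = np.linalg.solve(R, target)
    w = Rinv_t / (target @ Rinv_t)           # CEM filter weights
    return X @ w                             # detection score per pixel
```
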


E. Kiana, S. Homayouni, M. A. Sharifi, M. R. Faridrohani,
Volume 6, Issue 2 (12-2016)
Abstract

Nowadays, Earth observation (EO) technology has become an indispensable tool for monitoring the environment and its changes, for natural resources management, urban planning and development, water management and land use planning. In particular, radar EO data, unlike optical data, can be collected regardless of illumination and weather conditions. Multitemporal polarimetric synthetic aperture radar (PolSAR) images are a useful source of information for detecting and mapping environmental changes, especially over wide areas, day and night and in all weather conditions. Change detection methods identify change or no-change conditions in land cover using time series of observations. In this paper a method is proposed for change detection in SAR remote sensing images, based on change point analysis. The cumulative frequency of the difference image, which contains the environmental changes, normally follows a specific class of statistical distribution, and the Gaussian mixture model is one of the most suitable models for change point analysis, since its mixture parameters can be estimated efficiently. The intersection point of the two component distributions is a change point, which can be regarded as a threshold; this threshold is then used to separate the change and no-change classes. The proposed method is implemented and analyzed using three SAR data sets. The evaluation of the final change maps from two of these data sets against reference data gave Kappa coefficients of 90% and 96%, respectively. The third data set consists of multitemporal PolSAR images acquired over an agricultural area; the changes in these images could reliably be related to agricultural activities, such as crop growth stages and harvesting, based on an available crop map. Finally, the method was evaluated against the Otsu method, one of the best threshold estimation methods, and the results showed the superiority of the proposed method, e.g. a Kappa coefficient about 2% higher. As a result, the proposed method can be efficiently employed for land cover change detection and monitoring in natural resources management.
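
The thresholding step can be sketched as fitting a two-component Gaussian mixture to the difference image and taking the intersection of the two component densities as the change point. The scikit-learn based code below is an illustrative sketch that assumes reasonably separated components; it is not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq
from sklearn.mixture import GaussianMixture

def gmm_change_threshold(diff_image):
    """Threshold a difference image with a two-component Gaussian mixture.

    The intersection of the 'no change' and 'change' component densities,
    searched between their means, is returned as the change point / threshold.
    """
    x = np.asarray(diff_image).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    m = gmm.means_.ravel()
    s = np.sqrt(gmm.covariances_.ravel())
    w = gmm.weights_
    order = np.argsort(m)                     # component 0 = no change, 1 = change
    m, s, w = m[order], s[order], w[order]
    f = lambda t: w[0] * norm.pdf(t, m[0], s[0]) - w[1] * norm.pdf(t, m[1], s[1])
    return brentq(f, m[0], m[1])              # root between the two means

# change_map = diff > gmm_change_threshold(diff)   # diff: absolute difference image
```
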


A. Azizi, A. Hadiloo, M. A. Sharifi,
Volume 7, Issue 3 (2-2018)
Abstract

Determination of the displacement vectors caused by the landslide phenomenon is the initial stage in generating landslide inventory and susceptibility maps. The main objective of this paper is to evaluate the achievable accuracy of a simple two-dimensional localized geometric transformation method for the determination of landslide displacement vectors, and to compare the image-based displacement vectors with ground measurements. For this purpose, two IRS-P5 backward images taken in two different revisit epochs over Ardabil Province are used. The 2D localized transformation approach may be utilized to produce small- and medium-scale country-wide landslide inventory maps and for continuous monitoring of areas susceptible to large landslides. Given that the coverage of landslide zones in satellite imagery is usually small compared with the entire image frame, the achieved accuracy figures suggest that the localized transformation approach can fulfil the required demands. On-site displacement measurements performed with GPS receivers, conducted by the ISTA SANJ DAGHIGH Co., were utilized to assess the accuracy of the localized transformation method. To compare the displacement vectors generated from the images with the ground observations, spatial registration between the points measured on the image and the points measured on the ground was carried out using the satellite-supplied RPCs, through which the ground coordinates of the measured image points are calculated. However, because the supplied RPCs have a systematic shift error, this shift must be eliminated before generating ground coordinates for the measured image points. This is achieved by identifying fixed feature points (outside the landslide zone) on the stereo P5 images for which ground observations were available from the large-scale map of the area; the RPC shift error is then calculated by comparing the ground coordinates of the fixed points generated by the RPCs with their corresponding ground coordinates extracted from the large-scale map. On the other hand, because the dates of image acquisition and ground observation did not coincide, a temporal interpolation was also necessary. The temporal registration was performed using a linear interpolation approach assuming a linear land displacement: the average landslide speed on the ground was adopted as the linear displacement trend for the period over which the assessment was conducted. The outline of the evaluation approach, a detailed description of the test results and the final accuracy figures are presented in this paper.
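
A minimal sketch of a localized 2-D (affine) transformation, estimated from stable points outside the slide zone and used to predict where slide-zone points "should" lie in the second epoch, with the residual taken as the displacement vector, is given below. It is a simplified stand-in for the paper's localized transformation, with illustrative names.

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst (stable points).

    src, dst : (n, 2) arrays of corresponding image coordinates in the two epochs.
    Returns the 6 affine parameters (a, b, c, d, e, f).
    """
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src; A[0::2, 2] = 1.0    # x' = a x + b y + c
    A[1::2, 3:5] = src; A[1::2, 5] = 1.0    # y' = d x + e y + f
    rhs = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params

def apply_affine_2d(params, pts):
    """Apply the fitted affine transform to an (n, 2) array of points."""
    a, b, c, d, e, f = params
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a * x + b * y + c, d * x + e * y + f])

# Illustrative use:
# params = fit_affine_2d(stable_epoch1, stable_epoch2)
# predicted = apply_affine_2d(params, slide_epoch1)
# displacement_vectors = slide_epoch2 - predicted
```
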



Journal of Geomatics Science and Technology