Showing 40 results for Optimization
F. Hosseinali, A. A. Alesheikh, F. Nourian, Volume 3, Issue 2 (11-2013)
Abstract
Urban land-use expansion is a challenging issue in developing countries, as it usually degrades other land uses such as agriculture and natural resources. Because urban expansion is unavoidable, urban planners continually seek the optimum regions for development, and owing to the spatial nature of the problem, GIS is widely used.
The complexity of urbanization cannot be handled well by simple GIS functions such as overlay; land-use development can instead draw on advanced optimization functions integrated in a GIS environment, which take numerous conflicting criteria into account.
Even when all plausible assumptions and constraints are taken into account, the optimum result may not be implementable. It is therefore desirable to suggest more than one (semi-)optimal solution, so that if implementing one solution proves too complex, the others can be examined.
In this study, Modeling to Generate Alternatives (MGA) is used to determine the best sites for future urban land uses in a 1620 square kilometer area in the vicinity of Qazvin. This method, which is based on linear programming (binary integer programming), can produce suitable alternatives. A distinguishing feature of the implemented method is the Density-Based Design Constraint (DBDC), which guarantees the contiguity and compactness of the suggested sites and prevents scattered development. The results not only offer the optimum sites for future urban land-use development but also provide alternative options to choose from if the optimum solution is not desired.
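As a toy illustration of the kind of binary integer formulation described above (not the authors' exact MGA/DBDC model), the following Python sketch selects a fixed number of grid cells to maximize suitability plus a small contiguity bonus; the suitability scores, grid size, and bonus weight are all hypothetical.

```python
# Toy binary integer site selection: choose exactly K grid cells to maximize
# total suitability plus a contiguity bonus (a crude stand-in for DBDC).
# Exhaustive enumeration is only viable for such tiny instances.
from itertools import combinations

import numpy as np

ROWS, COLS, K = 4, 4, 4
rng = np.random.default_rng(0)
suitability = rng.random((ROWS, COLS))      # hypothetical suitability scores

def contiguity_bonus(cells, weight=0.1):
    """Reward 4-neighbour adjacency among selected cells (compactness)."""
    s = set(cells)
    return weight * sum(((r + 1, c) in s) + ((r, c + 1) in s) for r, c in s)

best_score, best_sites = -np.inf, None
for cells in combinations(list(np.ndindex(ROWS, COLS)), K):  # all 0/1 patterns with K ones
    score = sum(suitability[r, c] for r, c in cells) + contiguity_bonus(cells)
    if score > best_score:
        best_score, best_sites = score, cells

print("best sites:", best_sites, "score:", round(best_score, 3))
```

A real MGA run would solve this repeatedly, each time constraining the objective near its optimum while forcing the new selection to differ from earlier ones, which is what yields the alternative plans.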
E. Ferdosi, F. Samadzadegan, Volume 3, Issue 3 (2-2014)
Abstract
Because polarimetric imagery uses different polarizations of the electromagnetic wave, it provides a rich source of information on several aspects of targets. Recently, polarimetric images have attracted interest as a powerful and efficient tool for identifying various objects in complex geographic areas. Classification of polarimetric images plays an important role in extracting this information. Support Vector Machines (SVMs), owing to their operation based on geometrical characteristics and their robustness in high-dimensional spaces, are well suited to the classification of polarimetric images. However, the performance of the SVM classifier is strongly influenced by its parameters, so optimum parameter values must be determined to achieve maximum efficiency. Traditional optimization techniques, because of the computational complexity of the large search space, usually become trapped in local optima. It is therefore necessary to apply meta-heuristic algorithms, which balance exploration and exploitation to reach the global optimum. In this paper, the potential of the Genetic, Bees, and Particle Swarm Optimization (PSO) algorithms as powerful techniques for determining the optimum SVM parameters is evaluated. A comparison of the results demonstrates the superior performance of the PSO algorithm in terms of classification accuracy and speed of convergence.
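A minimal sketch of how PSO can tune the SVM parameters C and gamma by cross-validated accuracy; the swarm settings, search ranges, and toy dataset are our assumptions, not taken from the paper.

```python
# Minimal PSO for tuning SVM (C, gamma); fitness = 3-fold CV accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(p):                        # p = (log10 C, log10 gamma)
    clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
    return cross_val_score(clf, X, y, cv=3).mean()

n, dim, w, c1, c2 = 10, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-3, 3, (n, dim))     # search in log10 space
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]

for _ in range(20):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()]

print("best log10(C), log10(gamma):", gbest, "CV accuracy:", pbest_f.max())
```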
M. Alisufi, B. Voosoghi, Volume 4, Issue 2 (11-2014)
Abstract
In studies of crustal deformation, volcanic models give valuable insight into the features of volcanoes and their behavior over time. Modeling the displacement field with analytical models requires determining the geophysical and geological parameters of the volcanic reservoir. Therefore, the volcanic magma reservoir was modelled by solving an inverse problem, using the displacement field obtained from geodetic observations as boundary values for the analytical deformation models. This modeling was performed with a genetic evolutionary optimization algorithm for the Damavand volcano; the displacement field of the Campi Flegrei volcano during the years 2000 to 2001 was also modelled. Compared with previous studies, the research demonstrated favorable results. The displacement-field modelling of the Damavand volcano showed unexpected residuals (relative to the magnitude of the displacement field) in some areas, indicating the presence of displacement sources in the region in addition to the volcanic deformation source. The results indicate a reduction in the volume of the volcanic magma reservoir at a rate equal to 0.001, located at a depth of 5.6 km below the ground; this volume reduction causes a small displacement in the area near the volcanic source. The modelled 2D coordinates of the source center in the Lambert projection were (4831.999E, 31328.536N) km, and an RMSE of 2 mm was achieved for the inversion results.
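The abstract does not name the analytical deformation model; a standard choice for a point magma source is the Mogi model. The sketch below, under that assumption, gives the forward model and the RMSE misfit a genetic algorithm would minimize over the source parameters (the 5.6 km depth and the deflation sign follow the reported result; all other values are placeholders).

```python
# Mogi point-source forward model (Poisson ratio 0.25) and the RMSE misfit a
# GA would minimize over (x0, y0, depth, dV). Illustrative, not the paper's
# confirmed model.
import numpy as np

def mogi_uz(x, y, x0, y0, depth, dV):
    """Vertical surface displacement of a Mogi source with volume change dV."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return 3.0 * dV / (4.0 * np.pi) * depth / (r2 + depth ** 2) ** 1.5

def misfit(params, x, y, uz_obs):
    """RMSE between observed and modelled displacements (GA fitness)."""
    return np.sqrt(np.mean((uz_obs - mogi_uz(x, y, *params)) ** 2))

# Toy usage: synthetic observations from a known source, then evaluate misfit.
x, y = np.meshgrid(np.linspace(-10e3, 10e3, 21), np.linspace(-10e3, 10e3, 21))
uz_obs = mogi_uz(x, y, 0.0, 0.0, 5600.0, -1e6)   # 5.6 km deep, deflating
print("misfit at truth:", misfit((0.0, 0.0, 5600.0, -1e6), x, y, uz_obs))
```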
M. M. Shadmehri, M. A. Sharifi, V. Ebrahimzade Ardestani, A. R. Safari, A. Baghani, Volume 4, Issue 4 (5-2015)
Abstract
In the field of optimization, the ant colony algorithm has been implemented successfully on a wide variety of problems. The algorithm is inspired by how real ants find the shortest path from nest to food, a behavior closely analogous to inverse problems in geophysics, which seek the best solution for the unknowns of an observation model. This idea is therefore applied here to solving linear inverse problems. The aim of this article is the inversion of gravity data in linear form, meaning that the physical parameters are modeled while the geometric parameters are held constant. To examine the performance of the algorithm, it was first tested on artificial complex T- and L-shaped models, both with and without noise. The outcomes show that inversion with the ant colony algorithm does not require separating interfering anomalies and can be used for a combination of density contrasts. Finally, the proposed method was applied to regional gravity measurements from the Gotvand Dam region. The inversion of these data indicated large-diameter holes at depth in the region which, according to the regional geological information, could cause serious environmental problems as a result of dam construction in the study area.
S. Alaei Moghadam, M. Karimi, M. Mohammadzadeh, Volume 4, Issue 4 (5-2015)
Abstract
Urban land use planning, one of the main components of urban planning, is typically defined as a multi-objective problem of optimally using urban space and existing facilities. Among numerous candidate land use maps, urban planners are usually interested in choosing the one closest to the optimal land use map of a desired vision. Reference-point multi-objective optimization algorithms make it possible to introduce the optimal values of the different objectives as a reference point and to produce optimal solutions near that point. In this study, the implementation and efficiency of the Reference-Point Non-dominated Sorting Genetic Algorithm II (R-NSGA-II) for urban land use allocation is investigated, and a method for chromosome coding is proposed. Maximizing the compatibility of adjacent land uses, land suitability, and accessibility to roads and main socio-economic centers, and minimizing the resistance of land uses to change, are defined as the main objectives. The optimal values of the objectives were then introduced to the algorithm as reference points. Consequently, planners are able to select among the proposed land use maps according to their priorities. The results of land use allocation modeling for the city of Shiraz in 2011 indicate that the decision maker can reach a better decision with more confidence than in situations with a single solution. This achievement demonstrates the ability of the proposed model to simulate different scenarios in land use planning.
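One plausible chromosome coding for this kind of problem (illustrative only, not necessarily the coding proposed in the paper): one integer gene per parcel, decoded to a grid, with adjacency compatibility evaluated as an example objective. The use classes and compatibility scores below are made up.

```python
# Integer-coded land-use chromosome: gene i = use class of parcel i.
import numpy as np

USES = ["residential", "commercial", "green", "industrial"]
# Hypothetical pairwise compatibility scores between adjacent uses.
COMPAT = np.array([[1.0, 0.6, 0.9, 0.1],
                   [0.6, 1.0, 0.5, 0.4],
                   [0.9, 0.5, 1.0, 0.2],
                   [0.1, 0.4, 0.2, 1.0]])
ROWS, COLS = 5, 5

def compatibility(chrom):
    """Mean compatibility over horizontally/vertically adjacent parcels."""
    g = np.asarray(chrom).reshape(ROWS, COLS)
    pairs = np.concatenate([COMPAT[g[:, :-1], g[:, 1:]].ravel(),
                            COMPAT[g[:-1, :], g[1:, :]].ravel()])
    return pairs.mean()

rng = np.random.default_rng(1)
chrom = rng.integers(0, len(USES), ROWS * COLS)   # one random individual
print("adjacency compatibility:", round(compatibility(chrom), 3))
```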
A. Alizadeh Naeini, M. Saadatseresht, S. Homayouni, A. Jamshidzadeh, Volume 4, Issue 4 (5-2015)
Abstract
One of the most important applications of hyperspectral data analysis is supervised or unsupervised classification for land cover mapping. Among unsupervised methods, partitional clustering has attracted much attention due to its performance and efficient computation time. The success of partitional clustering of hyperspectral data is a function of five parameters: 1) the number of clusters, 2) the position of the clusters, 3) the number of bands, 4) the spectral position of the bands, and 5) the similarity measure. Partitional clustering can therefore be considered an optimization problem whose goal is to find the optimal values of these parameters. Depending on which of the five parameters enter the optimization, four different scenarios are considered in this paper, each resolved by particle swarm optimization; the goal is to find the solution yielding the best accuracy. It should be noted that, of the five clustering parameters, the similarity measure and the number of clusters were held fixed to prevent over-parameterization. Investigations on a simulated dataset and two real hyperspectral datasets showed that the scenario in which the number of bands is reduced in a pre-processing stage, using either band clustering in the data space or PCA in the feature space, yields the highest accuracy and efficiency for thematic mapping.
Z. Masoomi, M. S. Mesgari, Volume 5, Issue 1 (8-2015)
Abstract
In urban space, the need for different facilities and diverse land uses increases continuously. Continuous changes in the demands of citizens result in rapid and frequent land use changes; therefore, the dynamic character of the urban environment should be considered in urban planning. Land uses also affect one another: any change in the land use of a parcel or zone creates a tendency for neighboring parcels to change as well. Proposing new land use arrangements after any occurring land use change could therefore be a proper response to such tendencies. The main goal of this study is to propose a method, based on GIS and the NSGA-II optimization algorithm, for generating optimum land use arrangements after any occurring land use change.
Usually, the output of a multi-objective optimization algorithm is a collection of optimum solutions, and selecting an appropriate solution from such a collection requires extra effort and processing. Another goal of this research is therefore to use a suitable clustering method that helps the user select the most preferred solution. With such a method, the decision maker can state his planning priorities, examine the resulting scenario, and select accordingly. In this research, an ant colony clustering algorithm is used, first because of its high speed, and second because the cluster representatives are selected from among the Pareto front solutions.
For modelling land use change and its driving factors, four objective functions are considered: maximization of land use compatibility, maximization of land use dependency, maximization of land use suitability, and maximization of land use compactness. The provision of per capita requirements for the different land uses is treated as a constraint. The ant colony clustering algorithm is used for clustering the solutions found (land use arrangements). The developed method is implemented and tested using data for several districts of Region 7 of Tehran.
Different evaluations were carried out on the optimization results, including the convergence trend, a repeatability test, and a comparison of the previous land use arrangement with the optimized ones. In the resulting optimized arrangements, the values of the objective functions are much better than in the previous arrangement, and the required per capita levels for the different land uses are much better satisfied. The highest improvement in the objective functions is 36%, in land suitability, and the required per capita provision is improved by 18.5%. The results of clustering with the ant colony algorithm were compared with those of K-means and fuzzy K-means; the comparison showed that the ant colony clustering algorithm is faster and that its outputs are exactly the original solutions of the land use arrangement optimization.
Finally, the developed method can help urban planners and decision makers correct and change detailed urban plans in response to any occurring land use change. One limitation of detailed urban land use plans is that they are not flexible and cannot adapt to deviations from the plan. This research is one step in the development of a general approach to dynamic urban planning, an approach that can respond to the continuous and dynamic changes of land uses in urban space.
A. A. Heidari, R. A. Abaspour, Volume 5, Issue 1 (8-2015)
Abstract
Unmanned aerial systems (UAS) are among the latest technologies utilized in hazard management and remote sensing. The current tendency in UAS development is toward autonomous navigation or hybrid tasks. In this context, the development of comprehensive, efficient methodologies for path planning, control, and navigation of UAS can be regarded as one of the fundamental steps toward autonomous systems. Various planning algorithms have been proposed in the specialized literature to enrich the framework of autonomous UAS navigation, but few efforts have been devoted to designing new chaotic path planners for determining the optimal trajectories of these aerial systems in urban areas. An effective path planning technique can attain mission aims while respecting the various restrictions of the UAS and requiring less computational time.
Chaos theory is one of the most studied theories, with many applications in engineering and technology; many natural processes, such as black holes and clouds, exhibit chaotic behavior. Past research has shown that hybridizing an evolutionary algorithm with chaos can improve its performance considerably. Although most evolutionary algorithms are inspired by nature, all of their steps are purely random motions, whereas nature is neither completely random nor completely chaotic; a combination of the two theories should therefore be more realistic. In this regard, evolution and chaos are closely related in most complex natural systems, and there is evidence that some chaotic signals can alleviate the premature convergence problem of evolutionary algorithms in optimization.
In this article, UAS path planning is first modeled as a 3D constrained optimization problem in which the aim is to optimize path, fuel, and safety with respect to different restrictions. After designing a general planning framework, the UAS path planning problem is investigated through a comparative study of the considered scenario. An evolutionary planner is implemented to minimize flight height, path length, and energy consumption subject to restrictions such as safe altitude, turning angle, climbing slope, gliding slope, no-fly zones, and mission map limits. A comprehensive model is then employed to describe the route-planning task, and, based on the hybridization of chaos theory with evolutionary computing, four new evolutionary optimizers are developed: chaotic variants of particle swarm optimization, differential evolution, the imperialist competitive algorithm, and the artificial bee colony technique, based on 14 chaotic signals.
In the remainder of the paper, extensive performance evaluations of the designed trajectory-planning approaches are carried out in terms of success rate, precision and quality of results, CPU running times, and convergence speed. The results show that the proposed framework can serve as an effective path planner in the represented scenario, and the proposed strategies compute optimal paths more efficiently than the standard algorithms. The results indicate that chaotic differential evolution with the logistic map outperforms the other compared algorithms.
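A compact sketch of the chaotic-DE idea the paper reports as best performing: a DE/rand/1/bin loop in which the crossover draws come from a logistic map instead of a uniform random generator. The sphere function stands in for the real trajectory cost, and all algorithm settings are our assumptions.

```python
# Chaotic differential evolution: the binomial crossover is driven by the
# logistic map x_{n+1} = 4 x_n (1 - x_n) rather than rand().
import numpy as np

def logistic_stream(x=0.7):
    """Chaotic signal on (0, 1)."""
    while True:
        x = 4.0 * x * (1.0 - x)
        yield x

def chaotic_de(cost, dim=5, pop=20, gens=100, F=0.6, CR=0.9, lo=-5.0, hi=5.0):
    chaos = logistic_stream()
    rng = np.random.default_rng(0)
    X = rng.uniform(lo, hi, (pop, dim))
    f = np.array([cost(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = np.clip(X[a] + F * (X[b] - X[c]), lo, hi)
            mask = np.array([next(chaos) < CR for _ in range(dim)])  # chaotic crossover
            mask[int(next(chaos) * dim) % dim] = True                # keep one mutant gene
            trial = np.where(mask, mutant, X[i])
            ft = cost(trial)
            if ft < f[i]:                                            # greedy selection
                X[i], f[i] = trial, ft
    return X[f.argmin()], f.min()

best, val = chaotic_de(lambda x: float(np.sum(x ** 2)))  # sphere test function
print("best cost:", val)
```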
V. Sadeghi, H. Ebadi, A. Mohammadzadeh, F. Farnood Ahmadi, Volume 5, Issue 3 (2-2016)
Abstract
Timely and accurate detection of land cover/use changes is one of the most important issues in land planning and management. Remote sensing (RS) images have become an important data source for change detection (CD) in recent decades, and thresholding of the difference image (DI) is a prevalent approach for RS-based CD. Changes in an environment occur in such a way that the different spectral changes of a phenomenon are detectable in different parts of the electromagnetic spectrum; hence, using several spectral bands can offer higher accuracy in the CD process. However, prevalent thresholding techniques were developed for a one-dimensional space and are not appropriate for CD in the multi-dimensional space of RS images. The common approach to overcoming this deficiency is to fuse data at the feature and/or decision level, and several methods have already been developed for this purpose. While it is difficult to decide which data fusion technique is the most appropriate, a common characteristic of these approaches (except voting and Bayesian) is their supervised nature: the analyst must determine parameters that best fit a given application and dataset. Unsupervised approaches, on the other hand, generally have low accuracy in the CD process.
To extend thresholding to multi-spectral images, a simple yet effective data fusion approach is proposed in this paper. The developed method is a fusion-based linear combination of the multi-spectral change image, whose weights are optimized with the Particle Swarm Optimization (PSO) algorithm.
The proposed approach consists of two major steps. In the first step, a multi-spectral change image is generated. Several methods can be used for this purpose; in this research, difference imaging was chosen as it is simple to implement and easy to interpret, consisting of a straightforward arithmetic difference between the digital values of the two images acquired on different dates. In the next step, PSO is initialized with arbitrary weights and the weighted image fusion is carried out as FCI = Σ_i w_i · DI_i, where w_i denotes the weight associated with the ith band of the multi-spectral difference image (DI_i), such that Σ_i w_i = 1. Afterwards, the Otsu thresholding technique is applied to produce a binary change mask (BCM) and evaluate the fitness of the fused change index (FCI). If either termination condition (optimum fitness or maximum number of iterations) is satisfied, the current weights are saved as the optimum weights of the weighted linear combination; otherwise they are updated with the PSO algorithm to approach the optimum values.
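The per-particle fitness evaluation of this loop might look like the following sketch: the band weights are normalized to sum to one, the fused change index is formed, and Otsu's between-class variance serves as the fitness (a stand-in, since the abstract does not spell out the fitness function).

```python
# One PSO particle's evaluation: weighted fusion of DI bands + Otsu fitness.
import numpy as np

def fused_change_index(di, w):
    """di: (bands, H, W) difference image; w: non-negative band weights."""
    w = np.asarray(w, float)
    w = w / w.sum()                          # enforce sum(w_i) = 1
    return np.tensordot(w, di, axes=1)       # FCI = sum_i w_i * DI_i

def otsu_fitness(fci, bins=256):
    """Between-class variance at the optimal Otsu threshold (higher = better)."""
    hist, edges = np.histogram(fci, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                        # cumulative class probability
    mu = np.cumsum(p * centers)              # cumulative class mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return np.nanmax(sigma_b)

rng = np.random.default_rng(0)
di = rng.random((6, 100, 100))               # toy 6-band difference image
print("fitness:", otsu_fitness(fused_change_index(di, rng.random(6))))
```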
The performance of the developed technique is evaluated on bi-temporal multispectral images acquired by the Landsat-5 Thematic Mapper (TM) sensor in July of 2000 and 2009. This data set has a spatial resolution of 30 m × 30 m and 7 spectral bands ranging from blue light to shortwave infrared (0.45-2.35 µm). The 6th band of these images (the thermal infrared band) was not used because of its low spatial resolution. The selected area consists of co-registered subsets (470 × 830 pixels) of two full scenes, covering the Khodafarin Dam (an earth-fill embankment dam on the Aras River straddling the border between Iran and Azerbaijan).
In addition to visual assessment of the CD results, a quantitative analysis was carried out by selecting 2799 samples of changed regions and 5168 samples of unchanged regions, based on field work and image interpretation. The proposed fusion-based linear combination of multispectral difference images, which extends the thresholding technique to multi-spectral images, achieves better CD accuracy than the individual spectral bands of the DI and other state-of-the-art image fusion algorithms at the feature and/or decision level. An overall accuracy of 90.68% with the proposed method, compared with overall accuracies of 79.06% and 70.81% for the prevalent voting algorithms (data fusion at the decision level) and 80.77% for the Bayesian algorithm (data fusion at the feature level), confirms the effectiveness of the proposed method for unsupervised CD in multi-spectral, multi-temporal RS images.
R. Shah-Hoseini, A. Safari, S. Homayouni, Volume 5, Issue 3 (2-2016)
Abstract
In the past few decades, as a result of urban population growth, the spatial development of urban areas has been rapid, leading to environmental changes in these areas; hence, detecting changes over different time periods in urban areas is of great importance. Conventional CD methods partition the observation space linearly or rely on a linear combination of the multitemporal data; as a result, they can be inefficient for images corrupted by noise or by radiometric differences that cannot be normalized. Another main challenge in producing change maps of urban areas is the limited spectral separability of bare land and built-up areas. Therefore, in this paper, an automatic kernel-based change detection method that can use a combination of spectral data and spectral indices is proposed. First, spectral indices for separating the land cover classes of the urban area are extracted from the multi-temporal images. In the next step, a difference image is generated via two approaches in a high-dimensional Hilbert space. Using change vector analysis with an automatically determined threshold, pseudo-training samples of the change and no-change classes are extracted and used to initialize the parameters of kernel C-means clustering. A cost function capturing geometrical and spectral similarity in the kernel space is then optimized to estimate the kernel C-means clustering parameters and to select precise training samples. These training samples are used to train a kernel-based minimum distance (KBMD) classifier, and finally the class label of each unknown pixel is determined with the KBMD classifier. To assess the accuracy and efficiency of the proposed change detection algorithm, it was applied to multi-spectral, multi-temporal Landsat 5 TM images of the city of Karaj from 1987 and 2011. With respect to the features used, a sensitivity analysis of the proposed method was carried out on five different feature sets. To assess the performance of the proposed automatic kernel-based CD algorithm with the DFSS (accuracy: 86.40, kappa: 0.83) and DFHS (accuracy: 85.54, kappa: 0.82) differencing methods, we compared this technique with well-known CD methods, namely the MNF-based (Minimum Noise Fraction) CD method (accuracy: 77.42, kappa: 0.76), the SAM (Spectral Angle Mapper) CD method (accuracy: 64.60, kappa: 0.60), and the simple image differencing CD method (accuracy: 73.44, kappa: 0.70). The comparative analysis of the proposed method and the classical CD techniques shows that the accuracy of the obtained change map can be considerably improved.
Y. Delaviz, J. Karami, M. Shaygan, Volume 5, Issue 3 (2-2016)
Abstract
The occurrence of earthquakes has led human beings to make fundamental plans to reduce the resulting danger and destruction. The only means of reducing vulnerability is specific urban crisis management in construction; moreover, this aim cannot be achieved unless the city's resilience to earthquakes is considered a major purpose at all stages of urban planning. Proper allocation of the various urban land uses greatly aids the management of earthquake-related urban crises; accordingly, the principal targets of this paper are recognizing the different variables affecting the vulnerability of urban areas from the perspective of land use, defining their relations with vulnerability, analyzing them, and finally preparing optimized land use maps with a lower percentage of vulnerability.
In this paper, the multi-objective optimization algorithm NSGA-II was used to optimize urban land use allocation with the aim of reducing earthquake-induced vulnerability based on physical factors, with the 12th district of Tehran as the study area. The main objectives of the algorithm are: maximizing the compatibility of adjacent land uses, maximizing the accessibility of land uses, maximizing the availability of sanitary-medical and residential land uses to the road network, minimizing susceptibility at the time of an earthquake, and minimizing land use change. Because NSGA-II is multi-objective, the decision maker faces many different solutions on the Pareto-optimal front, which complicates the process; accordingly, to aid decision making and present scenarios corresponding to the decision makers' priorities, cluster analysis with the K-means approach was used. To study the variation of the results across different runs and the stability of the optimization algorithm, a convergence trend analysis and a repeatability test were carried out.
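A short sketch of the scenario-clustering step: K-means groups the Pareto-front solutions in objective space and the member nearest each centroid is taken as the representative scenario. The objective values below are random placeholders.

```python
# Cluster Pareto-front solutions so the decision maker reviews a few
# representative scenarios instead of the whole front.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pareto = rng.random((60, 5))          # 60 solutions x 5 objective values (toy)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pareto)
for k in range(4):
    members = np.flatnonzero(km.labels_ == k)
    # Representative scenario: the member closest to the cluster centroid.
    d = np.linalg.norm(pareto[members] - km.cluster_centers_[k], axis=1)
    print(f"cluster {k}: {len(members)} solutions, representative #{members[d.argmin()]}")
```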
In the resulting optimized land use arrangements, the values of the objective functions are much better than in the previous arrangement. The accessibility objective function improved the most under optimization (27%), and the average improvement of the objective functions was 19%. In the repeatability test, the average overlap of the algorithm's solutions across different runs was 76%, which can be regarded as a proper value and represents suitable repeatability. Based on the convergence trend, the results were found acceptable, with the objective function values stabilizing after a certain number of iterations.
Several factors account for the efficiency of the model: an optimization method well matched to the problem; objective functions defined realistically, covering the main aspects of earthquake vulnerability in the presented model; the involvement of decision makers' opinions throughout the research; and a final stage of selecting the optimum arrangement through analysis and clustering of the scenarios. The results of this research can serve as a decision support tool for planners and urban management policy makers confronting earthquakes, helping them plan urban spaces appropriately.
M. Moradi, M. R. Delavar, A. Moradi, Volume 5, Issue 4 (6-2016)
Abstract
Land use suitability assessment is a classic problem, and much research has addressed it. The problem matters because experts want to take all aspects into account when searching for the optimum location for a specific purpose; in other words, they want to find the best place from environmental, ecological, economic, and political points of view. A decision support system is therefore necessary to facilitate decision making with mathematical models, and because a location is being chosen, GIS is a core discipline in this assessment. In this paper, a spatial decision support system is proposed based on the integration of the Sugeno integral and the Imperialist Competitive Algorithm (ICA). The Sugeno integral can aggregate alternative scores while accounting for interactions among criteria. Other decision-making methods assume that the criteria are independent, which is contrary to real-world situations; in land use suitability assessment, for example, criteria such as land price and distance to major roads are not independent. This study can thus improve spatial decision support systems by taking the interactions among criteria into account. The Sugeno integral operator uses fuzzy capacities instead of layer weights; fuzzy capacities express the importance of each group of criteria for land suitability assessment. Furthermore, the Sugeno integral provides numerical measures of the importance of each criterion (the Shapley value), the interaction among each set of criteria (the interaction index), and the power of each criterion to veto the final decision (the veto index). The Shapley value, a concept from game theory, indicates the power of each player in a game; in a decision-making problem it shows the importance of each criterion in the decision process, with a more important criterion having a greater impact on the results. The interaction index shows how two players cooperate: with positive cooperation they create a better situation, whereas with negative interaction the power of their coalition is less than the power of each of them. In multiple criteria decision making, the interaction index represents how two criteria interact: when the simultaneous satisfaction of two criteria is favourable, their interaction is positive; when it is not what the decision maker wants, the two criteria have negative interaction. In this research, the imperialist competitive algorithm is applied to find the fuzzy capacity values that best describe the experts' knowledge; in other words, a constrained optimization problem is solved to compute the optimum fuzzy capacity for each set of criteria. ICA was selected because it can find the optimum of a continuous function under constraints. The proposed SDSS is employed for land use suitability assessment for a new power plant. The results indicate that the method is well suited to modeling GIS-based decision making over interacting criteria, and the model may be used in other areas of decision support with minor modifications.
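For concreteness, a small sketch of the Sugeno integral with respect to a fuzzy capacity: criterion scores are sorted in decreasing order and the integral is the maximum over min(x_(i), mu(A_(i))), where A_(i) is the coalition of the top-i criteria. The capacity values here are hypothetical; in the paper they are learned with ICA.

```python
# Sugeno integral of criterion scores with respect to a fuzzy measure.
def sugeno_integral(scores, capacity):
    """scores: {criterion: value in [0,1]}; capacity: {frozenset: mu in [0,1]}."""
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best, coalition = 0.0, frozenset()
    for crit, val in items:                 # x_(1) >= x_(2) >= ...
        coalition = coalition | {crit}      # A_(i): the top-i criteria
        best = max(best, min(val, capacity[coalition]))
    return best

# A monotone capacity on all non-empty subsets (illustrative values).
capacity = {frozenset(["price"]): 0.4, frozenset(["roads"]): 0.3,
            frozenset(["environment"]): 0.35,
            frozenset(["price", "roads"]): 0.6,       # sub-additive: redundancy
            frozenset(["price", "environment"]): 0.7,
            frozenset(["roads", "environment"]): 0.65,
            frozenset(["price", "roads", "environment"]): 1.0}

scores = {"price": 0.8, "roads": 0.5, "environment": 0.9}
print("Sugeno integral:", sugeno_integral(scores, capacity))  # -> 0.7
```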
M. Malekpour Golsefidi, F. Karimipour, M. A. Sharifi, Volume 5, Issue 4 (6-2016)
Abstract
Given the importance of marine commerce today, monitoring marine navigation and routing ships are important issues. Moreover, determining the weather conditions of the marine environment is vital for minimizing damage and casualties to vessels, crews, and cargo; hence, weather routing is absolutely crucial. In addition, because of the high cost of a voyage, voyage duration is one of the essential parameters of weather routing.
The aim of this research is to minimize voyage duration with respect to weather conditions. The marine environment is simulated by a grid of weather data with a resolution of 0.25 degrees, updated every 6 hours; the data are downloaded from the European Centre for Medium-Range Weather Forecasts (ECMWF). The weight of each edge is then calculated with respect to the time at which the vessel traverses the edge. Travel time depends on the impact of wave, wind, and sea depth on the vessel's speed, computed using the Kwon method and Lackenby's formula. Finally, the Dijkstra algorithm is applied to calculate the optimum route.
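A simplified sketch of Dijkstra's algorithm with time-dependent edge costs, in the spirit of the 6-hourly weather grid: the travel time of an edge depends on the departure time. The toy weather schedule and grid graph are assumptions; the real model would apply the Kwon/Lackenby speed reductions instead.

```python
# Dijkstra over a graph whose edge cost depends on the departure time.
import heapq

def travel_time(u, v, depart_h):
    """Hypothetical edge cost in hours: slower during alternating 'rough' 6 h slots."""
    rough = (depart_h // 6) % 2 == 1
    return 1.8 if rough else 1.0

def min_time_route(neighbors, src, dst, start_h=0.0):
    dist, prev = {src: start_h}, {}
    pq = [(start_h, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            break
        if t > dist.get(u, float("inf")):
            continue
        for v in neighbors(u):
            nt = t + travel_time(u, v, t)          # arrival time at v
            if nt < dist.get(v, float("inf")):
                dist[v], prev[v] = nt, u
                heapq.heappush(pq, (nt, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[dst]

def grid_neighbors(p, size=10):                    # toy grid standing in for the sea grid
    r, c = p
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < size and 0 <= c + dc < size]

path, eta = min_time_route(grid_neighbors, (0, 0), (9, 9))
print("hops:", len(path) - 1, "arrival hour:", round(eta, 1))
```

This label-setting variant remains optimal as long as the edge costs satisfy the FIFO property, i.e., departing later never yields an earlier arrival.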
The study area is located in the north of the Indian Ocean, covering the Persian Gulf, the Oman Sea, and the Arabian Sea. The model was implemented in two different weather conditions (calm and rough) to calculate the minimum-time route between Pipavav port in India and Bushehr port. The results indicate that although the voyage distance increased in this model, the voyage duration decreased, so the cost of the voyage dropped noticeably. In calm weather conditions, the depth of the marine environment determines the route, because there are no high seas or storms in front of the ship. In rough weather conditions, the weather parameters (speed and direction of the wind, height and direction of the waves) have more effect than the depth parameter, steering the ship away from high seas where its speed would drop dramatically. Moreover, the results show that in bounded seas such as the Persian Gulf, whose area is small relative to the spatial resolution of the weather data and where conditions do not change markedly between neighbouring cells, depth is the critical parameter determining the journey's path.
H. Davodi, A. Safari, V. Ebrahimzade Ardestani, Volume 6, Issue 1 (10-2016)
Abstract
Optimization methods such as Simulated Annealing (SA), the Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO) are very popular for solving inverse problems. Gravity inversion is a field of geophysical exploration used mainly for mining and oil exploration. In this research, an optimization method is used for the gravity inversion of a sedimentary basin to determine its geometry, and a new method for 3D inversion of gravity data is developed. In such optimization methods, a forward model is needed first of all; here, the model relates Bouguer gravity anomalies to a combination of prisms, with the gravity anomalies of the density interface generated by representing the material below the interface as a series of juxtaposed rectangular blocks. The stochastic optimization method used to solve the inverse problem is Simulated Annealing. First, the cost function of many trial choices is measured, and the minimum among them is used as the initial set of variables for the model, i.e., the start configuration. The iterative algorithm then runs until the cost function becomes as small as possible, subject to the geophysical constraints of the study area. Finally, the geometry of the prisms, namely the depth of each prism, is obtained; this geometry reveals the location of the anomaly source. For better inversion, the regional anomaly is handled by introducing some unknown constants to the model, and the values of the unknown model parameters are drawn from a continuous range obtained from prior information about the region. The algorithm was first applied to a synthetic problem defined by generating random data in an arbitrary region, repeatedly evaluating the forward model until the cost function reached its minimum. To evaluate the success or failure of the algorithm, the contour map of depths used in the forward model was compared with the map produced after inversion, and the discrepancies were examined. The good results of the synthetic problem led to applying the algorithm to real gravity data from the Aman Abad region near the city of Arak. For this purpose, the gravity data were first reduced to Bouguer gravity anomalies. The iterative algorithm was then applied, and the results were compared with prior information obtained from boreholes around the region, which indicates that the depths of the prisms lie between 70 and 120 meters. The results of the algorithm are compatible with this information and demonstrate its power in solving this kind of inverse problem. It must be noted that without this prior information the results of gravity inversion are not unique, and many different interpretations could be drawn, including some that are unacceptable from a geophysical point of view. The cost function, which expresses the amount of error, serves as a measure of the algorithm's ability to solve this problem; its maximum value in this research was 0.5 mGal in some regions.
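A minimal simulated-annealing loop for this kind of basin inversion. The infinite-slab formula used as the forward model is a deliberately crude stand-in for the paper's prism formula, and the 70-120 m depth bounds follow the borehole prior mentioned above; everything else (density contrast, cooling schedule, step size) is assumed.

```python
# SA sketch: recover prism depths from gravity anomalies with a slab forward
# model g = 2*pi*G*drho*h (converted to mGal), one depth perturbed per step.
import numpy as np

G = 6.674e-11                                  # gravitational constant
DRHO = -400.0                                  # kg/m^3, hypothetical contrast
N = 25                                         # prisms / stations

def forward(depths):
    """Slab approximation: anomaly in mGal above each prism."""
    return 2.0 * np.pi * G * DRHO * depths * 1e5

def cost(depths, g_obs):
    return np.sqrt(np.mean((forward(depths) - g_obs) ** 2))

rng = np.random.default_rng(0)
true_depths = rng.uniform(70.0, 120.0, N)
g_obs = forward(true_depths)                   # synthetic observations

d = rng.uniform(70.0, 120.0, N)                # start configuration
c, T = cost(d, g_obs), 1.0
for _ in range(20000):
    cand = d.copy()
    i = rng.integers(N)
    cand[i] = np.clip(cand[i] + rng.normal(0.0, 2.0), 70.0, 120.0)  # prior bounds
    cc = cost(cand, g_obs)
    if cc < c or rng.random() < np.exp((c - cc) / T):   # Metropolis rule
        d, c = cand, cc
    T *= 0.9995                                # geometric cooling
print("final RMSE (mGal):", c)
```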
K. Kabiri, M. Saadi Mesgari, Volume 6, Issue 4 (6-2017)
Abstract
The development of effective decision support tools for the transportation industry is vital, since they can lead to substantial cost reduction and efficient resource consumption. Vehicles moving on our roads contribute to congestion, noise, pollution, and accidents, so route planning and transport management using optimization tools can help reduce transport costs by cutting mileage and improving driver and vehicle usage. In addition, they can improve customer service, cut carbon emissions, improve strategic decision making, and reduce administration costs.
Because postal services involve simultaneous pick-up and delivery and the delivery time of parcels matters, this study focuses on pick-up and delivery problems, an important class of the vehicle routing problem (VRP). The VRP lies at the core of scientific research on the distribution and transport of people and goods. Unlike the classical VRP, in which all customers require the same service, the basic pick-up and delivery problem allows two different types of service at one location: a pick-up or a delivery. The PDP has several applications in parcel-post transportation. The purpose of this research is to find the optimal route for postal transport, achieved by imposing a series of conditions on the pick-up and delivery problem and applying meta-heuristic algorithms to simulated data. A brief description of the meta-heuristic algorithms used, the bee colony algorithm and the genetic algorithm, and their features is given. Finally, the results of the algorithms are compared on the basis of accuracy, repeatability, and speed of convergence. It should be noted that the results are not ideal, but the best case is reported. The results showed that the performance of the bee algorithm is better than that of the genetic algorithm: based on the results obtained in each run, the genetic and bee algorithms reached the best solution in 84% and 93% of runs, respectively.
R. Ghasemi Nejad, R. Ali Abbaspor, M. Mojarab, Volume 6, Issue 4 (6-2017)
Abstract
As a common method in data mining, pattern recognition within seismic data using clustering leads to the extraction of valuable information from large databases. The categorization of clustering algorithms is neither straightforward nor canonical; they can be divided into four broad classes, hierarchical, density-based, grid-based, and partitioning methods, with the choice depending on the kind and nature of the problem. From the labeling and assignment point of view, clustering algorithms divide into hard and soft methods: in hard clustering each datum belongs to one and only one cluster, while in soft (fuzzy) clustering each datum belongs to different clusters with different degrees of membership. In seismology, and for hazard analysis in particular, it is essential to partition an area into regions with more or less similar seismological characteristics, which calls for clustering algorithms. Several issues should be considered in data mining and cluster analysis of seismic catalogs: within an active seismic area there are regions with different rates of seismicity, so the density and number of events differ between regions or seismotectonic provinces; earthquake events are mainly distributed along different segments of major faults; and because an area contains different seismotectonic regions, seismic characteristics vary gradually, without abrupt changes. It may therefore be more appropriate to partition earthquakes with fuzzy clustering methods, which tend to reflect realistic data. Although many clustering algorithms have been proposed, they are very sensitive to initial conditions and quite often become trapped in locally optimal solutions, failing to find the real clusters in the problem space; a globally optimal search algorithm should therefore be used to find global clusters. The clustering problem may be viewed as an optimization problem in general, and metaheuristics are widely recognized as efficient approaches for many hard optimization problems, including cluster analysis. Metaheuristics use an iterative search strategy to find an approximately optimal solution with limited computational resources, such as computing power and computation time. The present paper therefore proposes metaheuristic algorithms to address the problems of the Gustafson-Kessel and fuzzy c-means clustering algorithms. The two resulting algorithms, called PSO-GK and PSO-FCM respectively, were applied to synthetic seismic data as well as real seismic data acquired across Iran, and the results were validated using clustering validity indexes: fuzzy hyper volume (FHV), average partition density (APD), and partition density (PD). These indexes capture clear separation between clusters, minimal cluster volume, and maximal concentration of data points near the cluster centroids; a low FHV and high APD and PD would ideally indicate a good partition. The FHV index of the PSO-GK algorithm is 0.4272 for the synthetic seismic data and 0.0941 for the real seismic data acquired across Iran, better than the corresponding values for the PSO-FCM algorithm, and the two other indexes are also better for PSO-GK than for PSO-FCM.
Based on the comparison results, the proposed Gustafson-Kessel-based algorithm was found to be more appropriate for the analysis of seismic data.
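As an illustration of one of the validity indexes used, the fuzzy hyper volume can be computed as the sum over clusters of the square root of the determinant of the fuzzy covariance matrix; the toy data and memberships below are random placeholders, not the paper's catalogs.

```python
# Fuzzy hyper volume (FHV) index: lower values mean more compact clusters.
import numpy as np

def fuzzy_hyper_volume(X, U, V, m=2.0):
    """X: (n, d) data; U: (c, n) fuzzy memberships; V: (c, d) cluster centers."""
    fhv = 0.0
    for i in range(V.shape[0]):
        w = U[i] ** m
        diff = X - V[i]
        F = (w[:, None, None] * np.einsum("nd,ne->nde", diff, diff)).sum(0)
        F /= w.sum()                          # fuzzy covariance of cluster i
        fhv += np.sqrt(np.linalg.det(F))
    return fhv

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # e.g., epicentre coordinates
U = rng.dirichlet(np.ones(3), size=200).T     # 3 clusters, columns sum to 1
V = (U ** 2 @ X) / (U ** 2).sum(1, keepdims=True)  # FCM-style centers (m=2)
print("FHV:", fuzzy_hyper_volume(X, U, V))
```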
A. Sargolzaei, A. R. Vafaeinejad, Volume 6, Issue 4 (6-2017)
Abstract
Nowadays, with the rapid pace of urban development, increasing vehicle volumes, and traffic restrictions, routing in urban networks is not only necessary but essential. Managing such a massive volume of data makes GIS, with its capabilities for spatial data analysis, indispensable.
When deciding to travel from one location to another, people often consider not only which route and means of transportation will save them time, but also which are the most inexpensive and cost-effective. They frame the issue as a question in their mind and, based on their criteria, seek the optimal solution. The same behavior occurs in routing systems. Finding the optimal, most efficient, and shortest route is a key pillar of route finding, and solving it properly enables answering the other questions around the issue; indeed, a deeper level of analysis requires answering this essential question: finding the shortest possible path from a starting point, or origin, to an ending point, or destination. Metaheuristic algorithms are approximate algorithms able to find optimal or near-optimal solutions in a reasonable time.
The methodology presented in this research for solving the optimal route problem, applied here for the first time, is the Cuckoo Optimization Algorithm. This algorithm was chosen because it is a new method that provides better solutions for various problems than other meta-heuristic algorithms. Route finding, which is by nature a discrete problem, is handled through a binary version of the algorithm. In setting up the initial population, a controlled approach was used to avoid generating purely random populations, of which only a few would form valid routes: the population variables, which are essentially the network points and the positions of each cuckoo, are not selected randomly but in a controlled manner, so that each next node is selected from those connected to the current one. During execution of the algorithm, cuckoo locations are converted to binary numbers: a node receives 1 if it lies on the route and 0 otherwise. A sigmoid function is used in the cuckoo migration phase: the new location of a cuckoo falls in the range between zero and one, and the locations are then converted to zeros and ones. To test the proposed algorithm, three networks were used: hypothetical, local, and real. Running the algorithm on the hypothetical and local networks, with 20 and 31 nodes, produced the same result as a deterministic algorithm; however, on a network that was part of a real network, composed of 617 nodes and 995 arcs, it found a slightly better optimal route than the deterministic algorithm. The results showed that the algorithm is capable of routing in networks and, with some changes to the network structure, can be used on networks with large data volumes.
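The binary conversion step described above might be sketched as follows: each continuous position component is squashed with a sigmoid and compared against a uniform draw to yield a 0/1 node-inclusion vector (an illustrative reading of the described procedure, not the authors' exact code).

```python
# Sigmoid binarization of a cuckoo's continuous position vector.
import numpy as np

def to_binary(position, rng):
    """Map continuous positions to a 0/1 node-inclusion vector."""
    s = 1.0 / (1.0 + np.exp(-position))     # sigmoid into (0, 1)
    return (rng.random(position.shape) < s).astype(int)

rng = np.random.default_rng(0)
pos = rng.normal(0.0, 2.0, size=10)         # positions for 10 network nodes
print(to_binary(pos, rng))                  # e.g., [1 0 1 ...]
```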
M. Moradi, M. R. Sahebi, Volume 7, Issue 1 (9-2017)
Abstract
Nowadays, spatial data and urban areas are changing rapidly due to many kinds of natural and artificial factors. These changes erode the reliability of information used for urban planning and resource management and reduce the efficiency of spatial information systems. Monitoring these changes and obtaining up-to-date information about land use and the kind of changes is therefore essential for urban planning, proper resource management, damage assessment, and the updating of geospatial information systems. Accordingly, more accurate change detection is a challenge for experts and researchers in remote sensing and photogrammetry. In recent years, various change detection techniques have been developed, especially for high-resolution images, and choosing the appropriate method and algorithm is not easy; despite all the efforts of researchers, every technique has advantages and limitations. This article introduces a new categorization of change detection methods: in general, methods and techniques for urban areas can be categorized along four major dipoles: direct comparison versus post-classification, object-based versus pixel-based, supervised versus unsupervised, and textural/spatial information and features. Despite the rich and useful spectral information in high-resolution remote sensing and photogrammetric satellite images, spectral information alone is not enough to achieve the required accuracy, owing to the increased variability within homogeneous land-cover classes. Therefore, in addition to spectral features, this paper uses texture features extracted from the spatial and frequency domains (spectral, anomaly, edge, the morphological building index (MBI), other color spaces, the gray level co-occurrence matrix (GLCM), and features extracted from the wavelet, Gabor, Fourier, and curvelet transforms) to address this problem and generate the change mask of high-resolution images. The diversity of features extracted from the spatial and frequency domains calls for optimization algorithms to select the optimum features; therefore, particle swarm optimization and a genetic algorithm were used to obtain the optimum features and the optimum parameters of the support vector machine simultaneously. Also, given the major weakness of the post-classification method in detecting intra-class changes and the poor radiometric conditions of the images used for segmentation, a 2-class classification of differential features is used to detect changes. QuickBird (0.6 m, October 2006) and GeoEye (0.5 m, August 2010) satellite imagery of AzadShahr, Tehran, Iran was used to evaluate the proposed method. An overall accuracy of 93.45% and a kappa coefficient of 0.87, versus 91.03% and 0.82, show that particle swarm optimization outperforms the genetic algorithm in simultaneously selecting the optimum features and the optimum support vector machine parameters. The effectiveness of each of the 10 kinds of features was also calculated using three criteria introduced in this paper (Effectiveness, Minor Effectiveness, and Overall Effectiveness); the analysis indicates the efficiency of other color spaces, wavelet-based features, and spatial-domain features (GLCM), and reflects the weakness of using only spectral data to detect changes in high-resolution images.
A comparison of the proposed approach with other studies (post-classification and fuzzy thresholding methods) shows the effectiveness of the proposed method.
R. Safarzadeh Ramhormozi, M. Karimi, S. Alaei Moghadam, Volume 7, Issue 3 (2-2018)
Abstract
Today, urban land use planning and management is an essential need in many developing countries. Many multi-objective optimization models for land use allocation have been developed worldwide. These models provide a set of non-dominated solutions, all of which simultaneously optimize conflicting social, economic, and ecological objective functions, making it difficult for urban planners to choose the best solution. An issue often left unnoticed is the incorporation of the spatial patterns and structures of urban growth into these models; clearly, solutions that correspond to urban spatial patterns have higher priority for planners. Quantifying the spatial patterns and structures of the city requires spatial metrics. Thus, the main objective of this study is to support decision making by using multi-objective meta-heuristic algorithms for land use optimization and sorting the solutions with respect to the spatial pattern of urban growth. In the first step, we applied the non-dominated sorting genetic algorithm II (NSGA-II) and multi-objective particle swarm optimization (MOPSO) to optimize land use allocation in the case study. The four objective functions of the proposed model were maximizing the compatibility of adjacent land uses, maximizing physical land suitability, maximizing the accessibility of each land use to main roads, and minimizing the cost of land use change. In the next step, the two optimization models were compared and the solutions were sorted with respect to the spatial patterns of the city, captured with spatial metrics. A case study of Tehran, the largest city in Iran, was conducted. Six land use classes (industrial, residential, green areas, wetlands, barren, and other uses) were acquired from satellite imagery for the period between 2000 and 2012. Three scenarios were predicted for the urban growth spatial structure in 2018: continuation of the existing trend from 2000 to 2018, fragmented growth, and aggregated growth of the patches. The convergence and repeatability of the two algorithms were at acceptable levels, and the results clearly show the ability of the selected set of spatial metrics to quantify and forecast the structure of urban growth in the case study. In the resulting land use arrangements, the values of the objective functions improved in comparison with the present arrangement. In conclusion, planners will be able to better sort the outputs of the proposed algorithms using spatial metrics, allowing more reliable decisions about the spatial structure of the city. This achievement also indicates the ability of the proposed model to simulate different scenarios in urban land use planning.
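Two simple class-level spatial metrics of the kind used to quantify growth patterns, number of patches and mean patch size, can be computed with connected-component labelling; this is a sketch with a toy map, not the paper's specific metric set.

```python
# Class-level spatial metrics via 8-connected component labelling.
import numpy as np
from scipy import ndimage

def patch_metrics(landuse, cls):
    mask = landuse == cls
    labels, n_patches = ndimage.label(mask, structure=np.ones((3, 3)))
    sizes = np.bincount(labels.ravel())[1:]      # skip background label 0
    mean_size = sizes.mean() if n_patches else 0.0
    return n_patches, mean_size

rng = np.random.default_rng(0)
grid = rng.integers(0, 6, (50, 50))              # toy map with 6 use classes
n, s = patch_metrics(grid, cls=1)                # e.g., class 1 = residential
print(f"patches: {n}, mean patch size: {s:.1f} cells")
```

Fragmented growth would show up as more, smaller patches; aggregated growth as fewer, larger ones.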
S. Beheshtifar, A. Alimohammmadi, Volume 8, Issue 2 (12-2018)
Abstract
Compatibility among urban land uses is one of the important considerations in optimizing their spatial arrangement. The most common way to mitigate the negative effects of conflicting land uses on each other is to maintain a certain distance between them. Because optimizing the arrangement of different land uses requires examining a large amount of information, and exact methods have limitations, researchers have focused on meta-heuristic methods (e.g., the genetic algorithm) to solve such problems. Furthermore, because multiple objectives and criteria must be considered, multi-objective optimization methods have been adopted. To ensure adequate separation distances between incompatible land uses, the distances can be entered as constraints in these optimization methods.
In this research, a hybrid method is proposed to satisfy distance constraints in an optimization problem for locating multiple land uses. A multi-objective genetic algorithm was used to maximize the location suitability and the compatibility of the land uses, and the Simulated Annealing (SA) method was applied to repair infeasible individuals so that the corresponding solutions meet the distance constraints. SA is a probabilistic technique for approximating the global optimum of a given function: it starts with an initial solution and selects a neighboring solution; if the neighbor is better than the current solution, it becomes the current solution; otherwise, the candidate solution is accepted as the current solution with an acceptance probability, which allows escaping local optima.
In this study, the solutions are generated by the genetic algorithm, and each gene of a chromosome represents the location of a candidate site. After generating the population, the distance constraints are checked and infeasible solutions are identified: a solution that meets all the distance constraints is feasible, otherwise it is infeasible. Infeasible chromosomes were repaired as follows:
• Identify the gene(s) that make the chromosome infeasible
• Identify the neighbors of those gene(s) according to the distances between genes
• Create new solutions using the neighbors
• Calculate the violation rate of each new solution
• If all new solutions are infeasible, the solution is replaced by the one with the minimum violation.
• If only one feasible solution is generated, the initial solution is replaced by it.
• If more than one feasible solution is generated, the values of the objective functions are calculated for the feasible solutions, the non-dominated solutions are identified, and among them the solution that differs least from the initial solution is selected.
The results of the research show that the proposed method is effective in repairing infeasible individuals and converting them into feasible ones with regard to the distance constraints. For each infeasible individual, the method generates several alternatives, from which the feasible solution closest to the original one, with better objective function values, can be selected.
Increasing the number of neighbors considered for each site in SA makes it easier to obtain feasible solutions; however, including farther neighbors may increase the distance between the initial solution and the new solution that replaces it.
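The core SA acceptance rule used during repair might look like the following sketch, where `violation` is a hypothetical helper that would count unmet separation distances (the temperature and cooling schedule are omitted here).

```python
# Metropolis acceptance for the repair step: a candidate that reduces the
# distance-constraint violation is always kept; a worse one is kept with
# probability exp(-delta / T), allowing escapes from local optima.
import math
import random

def sa_accept(current_viol, candidate_viol, temperature):
    delta = candidate_viol - current_viol
    if delta <= 0:                      # better or equal: always accept
        return True
    return random.random() < math.exp(-delta / temperature)

random.seed(0)
print(sa_accept(current_viol=3, candidate_viol=5, temperature=2.0))
```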