-*- mode: org -*- Environmental management, ecological, and remote sensing applications * 9(<-539): Semisupervised PSO-SVM regression for biophysical parameter estimation In this paper, a novel semisupervised regression approach is proposed to tackle the problem of biophysical parameter estimation that is constrained by a limited availability of training (labeled) samples. The main objective of this approach is to increase the accuracy of the estimation process based on the support vector machine (SVM) technique by exploiting unlabeled samples that are available from the image under analysis at zero cost. The integration of such samples in the regression process is controlled through a particle swarm optimization.(PSO) framework that is defined by considering separately or jointly two different optimization criteria, thus leading to the implementation of three different inflation strategies. These two criteria are empirical and structural expressions of the generalization capability of the resulting semisupervised PSO-SVM regression system. The conducted experiments were focused on the problem of estimating chlorophyll concentrations in coastal waters from multispectral remote sensing images. In particular, we report and discuss resu s of experiments that are designed in such a way as to test the proposed approach in terms of: 1) capability to capture useful information from a set of unlabeled samples for improving the estimation accuracy; 2) sensitivity to the number of exploited unlabeled samples; and 3) sensitivity to the number of labeled samples used for supervising the inflation process. 2007 * 12(<-211): Predicting groundwater level fluctuations with meteorological effect implications-A comparative study among soft computing techniques The knowledge of groundwater table fluctuations is important in agricultural lands as well as in the studies related to groundwater utilization and management levels. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Networks (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting in following day up to 7-day prediction intervals. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008) were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the last two years data were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 57(<-264): Exploiting the Classification Performance of Support Vector Machines with Multi-Temporal Moderate-Resolution Imaging Spectroradiometer (MODIS) Data in Areas of Agreement and Disagreement of Existing Land Cover Products Several studies have focused in the past on global land cover (LC) datasets harmonization and inter-comparison and have found significant inconsistencies. Despite the known discrepancies between existing products derived from medium resolution satellite sensor data, little emphasis has been placed on examining these disagreements to improve the overall classification accuracy of future land cover maps. This work evaluates the classification performance of a least square support vector machine (LS-SVM) algorithm with respect to areas of agreement and disagreement between two existing land cover maps. The approach involves the use of time series of Moderate-resolution Imaging Spectroradiometer (MODIS) 250-m Normalized Difference Vegetation Index (NDVI) (16-day composites) and gridded climatic indicators. LS-SVM is trained on reference samples obtained through visual interpretation of Google Earth (GE) high resolution imagery. The core of the training process is based on repeated random splits of the training dataset to select a small set of suitable support vectors optimizing class separability. A large number of independent validation samples spread over three contrasting regions in Europe (Eastern Austria, Macedonia and Southern France) are used to calculate classification accuracies for the LS-SVM NDVI-derived LC map and for two (globally available) LC products: GLC2000 and GlobCover. The LS-SVM LC map reported an overall accuracy of 70%. Classification accuracies ranged from 71% where GlobCover and GLC2000 agreed to 68% for areas of disagreement. Results indicate that existing LC products are as accurate as the LS-SVM LC map in areas of agreement (with little margin for improvements), while classification accuracy is substantially better for the LS-SVM LC map in areas of disagreement. On average, the LS-SVM LC map was 14% and 18% more accurate compared to GlobCover and GLC2000, respectively. 2012 * 68(<- 67): A real-time forecasting model for the spatial distribution of typhoon rainfall Accurate forecasts of hourly rainfall are necessary for early warning systems during typhoons. In this paper, a typhoon rainfall forecasting model is proposed to yield 1- to 6-h ahead forecasts of hourly rainfall. First, an input optimization step integrating multi-objective genetic algorithm (MOGA) with support vector machine (SVM) is developed to identify the optimal input combinations. Second, based on the results of the first step, the forecasted rainfall from each station is used to obtain the spatial characteristics of the rainfall process is presented. An actual application to Tsengwen river basin is conducted to demonstrate the advantage of the proposed model. The results clearly indicate that the proposed model effectively improves the forecasting performance and decreases the negative impact of increasing forecast lead time. The optimal input combinations can be obtained from the proposed model for different stations with different geographical conditions. In addition, the proposed model is capable of producing more acceptable the results of rainfall maps than other model. In conclusion, the proposed modeling technique is useful to improve the hourly typhoon rainfall forecasting and is expected to be helpful to support disaster warning systems. (C) 2014 Elsevier B.V. All rights reserved. 2015 * 106(<- 22): A review on sustainable construction management strategies for monitoring, diagnosing, and retrofitting the building's dynamic energy performance: Focused on the operation and maintenance phase According to a press release, the building sector accounts for about 40% of the global primary energy consumption. Energy savings can be achieved in the building sector by improving the building's dynamic energy performance in terms of sustainable construction management in the urban-based built environments (referred to as an "Urban Organism"). This study implements the concept of "dynamic approach" to reflect the unexpected changes in the climate and energy environments as well as in the energy policies and technologies. Research in this area is very significant for the future of the building, energy, and environmental industries around the world. However, there is a lack of studies from the perspective of the dynamic approach and the system integration, and thus, this study is designed to fill the research gap. This study highlights the state-of-the-art in the major phases for a building's dynamic energy performance (i.e., monitoring, diagnosing, and retrofitting phases), focusing on the operation and maintenance phase. This study covers a wide range of research works and provides various illustrative examples of the monitoring, diagnosing, and retrofitting of a building's dynamic energy performance. Finally, this study proposes the specific future developments and challenges by phase and suggests the future direction of system integration for the development of a carbon-integrated management system as a large complex system. It is expected that researchers and practitioners can understand and adopt the holistic approach in the monitoring, diagnosing, and retrofitting of a building's dynamic energy performance under the new paradigm of an "Urban Organism". (C) 2015 Elsevier Ltd. All rights reserved. 2015 * 107(<-360): Optimization methods applied to renewable and sustainable energy: A review Energy is a vital input for social and economic development. As a result of the generalization of agricultural, industrial and domestic activities the demand for energy has increased remarkably, especially in emergent countries. This has meant rapid grower in the level of greenhouse gas emissions and the increase in fuel prices, which are the main driving forces behind efforts to utilize renewable energy sources more effectively, i.e. energy which comes from natural resources and is also naturally replenished. Despite the obvious advantages of renewable energy, it presents important drawbacks, such as the discontinuity of generation, as most renewable energy resources depend on the climate, which is why their use requires complex design, planning and control optimization methods. Fortunately, the continuous advances in computer hardware and software are allowing researchers to deal with these optimization problems using computational resources, as can be seen in the large number of optimization methods that have been applied to the renewable and sustainable energy field. This paper presents a review of the current state of the art in computational optimization methods applied to renewable and sustainable energy, offering a clear vision of the latest research advances in this field. (C) 2010 Elsevier Ltd. All rights reserved. 2011 * 125(<-205): Development of an effective data-driven model for hourly typhoon rainfall forecasting In this paper, we proposed a new typhoon rainfall forecasting model to improve hourly typhoon rainfall forecasting. The proposed model integrates multi-objective genetic algorithm with support vector machines. In addition to the rainfall data, the meteorological parameters are also considered. For each lead time forecasting, the proposed model can subjectively determine the optimal combination of input variables including rainfall and meteorological parameters. For 1- to 6-h ahead forecasts, an application to high- and low-altitude metrological stations has shown that the proposed model yields the best performance as compared to other models. It is found that meteorological parameters are useful. However, the use of the optimal combination of input variables determined by the proposed model yields more accurate forecasts than the use of all input variables. The proposed model can significantly improve hourly typhoon rainfall forecasting, especially for the long lead time forecasting. (C) 2013 Elsevier B.V. All rights reserved. 2013 * 134(<-180): Interest in intermediate soft-classified maps in land change model validation: suitability versus transition potential This study compares two types of intermediate soft-classified maps. The first type uses land use/cover suitability maps based on a multi-criteria evaluation (MCE). The second type focuses on the transition potential between land use/cover categories based on a multi-layer perceptron (MLP). The concepts and methodological approaches are illustrated in a comparable manner using a Corine data set from the Murcia region (2300km(2), Spain) in combination with maps of drivers that were created with two stochastic, discretely operating, commonly used tools (MCE in CA_MARKOV and MLP in Land Change Modeler). The importance of the different approaches and techniques for the obtained results is illustrated by comparing the specific characteristics of both approaches by validating the suitability versus transition potential maps to each other using a Spearman correlation matrix and, between the Corine maps, using classical ROC (receiver operating characteristic) statistics. Then, we propose a new use of ROC statistics to compare these intermediate soft-classified maps with their respective hard-classified maps of the models for each category. The validation of these results can be beneficial in choosing a suitable model and provide a better understanding of the implications of the different modeling steps and the advantages and limitations of the modeling tools. 2013 * 150(<-660): Land-use suitability analysis in the United States: Historical development and promising technological achievements Various methods of spatial analysis are commonly used in land-use plans and site selection studies. A historical overview and discussion of contemporary developments of land-use suitability analysis are presented. The paper begins with an exploration into the early 20th century with the infancy of documented applications of the technique, The article then travels through the 20th century, documenting significant milestones. Concluding with present explorations of advanced technologies such as neural computing and evolutionary programming, this work is meant to serve as a foundation for literature review and a premise for the exploration of new advancements as we enter into the 21st century. 2001 * 162(<-646): Decision support to assist environmental sedimentology modelling Attention is drawn to the importance of spatial aspects when adopting a modelling approach to predict the likely character of sediment. This requires an understanding of the processes controlling transport, deposition and remobilization, singly and I in combination. The advantages of incorporating expert systems are examined alongside recently developed GIS techniques utilising multiple criteria and fuzzy sets. 2003 * 168(<-129): A comparison study of three statistical downscaling methods and their model-averaging ensemble for precipitation downscaling in China This study evaluated the performance of three frequently applied statistical downscaling tools including SDSM, SVM, and LARS-WG, and their model-averaging ensembles under diverse moisture conditions with respect to the capability of reproducing the extremes as well as mean behaviors of precipitation. Daily observed precipitation and NCEP reanalysis data of 30 stations across China were collected for the period 1961-2000, and model parameters were calibrated for each season at individual site with 1961-1990 as the calibration period and 1991-2000 as the validation period. A flexible framework of multi-criteria model averaging was established in which model weights were optimized by the shuffled complex evolution algorithm. Model performance was compared for the optimal objective and nine more specific metrics. Results indicate that different downscaling methods can gain diverse usefulness and weakness in simulating various precipitation characteristics under different circumstances. SDSM showed more adaptability by acquiring better overall performance at a majority of the stations while LARS-WG revealed better accuracy in modeling most of the single metrics, especially extreme indices. SVM provided more usefulness under drier conditions, but it had less skill in capturing temporal patterns. Optimized model averaging, aiming at certain objective functions, can achieve a promising ensemble with increasing model complexity and computational cost. However, the variation of different methods' performances highlighted the tradeoff among different criteria, which compromised the ensemble forecast in terms of single metrics. As the superiority over single models cannot be guaranteed, model averaging technique should be used cautiously in precipitation downscaling. 2014 * 172(<-556): Soft combination of local models in a multi-objective framework Conceptual hydrologic models are useful tools as they provide an interpretable representation of the hydrologic behaviour of a catchment. Their representation of catchment's hydrological processes and physical characteristics, however, implies a simplification of the complexity and heterogeneity of reality. As a result, these models may show a lack of flexibility in reproducing the vast spectrum of catchment responses. Hence, the accuracy in reproducing certain aspects of the system behaviour may be paid in terms of a lack of accuracy in the representation of other aspects. By acknowledging the structural limitations of these models, we propose a modular approach to hydrological simulation. Instead of using a single model to reproduce the full range of catchment responses, multiple models are used, each of them assigned to a specific task. While a modular approach has been previously used in the development of data driven models, in this study we show an application to conceptual models. The approach is here demonstrated in the case where the different models are associated with different parameter realizations within a fixed model structure. We show that using a 'composite' model, obtained by a combination of individual 'local' models, the accuracy of the simulation is improved. We argue that this approach can be useful because it partially overcomes the structural limitations that a conceptual model may exhibit. The approach is shown in application to the discharge simulation of the experimental Alzette River basin in Luxembourg, with a conceptual model that follows the structure of the HBV model. 2007 * 173(<- 53): OR models with stochastic components in disaster operations management: A literature survey The increasing number of affected people due to disasters, the complexity and unpredictability of these phenomena and the different problems encountered in the planning and response in different scenarios, establish a need to find better measures and practices in order to reduce the human and economic loss in this kind of events. However this is not an easy task considering the great uncertainty these phenomena present including the multiple number of possible scenarios in terms of location, probability of occurrence and impact, the difficulty in estimating the demand and supply, the complexity of determining the number and type of resources both available and needed and the intricacy to establish the exact location of the demand, the supply and the possible damaged infrastructure, among many others. Disaster Operations Management has become very popular and, considering the properties of disasters, the use of tools and methodologies such as OR have been given a lot of attention in recent years. The present work provides a literature review on the OR models with some stochastic component applied to Disaster Operations Management (DOM), along with an analysis of these stochastic components and the techniques used by different authors to cope with them as well as a detailed database on the consulted papers, which differentiates this research from other reviews developed during the same period, in order to give an insight on the state of the art in the topic and determine possible future research directions. (C) 2014 Elsevier Ltd. All rights reserved. 2015 * 174(<- 91): Multi-objective ecological reservoir operation based on water quality response models and improved genetic algorithm: A case study in Three Gorges Reservoir, China This study proposes a self-adaptive GA-aided multi-objective ecological reservoir operation model (SMEROM) and applies it to water quality management in the Xiangxi River near to the Three Gorges Reservoir, China. The SMEROM integrates statistical water quality models, multi-objective reservoir operations, and a self-adaptive GA within a general framework. Among them, the statistical water quality models of the Xiangxi River are formulated to deal with the relationships between reservoir operation and water quality, which are embedded in constraints of the SMEROM. The multiple objective functions, including maximizing hydropower generation, minimizing loss of flood control, minimizing rate of flood risk, maximizing the average remaining capacity of flood control and maximizing the benefit of shipping, are considered simultaneously to obtain comprehensive benefit among the environment, society and economy. The weighting method is employed to convert the multiple objectives to a single objective. To solve the complex SMEROM, an improved self-adaptive GA is employed through incorporating simulated binary crossover and self-adaptive mutation. To demonstrate the advantage of the developed SMEROM model, the solutions through ecological reservoir operation are compared with those through the traditional reservoir operation and the practical operation in 2011, in terms of water quality, reservoir operation and objective function values. The results show that most of benefit in the ecological operation is better than that in the traditional or practical operations except for the hydropower benefit and loss benefit of flood control. This is because flood control and environmental protection are reasonably considered in the ecological operation. (C) 2014 Elsevier Ltd. All rights reserved. 2014 * 175(<-112): Simulation-optimization modeling for conjunctive water use management Good quality surface water and groundwater resources are limited furthermore they are shrinking because of the urbanization, contamination, and climate change impacts. In this backdrop, the proper allocation and management of these resources is a critical challenge for satisfying the rising water demands of agricultural sector. Because irrigated agriculture is the largest user of all the developed water resources and consumes over 70% of the abstracted freshwater globally. The computer-based models are useful tools for achieving the optimal allocation of limited water resources for the conjunctive use planning and management in irrigated agriculture. Various simulation and optimization modeling approaches have been used to solve the water allocation problems. Optimization models have been shown to be of great importance when used with simulation models and the combined use of these two approaches gives the best results. The reviews on the combined applications of simulation and optimization modeling for the conjunctive use planning and management of surface water and groundwater resources for sustainable irrigated agriculture are done and presented in this paper. Conclusions are provided based on this review which could be useful for all the stakeholders. (C) 2014 Elsevier B.V. All rights reserved. 2014 * 176(<-227): A Multiobjective Optimisation Model for Groundwater Remediation Design at Petroleum Contaminated Sites This study proposes a fuzzy multi-objective model for groundwater remediation in petroleum-contaminated aquifers. The optimisation system is designed based on the PAT technology, and includes two objectives (i.e. total pumping rate and average post-remedial contaminant concentration). The relationship between pumping rates and contamination concentrations at all monitoring wells after remediation are determined by a proxy model, which integrates simulation, inference, and optimisation technologies and is composed of intercept, linear, interactive, and quadratic options. Fuzzy algorithms are used to solve the formulated multi-objective optimisation problem to find optimal solutions. The model is then applied to a petroleum-contaminated aquifer in western Canada. The trade-off and lambda analyses of the results indicate that the fuzzy multi-objective model has great potential in groundwater remediation applications as it can: (1) provide reliable groundwater remediation strategies, (2) reduce computational costs in the optimization processes, and (3) balance the trade-off between remediation costs and remediation outcomes. 2013 * 177(<-257): System optimization for eco-design by using monetization of environmental impacts: a strategy to convert bi-objective to single-objective problems Eco-design is an essential way to reduce the environmental impacts and economic cost of processes and systems, as well as products. Until now, the majority of eco-design approaches have employed multi-objective optimization methods to balance between environmental and economic performances. However, the methods have limitations because multi-objective optimization requires decision makers to subjectively assign weighting factors for objectives, i.e., environmental impacts and economic cost. This implies that, depending on decision makers' preference and knowledge, different design solutions can be engendered for the same design problem. Thus, this study proposes an eco-design method which can generate a single design solution by developing mathematical optimization models with a single-objective function for environmental impacts and economic cost. For the formulation of the single-objective function, environmental impacts are monetized to external cost by using the Environmental Priority Strategies. This enables the tradeoffs between environmental impacts and economic cost in the same unit, i.e., monetary unit. As a case study, the proposed method is applied to the eco-design of a water reuse system in an industrial plant. This study can contribute to improving the eco-efficiency of various products, processes, and systems. (C) 2012 Elsevier Ltd. All rights reserved. 2013 * 178(<-364): Paradigm shift in urban energy systems through distributed generation: Methods and models The path towards energy sustainability is commonly referred to the incremental adoption of available technologies, practices and policies that may help to decrease the environmental impact of energy sector, while providing an adequate standard of energy services. The evaluation of trade-offs among technologies, practices and policies for the mitigation of environmental problems related to energy resources depletion requires a deep knowledge of the local and global effects of the proposed solutions. While attempting to calculate such effects for a large complex system like a city, an advanced multidisciplinary approach is needed to overcome difficulties in modeling correctly real phenomena while maintaining computational transparency, reliability, interoperability and efficiency across different levels of analysis. Further, a methodology that rationally integrates different computational models and techniques is necessary to enable collaborative research in the field of optimization of energy efficiency strategies and integration of renewable energy systems in urban areas. For these reasons, a selection of currently available models for distributed generation planning and design is presented and analyzed in the perspective of gathering their capabilities in an optimization framework to support a paradigm shift in urban energy systems. This framework embodies the main concepts of a local energy management system and adopts a multicriteria perspective to determine optimal solutions for providing energy services through distributed generation. (C) 2010 Elsevier Ltd. All rights reserved. 2011 * 180(<-605): Multi-criteria decision analysis for the optimal management of nitrate contamination of aquifers We present an integrated methodology for the optimal management of nitrate contamination of ground water combining environmental assessment and economic cost evaluation through multi-criteria decision analysis. The proposed methodology incorporates an integrated physical modeling framework accounting for on-ground nitrogen loading and losses, soil nitrogen dynamics, and fate and transport of nitrate in ground water to compute the sustainable on-ground nitrogen loading such that the maximum contaminant level is not violated. A number of protection alternatives to stipulate the predicted sustainable on-ground nitrogen loading are evaluated using the decision analysis that employs the importance order of criteria approach for ranking and selection of the protection alternatives. The methodology was successfully demonstrated for the Sumas-Blaine aquifer in Washington State. The results showed the importance of using this integrated approach which predicts the sustainable on-ground nitrogen loadings and provides an insight into the economic consequences generated in satisfying the environmental constraints. The results also show that the proposed decision analysis framework, within certain limitations, is effective when selecting alternatives with competing demands. (c) 2004 Elsevier Ltd. All rights reserved. 2005 * 182(<-284): Planning of Groundwater Supply Systems Subject to Uncertainty Using Stochastic Flow Reduced Models and Multi-Objective Evolutionary Optimization The typical modeling approach to groundwater management relies on the combination of optimization algorithms and subsurface simulation models. In the case of groundwater supply systems, the management problem may be structured into an optimization problem to identify the pumping scheme that minimizes the total cost of the system while complying with a series of technical, economical, and hydrological constraints. Since lack of data on the subsurface system most often reflects upon the development of groundwater flow models that are inherently uncertain, the solution to the groundwater management problem should explicitly consider the tradeoff between cost optimality and the risk of not meeting the management constraints. This work addresses parameter uncertainty following a stochastic simulation (or Monte Carlo) approach, in which a sufficiently large ensemble of parameter scenarios is used to determine representative values selected from the statistical distribution of the management objectives, that is, minimizing cost while minimizing risk. In particular, the cost of the system is estimated as the expected value of the cost distribution sampled through stochastic simulation, while the risk of not meeting the management constraints is quantified as the expected value of the intensity of constraint violation. The solution to the multi-objective optimization problem is addressed by combining a multi-objective evolutionary algorithm with a stochastic model simulating groundwater flow in confined aquifers. Evolutionary algorithms are particularly appropriate in optimization problems characterized by non-linear and discontinuous objective functions and constraints, although they are also computationally demanding and require intensive analyses to tune input parameters that guarantee optimality to the solutions. In order to drastically reduce the otherwise overwhelming computational cost, a novel stochastic flow reduced model is thus developed, which practically allows for averting the direct inclusion of the full simulation model in the optimization loop. The computational efficiency of the proposed framework is such that it can be applied to problems characterized by large numbers of decision variables. 2012 * 199(<- 50): Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment This research presents the results of the GIS-based statistical models for generation of landslide susceptibility mapping using geographic information system (GIS) and remote-sensing data for Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map which has a total of 92 landslide locations was created based on numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purpose. The validation results using the Relative landslide density index (R-index) and Receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy is 96%) is better in prediction than AHP (accuracy is 91%) and WLC (accuracy is 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purpose and regional planning. 2015 * 200(<- 82): GIS-based multicriteria evaluation with multiscale analysis to characterize urban landslide susceptibility in data-scarce environments Landslides can have a severe negative impact on the socio-economic and environmental state of individuals and their communities. Minimizing these impacts is dependent on the effective identification of risk areas using a susceptibility analysis process. In such a process, output maps are generated to determine various levels of threat to human populations. However, the reliability of the process is controlled by critical factors such as data availability and data quality. In data-scarce environments, susceptibility analysis done at multiple interlocking geographic scales can provide a convergence of evidence to reliably identify risk areas. In this study, multiscale analysis and fuzzy sets are combined with GIS-based multicriteria evaluation (MCE) to determine landslide susceptibility levels for areas of the Metro Vancouver region, British Columbia, Canada. Landslide-conditioning parameters are chosen based on their relevance and effect on a particular scale of analysis. These parameters are derived for three geographic scales using digital elevation models, drainage networks and road networks. An analytical hierarchy process (AHP) analysis provides relative weights of importance to combine variables. The landslide susceptibility analysis is done for regional, municipal and local scales at resolutions of 50 m, 10 m, and 1 m respectively. At each scale, susceptibility models are validated against real inventory data using the seed cell area index (SCAI) method. The strong inverse correlation between the map classes and the SCAI adds to confidence in the results. The developed approach can enable analysts in data-scarce environments to reliably identify susceptible areas thereby improving hazard mitigation, emergency services targeting, and overall community planning. (C) 2014 Elsevier Ltd. All rights reserved. 2015 * 201(<-123): Landslide susceptibility mapping using GIS-based multi-criteria decision analysis, support vector machines, and logistic regression Identification of landslides and production of landslide susceptibility maps are crucial steps that can help planners, local administrations, and decision makers in disaster planning. Accuracy of the landslide susceptibility maps is important for reducing the losses of life and property. Models used for landslide susceptibility mapping require a combination of various factors describing features of the terrain and meteorological conditions. Many algorithms have been developed and applied in the literature to increase the accuracy of landslide susceptibility maps. In recent years, geographic information system-based multi-criteria decision analyses (MCDA) and support vector regression (SVR) have been successfully applied in the production of landslide susceptibility maps. In this study, the MCDA and SVR methods were employed to assess the shallow landslide susceptibility of Trabzon province (NE Turkey) using lithology, slope, land cover, aspect, topographic wetness index, drainage density, slope length, elevation, and distance to road as input data. Performances of the methods were compared with that of widely used logistic regression model using ROC and success rate curves. Results showed that the MCDA and SVR outperformed the conventional logistic regression method in the mapping of shallow landslides. Therefore, multi-criteria decision method and support vector regression were employed to determine potential landslide zones in the study area. 2014 * 202(<-128): GIS-based landslide susceptibility mapping with probabilistic likelihood ratio and spatial multi-criteria evaluation models (North of Tehran, Iran) The aim of this study is to produce landslide susceptibility mapping by probabilistic likelihood ratio (PLR) and spatial multi-criteria evaluation (SMCE) models based on geographic information system (GIS) in the north of Tehran metropolitan, Iran. The landslide locations in the study area were identified by interpretation of aerial photographs, satellite images, and field surveys. In order to generate the necessary factors for the SMCE approach, remote sensing and GIS integrated techniques were applied in the study area. Conditioning factors such as slope degree, slope aspect, altitude, plan curvature, profile curvature, surface area ratio, topographic position index, topographic wetness index, stream power index, slope length, lithology, land use, normalized difference vegetation index, distance from faults, distance from rivers, distance from roads, and drainage density are used for landslide susceptibility mapping. Of 528 landslide locations, 70 % were used in landslide susceptibility mapping, and the remaining 30 % were used for validation of the maps. Using the above conditioning factors, landslide susceptibility was calculated using SMCE and PLR models, and the results were plotted in ILWIS-GIS. Finally, the two landslide susceptibility maps were validated using receiver operating characteristic curves and seed cell area index methods. The validation results showed that area under the curve for SMCE and PLR models is 76.16 and 80.98 %, respectively. The results obtained in this study also showed that the probabilistic likelihood ratio model performed slightly better than the spatial multi-criteria evaluation. These landslide susceptibility maps can be used for preliminary land use planning and hazard mitigation purpose. 2014 * 203(<-130): GIS-based groundwater spring potential assessment and mapping in the Birjand Township, southern Khorasan Province, Iran Three statistical models-frequency ratio (FR), weights-of-evidence (WofE) and logistic regression (LR)-produced groundwater-spring potential maps for the Birjand Township, southern Khorasan Province, Iran. In total, 304 springs were identified in a field survey and mapped in a geographic information system (GIS), out of which 212 spring locations were randomly selected to be modeled and the remaining 92 were used for the model evaluation. The effective factors-slope angle, slope aspect, elevation, topographic wetness index (TWI), stream power index (SPI), slope length (LS), plan curvature, lithology, land use, and distance to river, road, fault-were derived from the spatial database. Using these effective factors, groundwater spring potential was calculated using the three models, and the results were plotted in ArcGIS. The receiver operating characteristic (ROC) curves were drawn for spring potential maps and the area under the curve (AUC) was computed. The final results indicated that the FR model (AUC = 79.38 %) performed better than the WofE (AUC = 75.69 %) and LR (AUC = 63.71 %) models. Sensitivity and factor analyses concluded that the bivariate statistical index model (i.e. FR) can be used as a simple tool in the assessment of groundwater spring potential when a sufficient number of data are obtained. 2014 * 204(<-299): A comparison of landslide susceptibility maps produced by logistic regression, multi-criteria decision, and likelihood ratio methods: a case study at Izmir, Turkey The main purpose of this study is to compare the use of logistic regression, multi-criteria decision analysis, and a likelihood ratio model to map landslide susceptibility in and around the city of Izmir in western Turkey. Parameters, such as lithology, slope gradient, slope aspect, faults, drainage lines, and roads, were considered. Landslide susceptibility maps were produced using each of the three methods and then compared and validated. Before the modeling and validation, the observed landslides were separated into two groups. The first group was for training, and the other group was for validation steps. The accuracy of models was measured by fitting them to a validation set of observed landslides. For validation process, the area under curvature (AUC) approach was applied. According to the AUC values of 0.810, 0.764, and 0.710 for logistic regression, likelihood ratio, and multi-criteria decision analysis, respectively, logistic regression was determined to be the most accurate method among the other used landslide susceptibility mapping methods. Based on these results, logistic regression and likelihood ratio models can be used to mitigate hazards related to landslides and to aid in land-use planning. 2012 * 205(<-407): Landslide susceptibility mapping for Ayvalik (Western Turkey) and its vicinity by multicriteria decision analysis This paper presents the results of geographical information system (GIS)-based landslide susceptibility mapping in AyvalA +/- k, western Turkey using multi-criteria decision analysis. The methodology followed in the study includes data production, standardization, and analysis stages. A landslide inventory of the study area was compiled from aerial photographs, satellite image interpretations, and detailed field surveys. In total, 45 landslides were recorded and mapped. The areal extent of the landslides is 1.75 km(2). The identified landslides are mostly shallow-seated, and generally exhibit progressive character. They are mainly classified as rotational, planar, and toppling failures. In all, 51, 45, and 4% of the landslides mapped are rotational, planar, and toppling types, respectively. Morphological, geological, and land-use data were produced using existing topographical and relevant thematic maps in a GIS framework. The considered landslide-conditioning parameters were slope gradient, slope aspect, lithology, weathering state of the rocks, stream power index, topographical wetness index, distance from drainage, lineament density, and land-cover and vegetation density. These landslide parameters were standardized in a common data scale by fuzzy membership functions. Then, the degree to which each parameter contributed to landslides was determined using the analytical hierarchy process method, and the weight values of these parameters were calculated. The weight values obtained were assigned to the corresponding parameters, and then the weighted parameters were combined to produce a landslide susceptibility map. The results obtained from the susceptibility map were evaluated with the landslide location data to assess the reliability of the map. Based on the findings obtained in this study, it was found that 5.19% of the total area was prone to landsliding due to the existence of highly and completely weathered lithologic units and due to the adverse effects of topography and improper land use. 2010 * 206(<-510): Landslide susceptibility mapping for a landslide-prone area (Findikli, NE of Turkey) by likelihood-frequency ratio and weighted linear combination models Landslides are very common natural problems in the Black Sea Region of Turkey due to the steep topography, improper use of land cover and adverse climatic conditions for landslides. In the western part of region, many studies have been carried out especially in the last decade for landslide susceptibility mapping using different evaluation methods such as deterministic approach, landslide distribution, qualitative, statistical and distribution-free analyses. The purpose of this study is to produce landslide susceptibility maps of a landslide-prone area (Findikli district, Rize) located at the eastern part of the Black Sea Region of Turkey by likelihood frequency ratio (LRM) model and weighted linear combination (WLC) model and to compare the results obtained. For this purpose, landslide inventory map of the area were prepared for the years of 1983 and 1995 by detailed field surveys and aerial-photography studies. Slope angle, slope aspect, lithology, distance from drainage lines, distance from roads and the land-cover of the study area are considered as the landslide-conditioning parameters. The differences between the susceptibility maps derived by the LRM and the WLC models are relatively minor when broad-based classifications are taken into account. However, the WLC map showed more details but the other map produced by LRM model produced weak results. The reason for this result is considered to be the fact that the majority of pixels in the LRM map have high values than the WLC-derived susceptibility map. In order to validate the two susceptibility maps, both of them were compared with the landslide inventory map. Although the landslides do not exist in the very high susceptibility class of the both maps, 79% of the landslides fall into the high and very high susceptibility zones of the WLC map while this is 49% for the LRM map. This shows that the WLC model exhibited higher performance than the LRM model. 2008 * 208(<- 69): Improvement in estimation of soil water retention using fractal parameters and multiobjective group method of data handling Soil water retention characteristic is required for modeling of water and substance movement in unsaturated soils and need to be estimated using indirect methods. Point pedotransfer functions (PTFs) for prediction of soil water content at matric suctions of 1, 5, 25, 50, and 1500kPa were developed and validated using a data-set of 148 soil samples from Hamedan and Guilan provinces, Iran, by multiobjective group method of data handling (mGMDH). In addition to textural and structural properties, fractal parameters of the power-law fractal models for both particles and aggregates distributions were also included as predictors. Their inclusion significantly improved the PTFs' accuracy and reliability. The aggregate size distribution fractal parameters ranked next to the particle size distribution (PSD) in terms of prediction accuracy. The mGMDH-derived PTFs were significantly more reliable than those by artificial neural networks but their accuracies were practically the same. Similarity between the fractal behavior of particle and void size distributions may contribute to the improvement of the derived PTFs using PSD fractal parameters. It means that both distributions of the pore and particle size represent the fractal behavior and can be described by fractal models. 2015 * 209(<-251): Artificial Bee Colony approach to information granulation-based fuzzy radial basis function neural networks for image fusion This paper mainly proposed a novel method of Artificial Bee Colony (ABC) optimized fuzzy radial basis function neural networks with information granulation (IG-FRBFNNs) for solving the image fusion problem. Image fusion is the process of combining relevant information from two or more images into a single image. The fuzzy RBF neural networks exploit the Fuzzy C-Means (FCM) clustering to form the premise part of the rules. As the consequent part of the model (being the local model representing input output relation in the corresponding sub-space), four types of polynomials are considered, with the ordinary least square (OLS) learning being exploited to estimate the values of the coefficients of the polynomial. Since the performance of the IG-FRBFNN model is directly affected by the parameters such as the fuzzification coefficient used in the FCM, the position of their centers and the values of the widths, ABC algorithm is exploited to carry out the structural and parametric optimization of the model respectively while the optimization is of multi-objective character as it is aimed at the simultaneous minimization of complexity and maximization of accuracy. Subsequently, the proposed approach can dynamically obtain optimal image fusion weights based on regional features, so as to optimize performance of image fusion. Series of experimental results are presented to verify the feasibility and effectiveness of the proposed approach. (C) 2012 Elsevier GmbH. All rights reserved. 2013 * 210(<-260): A Neural Network Based Intelligent Predictive Sensor for Cloudiness, Solar Radiation and Air Temperature Accurate measurements of global solar radiation and atmospheric temperature, as well as the availability of the predictions of their evolution over time, are important for different areas of applications, such as agriculture, renewable energy and energy management, or thermal comfort in buildings. For this reason, an intelligent, light-weight and portable sensor was developed, using artificial neural network models as the time-series predictor mechanisms. These have been identified with the aid of a procedure based on the multi-objective genetic algorithm. As cloudiness is the most significant factor affecting the solar radiation reaching a particular location on the Earth surface, it has great impact on the performance of predictive solar radiation models for that location. This work also represents one step towards the improvement of such models by using ground-to-sky hemispherical colour digital images as a means to estimate cloudiness by the fraction of visible sky corresponding to clouds and to clear sky. The implementation of predictive models in the prototype has been validated and the system is able to function reliably, providing measurements and four-hour forecasts of cloudiness, solar radiation and air temperature. 2012 * 214(<-230): Evaluation of several rainfall products used for hydrological applications over West Africa using two high-resolution gauge networks The evaluation of rainfall products over the West African region will be an important component of the Megha-Tropiques (MT) Ground Validation (GV) plan. In this paper, two dense research gauge networks from Benin and Niger, integrated in the MT GV plan, are presented and are used to evaluate several currently available global or regional satellite-based rainfall products. Eight productsthe Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), Climate Prediction Center Morphing method (CMORPH), Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 real-time and gauge-adjusted version, Global Satellite Mapping of Precipitation (GSMaP), Climate Prediction Center (CPC) African Rainfall Estimate (RFE), Estimation des Precipitation par SATellite (EPSAT), and Global Precipitation Climatology Project One Degree Daily estimate (GPCP-1DD)are compared to the ground reference. The comparisons are carried out at daily, 1 degrees resolution, over the rainy season (June-September), between the years 2003 and 2010. The work focuses on the ability of the various products to reproduce salient features of the rainfall regime that impact the hydrological response. The products are analysed on a multi-criteria basis, focusing in particular on the way they distribute the rainfall within the season and by rain rate class. Standard statistical diagnoses such as the correlation coefficient, bias, root mean square error and Nash skill score are computed and the inter-annual variability is documented. Two simplified hydrological models are used to illustrate how the nature and structure of the product error impact the model output in terms of runoff (calculated using the Soil Conservation Service method, SCS, in Niger) or outflow (calculated with the modele du Genie Rural a 4 parametres Journalier', GR4J model, in Benin). Copyright (c) 2013 Royal Meteorological Society 2013 * 215(<-503): Regionalisation of hydrological model parameters under parameter uncertainty: A case study involving TOPMODEL and basins across the globe In this paper, we present a method to account for modeling uncertainties while regionalising model parameters. Linking model. parameters to physical catchment attributes is a popular approach that enables the application of a conceptual model to an ungauged site. The functional relationship can be derived either from the calibrated model. parameters (direct calibration method) or by calibrating the functional. function (regional. calibration method). Both of these approaches are explored through a case study involving TOPMODEL and a number of small- to medium-sized humid basins located in various geographic and climatic regions around the globe. The predictive performance of the functional relationship derived using the direct calibration method (e.g., multiple regression, artificial neural. network and partial least square regression) varied among the different schemes. However, the average of the model. parameters estimated from regionatisation schemes based on direct calibration is found to be a better surrogate. Even with the use of a parsimonious hydrological model and with posing model calibration as a multi-objective problem, the model. parameter uncertainty and its effect on model. prediction were observed to be high and varied among the basins. Therefore, to avoid the effect of model parameter uncertainty on regionalization results, a regional calibration method that skips direct calibration of the hydrological model was implemented. This method was improved in order to take into account multiple objective criteria white calibrating regional parameters. The predictive performance of the improved regional calibration method was found to be superior to the direct calibration method, indicating that the identifiability of model parameters has an apparent effect on deriving predictive models for regionalisation. However, the regional calibration method was unable to uniquely identify the regional relationship, and the modeling uncertainties quantified using Pareto optimal regional relationships were considerable. Regionalisation schemes that are based on direct calibration do not explicitly account for the modeling uncertainties. Therefore, to account for these uncertainties in model parameters and regionalisation schemes, methods based on regionalisation of vectors of model parameters (i.e. regionalizing the vectors of equally likely values of model parameters) and posterior probability distribution of model parameters (i.e. estimating the posterior probability distribution of model parameters at ungauged sites by linking the entries of model parameters' covariance matrix and the posterior mean of model parameter to the catchment attributes) are introduced. The uncertainties in model prediction as quantified from both methods closely followed the prediction uncertainties quantified from calibrated posterior probability distributions of model parameters. Moreover, though the prediction uncertainties associated with the regional calibration method as quantified from the Pareto optimal regional relationship were comparatively higher than those obtained from the direct calibration schemes, they were in close agreement with the prediction uncertainties quantified from the calibrated posterior probability distribution. The ensemble of simulated flows realized from the model parameters sampled from regionalized posterior probability distributions for five ungauged basins are also presented as validation of the proposed methodology. (c) 2008 Elsevier B.V. All rights reserved. 2008 * 216(<- 19): Neural network river forecasting through baseflow separation and binary-coded swarm optimization The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are preferred ways to implement such approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs fit separately the baseflow and excess flow components as produced by a digital filter, and reconstruct the total flow by adding these two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested only on a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MM against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed for the identification of filter parameters and model structure, while Extreme Learning Machines, instead of ANN, are used to drastically reduce the large computational times required to perform the experiments. The results show that there is no evidence that MM outperform global GM for predicting the total flow. In addition, the baseflow produced by the MM largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the values of the filter parameters maximizing overall accuracy do not reflect the geological characteristics of the river basins. The results indeed show that setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 219(<-483): Improved irrigation water demand forecasting using a soft-computing hybrid model Recently, Computational Neural Networks (CNNs) and fuzzy inference systems have been successfully applied to time series forecasting. In this study the performance of a hybrid methodology combining feed forward CNN, fuzzy logic and genetic algorithm to forecast one-day ahead daily water demands at irrigation districts considering that only flows in previous days are available for the calibration of the models were analysed. Individual forecasting models were developed using historical time series data from the Fuente Palmera irrigation district located in Andalucia, southern Spain. These models included univariate autoregressive CNNs trained with the Levenberg-Marquardt algorithm (LM). The individual models forecasting were then corrected via a fuzzy logic approach whose parameters were adjusted using a genetic algorithm in order to improve the forecasting accuracy. For the purpose of comparison, this hybrid methodology was also applied with univariate autoregressive CNN models trained with the Extended-Delta-Bar-Delta algorithm (EDBD) and calibrated in a previous study in the same irrigation district. A multicriteria evaluation with several statistics and absolute error measures showed that the hybrid model performed significantly better than univariate and multivariate autoregressive CNN's. (C) 2008 IAgrE. Published by Elsevier Ltd. All rights reserved. 2009 * 223(<-225): Comparison of Artificial Neural Network Methods with L-moments for Estimating Flood Flow at Ungauged Sites: the Case of East Mediterranean River Basin, Turkey A regional flood frequency analysis based on the index flood method is applied using probability distributions commonly utilized for this purpose. The distribution parameters are calculated by the method of L-moments with the data of the annual flood peaks series recorded at gauging sections of 13 unregulated natural streams in the East Mediterranean River Basin in Turkey. The artificial neural networks (ANNs) models of (1) the multi-layer perceptrons (MLP) neural networks, (2) radial basis function based neural networks (RBNN), and (3) generalized regression neural networks (GRNN) are developed as alternatives to the L-moments method. Multiple-linear and multiple-nonlinear regression models (MLR and MNLR) are also used in the study. The L-moments analysis on these 13 annual flood peaks series indicates that the East Mediterranean River Basin is hydrologically homogeneous as a whole. Among the tried distributions which are the Generalized Logistic, Generalized Extreme Vaules, Generalized Normal, Pearson Type III, Wakeby, and Generalized Pareto, the Generalized Logistic and Generalized Extreme Values distributions pass the Z statistic goodness-of-fit test of the L-moments method for the East Mediterranean River Basin, the former performing yet better than the latter. Hence, as the outcome of the L-moments method applied by the Generalized Logistic distribution, two equations are developed to estimate flood peaks of any return periods for any un-gauged site in the study region. The ANNs, MLR and MNLR models are trained and tested using the data of these 13 gauged sites. The results show that the predicting performance of the MLP model is superior to the others. The application of the MLP model is performed by a special Matlab code, which yields logarithm of the flood peak, Ln(Q(T)), versus a desired return period, T. 2013 * 224(<-240): Prediction of the baseline toxicity of non-polar narcotic chemical mixtures by QSAR approach Environmental contaminants are frequently encountered as mixtures, and research on mixture toxicity is a hot topic until now. In the present study, the mixture toxicity of non-polar narcotic chemical was modeled by linear and nonlinear statistical methods, that is to say, by forward stepwise multilinear regression (MLR) and radial basis function neural networks (RBFNNs) from molecular descriptors that are calculated and be defined as composite descriptors according to the fractional concentrations of the mixture components. The statistical parameters provided by the MLR model were R-2 = 0.9512, RMS = 0.3792, F = 1402.214 and LOOq(2) = 0.9462 for the training set, and R-2 = 0.9453, RMS = 03458, F = 276.671 and q(ext)(2) = 0.9450 for the external test set. The RBFNN model gave the following statistical results, namely: R-2 = 0.9779, RMS = 0.2498, F = 3188.202 and LOOq(2) = 0.9746 for the training set, and R-2 = 0.9763, RMS = 0.2358, F = 660.631 and q(ext)(2) = 0.9745, for the external test set. Overall, these results suggest that the QSAR MLR-based model is a simple, reliable, credible and fast tool for the prediction mixture toxicity of non-polar narcotic chemicals. The RBFNN model gave even improved results. In addition, epsilon(LUMO+1) (the energy of the second lowest unoccupied molecular orbital) and PPSA (total charge weighted partial positively surface area) were found to have high correlation with the mixture toxicity. (c) 2012 Elsevier Ltd. All rights reserved. 2013 * 225(<-249): Landslide susceptibility estimation by random forests technique: sensitivity and scaling issues Despite the large number of recent advances and developments in landslide susceptibility mapping (LSM) there is still a lack of studies focusing on specific aspects of LSM model sensitivity. For example, the influence of factors such as the survey scale of the landslide conditioning variables (LCVs), the resolution of the mapping unit (MUR) and the optimal number and ranking of LCVs have never been investigated analytically, especially on large data sets. In this paper we attempt this experimentation concentrating on the impact of model tuning choice on the final result, rather than on the comparison of methodologies. To this end, we adopt a simple implementation of the random forest (RF), a machine learning technique, to produce an ensemble of landslide susceptibility maps for a set of different model settings, input data types and scales. Random forest is a combination of Bayesian trees that relates a set of predictors to the actual landslide occurrence. Being it a nonparametric model, it is possible to incorporate a range of numerical or categorical data layers and there is no need to select unimodal training data as for example in linear discriminant analysis. Many widely acknowledged landslide predisposing factors are taken into account as mainly related to the lithology, the land use, the geomorphology, the structural and anthropogenic constraints. In addition, for each factor we also include in the predictors set a measure of the standard deviation (for numerical variables) or the variety (for categorical ones) over the map unit. As in other systems, the use of RF enables one to estimate the relative importance of the single input parameters and to select the optimal configuration of the classification model. The model is initially applied using the complete set of input variables, then an iterative process is implemented and progressively smaller subsets of the parameter space are considered. The impact of scale and accuracy of input variables, as well as the effect of the random component of the RF model on the susceptibility results, are also examined. The model is tested in the Arno River basin (central Italy). We find that the dimension of parameter space, the mapping unit (scale) and the training process strongly influence the classification accuracy and the prediction process. This, in turn, implies that a careful sensitivity analysis making use of traditional and new tools should always be performed before producing final susceptibility maps at all levels and scales. 2013 * 226(<-423): Neural Networks for the Prediction of Species-Specific Plot Volumes Using Airborne Laser Scanning and Aerial Photographs Parametric and nonparametric modeling methods have been widely used for the estimation of forest attributes from airborne laser-scanning data and aerial photographs. However, the methods adopted suffered from complex remote-sensed data structures involving high dimensions, nonlinear relationships, different statistical distributions, and outliers. In this context, artificial neural networks (ANNs) are of interest as they have many clear benefits over conventional modeling methods and could then enhance the accuracy of current forest-inventory methods. This paper examines the ability of common ANN modeling techniques for the prediction of species-specific forest attributes, as exemplified here with the prediction stem volumes (cubic meters per hectare) at the field plot and forest stand levels. Three modeling methods were evaluated, namely, the multilayer perceptron (MLP), support vector regression (SVR), and self-organizing map, and intercompared with the corresponding nonparametric k most similar neighbor method using cross-validated statistical performance indexes. To decrease the number of model-input variables, a multiobjective input-selection method based on genetic algorithm is adopted. The numerical results obtained in the study suggest that ANNs are appropriate and accurate methods for the assessment of species-specific forest attributes, which can be used as alternatives to multivariate linear regression and nonparametric nearest neighbor models. Among the ANN models, SVR and MLP provide the best choices for prediction purposes as they yielded high prediction accuracies for species-specific tree volumes throughout. 2010 * 228(<-523): Incorporating anthropogenic variables into a species distribution model to map gypsy moth risk This paper presents a novel methodology for multi-scale and multi-type spatial data integration in support of insect pest risk/vulnerability assessment in the contiguous United States. Probability of gypsy moth (Lymantria dispar L.) establishment is used as a case study. A neural network facilitates the integration of variables representing dynamic anthropogenic interaction and ecological characteristics. Neural network model (back-propagation network [BPN]) results are compared to logistic regression and multi-criteria evaluation via weighted linear combination, using the receiver operating characteristic area under the curve (AUC) and a simple threshold assessment. The BPN provided the most accurate infestation-forecast predictions producing an AUC of 0.93, followed by multi-criteria evaluation (AUC = 0.92) and logistic regression (AUC = 0.86) when independently validating using post model infestation data. Results suggest that BPN can provide valuable insight into factors contributing to introduction for invasive species whose propagation and establishment requirements are not fully understood. The integration of anthropogenic and ecological variables allowed production of an accurate risk model and provided insight into the impact of human activities. (C) 2007 Elsevier B.V. All rights reserved. 2008 * 229(<-525): Study of the potential of alternative crops by integration of multisource data using a neuro-fuzzy technique This work proposes a neuro-fuzzy method for suggesting alternative crop production over a region using integrated data obtained from land-survey maps as well as satellite imagery. The methodology proposed here uses an artificial neural network (multilayer perceptron, MLP) to predict alternative crop production. For each pixel, the MLP takes vector input comprising elevation, rainfall and goodness values of different existing crops. The first two components of the aforementioned input, that is, elevation and rainfall, are determined from contour information of land-survey maps. The other components, such as goodness values of different existing crops, are based on the productivity estimates of soil determined by fuzzyfication and expert opinion (on soil) along with production quality by the Normalized Difference Vegetation Index (NDVI) obtained from satellite imagery. The methodology attempts to ensure that the suggested crop will also be a high productivity crop for that region. 2008 * 230(<-595): Evaluation of an integrated modelling system containing a multi-layer perceptron model and the numerical weather prediction model HIRLAM for the forecasting of urban airborne pollutant concentrations In this paper, a multi-layer perceptron (MLP) model and the Finnish variant of the numerical weather prediction model HIRLAM (High Resolution Limited Area Model) were integrated and evaluated for the forecasting in time of urban pollutant concentrations. The forecasts of the combination of the MLP and HIRLAM models are compared with the corresponding forecasts of the MLP models that utilise meteorologically pre-processed input data. A. novel input selection method based on the use of a multi-objective genetic algorithm (MOGA) is applied in conjunction with the sensitivity analysis to reduce the excessively large number of potential meteorological input variables; its use improves the performance of the MLP model. The computed air quality forecasts contain the sequential hourly time series of the concentrations of nitrogen dioxide (NO2) and fine particulate matter (PM2.5) from May 2000 to April 2003; the corresponding concentrations have also been measured at two urban air quality stations in Helsinki. The results obtained with the MLP models that use HIRLAM forecasts show fairly good overall agreement for both pollutants. The model performance is substantially better, when the HIRLAM forecasts are used, compared with those obtained both using either HIRLAM analysis data or meteorological pre-processor, for both pollutants. The performance of the currently widely used statistical forecasting methods (such as those based on neural networks) could therefore be significantly improved by using the forecasts of NWP models, instead of the conventionally utilised directly measured or meteorological pre-processed input data. However, the performance of all operational models considered is relatively worse in the course of air pollution episodes. (c) 2005 Elsevier Ltd. All rights reserved. 2005 * 231(<- 18): Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a MBFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 232(<- 34): Improving event-based rainfall-runoff simulation using an ensemble artificial neural network based hybrid data-driven model An ensemble artificial neural network (ENN) based hybrid function approximator (named PEK), integrating the partial mutual information (PMI) based separate input variable selection (IVS) scheme, ENN-based output estimation, and K-nearest neighbor regression based output error estimation, has been proposed to improve event-based rainfall-runoff (RR) simulation. A hybrid data-driven RR model, named non-updating PEK (NU-PEK), is also developed on the basis of the PEK approximator. The rainfall and simulated antecedent discharges input variables for the NU-PEK model are selected separately by using a PMI-based IVS algorithm. A newly proposed candidate rainfall input set, sliding window cumulative rainfall is also proposed. These two methods are integrated to make a good compromise between the adequacy and parsimony of the input information and make contribution to the understandings of the hydrologic responses to the regional precipitation. The number of component networks and the topology and parameter settings of each component network are optimized simultaneously by using the multi-objective NSGA-II optimization algorithm and the early stopping Levenberg-Marquardt algorithm. The optimal combination weights of the ENN are obtained according to the Akaike information criterions of component networks. By combining all these methods, the simulation accuracy and generalization property of the PEK approximator are much better than traditional artificial neural network. The NU-PEK model is constructed by combining the PEK approximator with a newly proposed non-updating modeling approach to improve event-based RR simulation. The NU-PEK model was applied to three Chinese catchments for RR simulation and compared with two popular RR models, including the conceptual Xinanjiang model and the conceptual-data-driven IHACRES model. The results of simulation and sensitivity analysis indicate that the developed model generally outperforms the other two models. The NU-PEK model is capable of producing high accuracy non-updating RR simulation without the use of the real-time information, e.g. the observed discharges at previous time steps. 2015 * 233(<-103): Improved Neural Network Model and Its Application in Hydrological Simulation When applying a back-propagation neural network (BPNN) model in hydrological simulation, researchers generally face three problems. The first one is that real-time correction mode must be adopted when forecasting basin outlet flow, i.e., observed antecedent outlet flows must be utilized as part of the inputs of the BPNN model. Under this mode, outlet flow can only be forecasted one time step ahead, i.e., continuous simulation cannot be implemented. The second one is that topology, weights, and biases of BPNN cannot be optimized simultaneously by traditional training methods. Topology designed by the trial-and-error method and weights and biases trained by back-propagation (BP) algorithm are not always global optimal and the optimizations are experience-based. The third one is that simulation accuracy for the validation period is usually much lower than that for the calibration period, i.e., generalization property of BPNN is not good. To solve these problems, a novel coupled black-box model named BK (BP-KNN) and a new methodology of calibration are proposed in this paper. The BK model was developed by coupling BPNN model with K-nearest neighbor (KNN) algorithm. Unlike the traditional BPNN model previously reported, the BK model implemented continuous simulation under nonreal-time correction mode. Observed antecedent outlet flows were substituted by simulated values. The simulated values were calculated by the BPNN model first and then corrected based on the KNN algorithm, historical simulation error, and other relevant factors. According to the calculation process, parameters of the BK model were divided into three hierarchies and each hierarchy was calibrated respectively by the NSGA-II multiobjective optimization algorithm. This new methodology of calibration ensured higher accuracy and efficiency, and enhanced the generalization property of the BPNN. The accuracy of flow concentration module of Xinanjiang model is not always high enough, in order to combine advantages of conceptual and black-box models, XBK and XSBK models were proposed. The XBK model was constituted by coupling runoff generation module of Xinanjiang model with BK flow concentration model and the XSBK model was constituted by coupling runoff generation and separation module of Xinanjiang model with BK flow concentration model. BK, XBK, XSBK, and Xinanjiang models were applied in Chengcun, Dongwan, and Dage watersheds. The simulation results indicated that improved models obtained higher accuracies than Xinanjiang model and can overcame limitations of traditional BPNN model. (C) 2014 American Society of Civil Engineers. 2014 * 234(<-349): Evaluation of modelling techniques for forest site productivity prediction in contrasting ecoregions using stochastic multicriteria acceptability analysis (SMAA) Accurate estimation of site productivity is crucial for sustainable forest resource management. In recent years, a variety of modelling approaches have been developed and applied to predict site index from a wide range of environmental variables, with varying success. The selection, application and comparison of suitable modelling techniques remains therefore a meticulous task, subject to ongoing research and debate. In this study, the performance of five modelling techniques was compared for the prediction of forest site index in two contrasting ecoregions: the temperate lowland of Flanders, Belgium, and the Mediterranean mountains in SW Turkey. The modelling techniques include statistical (multiple linear regression - MLR, classification and regression trees - CART, generalized additive models - GAM), as well as machine-learning (artificial neural networks - ANN) and hybrid techniques (boosted regression trees - BRT). Although the selected predictor variables differed largely, with mainly topographic predictor variables in the mountain area versus soil and humus variables in the lowland area, the techniques performed comparatively similar in both ecoregions. Stochastic multicriteria acceptability analysis (SMAA) was found a well-suited multicriteria evaluation method to evaluate the performance of the modelling techniques. It has been applied on the individual species models of Flanders, as well as a species-independent evaluation, combining all developed models from the two contrasting ecoregions. We came to the conclusion that non-parametric models are better suited for predicting site index than traditional MLR. GAM and BRT are the preferred alternatives for a wide range of weight preferences. CART is preferred when very high weight is given to user-friendliness, whereas ANN is recommended when most weight is given to pure predictive performance. (C) 2011 Elsevier Ltd. All rights reserved. 2011 * 235(<-416): Comparison and ranking of different modelling techniques for prediction of site index in Mediterranean mountain forests Forestry science has a long tradition of studying the relationship between stand productivity and abiotic and biotic site characteristics, such as climate, topography, soil and vegetation. Many of the early site quality modelling studies related site index to environmental variables using basic statistical methods such as linear regression. Because most ecological variables show a typical non-linear course and a non-constant variance distribution, a large fraction of the variation remained unexplained by these linear models. More recently, the development of more advanced non-parametric and machine learning methods provided opportunities to overcome these limitations. Nevertheless, these methods also have drawbacks. Due to their increasing complexity they are not only more difficult to implement and interpret, but also more vulnerable to overfitting. Especially in a context of regionalisation, this may prove to be problematic. Although many non-parametric and machine learning methods are increasingly used in applications related to forest site quality assessment, their predictive performance has only been assessed for a limited number of methods and ecosystems. In this study, five different modelling techniques are compared and evaluated, i.e. multiple linear regression (MLR), classification and regression trees (CART), boosted regression trees (BRT), generalized additive models (GAM), and artificial neural networks (ANN). Each method is used to model site index of homogeneous stands of three important tree species of the Taurus Mountains (Turkey): Pinus brutia, Pinus nigra and Cedrus libani. Site index is related to soil, vegetation and topographical variables, which are available for 167 sample plots covering all important environmental gradients in the research area. The five techniques are compared in a multi-criteria decision analysis in which different model performance measures, ecological interpretability and user-friendliness are considered as criteria. When combining these criteria, in most cases GAM is found to outperform all other techniques for modelling site index for the three species. BRT is a good alternative in case the ecological interpretability of the technique is of higher importance. When user-friendliness is more important MLR and CART are the preferred alternatives. Despite its good predictive performance, ANN is penalized for its complex, non-transparent models and big training effort. (C) 2010 Elsevier B.V. All rights reserved. 2010 * 238(<-568): Authentication of vegetable oils on the basis of their physico-chemical properties with the aid of chemometrics In food production, reliable analytical methods for confirmation of purity or degree of spoilage are required by growers, food quality assessors, processors, and consumers. Seven parameters of physico-chemical properties, such as acid number, colority, density, refractive index, moisture and volatility, saponification value and peroxide value, were measured for quality and adulterated soybean, as well as quality and rancid rapeseed oils. Chemometrics methods were then applied for qualitative and quantitative discrimination and prediction of the oils by methods such exploratory principal component analysis (PCA), partial least squares (PLS), radial basis function-artificial neural networks (RBF-ANN), and multi-criteria decision making methods (MCDM), PROMETHEE and GAIA. In general, the soybean and rapeseed oils were discriminated by PCA, and the two spoilt oils behaved differently with the rancid rapeseed samples exhibiting more object scatter on the PC-scores plot, than the adulterated soybean oil. For the PLS and RBF-ANN prediction methods, suitable training models were devised, which were able to predict satisfactorily the category of the four different oil samples in the verification set. Rank ordering with the use of MCDM models indicated that the oil types can be discriminated on the PROMETHEE 11 scale. For the first time, it was demonstrated how ranking of oil objects with the use of PROMETHEE and GAIA could be utilized as a versatile indicator of quality performance of products on the basis of a standard selected by the stakeholder. In principle, this approach provides a very flexible method for assessment of product quality directly from the measured data. (c) 2006 Elsevier B.V. All rights reserved. 2006 * 250(<- 14): An Interval-Valued Neural Network Approach for Uncertainty Quantification in Short-Term Wind Speed Prediction We consider the task of performing prediction with neural networks (NNs) on the basis of uncertain input data expressed in the form of intervals. We aim at quantifying the uncertainty in the prediction arising from both the input data and the prediction model. A multilayer perceptron NN is trained to map interval-valued input data onto interval outputs, representing the prediction intervals (PIs) of the real target values. The NN training is performed by nondominated sorting genetic algorithm-II, so that the PIs are optimized both in terms of accuracy (coverage probability) and dimension (width). Demonstration of the proposed method is given in two case studies: 1) a synthetic case study, in which the data have been generated with a 5-min time frequency from an autoregressive moving average model with either Gaussian or Chi-squared innovation distribution and 2) a real case study, in which experimental data consist of wind speed measurements with a time step of 1 h. Comparisons are given with a crisp (single-valued) approach. The results show that the crisp approach is less reliable than the interval-valued input approach in terms of capturing the variability in input. 2015 * 253(<-502): Multiobjective training of artificial neural networks for rainfall-runoff modeling This paper presents results on the application of various optimization algorithms for the training of artificial neural network rainfall-runoff models. Multilayered feed-forward networks for forecasting discharge from two mesoscale catchments in different climatic regions have been developed for this purpose. The performances of the multiobjective algorithms Multi Objective Shuffled Complex Evolution Metropolis-University of Arizona (MOSCEM-UA) and Nondominated Sorting Genetic Algorithm II (NSGA-II) have been compared to the single-objective Levenberg-Marquardt and Genetic Algorithm for training of these models. Performance has been evaluated by means of a number of commonly applied objective functions and also by investigating the internal weights of the networks. Additionally, the effectiveness of a new objective function called mean squared derivative error, which penalizes models for timing errors and noisy signals, has been explored. The results show that the multiobjective algorithms give competitive results compared to the single-objective ones. Performance measures and posterior weight distributions of the various algorithms suggest that multiobjective algorithms are more consistent in finding good optima than are single-objective algorithms. However, results also show that it is difficult to conclude if any of the algorithms is superior in terms of accuracy, consistency, and reliability. Besides the training algorithm, network performance is also shown to be sensitive to the choice of objective function(s), and including more than one objective function proves to be helpful in constraining the neural network training. 2008 * 258(<-214): Modeling and optimization of biodiesel engine performance using advanced machine learning methods This study aims to determine optimal biodiesel ratio that can achieve the goals of fewer emissions, reasonable fuel economy and wide engine operating range. Different advanced machine learning techniques, namely ELM (extreme learning machine), LS-SVM (least-squares support vector machine) and RBFNN (radial-basis function neural network), are used to create engine models based on experimental data. Logarithmic transformation of dependent variables is used to alleviate the problems of data scarcity and data exponentiality simultaneously. Based on the engine models, two optimization methods, namely SA (simulated annealing) and PSO (particle swarm optimization), are employed and a flexible objective function is designed to determine the optimal biodiesel ratio subject to various user-defined constraints. A case study is presented to verify the modeling and optimization framework. Moreover, two comparisons are conducted, where one is among the modeling techniques and the other is among the optimization techniques. Experimental results show that, in terms of the model accuracy and training time, ELM with the logarithmic transformation is better than LS-SVM and RBFNN with/without the logarithmic transformation. The results also show that PSO outperforms SA in terms of fitness and standard deviation, with an acceptable computational time. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 297(<-136): Improving the Accuracy of Urban Land Cover Classification Using Radarsat-2 PolSAR Data Land cover classification is one of the most important applications of polarimetric SAR images, especially in urban areas. There are numerous features that can be extracted from these images, hence feature selection plays an important role in PolSAR image classification. In this study, three main steps are used to address this task: 1) feature extraction in the form of three categories, namely original data features, decomposition features, and SAR discriminators; 2) feature selection in the framework of the single and multi-objective optimization; and 3) image classification using the best subset of features. In single objective methods, we employ genetic algorithms (GAs) and support vector machines (SVMs) or multi-layer perceptron (MLP) neural network in order to maximize classification accuracy. Then a new method is proposed to perform an efficient land cover classification of the San Francisco Bay urban area based on the multi-objective optimization approach. The objectives are to minimize the error of classification and the number of selected PolSAR parameters. The experimental results on Radarsat-2 fine-quad data show that the proposed method outperforms the single objective approaches tested against it, while saving computational complexity. Finally, we show that the our method has a better performance than the SVM with full set of features and the Wishart classifier which is based on the covariance matrix. 2014 * 317(<-250): Evaluation of environmental impacts using backpropagation neural network Purpose - The purpose of this present study is to find a scientific method for the evaluation of environmental impacts according to the requirement 4.3.1. Design/methodology/approach - To realize the objectives, the authors worked with a representative sample from certified ISO 14001 organizations. The data aim to identify and evaluate (according to the organization's methodology) significant environmental impacts. In this study, the authors created two models for the evaluation of environmental impacts based on an artificial neural network applied in the pilot organization and compared the results obtained from these models with those obtained by applying an analytic hierarchy process (AHP) method. AHP is part of an multi-criteria decision making method and provides good multi-criteria support for decision making for problems that can be structured hierarchically. Findings - This paper presents a new approach that uses a backpropagation neural network to evaluate environmental impacts regardless of the organization type. Originality/value - This paper presents a unique approach for the reliable and objective evaluation of environmental impacts. 2013 * 335(<-154): Optimal Management of a Freshwater Lens in a Small Island Using Surrogate Models and Evolutionary Algorithms This paper examines a linked simulation-optimization procedure based on the combined application of an artificial neural network (ANN) and genetic algorithm (GA) with the aim of developing an efficient model for the multiobjective management of groundwater lenses in small islands. The simulation-optimization methodology is applied to a real aquifer in Kish Island of the Persian Gulf to determine the optimal groundwater-extraction while protecting the freshwater lens from seawater intrusion. The initial simulations are based on the application of SUTRA, a variable-density groundwater numerical model. The numerical model parameters are calibrated through automated parameter estimation. To make the optimization process computationally feasible, the numerical model is subsequently replaced by a trained ANN model as an approximate simulator. Even with a moderate number of input data sets based on the numerical simulations, the ANN metamodel can be efficiently trained. The ANN model is subsequently linked with GA to identify the nondominated or Pareto-optimal solutions. To provide flexibility in the implementation of the management plan, the model is built upon optimizing extraction from a number of zones instead of point-well locations. Two issues are of particular interest to the research reported in this paper are: (1) how the general idea of minimizing seawater intrusion can be effectively represented by objective functions within the framework of the simulation-optimization paradigm, and (2) the implications of applying the methodology to a real-world small-island groundwater lens. Four different models have been compared within the framework of multiobjective optimization, including (1) minimization of maximum salinity at observation wells, (2) minimization of the root mean square (RMS) change in concentrations over the planning period, (3) minimization of the arithmetic mean, and (4) minimization of the trimmed arithmetic mean of concentration in the observation wells. The latter model can provide a more effective framework to incorporate the general objective of minimizing seawater intrusion. This paper shows that integration of the latest innovative tools can provide the ability to solve complex real-world optimization problems in an effective way. (C) 2014 American Society of Civil Engineers. 2014 * 339(<-619): Joint application of artificial neural networks and evolutionary algorithms to watershed management Artificial neural networks (ANNs) have become common data driven tools for modeling complex, nonlinear problems in science and engineering. Many previous applications have relied on gradient-based search techniques, such as the back propagation ( BP) algorithm, for ANN training. Such techniques, however, are highly susceptible to premature convergence to local optima and require a trial-and-error process for effective design of ANN architecture and connection weights. This paper investigates the use of evolutionary programming ( EP), a robust search technique, and a hybrid EP-BP training algorithm for improved ANN design. Application results indicate that the EP-BP algorithm may limit the drawbacks of using local search algorithms alone and that the hybrid performs better than EP from the perspective of both training accuracy and efficiency. In addition, the resulting ANN is used to replace the hydrologic simulation component of a previously developed multiobjective decision support model for watershed management. Due to the efficiency of the trained ANN with respect to the traditional simulation model, the replacement reduced the overall computational time required to generate preferred watershed management policies by 75%. The reduction is likely to improve the practical utility of the management model from a typical user perspective. Moreover, the results reveal the potential role of properly trained ANNs in addressing computational demands of various problems without sacrificing the accuracy of solutions. 2004 * 349(<- 54): Coupled Data-Driven Evolutionary Algorithm for Toxic Cyanobacteria (Blue-Green Algae) Forecasting in Lake Kinneret Cyanobacteria blooming in surface waters have become a major concern worldwide, as they are unsightly, and cause a variety of toxins, undesirable tastes, and odors. Approaches of mathematical process-based (deterministic), statistically based, rule-based (heuristic), and artificial neural networks have been the subject of extensive research for cyanobacteria forecasting. This study suggests a new framework of linking an evolutionary computational method (a genetic algorithm) with a data driven modeling engine (model trees) for external loading, physical, chemical, and biological parameters selection, all coupled with their associated time lags as decision variables for cyanobacteria prediction in surface waters. The methodology is demonstrated through trial runs and sensitivity analyses on Lake Kinneret (the Sea of Galilee), Israel. Model trials produced good matching as depicted through the results correlation coefficient on verification data sets. Temperature was reconfirmed as a predominant parameter for cyanobacteria prediction. Model optimal input variables and forecast horizons differed in various solutions. Those in turn raised the problem of best variables selection, pointing towards the need of a multiobjective optimization model in future extensions of the proposed methodology. (C) 2014 American Society of Civil Engineers. 2015 * 354(<-512): Prediction of the solar radiation evolution using computational intelligence techniques and cloudiness indices In this paper, Artificial Neural Networks are applied for multi-step long term solar radiation prediction. The input-output structure of the neural network models is selected using evolutionary computation methods. The networks are trained as one-step-ahead predictors and iterated over time to obtain multi-step longer term predictions. Auto-regressive and auto-regressive with exogenous inputs models are compared, considering cloudiness indices as inputs in the latter case. These indices are obtained through pixel classification of ground-to-sky images, captured by a CCD camera. 2008 * 357(<-231): Manganese waste mud immobilization in cement natural zeolite lime blend: Process optimization using artificial neural networks and multi-criteria functions In this study, stabilization/solidification process of manganese contaminated mud using portland cement was optimized. For that purpose, immobilization process was modeled applying artificial neural networks with radial basis activation function. The optimal model presented satisfactory prediction characteristics (R2 value for manganese leaching was 0.9615 while and for concrete flexural strength 0.8748). Therefore, it was used in combination with seven in-house developed multi-criteria optimization functions, separately, in order to optimize concrete formulation. The used approach proved itself as efficient and cost effective alternative in ecological material formulation process. The best properties (i.e. high flexural strength and lowest manganese leaching) manifested stabilization/solidification matrix consisted of 350 g of portland cement, 20 g of lime, 70 g of natural zeolite, 10 g of manganese waste mud and 180 g of water. 2013 * 358(<-318): OPTIMIZATION OF ARSENIC SLUDGE IMMOBILIZATION PROCESS IN CEMENT - NATURAL ZEOLITE - LIME BLENDS USING ARTIFICIAL NEURAL NETWORKS AND MULTI-OBJECTIVE CRITERIA FUNCTIONS This work focuses on optimization of arsenic sludge immobilization process in cement natural zeolite lime blends using artificial neural networks and multi-objective criteria functions. The developed artificial neural network model describes relations between solidified/stabilized cement formulation and its mechanical (compressive strength) and ecological properties (arsenic and iron release). It is proven that developed artificial neural network solidified/stabilized model has satisfactory performance characteristics (R-2>0.9031 without presence of systematic error; based on an external validation experimental data set). Four multi-objective optimization criteria functions, different in terms of mathematical formulation and ecological interpretation, were developed. The developed criteria functions were used in combination with the artificial neural network solidified/stabilized model, providing optimal cement formulation. Finally, this study describes an efficient and cost-effective alternative in ecological material formulation process. 2012 * 370(<-603): Application of genetic algorithms for process integration and optimization considering environmental factors A systematic methodology for pollution prevention based on process integration is presented in this report. In this methodology, process simulation was carried out to provide mass and energy information of the chemical process. An artificial neural network (ANN) was used to replace rigorous process simulation in the optimization process to improve computational efficiency. Multiobjective optimization was performed to obtain coordinate optimization of process performance for both economics and environmental aspects. A multiobjective genetic algorithm was used to solve the multiobjective optimization problems. Mass and energy use were considered simultaneously in this program. A case study of a wastewater recovery system in ammonia production process is discussed to illustrate the effectiveness of this pollution prevention methodology. (c) 2004 American Institute of Chemical Engineers Environ Prog, 2005 * 371(<-611): Decision support for watershed management using evolutionary algorithms An integrative computational methodology is developed for the management of nonpoint source pollution from watersheds. The associated decision support system is based on an interface between evolutionary algorithms (EAs) and a comprehensive watershed simulation model, and is capable of identifying optimal or near-optimal land use patterns to satisfy objectives. Specifically, a genetic algorithm (GA) is linked with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT) for single objective evaluations, and a Strength Pareto Evolutionary Algorithm has been integrated with SWAT for multiobjective optimization. The model can be operated at a small spatial scale, such as a farm field, or on a larger watershed scale. A secondary model that also uses a GA is developed for calibration of the simulation model. Sensitivity analysis and parameterization are carried out in a preliminary step to identify model parameters that need to be calibrated. Application to a demonstration watershed located in Southern Illinois reveals the capability of the model in achieving its intended goals. However, the model is found to be computationally demanding as a direct consequence of repeated SWAT simulations during the search for favorable solutions. An artificial neural network (ANN) has been developed to mimic SWAT outputs and ultimately replace it during the search process. Replacement of SWAT by the ANN results in an 84% reduction in computational time required to identify final land use patterns. The ANN model is trained using a hybrid of evolutionary programming (EP) and the back propagation (BP) algorithms. The hybrid algorithm was found to be more effective and efficient than either EP or BP alone. Overall, this study demonstrates the powerful and multifaceted role that EAs and artificial intelligence techniques could play in solving the complex and realistic problems of environmental and water resources systems. 2005 * 408(<-277): Defining a nonlinear control problem to reduce particulate matter population exposure In this paper a multi-objective nonlinear approach to control air quality at a regional scale is presented. Both economic and air quality sides of the problem are modeled through artificial neural network models. Simulating the complex nonlinear atmospheric phenomena, they can be used in an optimization routine to identify the efficient solutions of a decision problem for air quality planning. The methodology is applied over Northern Italy, an area in Europe known for its high concentrations of particulate matter. Results illustrate the effectiveness of the approach assessing the nonlinear chemical reactions in an air quality decision problem. (C) 2012 Elsevier Ltd. All rights reserved. 2012 * 409(<-288): Surrogate models to compute optimal air quality planning policies at a regional scale Secondary pollutants (such as PM10) derives from complex non-linear reactions involving precursor emissions, namely VOC, NOx, NH3, primary PM and SO2. Due to difficulty to cope with this complexity, Decision Support Systems (DSSs) are essential tools to help Environmental Authorities to plan air quality policies that fulfill EU Directive 2008/50 requirements in a cost-efficient way. To implement these DSSs the common approach is to describe the air quality indices using linear models, derived through model reduction techniques starting from deterministic Chemical Transport Model simulations. This linear approach limits the applicability of these surrogate models, and while these may work properly at coarse spatial resolutions (continental/national), where average values over large areas are of interest, they often prove inadequate at sub national scales, where the impact of non linearities on air quality are usually higher. The objective of this work is to identify air quality models able to properly describe the relation between emissions and air quality indices, at a sub national scale. In this context, artificial neural networks, identified processing long-term simulation output of a 3D deterministic multi-phase modelling system, are used to describe the non-linear relations between the control variables (precursor emissions reduction) and a pollution index. These models can then be used with a reasonable computing effort to solve a multi-objective (air quality and emission reduction costs) optimization problem, that requires thousands of model runs and thus would be unfeasible using the original process-based model. A case study of Northern Italy is presented. (C) 2011 Elsevier Ltd. All rights reserved. 2012 * 410(<-508): A multi-objective nonlinear optimization approach to designing effective air quality control policies This paper presents the implementation of a two-objective optimization methodology to select effective tropospheric ozone pollution control strategies on a mesoscale domain. The objectives considered are (a) the emission reduction cost and (b) the Air Quality Index. The control variables are the precursor emission reductions due to available technologies. The nonlinear relationship linking air quality objective and precursor emissions is described by artificial neural networks, identified by processing deterministic Chemical Transport Modeling system simulations. Pareto optimal solutions are calculated with the Weighted Sum Strategy. The two-objective problem has been applied to a complex domain in Northern Italy, including the Milan metropolitan area, a region characterized by frequent and persistent ozone episodes. (c) 2008 Elsevier Ltd. All rights reserved. 2008 * 411(<- 11): AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands Flow regime is the key driver of the riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (Al) techniques for quantifying riverine ecosystems requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimizing reservoir operation. This approach addresses issues to better fit riverine ecosystem requirements with existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management over the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies producing downstream flows that could meet both human and ecosystem needs. Applications that make this methodology attractive to water resources managers benefit from the wide spread of Pareto-front (optimal) solutions allowing decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 412(<-140): Simulation and Optimization Modeling for the Management of Groundwater Resources. II: Combined Applications The world population is increasing continuously and is expected to reach the 9.5billion mark in 2050 from the current 7.1billion. The importance of the groundwater resources is also increasing with the increase in population, because the quality and quantity of water resources are continuously declining because of urbanization, contamination, and climate change impact. Thus, under the current environment, the conservation and management of groundwater resources is a critical challenge for fulfilling the rising water demand for agricultural, industrial, and domestic uses. Various simulation and optimization approaches have been used to solve the groundwater management problems. Optimization methods have proved to be of great importance when used with simulation models, and the combined use of these two approaches gives the best results. The main objective of this review is to analyze combined uses of simulation and optimization modeling approaches and to provide an impression of their applications reported in literature. In addition to traditional optimization techniques, this paper highlights the application of computational intelligence techniques, such as artificial neural networks, response matrix approach, and multiobjective approach. Conclusions are drawn based on this review, which could be useful for system managers and planners for selecting the best suitable technique for their specific uses. 2014 * 413(<-160): Optimization modelling for seawater intrusion management The coastal aquifers of the world are facing environmental problem of seawater intrusion. This problem is the results of indiscriminate and unplanned groundwater exploitation for fulfilling the growing need of freshwater for the burgeoning global population. There is a need to develop appropriate management models for assessing the maximum feasible pumping rates which protects seawater intrusion in coastal aquifers. The comprehensive reviews on the use of various programming techniques for the solution of seawater intrusion management problem of coastal aquifers have been provided in this paper. The literature review revealed that the management models used in the past mainly considered the objectives of maximization of pumping rate, minimization of drawdown, minimization of pumped water, minimization of seawater volume into the aquifer, and/or minimization of pumping cost. The past reviews are grouped into five sections based on the programming techniques adopted. The sections include: linear programming, nonlinear programming, genetic algorithms, artificial neural networks, and multiobjective optimization models. Conclusions are drawn where gaps exist and more research needs to be focused. This review provides the basis for the selection of appropriate methodology for the management of seawater intrusion problems of coastal aquifers. (C) 2013 Elsevier B.V. All rights reserved. 2014 * 414(<-171): Multi-Objective Quantity-Quality Reservoir Operation in Sudden Pollution Damage caused by entered pollution in reservoirs can affect a water resource system in two ways: (1) Damages that are caused due to consumption of polluted water and (2) damages that are caused due to insufficient water allocation. Those damages conflict with each other. Thus, the crisis should be managed in a way that the least damage occurs in the water resource system. This paper investigates crisis management due to the sudden entrance of a 30 m(3) methyl tert-butyl ether (MTBE) load to the Karaj dam in Iran, which supplies municipal water to the cities of Tehran and Karaj. To simulate MTBE advection, dispersion, and vaporization, the latter process is added to the CE-QUAL-W2 model. After that, the multi-objective NSGAII-ALANN algorithm, which is a combination of the NSGAII optimization method along with a multi layer perceptron (MLP), which is one of the most widely used artificial neural network (ANN) structures, is employed to extract the best set of decisions in which the two aforementioned damages are minimized. By assigning a specific importance to each objective function, after extracting the optimal solutions, it is possible to choose one of the solutions with the least damage. Four scenarios of entering pollution to the Karaj reservoir the first day of each season are considered, resulting in a Pareto set of operation policies for each scenario. Results of the proposed methodology indicate that if the pollution enters the reservoir in summer, by using one of the optimal policies extracted from the Pareto set of the 2nd Scenario, by a 36% reduction in meeting the demand, allocated pollution decreases to about 60%. In other seasons, there is a significant decrease in allocated pollution with a smaller reduction in the met demand. 2014 * 415(<-347): Characterizing the Socio-Economic Driving Forces of Groundwater Abstraction with Artificial Neural Networks and Multivariate Techniques Integrated groundwater quantity management in the Middle East region must consider appropriate control measures of the socio economic needs. Hence, there is a need for a better knowledge and understanding of the socio economic variables influencing the groundwater quantity. Gaza Strip was chosen as the study area and real data were collected from twenty five municipalities for the reference year 2001. In this paper, the effective variables have been characterized and prioritized using multi-criteria analysis with artificial neural networks (ANN) and expert opinion and judgment. The selected variables were classified and organized using the multivariate techniques of cluster analysis, factor analysis, principal components and classification analysis. There are significant discrepancies between the results of ANN analysis and expert opinion and judgment in terms of ranking and prioritizing the socio-economic variables. Characterization of the priority effective socio-economic driving forces indicates that water managers and planners can introduce demand-based groundwater management in place of the existing supply-based groundwater management. This ensures the success of undertaking responsive technical, managerial and regulatory measures. Income per capita has the highest priority. Efficiency of revenue collection is not a significant socio-economic factor. The models strengthen the integration of preventive approach into groundwater quantity management. In addition to that, they assist decision makers to better assess the socio economic needs and undertake proactive measures to protect the coastal aquifer. 2011 * 416(<- 25): Multi-Objective Operations of Multi-Wetland Ecosystem: iModel Applied to the Everglades Restoration The Everglades is a complex, multiwetland ecosystem that is heavily managed to meet often-competing flood control, water supply, and environmental demands. Using objective measures to balance these demands through operational protocols has always been a challenge in the multibillion-dollar restoration plans for the ecosystem. Physically based models have been the primary tools for planning efforts but for such a complex system, they are laborious and computationally intensive. Development of optimal operations based on iterative runs of these models is a great challenge. This paper presents an inverse modeling framework for formal optimization suited for wetland system operations that helps overcome such limitations. Labor-intensive and computation-intensive physically representative models are emulated in each individual wetland area using an autoregressive artificial neural network with exogenous variables. Using prescribed inflow, outflow, and meteorological input data, such hydrologic model emulators aided by a dimension-reduction technique provide targeted spatial and temporal predictions for water level (stage) within each area of the Everglades, while excluding computation processes that are intensive but insignificant to the predictions. This computer software uses the augmented Lagrangian genetic algorithm technique (subject to linear and nonlinear constraints) to steer predictions of stage spatial variability within individual wetlands towards corresponding desired goals (including restoration targets). In the augmented Lagrangian genetic algorithm, flow releases are coded as the decision variables to be optimized subject to budget, intrahydraulic conveyance, flow capacity, and upstream storage constraints. Optimization is performed by dividing and solving a sequence of subproblems using the genetic algorithm procedures of initialization, selection, elitism, crossover, and mutation. As part of the process, Lagrangian and penalty parameters are updated, and optimization terminates when certain stopping criteria are met. Applying the technique reported in this paper to a specific Everglades restoration plan (the River of Grass Project) showed a sound hydrologic model emulator prediction when compared to the physical model for all wetland areas. Feeding optimal releases predicted by the computer software into a physical model showed equal or better matching of the restoration target with different release patterns compared to that of the physical model base run scenario. Results show that hydraulic conveyance limitations play a significant role in Everglades restoration. Also, results show that employing an adversity tradeoff matrix presented multiple so-called optimal solutions with different optimization weights and a powerful negotiation matrix. (C) 2015 American Society of Civil Engineers. 2015 * 417(<- 42): Multi-Objective Optimal Operation Model of Cascade Reservoirs and Its Application on Water and Sediment Regulation Recently suspended river, where formed both in tributary and in main stream of Ningxia-Inner Mongolia reaches in the Upper Yellow River, severely threatens people's lives and property safety downstream. In this paper, taking Ningxia-Inner Mongolia reaches for the example, water and sediment are regulated by cascade reservoirs upstream, which can form artificial controlled flood, to improve the relationship of water-sediment and slow down the speed of sedimentation rate. Then, multi-objective optimal operation model of cascade reservoirs is established with four objectives: ice and flood control, power generation, water supply, water and sediment regulation. Based on optimization technique of feasible search space, the non-dominated sorting genetic algorithm (NSGA-II) is improved and a new multi-objective algorithm, Feasible Search Space Optimization-Non-dominated Sorting Genetic Algorithm (FSSO-NSGA-II) is innovatively proposed in this paper. The best time of water and sediment regulation is discussed and the regulation index system and scenarios are constructed in three level years of 2010, 2020 and 2030. After that regulation efforts and contribution to sediment transportation are quantified under three scenarios. Compared with history data in 2010, the accuracy and superiority of multi-objective model and FSSO-NSGA-II are verified. Even more, four-dimensional vector coordinate systems are proposed innovatively to represent each objective and sensitivity of three scenarios are analyzed to clarify the impact on regulation objectives by regulation indexes. At last, relationships between four objectives are revealed. The research findings provide optimal solutions of multi-objectives optimal operation by FSSO-NSGA-II, which have an important theoretical significance to enrich the methods of water and sediment optimal operation by cascade reservoirs, guiding significance to water and sediment regulation implementation and construct water and sediment control system in the whole Yellow River basin. Research Highlights We establish an multi-objective optimal operation model with four regulation objectives. We proposed a improved multi-objective algorithm (FSSO-NSGA-II) based on feasible search space optimization. Regulation index system and three scenarios are constructed. Four-dimensional vector coordinate systems are proposed to represent each objective and sensitivity of scenarios are analyzed. Relationships between four objectives are revealed. 2015 * 418(<- 99): An adaptive ant colony optimization framework for scheduling environmental flow management alternatives under varied environmental water availability conditions Human water use is increasing and, as such, water for the environment is limited and needs to be managed efficiently. One method for achieving this is the scheduling of environmental flow management alternatives (EFMAs) (e.g., releases, wetland regulators), with these schedules generally developed over a number of years. However, the availability of environmental water changes annually as a result of natural variability (e.g., drought, wet years). To incorporate this variation and schedule EFMAs in a operational setting, a previously formulated multiobjective optimization approach for EFMA schedule development used for long-term planning has been modified and incorporated into an adaptive framework. As part of this approach, optimal schedules are updated at regular intervals during the planning horizon based on environmental water allocation forecasts, which are obtained using artificial neural networks. In addition, the changes between current and updated schedules can be minimized to reduce any disruptions to long-term planning. The utility of the approach is assessed by applying it to an 89km section of the River Murray in South Australia. Results indicate that the approach is beneficial under a range of hydrological conditions and an improved ecological response is obtained in a operational setting compared with previous long-term approaches. Also, it successfully produces trade-offs between the number of disruptions to schedules and the ecological response, with results suggesting that ecological response increases with minimal alterations required to existing schedules. Overall, the results indicate that the information obtained using the proposed approach potentially aides managers in the efficient management of environmental water. Key Points Adaptive optimization framework developed for use in an operational setting The framework showing advantages compared with long-term planning approaches Ecological response increased with minimal disruption to existing schedules 10.1002/(ISSN)1944-7973