-*- mode: org -*- * 1(<-479, <-481): Improving Hyperspectral Matching Method through Feature-Selection/Weighting Based on SVM In the present article, feature selection/weighting based on SVM was employed to improve the algorithm of choosing the reference spectrum through a multi-objective optimization approach proposed in reference [18]. Based on the sensitivity analysis, half of the features having low weights in the SVM classification model were eliminated iteratively. Two criteria, matching accuracy and classification confidence, were used to select the best-performing feature subset. Three scenarios were designed: (1) only the feature subset selected by SVM was used; (2) both the feature subset and global weights were used, in which the global weights were the coefficients of the selected features in the SVM classification model; (3) both the feature subset and local weights, which changed with the distance of a sample point to the SVM separation plane, were used. Experiments executed on the popular Indiana AVIRIS data set indicate that under all three scenarios, spectral matching accuracies were increased by 13%-17% compared to the situation without feature selection. The result obtained under scenario 3 is the most accurate and the most stable, which can be primarily attributed to the ability of local weights to accurately describe the local distribution of spectra from the same class in feature space. Moreover, scenario 3 can be regarded as an extension of scenario 2 because, when spectra far away from the separation plane are selected as reference spectra for matching, the features' weights are not considered. The results obtained under scenarios 1 and 2 are very similar, indicating that considering global weights is not necessary. The research presented in this paper advances spectrum analysis using SVM to a higher level. 2009
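A minimal sketch of the halving loop described above: train a linear SVM, rank features by the magnitude of their weights, drop the lower half, and keep the subset that scores best. The synthetic data, the cross-validated accuracy criterion, and the stopping rule are illustrative assumptions, not the authors' exact protocol.
#+begin_src python
# Sketch: iterative elimination of low-weight features with a linear SVM.
# Data, criterion and stopping rule are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=64, n_informative=12,
                           random_state=0)
active = np.arange(X.shape[1])            # indices of surviving features
best_score, best_subset = -np.inf, active

while len(active) > 1:
    svm = LinearSVC(C=1.0, dual=False).fit(X[:, active], y)
    score = cross_val_score(svm, X[:, active], y, cv=5).mean()
    if score > best_score:
        best_score, best_subset = score, active.copy()
    w = np.abs(svm.coef_).sum(axis=0)     # per-feature weight magnitude
    keep = np.argsort(w)[len(active) // 2:]  # drop the low-weight half
    active = active[np.sort(keep)]

print(f"best subset: {len(best_subset)} features, CV accuracy {best_score:.3f}")
#+end_src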
* 3(<-596): MOP/GP models for machine learning Techniques for machine learning have been extensively studied in recent years as effective tools in data mining. Although there have been several approaches to machine learning, we focus on the mathematical programming (in particular, multi-objective and goal programming; MOP/GP) approaches in this paper. Among them, Support Vector Machine (SVM) is gaining much popularity recently. In pattern classification problems with two class sets, its idea is to find a maximal margin separating hyperplane which gives the greatest separation between the classes in a high dimensional feature space. This task is performed by solving a quadratic programming problem in a traditional formulation, and can be reduced to solving a linear programming problem in another formulation. However, the idea of maximal margin separation is not quite new: in the 1960s the multi-surface method (MSM) was suggested by Mangasarian. In the 1980s, linear classifiers using goal programming were developed extensively. This paper presents an overview on how effectively MOP/GP techniques can be applied to machine learning such as SVM, and discusses their problems. (c) 2004 Elsevier B.V. All rights reserved. 2005 * 4(<-614): Study on Support Vector Machines Using Mathematical Programming Machine learning has been extensively studied in recent years as an effective tool in pattern classification problems. Although there have been several approaches to machine learning, we focus on the mathematical programming (in particular, multi-objective and goal programming; MOP/GP) approaches in this paper. Among them, Support Vector Machine (SVM) is gaining much popularity recently. In pattern classification problems with two class sets, the idea is to find a maximal margin separating hyperplane which gives the greatest separation between the classes in a high dimensional feature space. However, the idea of maximal margin separation is not quite new: in the 1960s the multi-surface method (MSM) was suggested by Mangasarian. In the 1980s, linear classifiers using goal programming were developed extensively. This paper proposes a new family of SVMs using MOP/GP techniques, and discusses its effectiveness through several numerical experiments. 2005
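Entries 3 and 4 both build on the same underlying program; for reference, the standard soft-margin primal that the MOP/GP variants reformulate is

\[ \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_{i} \quad\text{s.t.}\quad y_{i}\,(w^{\top}x_{i}+b) \ge 1-\xi_{i},\ \ \xi_{i}\ge 0,\ i=1,\dots,n. \]

Roughly speaking, the goal-programming variants surveyed above treat the margin term and the deviations \(\xi_i\) as separate goals rather than a single weighted sum (a characterization of the MOP/GP family, not a formula quoted from either paper).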
* 5(<-462): Automatic Ground-Truth Validation With Genetic Algorithms for Multispectral Image Classification In this paper, we propose a novel method that aims at assisting the ground-truth expert through an automatic detection of potentially mislabeled learning samples. This method is based on viewing the mislabeled-sample detection issue as an optimization problem in which one seeks the best subset of learning samples in terms of statistical separability between classes. This problem is formulated within a genetic optimization framework, where each chromosome represents a candidate solution for validating/invalidating the learning samples collected by the ground-truth expert. The genetic optimization process is guided by the joint optimization of two different criteria: the maximization of a between-class statistical distance and the minimization of the number of invalidated samples. Experiments conducted on both simulated and real data sets show that the proposed ground-truth validation method succeeds in the following: 1) detecting the mislabeled samples with high accuracy, even when up to 30% of the learning samples are mislabeled, and 2) strongly limiting the negative impact of the mislabeling issue on the accuracy of the classification process. 2009 * 6(<-465): A Multiobjective Genetic SVM Approach for Classification Problems With Limited Training Samples In this paper, a novel method for semisupervised classification with limited training samples is presented. Its aim is to exploit unlabeled data, available at zero cost in the image under analysis, for improving the accuracy of a classification process based on support vector machines (SVMs). It is based on the idea of augmenting the original set of training samples with a set of unlabeled samples after estimating their labels. The label estimation process is performed within a multiobjective genetic optimization framework where each chromosome of the evolving population encodes the label estimates as well as the SVM classifier parameters for tackling the model selection issue. Such a process is guided by the joint minimization of two different criteria which express the generalization capability of the SVM classifier. The two explored criteria are an empirical risk measure and an indicator of the classification model sparseness, respectively. The experimental results obtained on two multisource remote sensing data sets confirm the promising capabilities of the proposed approach, which allows the following: 1) taking a clear advantage, in terms of classification accuracy, from the unlabeled samples used for inflating the original training set and 2) solving automatically the tricky model selection issue. 2009 * 7(<-474): Unsupervised Pixel Classification in Satellite Imagery Using Multiobjective Fuzzy Clustering Combined With SVM Classifier The problem of unsupervised classification of a satellite image into a number of homogeneous regions can be viewed as the task of clustering the pixels in the intensity space. This paper proposes a novel approach that combines a recently proposed multiobjective fuzzy clustering scheme with a support vector machine (SVM) classifier to yield improved solutions. The multiobjective technique is first used to produce a set of nondominated solutions. The nondominated set is then used to find some high-confidence points using a fuzzy voting technique. The SVM classifier is thereafter trained by these high-confidence points. Finally, the remaining points are classified using the trained classifier. Results demonstrating the effectiveness of the proposed technique are provided for numeric remote sensing data described in terms of feature vectors. Moreover, two remotely sensed images of Bombay and Calcutta cities have been classified using the proposed technique to establish its utility. 2009 * 8(<-514): Genetic SVM approach to semisupervised multitemporal classification The updating of classification maps, as new image acquisitions are obtained, raises the problem of ground-truth information (training samples) updating. In this context, semisupervised multitemporal classification represents an interesting though still not well consolidated approach to tackle this issue. In this letter, we propose a novel methodological solution based on this approach.
Its underlying idea is to update the ground-truth information through an automatic estimation process, which exploits archived ground-truth information as well as basic indications from the user about allowed/forbidden class transitions from one acquisition date to another. This updating problem is formulated by means of the support vector machine classification approach and a constrained multiobjective optimization genetic algorithm. Experimental results on a multitemporal data set consisting of two multisensor (Landsat-5 Thematic Mapper and European Remote Sensing satellite synthetic aperture radar) images are reported and discussed. 2008 * 9(<-539): Semisupervised PSO-SVM regression for biophysical parameter estimation In this paper, a novel semisupervised regression approach is proposed to tackle the problem of biophysical parameter estimation that is constrained by a limited availability of training (labeled) samples. The main objective of this approach is to increase the accuracy of the estimation process based on the support vector machine (SVM) technique by exploiting unlabeled samples that are available from the image under analysis at zero cost. The integration of such samples in the regression process is controlled through a particle swarm optimization (PSO) framework that is defined by considering, separately or jointly, two different optimization criteria, thus leading to the implementation of three different inflation strategies. These two criteria are empirical and structural expressions of the generalization capability of the resulting semisupervised PSO-SVM regression system. The conducted experiments were focused on the problem of estimating chlorophyll concentrations in coastal waters from multispectral remote sensing images. In particular, we report and discuss results of experiments that are designed in such a way as to test the proposed approach in terms of: 1) capability to capture useful information from a set of unlabeled samples for improving the estimation accuracy; 2) sensitivity to the number of exploited unlabeled samples; and 3) sensitivity to the number of labeled samples used for supervising the inflation process. 2007 * 10(<- 76): Machine learning methods for sub-pixel land-cover classification in the spatially heterogeneous region of Flanders (Belgium): a multi-criteria comparison Until now, little research has addressed the use of machine learning methods for classification at the sub-pixel level. To close this knowledge gap, in this article, six machine learning methods were compared for the specific task of sub-pixel land-cover extraction in the spatially heterogeneous region of Flanders (Belgium). In addition to the classification accuracy at the pixel and the municipality level, three evaluation criteria reflecting the methods' ease-of-use were added to the comparison: the time needed for training, the number of meta-parameters, and the minimum training set size. Robustness to changing training data was also included as the sixth evaluation criterion. Based on their scores for these six criteria, the machine learning methods were ranked according to three multi-criteria ranking scenarios. These ranking scenarios correspond to different decision-making scenarios that differ in their weighting of the criteria. In general, no overall winner could be designated: no method performs best for all evaluation scenarios.
However, when both the time available for preprocessing and the magnitude of the training data set are unconstrained, Support Vector Machines (SVMs) clearly outperform the other methods. 2015 * 11(<-400): Assessing bank soundness with classification techniques The recent crisis highlighted, once again, the importance of early warning models to assess the soundness of individual banks. In the present study, we use six quantitative techniques originating from various disciplines to classify banks in three groups. The first group includes very strong and strong banks; the second one includes adequate banks, while the third group includes banks with weaknesses or serious problems. We compare models developed with financial variables only, with models that incorporate additional information in relation to the regulatory environment, institutional development, and macroeconomic conditions. The accuracy of classification of the models that include only financial variables is rather poor. We observe a substantial improvement in accuracy when we consider the country-level variables, with five out of the six models achieving out-of-sample classification accuracy above 70% on average. The models developed with multi-criteria decision aid and artificial neural networks achieve the highest accuracies. We also explore the development of stacked models that combine the predictions of the individual models at a higher level. While the stacked models outperform the corresponding individual models in most cases, we found no evidence that the best stacked model can outperform the best individual model. (C) 2009 Published by Elsevier Ltd. 2010 * 12(<-211): Predicting groundwater level fluctuations with meteorological effect implications-A comparative study among soft computing techniques The knowledge of groundwater table fluctuations is important in agricultural lands as well as in studies related to groundwater utilization and management levels. This paper investigates the abilities of Gene Expression Programming (GEP), Adaptive Neuro-Fuzzy Inference System (ANFIS), Artificial Neural Networks (ANN) and Support Vector Machine (SVM) techniques for groundwater level forecasting from one day up to 7 days ahead. Several input combinations comprising water table level, rainfall and evapotranspiration values from Hongcheon Well station (South Korea), covering a period of eight years (2001-2008), were used to develop and test the applied models. The data from the first six years were used for developing (training) the applied models and the last two years' data were reserved for testing. A comparison was also made between the forecasts provided by these models and the Auto-Regressive Moving Average (ARMA) technique. Based on the comparisons, it was found that the GEP models could be employed successfully in forecasting water table level fluctuations up to 7 days beyond data records. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 13(<- 90): A hybrid meta-learning architecture for multi-objective optimization of SVM parameters Support Vector Machines (SVMs) have attracted considerable attention due to their theoretical foundations and good empirical performance when compared to other learning algorithms in different applications. However, the SVM performance strongly depends on the adequate calibration of its parameters.
In this work we propose a hybrid multi-objective architecture which combines meta-learning (ML) with multi-objective particle swarm optimization algorithms for the SVM parameter selection problem. Given an input problem, the proposed architecture uses an ML technique to suggest an initial Pareto front of SVM configurations based on previous similar learning problems; the suggested Pareto front is then refined by a multi-objective optimization algorithm. In this combination, solutions provided by ML are likely to be located in good regions of the search space. Hence, using a reduced number of successful candidates, the search process converges faster and is less expensive. In the performed experiments, the proposed solution was compared to traditional multi-objective algorithms with random initialization, obtaining Pareto fronts with higher quality on a set of 100 classification problems. (C) 2014 Elsevier B.V. All rights reserved. 2014 * 14(<-592): Association rules mining based on SVM and its application in simulated moving bed PX adsorption process In this paper, a novel data mining method is introduced to solve the multi-objective optimization problems of the process industry. A hyperrectangle association rule mining (HARM) algorithm based on support vector machines (SVMs) is proposed. Hyperrectangle rules are constructed on the basis of prototypes and support vectors (SVs) under some heuristic limitations. The proposed algorithm is applied to a simulated moving bed (SMB) paraxylene (PX) adsorption process. The relationships between the key process variables and some objective variables such as purity and recovery rate of PX are obtained. Using existing domain knowledge about the PX adsorption process, most of the obtained association rules can be explained. 2005 * 15(<-604): Consensus feature selection for multi-objective SVM modeling of protein ion-exchange displacement chromatography. 2005 * 16(<- 10): On learning of weights through preferences We present a method to learn the criteria weights in multi-criteria decision making (MCDM) by applying emerging learning-to-rank machine learning techniques. Given the pairwise preferences of a decision maker (DM), we learn the weights that the DM attaches to the multiple criteria characterizing each alternative. As the training information, our method requires the pairwise preferences of alternatives, as revealed by the DM. Once the DM's decision model is learnt in terms of the criteria weights, it can be applied to predict his choices for any new set of alternatives. The empirical validation of the proposed approach is done on a collection of 12 standard datasets. The accuracy values are compared with those obtained for state-of-the-art methods such as ranking-SVM and TOPSIS. (C) 2015 Elsevier Inc. All rights reserved. 2015
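Entry 16's setup can be made concrete with the classic ranking-SVM reduction: each pairwise preference a over b becomes a difference vector x_a - x_b labeled +1 (plus the mirrored pair labeled -1), and a linear SVM fitted without an intercept yields a weight vector interpretable as criteria weights. The sketch below is this standard reduction, not the paper's exact method; the data and the final normalization are illustrative.
#+begin_src python
# Sketch: learning MCDM criteria weights from pairwise preferences via the
# standard ranking-SVM reduction (illustrative, not the paper's exact method).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((50, 4))                    # 50 alternatives, 4 criteria
w_true = np.array([0.5, 0.3, 0.15, 0.05])  # hidden DM weights (demo only)
utility = X @ w_true

# Build preference pairs (a preferred over b) and their difference vectors.
pairs = [(a, b) if utility[a] > utility[b] else (b, a)
         for a, b in zip(rng.integers(0, 50, 200), rng.integers(0, 50, 200))
         if a != b]
D = np.array([X[a] - X[b] for a, b in pairs])
labels = np.ones(len(D))
D, labels = np.vstack([D, -D]), np.hstack([labels, -labels])  # mirror pairs

svm = LinearSVC(fit_intercept=False, C=10.0).fit(D, labels)
w = svm.coef_.ravel()
w = np.clip(w, 0, None); w /= w.sum()      # normalize to MCDM-style weights
print("learned weights:", np.round(w, 3))
#+end_src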
* 17(<- 16): A multi-criteria recommendation system using dimensionality reduction and Neuro-Fuzzy techniques Multi-criteria collaborative filtering (MC-CF) makes it possible to provide accurate recommendations by considering the user preferences in multiple aspects of items. However, scalability and sparsity are two main problems in MC-CF, which this paper attempts to solve using dimensionality reduction and Neuro-Fuzzy techniques. Considering the user behavior about items' features, which is frequently vague, imprecise and subjective, we solve the sparsity problem using the Neuro-Fuzzy technique. For the scalability problem, higher order singular value decomposition along with supervised learning (classification) methods is used. Thus, the objective of this paper is to propose a new recommendation model to improve the recommendation quality and predictive accuracy of MC-CF and to solve the scalability and alleviate the sparsity problems in MC-CF. The experimental results of applying these approaches on the Yahoo!Movies and TripAdvisor datasets, with several comparisons, are presented to show the enhancement of MC-CF recommendation quality and predictive accuracy. The experimental results demonstrate that SVM dominates K-NN and FBNN in improving the MC-CF predictive accuracy, evaluated by the most broadly popular measurement metrics, F1 and mean absolute error. In addition, the experimental results also demonstrate that the combination of Neuro-Fuzzy and dimensionality reduction techniques remarkably improves the recommendation quality and predictive accuracy of MC-CF in relation to previous recommendation techniques based on multi-criteria ratings. 2015 * 18(<- 41): Investigation of a Bridge Pier Scour Prediction Model for Safe Design and Inspection A novel bridge scour estimation approach that combines the advantages of both empirical and data-driven models is developed here. Results from the new approach are compared with existing approaches. Two field datasets from the literature are used in this study. Support vector machine (SVM), a machine-learning algorithm, is used to increase the pool of field data samples. For a comprehensive understanding of bridge-pier-scour modeling, a model evaluation function is suggested using an orthogonal projection method on a model performance plot. A fast nondominated sorting genetic algorithm (NSGA-II) is evaluated on the model performance objective functions to search for Pareto optimal fronts. The proposed formulation is compared with two selected empirical models [Hydraulic Engineering Circular No. 18 (HEC-18) and the Froehlich equation] and a recently developed data-driven model (gene expression programming model). Results show that the proposed model improves the estimation of critical scour depth compared with the other models. 2015 * 19(<-585): Multiobjective analysis of chaotic dynamic systems with sparse learning machines Sparse learning machines provide a viable framework for modeling chaotic time-series systems. A powerful state-space reconstruction methodology using both support vector machines (SVM) and relevance vector machines (RVM) within a multiobjective optimization framework is presented in this paper. The utility and practicality of the proposed approaches have been demonstrated on the time series of the Great Salt Lake (GSL) biweekly volumes from 1848 to 2004. A comparison of the two methods is made based on their predictive power and robustness. The reconstruction of the dynamics of the Great Salt Lake volume time series is attained using the most relevant feature subset of the training data. In this paper, efforts are also made to assess the uncertainty and robustness of the machines in learning and forecasting as a function of model structure, model parameters, and bootstrapping samples. The resulting model will normally have a structure, including parameterization, that suits the information content of the available data, and can be used to develop time series forecasts for multiple lead times ranging from two weeks to several months. (c) 2005 Elsevier Ltd. All rights reserved. 2006
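State-space reconstruction as used in entry 19 starts from a delay embedding of the scalar series: predict v_{t+1} from (v_t, v_{t-tau}, ..., v_{t-(d-1)tau}). A minimal SVR version is sketched below; the synthetic series, embedding dimension, delay, and kernel settings are placeholders rather than the paper's calibrated choices.
#+begin_src python
# Sketch: delay-embedding state-space reconstruction with SVR (entry 19 theme).
# Series, embedding dimension d, delay tau and kernel settings are illustrative.
import numpy as np
from sklearn.svm import SVR

t = np.arange(2000)
v = np.sin(0.02 * t) + 0.3 * np.sin(0.11 * t)   # stand-in for the GSL volumes

d, tau = 4, 5                                    # embedding dim and delay
rows = range((d - 1) * tau, len(v) - 1)
X = np.array([[v[i - k * tau] for k in range(d)] for i in rows])
y = v[(d - 1) * tau + 1:]                        # one-step-ahead targets

split = int(0.8 * len(X))
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE on held-out tail: {rmse:.4f}")
#+end_src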
* 20(<-157): Leave-one-out cross-validation-based model selection for multi-input multi-output support vector machine As an effective approach for multi-input multi-output regression estimation problems, a multi-dimensional support vector regression (SVR), named M-SVR, is generally capable of obtaining better predictions than applying a conventional support vector machine (SVM) independently to each output dimension. However, although there are many generalization error bounds for conventional SVMs, none of them can be directly applied to M-SVR. In this paper, a new leave-one-out (LOO) error estimate for M-SVR is first derived through a virtual LOO cross-validation procedure. This LOO error estimate can be computed directly once a training process has ended, with less computational complexity than the traditional LOO method. Based on this LOO estimate, a new model selection method for M-SVR based on a multi-objective optimization strategy is further proposed in this paper. Experiments on a toy noisy function regression and a practical engineering data set, that is, dynamic load identification on a cylinder vibration system, are both conducted, demonstrating comparable results of the proposed method in terms of generalization performance and computational cost. 2014 * 21(<-314): Research of load identification based on multiple-input multiple-output SVM model selection In this article, the problem of multiple-input multiple-output (MIMO) load identification is addressed. First, load identification is shown, in dynamic theory, to be a non-linear MIMO black-box modelling process. Second, considering the effect of hyper-parameters in small-sample problems, a new MIMO Support Vector Machine (SVM) model selection method based on multi-objective particle swarm optimization is proposed in order to improve the identification performance. The proposed method treats the model selection of MIMO SVM as a multi-objective optimization problem, and the leave-one-out generalization errors of all output models are minimized simultaneously. Once the Pareto-optimal solutions are found, the SVM model with the best generalization ability is determined. The proposed method is evaluated in an experiment of dynamic load identification on a cylinder stochastic vibration system, demonstrating its benefits in comparison to existing model selection methods in terms of identification accuracy and numerical stability, especially near the peaks. 2012 * 22(<-138): Applying multiple kernel learning and support vector machine for solving the multicriteria and nonlinearity problems of traffic flow prediction This article proposes to develop a prediction model for traffic flow using kernel learning methods such as support vector machine (SVM) and multiple kernel learning (MKL). Traffic flow prediction is a dynamic problem owing to its complex nature of multicriteria and nonlinearity. Influential factors of traffic flow were first investigated; five-point scale and entropy methods were employed to transfer the qualitative factors into quantitative ones and rank these factors, respectively. Then, SVM and MKL-based prediction models were developed, with the influential factors and the traffic flow as the input and output variables. The prediction capability of MKL was compared with SVM through a case study. It is shown that both SVM and MKL perform well in prediction with regard to accuracy rate and efficiency, and MKL is preferable, achieving a higher accuracy rate under proper parameter settings.
Therefore, MKL can enhance the decision-making of traffic flow prediction. Copyright (c) 2012 John Wiley & Sons, Ltd. 2014 * 23(<-609): Multi-objective model selection for support vector machines In this article, model selection for support vector machines is viewed as a multi-objective optimization problem, where model complexity and training accuracy define two conflicting objectives. Different optimization criteria are evaluated: split modified radius margin bounds, which allow for comparing existing model selection criteria, and the training error in conjunction with the number of support vectors for designing sparse solutions. 2005 * 24(<-482): A confidence voting process for ranking problems based on support vector machines In this paper, we deal with ranking problems arising from various data mining applications where the major task is to train a rank-prediction model to assign every instance a rank. We first discuss the merits and potential disadvantages of two existing popular approaches for ranking problems: the 'Max-Wins' voting process based on multi-class support vector machines (SVMs) and the model based on multi-criteria decision making. We then propose a confidence voting process for ranking problems based on SVMs, which can be viewed as a combination of the SVM approach and the multi-criteria decision making model. Promising numerical experiments based on the new model are reported. 2009 * 25(<-593): A numerical investigation of mixing processes in a novel combustor application A mixing process in a staggered toothed-indented shaped channel was investigated. It was studied in two steps: (1) numerical simulations for different sizes of the boundary contour were performed using a CFD code; (2) these results were used for simulation-data modeling to predict mixing performance across the whole field of changing geometric and aerodynamic stream parameters. The support vector machine (SVM) technique, known as a new type of self-learning machine, was selected to carry out this stage. The suitability of this application method was demonstrated in comparison with a neural network (NN) method. The established modeling system was then applied to some further studies of the prototype mixer, including observations of the mixing performance in three special cases and optimization of the mixing processes for two conflicting objectives, thereby obtaining the Pareto optimum sets. 2005 * 26(<-150): A novel feature selection method for twin support vector machine Both support vector machine (SVM) and twin support vector machine (TWSVM) are powerful classification tools. However, in contrast to the many SVM-based feature selection methods, TWSVM has so far had no corresponding method, owing to its different mechanism. In this paper, we propose a feature selection method based on TWSVM, called FTSVM. It is interesting because of the advantages of TWSVM in many cases. Our FTSVM is quite different from the SVM-based feature selection methods. In fact, linear SVM constructs a single separating hyperplane, which corresponds to a single weight for each feature, whereas linear TWSVM constructs two fitting hyperplanes, which correspond to two weights for each feature. In our linear FTSVM, in order to link these two fitting hyperplanes, a feature selection matrix is introduced. Thus, feature selection becomes the problem of finding an optimal matrix, leading to solving a multi-objective mixed-integer programming problem by a greedy algorithm.
In addition, the linear FTSVM has been extended to the nonlinear case. Furthermore, a feature ranking strategy based on FTSVM is also suggested. The experimental results on several publicly available benchmark datasets indicate that our FTSVM not only gives good feature selection in both the linear and nonlinear cases but also improves the performance of TWSVM efficiently. (C) 2014 Elsevier B.V. All rights reserved. 2014 * 27(<-439): MULTIOBJECTIVE MULTICLASS SUPPORT VECTOR MACHINES MAXIMIZING GEOMETRIC MARGINS In this paper, we focus on the all-together model of the support vector machine (SVM) for multiclass classification, which constructs a piecewise linear discriminant function. It is formulated as a single-objective optimization problem maximizing the sum of the margins between all pairs of classes, where each margin is defined as the distance between the two normalized support hyperplanes parallel to the corresponding discriminant hyperplane, between which no sample is contained. However, this margin is not necessarily equal to the geometric margin, defined as the minimal distance of patterns in a pair of classes to the corresponding discriminant hyperplane. We therefore formulate the proposed model as a multiobjective problem which maximizes all of the margins simultaneously. Moreover, we derive two kinds of single-objective second-order cone programming (SOCP) problems based on scalarization approaches, Benson's method and the ε-constraint method, to solve the proposed multiobjective model, and show that the methods can find Pareto optimal solutions of the model. Furthermore, through numerical experiments we verify the generalization ability of the discriminant functions obtained by the proposed SOCP problems. 2010 * 28(<-518): A Multi-criteria Convex Quadratic Programming model for credit data analysis Speed and scalability are two essential issues in data mining and knowledge discovery. This paper proposes a mathematical programming model that addresses these two issues and applies the model to credit classification problems. The proposed Multi-criteria Convex Quadratic Programming (MCQP) model is highly efficient (computing time complexity O(n^{1.5-2})) and scalable to massive problems (of size O(10^9)) because it only needs to solve linear equations to find the global optimal solution. Kernel functions were introduced to the model to solve nonlinear problems. In addition, the theoretical relationship between the proposed MCQP model and SVM is discussed. (c) 2007 Elsevier B.V. All rights reserved. 2008 * 29(<-587): A new multi-criteria Convex Quadratic Programming model for credit analysis Mathematical programming based methods have been applied to credit risk analysis and have proven to be powerful tools. One challenging issue in mathematical programming is the computational complexity of finding optimal solutions. To overcome this difficulty, this paper proposes a Multi-criteria Convex Quadratic Programming model (MCCQP). Instead of looking for the global optimal solution, the proposed model only needs to solve a set of linear equations. We test the model using three credit risk analysis datasets and compare the MCCQP results with four well-known classification methods: LDA, Decision Tree, SVMLight, and LibSVM. The experimental results indicate that the proposed MCCQP model achieves classification accuracies as good as or even better than the other methods. 2006
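Entries 28-29 hinge on the claim that training reduces to solving linear equations rather than a QP. The abstracts do not spell out the MCQP/MCCQP system itself; as a well-known point of comparison, the least squares SVM (the same family as entry 30 below, in Suykens' classification form) achieves the same effect by replacing the QP with the KKT linear system

\[ \begin{bmatrix} 0 & y^{\top} \\ y & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{1} \end{bmatrix}, \qquad \Omega_{ij} = y_i\, y_j\, K(x_i, x_j), \]

whose solution yields the classifier \(\operatorname{sign}\bigl(\sum_i \alpha_i y_i K(x, x_i) + b\bigr)\).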
* 30(<- 8): Novel approaches using evolutionary computation for sparse least square support vector machines This paper introduces two new approaches to building sparse least square support vector machines (LSSVM) based on genetic algorithms (GAs) for classification tasks. LSSVM classifiers are an alternative to SVM ones because the training process of LSSVM classifiers only requires solving a linear equation system instead of a quadratic programming optimization problem. However, the absence of sparseness in the Lagrange multiplier vector (i.e. the solution) is a significant problem for the effective use of these classifiers. In order to overcome this lack of sparseness, we propose both single and multi-objective GA approaches to leave a few support vectors out of the solution without affecting the classifier's accuracy and even improving it. The main idea is to leave out outliers, non-relevant patterns or those which can be corrupted with noise and thus prevent classifiers from achieving higher accuracies, along with a reduced set of support vectors. Differently from previous works, genetic algorithms are used in this work to obtain sparseness, not to find the optimal values of the LSSVM hyper-parameters. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 31(<-586): Additive preference model with piecewise linear components resulting from Dominance-based Rough Set Approximations The Dominance-based Rough Set Approach (DRSA) has been proposed for multi-criteria classification problems in order to handle inconsistencies in the input information with respect to the dominance principle. The end result of DRSA is a decision rule model of Decision Maker preferences. In this paper, we consider an additive function model resulting from dominance-based rough approximations. The presented approach is similar to the UTA and UTADIS methods. However, we define the goal function of the optimization problem in a similar way as is done in Support Vector Machines (SVM). The problem may also be defined as one of searching for linear value functions in a transformed feature space obtained by exhaustive binarization of criteria. 2006 * 32(<-120): A niching genetic programming-based multi-objective algorithm for hybrid data classification This paper introduces a multi-objective algorithm based on genetic programming to extract classification rules from databases composed of hybrid data, i.e., regular (e.g. numerical, logical, and textual) and non-regular (e.g. geographical) attributes. This algorithm employs a niche technique combined with a population archive in order to identify the rules that are most suitable for classifying items amongst the classes of a given data set. The algorithm is implemented in such a way that the user can choose the function set that is most adequate for a given application. This feature makes the proposed approach virtually applicable to any kind of data set classification problem. Besides, the classification problem is modeled as a multi-objective one, in which the maximization of the accuracy and the minimization of the classifier complexity are considered as the objective functions. A set of different classification problems, with considerably different data sets and domains, has been considered: wines, patients with hepatitis, incipient faults in power transformers and level of development of cities. In this last data set, some of the attributes are geographical, and they are expressed as points, lines or polygons.
The effectiveness of the algorithm has been compared with three other methods widely employed for classification: Decision Tree (C4.5), Support Vector Machine (SVM) and Radial Basis Function (RBF). Statistical comparisons have been conducted employing one-way ANOVA and Tukey's tests, in order to provide a reliable comparison of the methods. The results show that the proposed algorithm achieved better classification effectiveness in all tested instances, which suggests that it is suitable for a considerable range of classification applications. (C) 2014 Elsevier B.V. All rights reserved. 2014 * 33(<-122): Assuring the authenticity of northwest Spain white wine varieties using machine learning techniques Classification of wine represents a multi-criteria decision-making problem characterized by great complexity, non-linearity and a lack of objective information regarding the quality of the desired final product. Volatile compounds of wines elaborated from four Galician (NW Spain) autochthonous white Vitis vinifera varieties from four consecutive vintages were analysed by gas chromatography (FID, FPD and MS detectors), and several aroma compounds were used for correctly classifying the autochthonous white grape varieties (Albariño, Treixadura, Loureira and Dona Branca). The objective of the work is twofold: to find a classification model able to precisely differentiate between existing grape varieties (i.e. assuring the authenticity), and to assess the discriminatory power of different family compounds over well-known classifiers (i.e. guaranteeing the typicality). From the experiments carried out, and given the fact that Principal Component Analysis (PCA) was not able to accurately separate all the wine varieties, this work investigates the suitability of applying different machine learning (ML) techniques (i.e. Support Vector Machines, Random Forests, MultiLayer Perceptron, k-Nearest Neighbour and Naive Bayes) for classification purposes. Perfect classification accuracy is obtained by the Random Forest algorithm, whilst the other alternatives achieved promising results using only part of the available information. (C) 2013 Elsevier Ltd. All rights reserved. 2014 * 34(<-241): An alternative approach for the classification of orange varieties based on near infrared spectroscopy A multivariate technique and the feasibility of using near infrared spectroscopy (NIRS) for non-destructively discriminating Thai orange varieties were studied in this paper. Short-wavelength near infrared (SWNIR) spectra in the region of 643 to 970 nm were collected from 100 orange samples of each variety. A total of 300 spectra were used to develop an accurate classification model with a diversity of classifiers. The results showed that the Logistic Regression (LGR) model achieved 100% classification accuracy, while the Multi-Criteria Quadratic Programming (MCQP) and Support Vector Machine (SVM) models also demonstrated satisfying results (95%). In order to find a simpler and more easily interpretable classification model, several feature selection techniques were evaluated to identify the wavelengths most relevant to the orange varieties. With four principal components (PCs) from Principal Component Analysis (PCA) and the effective wavelengths of 769.68, 692.28, 662.61 and 959.31 nm from Least Square Forward Selection (LS-FS), the reduced classification models of LGR also achieved satisfying classification accuracy.
Furthermore, both Kernel Principal Component Analysis (KPCA) and Kernel Least Square Forward Selection (KLS-FS) with SVM enhanced the performance of the models, with 5 PCs and features respectively. The results concluded that NIRS can yield an accurate classification for Thai tangerine varieties from whole spectra and can enhance the interpretability of the classification model through feature subsets. (C) 2013 Elsevier B.V. All rights reserved. 2013 * 35(<-370): Analysis of complex, processed substances with the use of NIR spectroscopy and chemometrics: Classification and prediction of properties - The potato crisps example An NIR spectroscopic method was researched and developed for the analysis of potato crisps (chips), chosen as an example of a common, cheap but complex product. Four similar types of 'original flavour' potato chips from different manufacturers were analysed by NIR spectroscopy; as well, the quality parameters - fat, moisture, acid and peroxide values of the extracted oil - were predicted. Principal component analysis (PCA) of the NIR data displayed the clustering of objects with respect to the type of chips. NIR spectra were rank-ordered with the use of the sparingly applied multiple criteria decision making (MCDM) ranking methods, PROMETHEE (Preference Ranking Organization METHod for Enrichment Evaluation) and GAIA (Geometrical Analysis for Interactive Aid), and a comprehensive quantitative description of the data was obtained. The four traditional parameters were predicted on the basis of the NIR spectra; the performance of the Partial Least Squares (PLS) and Kernel Partial Least Squares (KPLS) calibrations was compared with those from the Least Squares-Support Vector Machines (LS-SVM) method. The LS-SVM calibrations, which better model data linearity and non-linearity, successfully predicted all four parameters. This work has demonstrated that NIR methodology with the use of chemometrics can describe comprehensively the qualitative and quantitative properties of complex, processed substances, as illustrated by the potato chips example, and indicated that this approach may be applied to other similar complex samples. (C) 2010 Elsevier B.V. All rights reserved. 2011 * 36(<- 87): Fingerprint Recognition by Multi-objective Optimization PSO Hybrid with SVM Researchers have long put effort into discovering more efficient approaches to classification problems. In recent years, the support vector machine (SVM) has become a well-known intelligent algorithm developed for dealing with this kind of problem. In this paper, we used the core idea of multi-objective optimization to transform SVM into a new form. This form of SVM addresses the following situation: traditionally, SVM is a single optimization equation, and its parameters, such as the penalty parameter, can only be determined by the user's experience. Our algorithm is therefore developed to spare the user this difficulty. We use a multi-objective Particle Swarm Optimization algorithm in our research and show that the user does not need a trial-and-error method to determine the penalty parameter C. Finally, we apply it to the NIST-4 database to assess the feasibility of the proposed algorithm, and the experimental results show that our method performs as expected. 2014
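Entries 36 and 37 (next) both treat SVM hyperparameter choice as a multi-objective search. The sketch below shows only the evaluate-and-Pareto-filter step such a search iterates: each (C, gamma) candidate is scored on two conflicting objectives, validation error and model size (support-vector count), and the nondominated set is kept. Random sampling stands in for the PSO/evolutionary update, and the two objectives are common stand-ins for the papers' criteria.
#+begin_src python
# Sketch: two-objective scoring of SVM hyperparameters with a Pareto filter
# (the selection step a multi-objective PSO/EA would iterate; entries 36-37).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=1)
Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.5, random_state=1)

rng = np.random.default_rng(1)
cands = [(10.0 ** rng.uniform(-2, 3), 10.0 ** rng.uniform(-3, 1))
         for _ in range(40)]                      # random (C, gamma) candidates

def objectives(C, gamma):
    m = SVC(C=C, gamma=gamma).fit(Xtr, ytr)
    return 1.0 - m.score(Xva, yva), int(m.n_support_.sum())  # error, size

scored = [(c, objectives(*c)) for c in cands]
pareto = [(c, f) for c, f in scored
          if not any(g[0] <= f[0] and g[1] <= f[1] and g != f
                     for _, g in scored)]         # keep nondominated points
for (C, gamma), (err, nsv) in sorted(pareto, key=lambda p: p[1][0]):
    print(f"C={C:9.3f} gamma={gamma:7.4f} err={err:.3f} #SV={nsv}")
#+end_src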
* 37(<- 64): Surrogate-assisted multi-objective model selection for support vector machines Classification is one of the most well-known tasks in supervised learning. A vast number of algorithms for pattern classification have been proposed so far. Among these, support vector machines (SVMs) are one of the most popular approaches, due to the high performance reached by these methods in a wide number of pattern recognition applications. Nevertheless, the effectiveness of SVMs highly depends on their hyper-parameters. Besides the fine-tuning of their hyper-parameters, the way in which the features are scaled as well as the presence of non-relevant features could affect their generalization performance. This paper introduces an approach for addressing model selection for support vector machines used in classification tasks. In our formulation, a model can be composed of feature selection and pre-processing methods besides the SVM classifier. We formulate the model selection problem as a multi-objective one, aiming to minimize simultaneously two components that are closely related to the error of a model: the bias and variance components, which are estimated in an experimental fashion. A surrogate-assisted evolutionary multi-objective optimization approach is adopted to explore the hyper-parameters space. We adopted this approach because estimating the bias and variance could be computationally expensive. Therefore, by using surrogate-assisted optimization, we expect to reduce the number of solutions evaluated by the fitness functions so that the computational cost is also reduced. Experimental results conducted on benchmark datasets widely used in the literature indicate that highly competitive models with a smaller number of fitness function evaluations are obtained by our proposal when compared to state-of-the-art model selection methods. (C) 2014 Elsevier B.V. All rights reserved. 2015 * 38(<-327): MHC-I prediction using a combination of T cell epitopes and MHC-I binding peptides We propose a novel learning method that combines multiple experimental modalities to improve MHC Class-I binding prediction. Multiple experimental modalities are often accessible in the context of a binding problem. Such modalities can provide different labels of data, such as binary classifications, affinity measurements, or direct estimations of the binding profile. Current machine learning algorithms usually focus on a given label type. We here present a novel Multi-Label Vector Optimization (MLVO) formalism to produce classifiers based on the simultaneous optimization of multiple labels. Within this methodology, all label types are combined into a single constrained quadratic dual optimization problem. We apply the MLVO to MHC class-I epitope prediction. We combine affinity measurements (IC50/EC50), binary classifications of epitopes as T cell activators and existing algorithms. The multi-label vector optimization algorithms produce classifiers significantly better than the ones resulting from any of their components. These matrix based classifiers are better than or equivalent to the existing state-of-the-art MHC-I epitope prediction tools for the studied alleles. (C) 2010 Elsevier B.V. All rights reserved. 2011 * 39(<-345): A new feature extraction and selection scheme for hybrid fault diagnosis of gearbox A novel feature extraction and selection scheme was proposed for hybrid fault diagnosis of a gearbox based on the S transform, non-negative matrix factorization (NMF), mutual information and multi-objective evolutionary algorithms.
Time-frequency distributions of vibration signals, acquired from a gearbox with different fault states, were obtained by the S transform. Then non-negative matrix factorization (NMF) was employed to extract features from the time-frequency representations. Furthermore, a two-stage feature selection approach combining filter and wrapper techniques, based on mutual information and the non-dominated sorting genetic algorithm II (NSGA-II), was presented to obtain a more compact feature subset for accurate classification of hybrid faults of the gearbox. Eight fault states, including gear defects, bearing defects and combinations of gear and bearing defects, were simulated on a single-stage gearbox to evaluate the proposed feature extraction and selection scheme. Four different classifiers were employed in conjunction with the presented techniques for classification. The performances of the four classifiers with different feature subsets were compared. The results of the experiments revealed that the proposed feature extraction and selection scheme is an effective and efficient tool for hybrid fault diagnosis of gearboxes. (C) 2011 Elsevier Ltd. All rights reserved. 2011 * 40(<-467): AG-ART: An adaptive approach to evolving ART architectures This paper focuses on classification problems, and in particular on the evolution of ARTMAP architectures using genetic algorithms, with the objective of improving generalization performance and alleviating the adaptive resonance theory (ART) category proliferation problem. In a previous effort, we introduced evolutionary fuzzy ARTMAP (FAM), referred to as genetic Fuzzy ARTMAP (GFAM). In this paper we apply an improved genetic algorithm to FAM and extend these ideas to two other ART architectures: ellipsoidal ARTMAP (EAM) and Gaussian ARTMAP (GAM). One of the major advantages of the proposed improved genetic algorithm is that it adapts the GA parameters automatically, and in a way that takes into consideration the intricacies of the classification problem under consideration. The resulting genetically engineered ART architectures are justifiably referred to as AG-FAM, AG-EAM and AG-GAM, or collectively as AG-ART (adaptive genetically engineered ART). We compare the performance (in terms of accuracy, size, and computational cost) of the AG-ART architectures with GFAM and other ART architectures that have appeared in the literature and attempted to solve the category proliferation problem. Our results demonstrate that AG-ART architectures exhibit better performance than their other ART counterparts (semi-supervised ART) and better performance than GFAM. We also compare AG-ART's performance to other related results published in the classification literature, and demonstrate that AG-ART architectures exhibit competitive generalization performance and, quite often, produce smaller classifiers in solving the same classification problems. We also show that AG-ART's performance gains are achieved within a reasonable computational budget. (C) 2008 Elsevier B.V. All rights reserved. 2009 * 41(<-580): Multi-objective parameters selection for SVM classification using NSGA-II Selecting proper parameters is an important issue in extending the classification ability of the Support Vector Machine (SVM), which makes SVM practically useful. The Genetic Algorithm (GA) has been widely applied to the problem of parameter selection for SVM classification due to its ability to discover good solutions quickly for complex searching and optimization problems.
However, traditional GA approaches in this field rely on a single generalization error bound as the fitness function for selecting parameters. Since several generalization error bounds have been developed, picking and using a single criterion as the fitness function seems intractable and insufficient. Motivated by multi-objective optimization problems, this paper introduces an efficient method of parameter selection for SVM classification based on the multi-objective evolutionary algorithm NSGA-II. We also introduce an adaptive mutation rate for NSGA-II. Experimental results show that our method is better than single-objective approaches, especially in the case of tiny training sets with large testing sets. 2006 * 42(<-589): Multiobjective optimization of ensembles of multilayer perceptrons for pattern classification Pattern classification seeks to minimize the error on unknown patterns; however, in many real world applications, type I (false positive) and type II (false negative) errors have to be dealt with separately, which is a complex problem since an attempt to minimize one of them usually makes the other grow. In fact, one type of error can be more important than the other, and a trade-off that minimizes the most important error type must be reached. Despite the importance of type II errors, most pattern classification methods take into account only the global classification error. In this paper we propose to optimize both error types in classification by means of a multiobjective algorithm in which each error type and the network size is an objective of the fitness function. A modified version of the GProp method (optimization and design of multilayer perceptrons) is used to simultaneously optimize the network size and the type I and II errors. 2006 * 43(<-238): Multiplicative Update Rules for Concurrent Nonnegative Matrix Factorization and Maximum Margin Classification The state-of-the-art classification methods which employ nonnegative matrix factorization (NMF) follow two consecutive independent steps. The first one performs data transformation (dimensionality reduction) and the second one classifies the transformed data using classification methods, such as nearest neighbor/centroid or support vector machines (SVMs). In the following, we focus on using NMF factorization followed by SVM classification. Typically, the parameters of these two steps, e.g., the NMF bases/coefficients and the support vectors, are optimized independently, thus leading to suboptimal classification performance. In this paper, we merge these two steps into one by incorporating maximum margin classification constraints into the standard NMF optimization. The notion behind the proposed framework is to perform NMF while ensuring that the margin between the projected data of the two classes is maximal. The concurrent NMF factorization and support vector optimization are performed through a set of multiplicative update rules. In the same context, the maximum margin classification constraints are imposed on the NMF problem with additional discriminant constraints, and the respective multiplicative update rules are extracted. The impact of the maximum margin classification constraints on the NMF factorization problem is addressed in Section VI. Experimental results on several databases indicate that the incorporation of the maximum margin classification constraints into the NMF and discriminant NMF objective functions improves the accuracy of the classification. 2013
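For context on entry 43: the margin-constrained updates themselves are not given in the abstract, but the standard NMF multiplicative updates the paper extends (Lee and Seung's rules) minimize \(\lVert V - WH\rVert_F^2\) subject to \(W, H \ge 0\) by alternating

\[ H \leftarrow H \odot \frac{W^{\top} V}{W^{\top} W H}, \qquad W \leftarrow W \odot \frac{V H^{\top}}{W H H^{\top}}, \]

where \(\odot\) and the fraction bar denote element-wise multiplication and division. The paper's contribution is to fold max-margin constraints on the coefficients \(H\) into updates of this same multiplicative form.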
* 44(<-425): A multi-model selection framework for unknown and/or evolutive misclassification cost problems In this paper, we tackle the problem of model selection when misclassification costs are unknown and/or may evolve. Unlike traditional approaches based on a scalar optimization, we propose a generic multi-model selection framework based on a multi-objective approach. The idea is to automatically train a pool of classifiers instead of one single classifier, each classifier in the pool optimizing a particular trade-off between the objectives. Within the context of two-class classification problems, we introduce the "ROC front concept" as an alternative to the ROC curve representation. This strategy is applied to the multi-model selection of SVM classifiers using an evolutionary multi-objective optimization algorithm. The comparison with a traditional scalar optimization technique based on an AUC criterion shows promising results on UCI datasets as well as on a real-world classification problem. (C) 2009 Elsevier Ltd. All rights reserved. 2010 * 45(<-429): NONCOST SENSITIVE SVM TRAINING USING MULTIPLE MODEL SELECTION In this paper, we propose a multi-objective optimization framework for SVM hyperparameter tuning. The key idea is to manage a population of classifiers optimizing both False Positive and True Positive rates rather than a single classifier optimizing a scalar criterion. Hence, each classifier in the population optimizes a particular trade-off between the objectives. Within the context of two-class classification problems, our work introduces the "receiver operating characteristics (ROC) front concept", depicting a population of SVM classifiers as an alternative to the ROC curve representation. The proposed framework leads to a non-cost-sensitive SVM training relying on the pool of classifiers. The comparison with a traditional scalar optimization technique based on an AUC criterion shows promising results on UCI datasets. 2010 * 46(<-567): Two-group classification via a biobjective margin maximization model In this paper we propose a biobjective model for two-group classification via margin maximization, in which the margins in both classes are simultaneously maximized. The set of Pareto-optimal solutions is described, yielding a set of parallel hyperplanes, one of which is just the solution of the classical SVM approach. In order to take into account different misclassification costs or a priori probabilities, the ROC curve can be used to select one out of such hyperplanes by expressing the adequate trade-off between sensitivity and specificity. Our result gives a theoretical motivation for using the ROC approach in case misclassification costs in the two groups are not necessarily equal. (c) 2005 Elsevier B.V. All rights reserved. 2006
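One natural way to write the biobjective model described in entry 46 (a sketch consistent with the abstract, not the authors' exact notation) is to maximize the two class margins separately over hyperplanes \(w^{\top}x + b = 0\):

\[ \max_{\lVert w\rVert = 1,\; b}\ \bigl(m_{+}(w,b),\ m_{-}(w,b)\bigr), \qquad m_{\pm}(w,b) = \min_{i:\, y_i = \pm 1}\ y_i\,(w^{\top} x_i + b). \]

Fixing \(w\) and sliding \(b\) trades \(m_{+}\) against \(m_{-}\), which is consistent with the abstract's observation that the Pareto set is a family of parallel hyperplanes, among which the ROC curve can arbitrate.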
We define the ROC surface for the Q-class problem in terms of a multi-objective optimisation problem in which the goal is to simultaneously minimise the Q(Q-1) misclassification rates, when the misclassification costs and parameters governing the classifier's behaviour are unknown. We present an evolutionary algorithm to locate the Pareto front, the optimal trade-off surface between misclassifications of different types. The use of the Pareto optimal surface to compare classifiers is discussed and we present a straightforward multi-class analogue of the Gini coefficient. The performance of the evolutionary algorithm is illustrated on a synthetic three class problem, for both k-nearest neighbour and multi-layer perceptron classifiers. (c) 2005 Elsevier B.V. All rights reserved. 2006 * 48(<- 26): Joint model for feature selection and parameter optimization coupled with classifier ensemble in chemical mention recognition Mention recognition in chemical texts plays an important role in a wide range of application areas. Feature selection and parameter optimization are two important issues in machine learning. While the former improves the quality of a classifier by removing the redundant and irrelevant features, the latter concerns finding the most suitable parameter values, which have a significant impact on the overall classification performance. In this paper we formulate a joint model that performs feature selection and parameter optimization simultaneously, and propose two approaches based on the concepts of single and multiobjective optimization techniques. Classifier ensemble techniques are also employed to improve the performance further. We identify and implement a variety of features that are mostly domain-independent. Experiments are performed with various configurations on the benchmark patent and Medline datasets. Evaluation shows encouraging performance in all the settings. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 49(<- 83): LG-Trader: Stock trading decision support based on feature selection by weighted localized generalization error model Stock trading is an important financial activity of human society. Machine learning techniques are adopted to provide trading decision support by predicting the stock price or trading signals of the next day. Decisions are made by analyzing technical indices and fundamental analysis of companies. There are two major machine learning research problems for stock trading decision support: classifier architecture selection and feature selection. In this work, we propose the LG-Trader which will deal with these two problems simultaneously using a genetic algorithm minimizing a new Weighted Localized Generalization Error (wL-GEM). An issue ignored in current machine-learning-based stock trading research is the imbalance among buy, hold and sell decisions. Usually the hold decision is the majority compared to both buy and sell decisions. So, the wL-GEM is proposed to balance classes by penalizing generalization errors made in minority classes more heavily. The feature selection based on wL-GEM helps to select the most useful technical indices among choices for each stock. Experimental results demonstrate that the LG-Trader yields higher profits and rates of return in both stock and index trading. (C) 2014 Elsevier B.V. All rights reserved. 2014
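Several of the entries above (notably entry 48) treat feature selection and parameter optimization as one joint, multi-objective search. As a hedged illustration of the underlying idea, the sketch below uses a plain random search with non-dominated filtering in place of the papers' evolutionary optimizers; the dataset, the two objectives (cross-validated error and subset size) and all parameter ranges are assumptions for illustration only.

#+begin_src python
# Toy joint feature-selection / SVM-parameter search, kept as a
# two-objective problem: minimize CV error and number of features.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
candidates = []
for _ in range(200):
    mask = rng.random(X.shape[1]) < 0.5      # random feature subset
    if not mask.any():
        continue
    C = 10 ** rng.uniform(-2, 2)             # random SVM penalty parameter
    err = 1 - cross_val_score(SVC(C=C), X[:, mask], y, cv=3).mean()
    candidates.append((err, int(mask.sum()), C))

# Keep only the non-dominated (Pareto) trade-offs between error and size.
pareto = [c for c in candidates
          if not any(o[0] <= c[0] and o[1] <= c[1] and
                     (o[0] < c[0] or o[1] < c[1]) for o in candidates)]
for err, k, C in sorted(pareto, key=lambda t: t[0]):
    print(f"error={err:.3f}  features={k:2d}  C={C:.3g}")
#+end_src

An evolutionary algorithm such as NSGA-II would replace the random sampling with selection, crossover and mutation, but the Pareto-filtering step is the same.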
* 50(<-289): Enhancing electronic nose performance: A novel feature selection approach using dynamic social impact theory and moving window time slicing for classification of Kangra orthodox black tea (Camellia sinensis (L.) O. Kuntze) This paper presents a novel multiobjective wrapper approach using a dynamic social impact theory based optimizer (SITO) and moving window time slicing (MWTS) for the performance enhancement of an electronic nose (EN). SITO, in conjunction with principal component analysis (PCA) and a support vector machines (SVM) classifier, has been used for the classification of samples collected from the single batch production of Kangra orthodox black tea (Camellia sinensis (L.) O. Kuntze). The work employs a novel SITO-assisted MWTS (SITO-MWTS) technique for identifying the optimum time intervals of the EN sensor array response, which give the maximum classification rate. Results show that, by identifying the optimum time slicing window positions for each sensor response, the performance of an EN can be improved. Also, the sensor response variability is time dependent in a sniffing cycle, and hence good classification can be obtained by selecting different time intervals for different sensors. The proposed method has also been compared with other established techniques for EN feature extraction. The work not only demonstrates the efficacy of SITO for feature selection, owing to its simplicity in terms of few control parameters, but also the capability of an EN to differentiate Kangra orthodox black tea samples at different production stages. (C) 2012 Elsevier B.V. All rights reserved. 2012 * 51(<-329): A novel approach using Dynamic Social Impact Theory for optimization of impedance-Tongue (iTongue) This paper presents a novel multiobjective wrapper approach using a Dynamic Social Impact Theory based optimizer (SITO). A Fuzzy Inference System in conjunction with a support vector machines classifier has been used for the optimization of an impedance-Tongue for the classification of samples collected from a single batch production of Kangra orthodox black tea. Impedance spectra of the tea samples have been measured in the range of 20 Hz to 1 MHz using a two-electrode setup employing platinum and gold electrodes. The proposed approach has been compared, for its robustness and validity using various intra and inter measures, against a Genetic Algorithm and binary Particle Swarm Optimization. Feature subset selection methods based on first- and second-order statistics have also been employed for comparisons. The proposed approach outperforms the Genetic Algorithm and binary Particle Swarm Optimization. (C) 2011 Elsevier B.V. All rights reserved. 2011 * 52(<- 20): Application of multi-stage multi-objective multi-disciplinary agent model based on dynamic substructural method in Mistuned Blisk A method called multi-stage multi-objective multi-disciplinary agent model based on dynamic substructural method (MSMOMDAM-DSM) is proposed. For large amounts of calculation, it can increase the computational efficiency of mistuned-blisk analysis significantly compared with the traditional probability analysis method while the required computational accuracy is satisfied. Deterministic analysis is investigated based on the improved hybrid interface substructural component modal synthesis method (IHISCMSM). It demonstrates that the symmetry is broken and the localization phenomenon is observed when the blisk is mistuned.
Meanwhile, the frequency response function (FRF) shows that multiple peaks are observed at different frequencies when the mistuning level is greater than 0%, which are caused by the combined behavior of resonance and mistuning. On the basis of the deterministic analysis, the most dangerous point of the mistuned blisk is extracted as the response amplitude and the probability analysis is investigated. The probability density distribution function (PDDF) of the random variable is given and the limit state equation (LSE) of radial deformation, sampling history, simulation sample, cumulative distribution function (CDF) and histogram distribution are obtained. The sensitivity and scatter diagrams are also analyzed, showing that some variables have positive effects while others have negative ones, with the rotational speed having the highest importance degree. Probability design and inverse probability design are also researched, which lay a solid foundation for designing a safe and reasonable blisk. At last, it is verified that the computational accuracy and efficiency of MSMOMDAM-DSM are superior to the support vector machine-response surface method (SVM-RSM). (C) 2015 Elsevier Masson SAS. All rights reserved. 2015 * 53(<- 44): The influence of scaling metabolomics data on model classification accuracy Correctly measured classification accuracy is an important aspect not only to classify pre-designated classes such as disease versus control properly, but also to ensure that the biological question can be answered competently. We recognised that there has been minimal investigation of pre-treatment methods and their influence on classification accuracy within the metabolomics literature. The standard approach to pre-treatment prior to classification modelling often incorporates the use of methods such as autoscaling, which positions all variables on a comparable scale thus allowing one to achieve separation of two or more groups (target classes). This is often undertaken without any prior investigation into the influence of the pre-treatment method on the data and supervised learning techniques employed. Whilst this is useful for deriving essential information such as predictive ability or visual interpretation in many cases, as shown in this study the standard approach is not always the most suitable option available. Here, a study has been conducted to investigate the influence of six pre-treatment methods (autoscaling, range, level, Pareto and vast scaling, as well as no scaling) on four classification models, including principal components-discriminant function analysis (PC-DFA), support vector machines (SVM), random forests (RF) and k-nearest neighbours (kNN), using three publicly available metabolomics data sets. We have demonstrated that undertaking different pre-treatment methods can greatly affect the interpretation of the statistical modelling outputs. The results have shown that data pre-treatment is context dependent and that there was no single superior method for all the data sets used. Whilst we did find that vast scaling produced the most robust models in terms of classification rate for PC-DFA of both NMR spectroscopy data sets, in general we conclude that both vast scaling and autoscaling produced similar and superior results in comparison to the other four pre-treatment methods on both NMR and GC-MS data sets.
It is therefore our recommendation that vast scaling is the primary pre-treatment method to use, as this method appears to be more stable and robust across all the different classifiers used in this study. 2015 * 54(<-127): SVM classification for imbalanced data sets using a multiobjective optimization framework Classification of imbalanced data sets in which negative instances outnumber the positive instances is a significant challenge. These data sets are commonly encountered in real-life problems. However, the performance of well-known classifiers is limited in such cases. Various solution approaches have been proposed for the class imbalance problem using either data-level or algorithm-level modifications. Support Vector Machines (SVMs), which have a solid theoretical background, also encounter a dramatic decrease in performance when the data distribution is imbalanced. In this study, we propose an L-1-norm SVM approach that is based on a three-objective optimization problem so as to incorporate into the formulation the error sums for the two classes independently. Motivated by the inherent multi-objective nature of SVMs, the solution approach utilizes a reduction into two-criteria formulations and investigates the efficient frontier systematically. The results indicate that a comprehensive treatment of distinct positive and negative error levels may lead to performance improvements that have varying degrees of increased computational effort. 2014 * 55(<-174): Multiobjective Binary Biogeography Based Optimization for Feature Selection Using Gene Expression Data Gene expression data play an important role in the development of efficient cancer diagnoses and classification. However, gene expression data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a multi-objective biogeography based optimization method is proposed to select a small subset of informative genes relevant to the classification. In the proposed algorithm, firstly, the Fisher-Markov selector is used to choose the top 60 genes of the gene expression data. Secondly, to make biogeography based optimization suitable for the discrete problem, binary biogeography based optimization, called BBBO, is proposed based on a binary migration model and a binary mutation model. Then, multi-objective binary biogeography based optimization, called MOBBBO, is proposed by integrating the non-dominated sorting method and the crowding distance method into the BBBO framework. Finally, the MOBBBO method is used for gene selection, and a support vector machine is used as the classifier with the leave-one-out cross-validation method (LOOCV). In order to show the effectiveness and efficiency of the algorithm, the proposed algorithm is tested on ten gene expression dataset benchmarks. Experimental results demonstrate that the proposed method is better than or at least comparable to previous particle swarm optimization (PSO) and support vector machine (SVM) approaches from the literature when considering the quality of the solutions obtained. 2013 * 56(<-254): Multiclass Gene Selection Using Pareto-Fronts Filter methods are often used for selection of genes in multiclass sample classification by using microarray data.
Such techniques usually tend to be biased toward a few classes that are easily distinguishable from other classes due to imbalances of strong features and sample sizes of different classes. This can therefore lead to the selection of redundant genes while the relevant genes are missed, resulting in poor classification of tissue samples. In this manuscript, we propose to decompose multiclass ranking statistics into class-specific statistics and then use Pareto-front analysis for selection of genes. This alleviates the bias induced by class intrinsic characteristics of dominating classes. The use of Pareto-front analysis is demonstrated on two filter criteria commonly used for gene selection: F-score and KW-score. A significant improvement in classification performance and reduction in redundancy among top-ranked genes were achieved in experiments with both synthetic and real-benchmark data sets. 2013 * 57(<-264): Exploiting the Classification Performance of Support Vector Machines with Multi-Temporal Moderate-Resolution Imaging Spectroradiometer (MODIS) Data in Areas of Agreement and Disagreement of Existing Land Cover Products Several past studies have focused on the harmonization and inter-comparison of global land cover (LC) datasets and have found significant inconsistencies. Despite the known discrepancies between existing products derived from medium resolution satellite sensor data, little emphasis has been placed on examining these disagreements to improve the overall classification accuracy of future land cover maps. This work evaluates the classification performance of a least square support vector machine (LS-SVM) algorithm with respect to areas of agreement and disagreement between two existing land cover maps. The approach involves the use of time series of Moderate-resolution Imaging Spectroradiometer (MODIS) 250-m Normalized Difference Vegetation Index (NDVI) (16-day composites) and gridded climatic indicators. LS-SVM is trained on reference samples obtained through visual interpretation of Google Earth (GE) high resolution imagery. The core of the training process is based on repeated random splits of the training dataset to select a small set of suitable support vectors optimizing class separability. A large number of independent validation samples spread over three contrasting regions in Europe (Eastern Austria, Macedonia and Southern France) are used to calculate classification accuracies for the LS-SVM NDVI-derived LC map and for two (globally available) LC products: GLC2000 and GlobCover. The LS-SVM LC map reported an overall accuracy of 70%. Classification accuracies ranged from 71% where GlobCover and GLC2000 agreed to 68% for areas of disagreement. Results indicate that existing LC products are as accurate as the LS-SVM LC map in areas of agreement (with little margin for improvements), while classification accuracy is substantially better for the LS-SVM LC map in areas of disagreement. On average, the LS-SVM LC map was 14% and 18% more accurate compared to GlobCover and GLC2000, respectively. 2012 * 58(<-321): MicroRNA transcription start site prediction with multi-objective feature selection. MicroRNAs (miRNAs) are non-coding, short (21-23 nt) regulators of protein-coding genes that are generally transcribed first into primary miRNA (pri-miR), followed by the generation of precursor miRNA (pre-miR). This finally leads to the production of the mature miRNA. A large amount of information is available on the pre- and mature miRNAs.
However, very little is known about the pri-miRs, due to a lack of knowledge about their transcription start sites (TSSs). Based on the genomic loci, miRNAs can be categorized into two types: intragenic (intra-miR) and intergenic (inter-miR). While it is already an established fact that intra-miRs are commonly transcribed in conjunction with their host genes, the transcription machinery of inter-miRs is poorly understood. Although it is assumed that miRNA promoters are similar in structure to gene promoters, since both are transcribed by RNA polymerase II (Pol II), computational validations exhibit poor performance of gene promoter prediction methods on miRNAs. In this paper, we concentrate on the problem of TSS prediction for miRNAs. The present study begins with the identification of positive and negative promoter samples from recently published data stemming from RNA-sequencing studies. From these samples of experimentally validated miRNA TSSs, a number of standard sequence features are extracted. Furthermore, to account for potential footprints related to promoter regulation by CpG dinucleotide targeted DNA methylation, a number of novel features are defined. We develop a support vector machine (SVM) with RBF kernel for the prediction of miRNA TSSs, trained on human miRNA promoters. A novel feature reduction technique based on archived multi-objective simulated annealing (AMOSA) identifies the final set of features. The resulting model trained on miRNA promoters shows improved performance over the one trained on protein-coding gene promoters in terms of classification accuracy, sensitivity and specificity. Results are also reported for a completely independent biologically validated test set. In a part of the investigation, the proposed approach is used to predict protein-coding gene TSSs. It shows a significantly improved performance when compared to previously published gene TSS prediction methods. 2012 * 59(<-337): MultiMiTar: A Novel Multi Objective Optimization based miRNA-Target Prediction Method Background: Machine learning based miRNA-target prediction algorithms often fail to obtain a balanced prediction accuracy in terms of both sensitivity and specificity due to the lack of a gold standard of negative examples, of miRNA-targeting-site context-specific relevant features and of an efficient feature selection process. Moreover, all the sequence, structure and machine learning based algorithms are unable to distribute the true positive predictions preferentially at the top of the ranked list; hence the algorithms become unreliable to the biologists. In addition, these algorithms fail to obtain a considerable combination of precision and recall for the target transcripts that are translationally repressed at the protein level. Methodology/Principal Finding: In this article, we introduce an efficient miRNA-target prediction system, MultiMiTar, a Support Vector Machine (SVM) based classifier integrated with a multiobjective metaheuristic based feature selection technique. The robust performance of the proposed method is mainly the result of using high quality negative examples and the selection of biologically relevant miRNA-targeting-site context-specific features. The features are selected by using a novel feature selection technique, AMOSA-SVM, which integrates the multi-objective optimization technique Archived Multi-Objective Simulated Annealing (AMOSA) and SVM.
Conclusions/Significance: MultiMiTar is found to achieve a much higher Matthews correlation coefficient (MCC) of 0.583 and average class-wise accuracy (ACA) of 0.8 compared to the other target prediction methods for a completely independent test data set. The obtained MCC and ACA values of these algorithms range from -0.269 to 0.155 and 0.321 to 0.582, respectively. Moreover, it shows a more balanced result in terms of precision and sensitivity (recall) for the translationally repressed data set as compared to all the other existing methods. An important aspect is that the true positive predictions are distributed preferentially at the top of the ranked list, which makes MultiMiTar reliable for the biologists. MultiMiTar is now available as an online tool at www.isical.ac.in/~bioinfo_miu/multimitar.htm. MultiMiTar software can be downloaded from www.isical.ac.in/~bioinfo_miu/multimitar-download.htm. 2011 * 60(<-547): Gene selection with multiple ordering criteria Background: A microarray study may select different differentially expressed gene sets because of different selection criteria. For example, the fold-change and p-value are two commonly known criteria to select differentially expressed genes under two experimental conditions. These two selection criteria often result in incompatible selected gene sets. Also, in a two-factor, say, treatment by time experiment, the investigator may be interested in one gene list that responds to both treatment and time effects. Results: We propose three layer-ranking algorithms, point-admissible, line-admissible (convex), and Pareto, to provide a preference gene list from multiple gene lists generated by different ranking criteria. Using the public colon data as an example, the layer-ranking algorithms are applied to three univariate ranking criteria: fold-change, p-value, and frequency of selection by the SVM-RFE classifier. A simulation experiment shows that for experiments with small or moderate sample sizes (less than 20 per group) and detecting a 4-fold change or less, the two-dimensional (p-value and fold-change) convex layer ranking selects differentially expressed genes with generally lower FDR and higher power than the standard p-value ranking. Three applications are presented. The first application illustrates a use of the layer rankings to potentially improve predictive accuracy. The second application illustrates an application to a two-factor experiment involving two dose levels and two time points. The layer rankings are applied to selecting differentially expressed genes relating to the dose and time effects. In the third application, the layer rankings are applied to a benchmark data set consisting of three dilution concentrations to provide a ranking system from a long list of differentially expressed genes generated from the three dilution concentrations. Conclusion: The layer-ranking algorithms are useful to help investigators in selecting the most promising genes from multiple gene lists generated by different filter, normalization, or analysis methods for various objectives. 2007 * 61(<-173): An SVM-Wrapped Multiobjective Evolutionary Feature Selection Approach for Identifying Cancer-MicroRNA Markers MicroRNAs (miRNAs) have been shown to play important roles in gene regulation and various biological processes. Recent studies have revealed that abnormal expression of some specific miRNAs often results in the development of cancer.
Microarray datasets containing the expression profiles of several miRNAs are being used for identification of miRNAs which are differentially expressed in normal and malignant tissue samples. In this article, a multiobjective feature selection approach is proposed for this purpose. The proposed method uses a Genetic Algorithm for multiobjective optimization and a support vector machine (SVM) classifier as a wrapper for evaluating the chromosomes that encode feature subsets. The performance has been demonstrated on real-life miRNA datasets, and the identified miRNA markers are reported. Moreover, biological significance tests have been carried out for the obtained markers. 2013 * 62(<-381): Gene expression data analysis using multiobjective clustering improved with SVM based ensemble. Microarray technology facilitates the monitoring of the expression levels of thousands of genes over different experimental conditions simultaneously. Clustering is a popular data mining tool which can be applied to microarray gene expression data to identify co-expressed genes. Most of the traditional clustering methods optimize a single clustering goodness criterion and thus may not be capable of performing well on all kinds of datasets. Motivated by this, in this article, a multiobjective clustering technique that optimizes cluster compactness and separation simultaneously has been improved through a novel support vector machine classification based cluster ensemble method. The superiority of MOCSVMEN (MultiObjective Clustering with Support Vector Machine based ENsemble) has been established by comparing its performance with that of several well-known existing microarray data clustering algorithms. Two real-life benchmark gene expression datasets have been used for testing the comparative performances of different algorithms. A recently developed metric, called the Biological Homogeneity Index (BHI), which computes the clustering goodness with respect to functional annotation, has been used for the comparison purpose. 2011 * 63(<-392): Multi-Class Clustering of Cancer Subtypes through SVM Based Ensemble of Pareto-Optimal Solutions for Gene Marker Identification With the advancement of microarray technology, it is now possible to study the expression profiles of thousands of genes across different experimental conditions or tissue samples simultaneously. Microarray cancer datasets, organized in a samples-versus-genes fashion, are being used for classification of tissue samples into benign and malignant or their subtypes. They are also useful for identifying potential gene markers for each cancer subtype, which helps in successful diagnosis of particular cancer types. In this article, we have presented an unsupervised cancer classification technique based on multiobjective genetic clustering of the tissue samples. In this regard, a real-coded encoding of the cluster centers is used and cluster compactness and separation are simultaneously optimized. The resultant set of near-Pareto-optimal solutions contains a number of non-dominated solutions. A novel approach to combine the clustering information possessed by the non-dominated solutions through a Support Vector Machine (SVM) classifier has been proposed. Final clustering is obtained by consensus among the clusterings yielded by different kernel functions. The performance of the proposed multiobjective clustering method has been compared with that of several other microarray clustering algorithms for three publicly available benchmark cancer datasets.
Moreover, statistical significance tests have been conducted to establish the statistical superiority of the proposed clustering method. Furthermore, relevant gene markers have been identified using the clustering result produced by the proposed clustering method and demonstrated visually. Biological relationships among the gene markers are also studied based on gene ontology. The results obtained are found to be promising and can possibly have an important impact in the area of unsupervised cancer classification as well as gene marker identification for multiple cancer subtypes. 2010 * 64(<-484): Combining Pareto-optimal clusters using supervised learning for identifying co-expressed genes Background: The landscape of biological and biomedical research is being changed rapidly with the invention of microarrays, which enable a simultaneous view of the transcription levels of a huge number of genes across different experimental conditions or time points. Using microarray data sets, clustering algorithms have been actively utilized in order to identify groups of co-expressed genes. This article poses the problem of fuzzy clustering in microarray data as a multiobjective optimization problem which simultaneously optimizes two internal fuzzy cluster validity indices to yield a set of Pareto-optimal clustering solutions. Each of these clustering solutions possesses some amount of information regarding the clustering structure of the input data. Motivated by this fact, a novel fuzzy majority voting approach is proposed to combine the clustering information from all the solutions in the resultant Pareto-optimal set. This approach first identifies the genes which are assigned to some particular cluster with a high membership degree by most of the Pareto-optimal solutions. Using this set of genes as the training set, the remaining genes are classified by a supervised learning algorithm. In this work, we have used a Support Vector Machine (SVM) classifier for this purpose. Results: The performance of the proposed clustering technique has been demonstrated on five publicly available benchmark microarray data sets, viz., Yeast Sporulation, Yeast Cell Cycle, Arabidopsis Thaliana, Human Fibroblasts Serum and Rat Central Nervous System. Comparative studies of the use of different SVM kernels and several widely used microarray clustering techniques are reported. Moreover, statistical significance tests have been carried out to establish the statistical superiority of the proposed clustering approach. Finally, biological significance tests have been carried out using a web based gene annotation tool to show that the proposed method is able to produce biologically relevant clusters of co-expressed genes. Conclusion: The proposed clustering method has been shown to perform better than other well-known clustering algorithms in finding clusters of co-expressed genes efficiently. The clusters of genes produced by the proposed technique are also found to be biologically significant, i.e., consist of genes which belong to the same functional groups. This indicates that the proposed clustering method can be used efficiently to identify co-expressed genes in microarray gene expression data. Supplementary website: The pre-processed and normalized data sets, the MATLAB code and other related materials are available at http://anirbanmukhopadhyay.50webs.com/mogasvm.html. 2009
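Entries 62, 64 and 71 share one mechanism: points whose cluster assignment is confident become the training set of an SVM, which then labels the remaining points. The sketch below is a deliberately simplified stand-in, assuming a single k-means run and a centroid-distance-ratio confidence instead of the fuzzy majority voting over a Pareto-optimal ensemble used in the papers.

#+begin_src python
# Cluster-then-classify sketch: train an SVM on the confident cluster
# members and let it decide the labels of the unclear points.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, _ = make_blobs(n_samples=600, centers=4, random_state=0)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Confidence: nearest-centroid distance relative to the second nearest.
d = km.transform(X)                   # distances to every centroid
d.sort(axis=1)
conf = 1 - d[:, 0] / d[:, 1]          # close to 1 when the assignment is clear
core = conf > np.quantile(conf, 0.5)  # most confident half as training set

svm = SVC(kernel="rbf", gamma="scale").fit(X[core], km.labels_[core])
labels = km.labels_.copy()
labels[~core] = svm.predict(X[~core])  # SVM relabels the unclear points
print("points relabeled by the SVM:", int((labels != km.labels_).sum()))
#+end_src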
* 65(<-363): Multi-criteria ABC analysis using artificial-intelligence-based classification techniques ABC analysis is a popular and effective method used to classify inventory items into specific categories that can be managed and controlled separately. Conventional ABC analysis classifies inventory items into three categories, A, B, or C, based on the annual dollar usage of an inventory item. Multi-criteria inventory classification has been proposed by a number of researchers in order to take other important criteria into consideration. These researchers have compared artificial-intelligence (AI)-based classification techniques with traditional multiple discriminant analysis (MDA). Examples of these AI-based techniques include support vector machines (SVMs), backpropagation networks (BPNs), and the k-nearest neighbor (k-NN) algorithm. To test the effectiveness of these techniques, classification results based on four benchmark techniques are compared. The results show that AI-based techniques demonstrate superior accuracy to MDA. Statistical analysis reveals that SVM enables more accurate classification than other AI-based techniques. This finding suggests the possibility of implementing AI-based techniques for multi-criteria ABC analysis in enterprise resource planning (ERP) systems. (C) 2010 Elsevier Ltd. All rights reserved. 2011 * 66(<-498): Accurate and resource-aware classification based on measurement data In this paper, we face the problem of designing accurate decision-making modules in measurement systems that need to be implemented on resource-constrained platforms. We propose a methodology based on multiobjective optimization and genetic algorithms (GAs) for the analysis of support vector machine (SVM) solutions in the classification error-complexity space. Specific criteria for the choice of optimal SVM classifiers and experimental results on both real and synthetic data will also be discussed. 2008 * 67(<-148): Multi-objective evolutionary algorithms for fuzzy classification in survival prediction Objective: This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. Methods and materials: The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set.
Results: The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a standard data set from a machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Conclusions: Our proposal improves the accuracy and interpretability of the classifiers, compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial and based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches existing in the literature based on combinatorial optimization. (C) 2014 Elsevier B.V. All rights reserved. 2014 * 68(<- 67): A real-time forecasting model for the spatial distribution of typhoon rainfall Accurate forecasts of hourly rainfall are necessary for early warning systems during typhoons. In this paper, a typhoon rainfall forecasting model is proposed to yield 1- to 6-h ahead forecasts of hourly rainfall. First, an input optimization step integrating a multi-objective genetic algorithm (MOGA) with a support vector machine (SVM) is developed to identify the optimal input combinations. Second, based on the results of the first step, a procedure that uses the forecasted rainfall from each station to obtain the spatial characteristics of the rainfall process is presented. An actual application to the Tsengwen river basin is conducted to demonstrate the advantage of the proposed model. The results clearly indicate that the proposed model effectively improves the forecasting performance and decreases the negative impact of increasing forecast lead time. The optimal input combinations can be obtained from the proposed model for different stations with different geographical conditions. In addition, the proposed model is capable of producing more acceptable rainfall maps than other models. In conclusion, the proposed modeling technique is useful for improving hourly typhoon rainfall forecasting and is expected to be helpful to support disaster warning systems. (C) 2014 Elsevier B.V. All rights reserved. 2015 * 69(<-369): Classification as Clustering: A Pareto Cooperative-Competitive GP Approach Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without recourse to prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function.
This gives individuals the opportunity to represent subsets of tasks, where the overall task is that of classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet non-overlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases. 2011 * 70(<-356): Multi-objective uniform design as an SVM model selection tool for face recognition The primary difficulty of support vector machine (SVM) model selection is its heavy computational cost; thus it is difficult for current model selection methods to be applied in face recognition. Model selection via uniform design can effectively alleviate the computational cost, but its drawback is that it adopts a single-objective criterion, which cannot always guarantee the generalization capacity. Sensitivity and specificity as multi-objective criteria have been shown to perform better and can provide a means for obtaining more realistic models. This paper first proposes a multi-objective uniform design (MOUD) search method as an SVM model selection tool, and then applies this optimized SVM classifier to face recognition. By replacing the single-objective criterion with multi-objective criteria and adopting uniform design to seek experimental points that scatter uniformly over the whole experimental domain, MOUD can reduce the computational cost and improve the classification ability simultaneously. The experiments are executed on UCI benchmarks and on the Yale and CAS-PEAL-R1 face databases. The experimental results show that the proposed method outperforms other model search methods significantly, especially for face recognition. (C) 2010 Elsevier Ltd. All rights reserved. 2011 * 71(<-411): Integrating Clustering and Supervised Learning for Categorical Data Analysis The problem of fuzzy clustering of categorical data, where no natural ordering among the elements of a categorical attribute domain can be found, is an important problem in exploratory data analysis. As a result, a few clustering algorithms with a focus on categorical data have been proposed. In this paper, a modified differential evolution (DE)-based fuzzy c-medoids (FCMdd) clustering of categorical data is proposed. The algorithm combines both local and global information with adaptive weighting. The performance of the proposed method has been compared with those of genetic algorithm, simulated annealing, and the classical DE technique, besides the FCMdd, fuzzy k-modes, and average linkage hierarchical clustering algorithms, for four artificial and four real-life categorical data sets. Statistical tests have been carried out to establish the statistical significance of the proposed method.
To improve the result further, the clustering method is integrated with a support vector machine (SVM), a well-known technique for supervised learning. A fraction of the data points selected from different clusters based on their proximity to the respective medoids is used for training the SVM. The clustering assignments of the remaining points are thereafter determined using the trained classifier. The superiority of the integrated clustering and supervised learning approach has been demonstrated. 2010 * 72(<- 3): Closed-Loop Restoration Approach to Blurry Images Based on Machine Learning and Feedback Optimization Blind image deconvolution (BID) aims to remove or reduce the degradations that have occurred during acquisition or processing. It is a challenging ill-posed problem due to the lack of sufficient information in the degraded image for unambiguous recovery of both the point spread function (PSF) and the clear image. Although many powerful algorithms have appeared recently, it is still an active research area due to the diversity of degraded images as well as degradations. Closed-loop control systems are characterized by their powerful ability to stabilize the behavior response and overcome external disturbances by designing an effective feedback optimization. In this paper, we employed feedback control to enhance the stability of BID by driving the current estimation quality of the PSF to the desired level without manually selecting restoration parameters, using an effective combination of machine learning with feedback optimization. The foremost challenge when designing a feedback structure is to construct or choose a suitable performance metric as the controlled index and feedback signal. Our proposed quality metric is based on the blur assessment of deconvolved patches to identify the best PSF and compute its relative quality. The Kalman filter-based extremum seeking approach is employed to find the optimum value of the controlled variable. To find better restoration parameters, learning algorithms, such as multilayer perceptrons and bagged decision trees, are used to estimate the generic PSF support size instead of trial and error methods. The problem is modeled as a combination of pattern classification and regression using multiple training features, including noise metrics, blur metrics, and low-level statistics. A multi-objective genetic algorithm is used to find key patches from multiple saliency maps, which enhances performance and saves extra computation by avoiding ineffectual regions of the image. The proposed scheme is shown to outperform corresponding open-loop schemes, which often fail or need many assumptions regarding the images, resulting in sub-optimal restorations. 2015 * 73(<-343): Building a qualitative recruitment system via SVM with MCDM approach Advances in information technology have led to behavioral changes in people, and submission of curriculum vitae (CV) via the Internet has become a common phenomenon. Without any technological support for the filtering process, recruitment can be difficult. In this research, a method combining a five-factor personality inventory, support vector machines (SVM), and a multi-criteria decision-making (MCDM) method was proposed to improve the quality of recruiting appropriate candidates.
The online personality questionnaire developed by the International Personality Item Pool (IPIP) was utilized to identify the personal traits of candidates, and both SVM and MCDM were employed to predict and support the decision of personnel choice. SVM was utilized to predict the fitness of candidates, while MCDM was employed to estimate the performance for a job placement. The results show that the proposed system provides qualified matching according to feedback collected from enterprise managers. 2011 * 74(<-602): Ensemble strategies for a medical diagnostic decision support system: A breast cancer diagnosis application The model selection strategy is an important determinant of the performance and acceptance of a medical diagnostic decision support system based on supervised learning algorithms. This research investigates the potential of various selection strategies from a population of 24 classification models to form ensembles in order to increase the accuracy of decision support systems for the early detection and diagnosis of breast cancer. Our results suggest that ensembles formed from a diverse collection of models are generally more accurate than either pure-bagging ensembles (formed from a single model) or the selection of a "single best model." We find that effective ensembles are formed from a small and selective subset of the population of available models, with potential candidates identified by a multicriteria process that considers the properties of model generalization error, model instability, and the independence of model decisions relative to other ensemble members. (C) 2003 Elsevier B.V. All rights reserved. 2005 * 75(<-608): Selection of optimal features for iris recognition Iris recognition is a prospering biometric method, but some technical difficulties still exist. This paper proposes an iris recognition method based on selected optimal features and statistical learning. To better represent the variation details in irises, we extract features from both the spatial and frequency domains. A multi-objective genetic algorithm is then employed to optimize the features. The next step is classification of the optimal feature sequence. The SVM has recently generated great interest in the machine learning community due to its excellent generalization performance in a wide variety of learning problems. We modified the traditional SVM into a non-symmetrical support vector machine to satisfy the differing security requirements in iris recognition applications. Experimental data shows that the selected feature sequence represents the variation details of the iris patterns properly. 2005 * 76(<-635): A novel hybrid GA/SVM system for protein sequences classification A novel hybrid genetic algorithm (GA)/support vector machine (SVM) system, which selects features from the protein sequences and trains the SVM classifier simultaneously using a multi-objective genetic algorithm, is proposed in this paper. The system is then applied to classify protein sequences obtained from the Protein Information Resource (PIR) protein database. Finally, experimental results over six protein superfamilies are reported, where it is shown that the proposed hybrid GA/SVM system outperforms BLAST and HMMer. 2004
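Entry 75's non-symmetrical SVM and the ROC-front entries (44 and 45) both revolve around unequal error costs. A rough, hedged stand-in for either idea is to sweep the class weights of a standard soft-margin SVM and record the operating point of each resulting classifier; the weighting grid and data below are illustrative assumptions, not the authors' formulations.

#+begin_src python
# Sweep asymmetric misclassification penalties and print the (FPR, TPR)
# operating point of each trained SVM, a crude one-parameter "ROC front".
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=800, weights=[0.8], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for w in [0.2, 0.5, 1.0, 2.0, 5.0]:   # relative penalty on class-1 errors
    clf = SVC(class_weight={0: 1.0, 1: w}).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    print(f"w={w:4.1f}  FPR={fp / (fp + tn):.3f}  TPR={tp / (tp + fn):.3f}")
#+end_src

The multi-objective algorithms in entries 44 and 45 search this trade-off space directly and return a whole non-dominated set of classifiers rather than a one-dimensional sweep.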
* 77(<- 79): Pareto-Path Multitask Multiple Kernel Learning A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with a (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches. 2015 * 78(<-307): A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n x n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed. We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. 2012 * 79(<- 13): Multi-kernel multi-criteria optimization classifier with fuzzification and penalty factors for predicting biological activity Nowadays it is important to develop effective computational methods for accurately identifying and predicting biological activity in the virtual screening of bioassay data so as to speed up the process of drug development. Among these methods, the multi-criteria optimization classifier (MCOC) is a classifier which can find a trade-off between the overlapping degree of different classes and the total distance from input points to the decision hyperplane. The former should be minimized while the latter should be maximized. Then a decision function is derived from training data and this function is subsequently used to predict the class label of an unseen instance. However, due to outliers, anomalies, highly imbalanced classes, high dimension, nonlinear separability and other uncertainties in data, MCOC and other methods often give poor predictive performance.
In this paper, we introduce a new fuzzy contribution for each input point based on the class median, define new row and column kernel functions as linear combinations of different feature kernels to replace the single kernel function in the kernel-induced feature space, and add penalty factors for imbalanced classes; thus a novel multi-kernel multi-criteria optimization classifier with fuzzification and penalty factors (MK-MCOC-FP) is proposed and the effects of the aforementioned problems are significantly reduced. Experimental results on predicting active compounds in virtual screening, together with comparisons against linear and quadratic MCOCs, support vector machines (SVM), fuzzy SVM and neural networks, show that MK-MCOC-FP evidently increased the ability of resisting noise interference, the predictive accuracy on highly class-imbalanced bioassay data, the separation of active compounds and inactive compounds, the interpretability of the importance or contributions of different features to classification, the efficiency of classification with feature selection or dimensionality reduction for high-dimensional data, and the generalization of predicting the biological activity of new compounds. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 80(<-108): Credit risk evaluation using multi-criteria optimization classifier with kernel, fuzzification and penalty factors With the fast development of financial products and services, banks' credit departments have collected large amounts of data, which risk analysts use to build appropriate credit scoring models to evaluate an applicant's credit risk accurately. One of these models is the Multi-Criteria Optimization Classifier (MCOC). By finding a trade-off between the overlapping of different classes and the total distance from input points to the decision boundary, MCOC can derive a decision function from distinct classes of training data and subsequently use this function to predict the class label of an unseen sample. In many real-world applications, however, owing to noise, outliers, class imbalance, nonlinearly separable problems and other uncertainties in data, classification quality degenerates rapidly when using MCOC. In this paper, we propose a novel multi-criteria optimization classifier based on kernel, fuzzification, and penalty factors (KFP-MCOC): firstly, a kernel function is used to map input points into a high-dimensional feature space; then an appropriate fuzzy membership function is introduced to MCOC and associated with each data point in the feature space, and unequal penalty factors are added to the input points of imbalanced classes. Thus, the effects of the aforementioned problems are reduced. Our experimental results of credit risk evaluation and their comparison with MCOC, support vector machines (SVM) and fuzzy SVM show that KFP-MCOC can enhance the separation of different applicants, the efficiency of credit risk scoring, and the generalization of predicting the credit rank of a new credit applicant. (C) 2014 Elsevier B.V. All rights reserved. 2014 * 81(<-132): Multi-criteria optimization classifier using fuzzification, kernel and penalty factors for predicting protein interaction hot spots In order to understand the patterns of various biological processes and discover the principles of protein-protein interactions (PPI), it is important to develop effective methods for identifying and predicting PPI and their hot spots accurately.
A multi-criteria optimization classifier (MCOC) can learn a decision function from different classes of training data and use it to predict the class labels of unknown samples. In many real-world applications, owing to noise, outliers, imbalanced class distributions, nonlinearly separable problems, and other uncertainties, the predictive performance of MCOC degenerates rapidly. In this paper, we introduce a fuzzy contribution for each training instance, unequal penalty factors for the samples in imbalanced classes, and a kernel method for nonlinearly separable datasets; a novel multi-criteria optimization classifier with fuzzification, kernel and penalty factors (FKP-MCOC) is then constructed so as to reduce the effects of anomalies and improve performance under class imbalance and nonlinear separability. Experimental results on predicting active compounds and protein interaction hot spots, together with comparisons against MCOC, support vector machines (SVM) and fuzzy SVM, show that FKP-MCOC significantly increases the efficiency of classification, the partition of active and inactive compounds in bioassay, the separation of hot spot residues and energetically unimportant residues in protein interactions, and the generalization of predicting active compounds and hot spot residues in new instances. (C) 2014 Elsevier B.V. All rights reserved. 2014 * 82(<-330): Integrating multicriteria PROMETHEE II method into a single-layer perceptron for two-class pattern classification PROMETHEE methods based on the outranking relation theory are extensively used in multicriteria decision aid. A preference index representing the intensity of preference for one pattern over another pattern can be measured by various preference functions. The higher the intensity, the stronger the preference is indicated. In contrast to traditional single-layer perceptrons (SLPs) with the sigmoid function, this paper develops a novel PROMETHEE II-based SLP using concepts from the PROMETHEE II method involving pairwise comparisons between patterns. The assignment of a class label to a pattern is dependent on its net preference index, which the proposed perceptron obtains. Specifically, this study designs a genetic-algorithm-based learning algorithm to determine the relative weights of the respective criteria in order to derive the preference index for any pair of patterns. Computer simulations involving several real-world data sets reveal the classification performance of the proposed PROMETHEE II-based SLP. The proposed perceptron performs well compared to other well-known fuzzy or non-fuzzy classification methods. 2011 * 83(<-333): SINGLE-LAYER PERCEPTRON WITH NON-ADDITIVE PREFERENCE INDICES AND ITS APPLICATION TO BANKRUPTCY PREDICTION Preference Ranking Organization METHods for Enrichment Evaluations (PROMETHEE), based on outranking relation theory, are used extensively in Multi-Criteria Decision Aid (MCDA). In PROMETHEE, an overall preference index based on weighted average aggregation represents the intensity of preference for one pattern over another pattern and can be measured by a given preference function. Unfortunately, as the criteria making up the patterns are not always independent, the assumption of additivity among single-criterion preference indices may not be reasonable. This paper develops a novel PROMETHEE-based perceptron using non-additive preference indices for ordinal sorting problems.
* 83(<-333): SINGLE-LAYER PERCEPTRON WITH NON-ADDITIVE PREFERENCE INDICES AND ITS APPLICATION TO BANKRUPTCY PREDICTION Preference Ranking Organization METHods for Enrichment Evaluations (PROMETHEE), based on outranking relation theory, are used extensively in Multi-Criteria Decision Aid (MCDA). In PROMETHEE, an overall preference index based on weighted average aggregation represents the intensity of preference for one pattern over another pattern and can be measured by a given preference function. Unfortunately, as the criteria making up the patterns are not always independent, the assumption of additivity among single-criterion preference indices may not be reasonable. This paper develops a novel PROMETHEE-based perceptron using non-additive preference indices for ordinal sorting problems. The applicability of the proposed non-additive PROMETHEE-based single-layer perceptron (SLP) to bankruptcy prediction is examined by using a sample of 53 publicly traded Taiwanese firms that encountered financial failure between 2000 and 2008. The proposed model performs well compared to PROMETHEE with additive preference indices and other additive PROMETHEE-based classification approaches. 2011 * 84(<-399): A single-layer perceptron with PROMETHEE methods using novel preference indices The Preference Ranking Organization METHods for Enrichment Evaluations (PROMETHEE) methods, based on the outranking relation theory, are used extensively in multi-criteria decision aid (MCDA). In particular, preference indices with weighted average aggregation representing the intensity of preference for one pattern over another pattern are measured by various preference functions. The higher the intensity, the stronger the preference is indicated. For MCDA, to obtain the ranking of alternatives, compromise operators such as the weighted average aggregation, or the disjunctive operators, are often employed to aggregate the performance values of criteria. The compromise operators express the group utility or the majority rule, whereas the disjunctive operators take into account the strongly opponent or agreeable minorities. Since these two types of operators have their own unique features, it is interesting to develop a novel aggregator for a preference index by integrating them into a single aggregator. This study aims to develop a novel PROMETHEE-based single-layer perceptron (PROSLP) for pattern classification using the proposed preference index. The assignment of a class label to a pattern is dependent on its net preference index, which is obtained by the proposed perceptron. Computer simulations involving several real-world data sets reveal the classification performance of the proposed PROMETHEE-based SLP. The proposed perceptron with the novel preference index performs well compared to that with the original one. (C) 2010 Elsevier B.V. All rights reserved. 2010 * 85(<-459): Bankruptcy prediction using ELECTRE-based single-layer perceptron Within the outranking relation theory, the ELECTRE methods are among the most extensively used outranking methods. To measure the degree of agreement and the degree of disagreement with the proposition "one alternative outranks another alternative", the concordance and discordance relations are usually associated with the outranking relation. Instead of the traditional single-layer perceptron (SLP) developed according to multiple-attribute utility theory, this paper develops a novel ELECTRE-based SLP for multicriteria classification problems based on the ELECTRE methods involving pairwise comparisons among patterns. A genetic-algorithm-based method is then designed to determine the connection weights. A real-world data set involving bankruptcy analysis obtained from Moody's Industrial Manuals is employed to examine the classification performance of the proposed ELECTRE-based model. The results demonstrate that the proposed model performs well compared to an arsenal of well-known classification methods from the quantitative disciplines of statistics and machine learning. (C) 2009 Elsevier B.V. All rights reserved. 2009 * 86(<-213): Simulated annealing based classifier ensemble techniques: Application to part of speech tagging Part-of-Speech (PoS) tagging is an important pipelined module for almost all Natural Language Processing (NLP) application areas.
In this paper we formulate PoS tagging within the frameworks of single and multi-objective optimization techniques. As the first step, we propose a classifier ensemble technique for PoS tagging using the concept of single objective optimization (SOO) that exploits the search capability of simulated annealing (SA). Thereafter we devise a method based on multiobjective optimization (MOO) to solve the same problem, for which a recently developed multiobjective simulated annealing based technique, AMOSA, is used. The characteristic features of AMOSA are its concepts of the amount of domination and archive in simulated annealing, and situation-specific acceptance probabilities. We use Conditional Random Field (CRF) and Support Vector Machine (SVM) as the underlying classification methods that make use of a diverse set of features, mostly based on local contexts and orthographic constructs. We evaluate our proposed approaches for two Indian languages, namely Bengali and Hindi. Evaluation results of the single objective version show overall accuracies of 88.92% for Bengali and 87.67% for Hindi. The MOO based ensemble yields overall accuracies of 90.45% and 89.88% for Bengali and Hindi, respectively. (C) 2012 Elsevier B.V. All rights reserved. 2013 * 87(<-228): Combining multiple classifiers using vote based classifier ensemble technique for named entity recognition In this paper, we pose the classifier ensemble problem under single and multiobjective optimization frameworks, and evaluate it for Named Entity Recognition (NER), an important step in almost all Natural Language Processing (NLP) application areas. We propose solutions to two different versions of the ensemble problem for each of the optimization frameworks. We hypothesize that the reliability of the predictions of each classifier differs among the various output classes. Thus, in an ensemble system it is necessary to find out either the eligible classes for which a classifier is most suitable to vote (i.e., binary vote based ensemble) or to quantify the amount of voting for each class in a particular classifier (i.e., real vote based ensemble). We use seven diverse classifiers, namely Naive Bayes, Decision Tree (DT), Memory Based Learner (MBL), Hidden Markov Model (HMM), Maximum Entropy (ME), Conditional Random Field (CRF) and Support Vector Machine (SVM), to build a number of models depending upon the various representations of the available features, which are identified and selected mostly without using any domain knowledge and/or language specific resources. The proposed technique is evaluated for three resource-constrained languages, namely Bengali, Hindi and Telugu. Results using the multiobjective optimization (MOO) based technique yield overall recall, precision and F-measure values of 94.21%, 94.72% and 94.74%, respectively for Bengali, 99.07%, 90.63% and 94.66%, respectively for Hindi, and 82.79%, 95.18% and 88.55%, respectively for Telugu. Results for all the languages show that the proposed MOO based classifier ensemble with real voting attains a performance level superior to all the individual classifiers, three baseline ensembles and the corresponding single objective based ensemble. (C) 2012 Elsevier B.V. All rights reserved. 2013
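The real-vote ensembles in the two entries above reduce to learning one vote weight per (classifier, class) pair. The sketch below anneals such a weight matrix against validation accuracy; the synthetic class-probability tensor and the single-objective annealer are stand-ins for the CRF/SVM outputs and the AMOSA procedure of the cited work:

#+begin_src python
import numpy as np

rng = np.random.default_rng(0)
K, N, C = 3, 200, 4                       # classifiers, samples, classes
y = rng.integers(0, C, N)                 # validation labels (synthetic)
P = rng.dirichlet(np.ones(C), (K, N))     # per-classifier class probabilities
for k in range(K):                        # correlate them mildly with the labels
    P[k, np.arange(N), y] += rng.uniform(0.2, 0.6)
P /= P.sum(axis=2, keepdims=True)

def accuracy(W):
    """Weighted real vote: score[i, c] = sum_k W[k, c] * P[k, i, c]."""
    scores = np.einsum('kc,kic->ic', W, P)
    return np.mean(scores.argmax(axis=1) == y)

W, T = np.ones((K, C)), 1.0               # start from uniform vote weights
best, best_W = accuracy(W), W.copy()
for step in range(2000):
    cand = np.clip(W + rng.normal(0, 0.1, W.shape), 0, None)
    delta = accuracy(cand) - accuracy(W)
    if delta > 0 or rng.random() < np.exp(delta / T):
        W = cand                          # accept uphill moves, and some downhill
    if accuracy(W) > best:
        best, best_W = accuracy(W), W.copy()
    T *= 0.998                            # geometric cooling schedule
print(f"ensemble accuracy after annealing: {best:.3f}")
#+end_src

AMOSA generalizes this loop by keeping an archive of mutually non-dominated weight matrices instead of a single incumbent, with acceptance driven by the amount of domination.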
* 88(<-256): Combining feature selection and classifier ensemble using a multiobjective simulated annealing approach: application to named entity recognition In this paper, we propose a two-stage multiobjective-simulated annealing (MOSA)-based technique for named entity recognition (NER). At first, MOSA is used for feature selection under two statistical classifiers, viz. conditional random field (CRF) and support vector machine (SVM). Each solution on the final Pareto optimal front provides a different classifier. These classifiers are then combined together by using a new classifier ensemble technique based on MOSA. Several different versions of the objective functions are exploited. We hypothesize that the reliability of prediction of each classifier differs among the various output classes. Thus, in an ensemble system, it is necessary to find out the appropriate weight of vote for each output class in each classifier. We propose a MOSA-based technique to determine these weights automatically. The proposed two-stage technique is evaluated for NER in Bengali, a resource-poor language, as well as for English. Evaluation results yield the highest recall, precision and F-measure values of 93.95, 95.15 and 94.55%, respectively for Bengali, and 89.01, 89.35 and 89.18%, respectively for English. Experiments also suggest that the classifier ensemble identified by the proposed MOO-based approach optimizing the F-measure values of named entity (NE) boundary detection outperforms all the individual classifiers and four conventional baseline models. 2013 * 89(<-332): A multiobjective simulated annealing approach for classifier ensemble: Named entity recognition in Indian languages as case studies In this paper, we propose a simulated annealing (SA) based multiobjective optimization (MOO) approach for classifier ensemble. Several different versions of the objective functions are exploited. We hypothesize that the reliability of prediction of each classifier differs among the various output classes. Thus, in an ensemble system, it is necessary to find out the appropriate weight of vote for each output class in each classifier. Diverse classification methods such as Maximum Entropy (ME), Conditional Random Field (CRF) and Support Vector Machine (SVM) are used to build different models depending upon the various representations of the available features. One of the most important characteristics of our system is that the features are selected and developed mostly without using any deep domain knowledge and/or language dependent resources. The proposed technique is evaluated for Named Entity Recognition (NER) in three resource-poor Indian languages, namely Bengali, Hindi and Telugu. Evaluation results yield recall, precision and F-measure values of 93.95%, 95.15% and 94.55%, respectively for Bengali, 93.35%, 92.25% and 92.80%, respectively for Hindi, and 84.02%, 96.56% and 89.85%, respectively for Telugu. Experiments also suggest that the classifier ensemble identified by the proposed MOO based approach optimizing the F-measure values of named entity (NE) boundary detection outperforms all the individual models, two conventional baseline models and three other MOO based ensembles. (C) 2011 Elsevier Ltd. All rights reserved. 2011
* 90(<- 31): Research on the route optimization for fresh air processing of air handling unit in spacecraft launching site The existing control methods for air handling units (AHUs) in spacecraft launching sites (SLS) are comparatively dated: the air processing routes are usually determined arbitrarily from experience, which fails to cope with the coupling and functional redundancy of the air-conditioning system and the diversity of the outdoor environment, resulting in tremendous energy waste. This paper proposes a new route optimization strategy for fresh air processing: first, the possible processing routes for fresh air are analyzed on the psychrometric chart; then the optimization algorithm AFSA-GA is applied to the candidate routes; finally, the route requiring the least energy consumption is obtained. By adopting the proposed strategy to optimize the air processing route for high-temperature, high-humidity working conditions, it is shown that the strategy can reduce energy consumption considerably, and that AFSA-GA has the advantages of faster convergence and avoidance of premature convergence. (C) 2015 Elsevier Ltd. All rights reserved. 2015 * 91(<- 51): Multi-objective optimization of the HVAC (heating, ventilation, and air conditioning) system performance A data-driven approach to optimize the total energy consumption of the HVAC (heating, ventilation, and air conditioning) system in a typical office facility is presented. A multi-layer perceptron ensemble is selected to build the total energy model, integrating three indoor air quality models: the facility temperature model, the facility relative humidity model, and the facility CO2 concentration model. To balance the energy consumption and the indoor air quality, a quad-objective optimization problem is constructed. The problem is solved with a modified particle swarm optimization algorithm producing control settings of the supply air temperature and static pressure of the air handling unit. By assigning different weights to the objectives of the model, the generated control settings optimize the HVAC system, trading off energy consumption against facility thermal comfort. Significant energy savings can be obtained even with the air quality constraint. (C) 2015 Elsevier Ltd. All rights reserved. 2015
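The quad-objective HVAC problem above can be scalarized into a minimal particle swarm sketch over two control settings. The energy and discomfort surrogates, bounds, and objective weights below are all invented placeholders for the cited models (the MLP-ensemble energy model and the indoor air quality models):

#+begin_src python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):       # placeholder for the MLP-ensemble energy model
    return (x[..., 0] - 14.0) ** 2 + 0.5 * (x[..., 1] - 1.2) ** 2

def discomfort(x):   # placeholder for the thermal comfort / IAQ penalty
    return (x[..., 0] - 22.0) ** 2

w_e, w_c = 0.7, 0.3  # illustrative objective weights, as in the cited trade-off
def cost(x):
    return w_e * energy(x) + w_c * discomfort(x)

lo, hi = np.array([10.0, 0.5]), np.array([30.0, 2.5])  # assumed setpoint bounds
pos = rng.uniform(lo, hi, (25, 2))   # particles: (supply air T, static pressure)
vel = np.zeros_like(pos)
pbest, pbest_c = pos.copy(), cost(pos)
g = pbest[np.argmin(pbest_c)]        # global best
for it in range(200):
    r1, r2 = rng.random((2, 25, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = cost(pos)
    better = c < pbest_c
    pbest[better], pbest_c[better] = pos[better], c[better]
    g = pbest[np.argmin(pbest_c)]
print("best (T, static pressure):", g.round(2), "cost:", pbest_c.min().round(3))
#+end_src

Re-running with different (w_e, w_c) pairs traces out the same kind of energy-versus-comfort trade-off the cited study exposes by re-weighting its objectives.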
* 92(<- 81): Optimal allocation and adaptive VAR control of PV-DG in distribution networks The development of distributed generation (DG) has brought new challenges to power networks. One that has caught extensive attention is the voltage regulation problem of distribution networks caused by DG. Optimal allocation of DG in distribution networks is another well-known problem under wide investigation. This paper proposes a new method for the optimal allocation of photovoltaic distributed generation (PV-DG) considering the non-dispatchable characteristics of PV units. An adaptive reactive power control model is introduced into PV-DG allocation so as to balance the trade-off between the improvement of voltage quality and the minimization of power loss in a distribution network integrated with PV-DG units. The optimal allocation problem is formulated as a chance-constrained stochastic programming (CCSP) model for dealing with the randomness of solar power energy. A novel algorithm combining multi-objective particle swarm optimization (MOPSO) with support vector machines (SVM) is proposed to find the Pareto front consisting of a set of possible solutions. The Pareto solutions are further evaluated using the weighted rank sum ratio (WRSR) method to help the decision-maker obtain the desired solution. Simulation results on a 33-bus radial distribution system show that the optimal allocation method can fully take into account the time-variant characteristics and probability distribution of PV-DG, and obtain the best allocation scheme. (C) 2014 Elsevier Ltd. All rights reserved. 2015 * 93(<- 86): Optimization of Liquid Desiccant Regenerator with Multiobject Particle Swarm Optimization Algorithm In this paper, a model-based optimization strategy for a liquid desiccant regenerator operating with lithium chloride solution is presented. By analyzing the characteristics of the components, such as the electric heater, pump, and fan, energy predictive models for the components in the regenerator are developed. To minimize the energy usage while maintaining the regeneration rate at an accepted level, a multiobjective optimization problem is formulated with two objectives, subject to the constraints of the decision variables, component interactions, and outdoor conditions. A multiobjective optimization strategy based on decreasing-inertia-weight particle swarm optimization (DIWPSO) is proposed to obtain the optimal nondominated solutions of the optimization problem, and a decision-making strategy is introduced to select the final solution (desiccant solution flow rate, desiccant solution temperature, and regenerating air flow rate) that minimizes the energy usage in the regenerator. Experimental studies are carried out on an existing system to compare the energy consumption and regeneration rate between the proposed optimization strategy and the conventional strategy, to evaluate the energy saving performance of the proposed strategy. Experimental results demonstrate that an average of 8.55% energy can be saved by implementing the proposed optimization strategy in a liquid desiccant regenerator. 2014 * 94(<-116): Incomplete information-based decentralized cooperative control strategy for distributed energy resources of VSI-based microgrids This paper presents an effective method to control distributed energy resources (DERs) installed in a microgrid (MG) to guarantee its stability after islanding occurrence. Considering voltage and frequency variations after islanding occurrence and based on stability criteria, MG pre-islanding conditions are divided into secure and insecure classes. It is shown that an insecure MG can become secure if appropriate preventive control is applied to the DERs in different operating conditions of the MG. To select the most important variables of the MG, which can estimate proper values of the output power set points of DERs, a feature selection procedure known as symmetrical uncertainty is used in this paper. Among all the MG variables, critical ones are selected to calculate the appropriate output power of different DERs for different conditions of the MG. The values of the selected features are transmitted by the communication system to the control unit installed on each DER to control its output power set point. In order to decrease the communication system cost, previous researchers have used local variables to control the set points of different DERs. This approach decreases the accuracy of the controller because the controller uses incomplete information.
In this paper, a multi-objective approach is used to decrease the cost of the communication system while keeping the accuracy of the preventive control strategy within an allowable margin. The results demonstrate the effectiveness of the proposed method in comparison with other methods. 2014 * 95(<-161): Feasibility study and performance assessment for the integration of a steam-injected gas turbine and thermal desalination system This study proposes a systematic approach for retrofitting a steam-injection gas turbine (SIGT) with a multi-effect thermal vapor compression (METVC) desalination system. The retrofitted unit's product cost of fresh water (RUPC) was used as a performance criterion, comprising the thermodynamic, economic, and environmental attributes when calculating the total annual cost of the SIGT-METVC system. For the feasibility study of retrofitting the SIGT plant with the METVC desalination system, the effects of two key parameters, the steam air ratio (SR) and the temperature difference between the effects of the METVC system (Delta T-METVC), on the fresh water production (Q(freshwater)) and the net power generation (W-net) of the SIGT-METVC system were analyzed using response surface methodology (RSM) based on a central composite design (CCD). Multi-objective optimization (MOO), minimizing the modified total annual cost (MTAC) and maximizing the fresh water flow rate, was performed to optimize the RUPC of the SIGT-METVC system. The best Pareto optimal solution showed that the SIGT-METVC system with five effects is the best among the systems with 4-6 effects. This system under optimal operating conditions can save 21.07% and 9.54% of the RUPC compared to the systems with four and six effects, respectively. (C) 2013 Elsevier B.V. All rights reserved. 2014 * 96(<-193): Exergy analysis and parametric optimization of three power and fresh water cogeneration systems using refrigeration chillers Three power and fresh water cogeneration systems that combine a GT (gas turbine) power plant and a RO (reverse osmosis) desalination system were compared from the exergy viewpoint. In the first system, the GT and RO systems were coupled mechanically to form a base system. In the second and third systems, a VCR (vapor-compression refrigeration) cycle and a single-effect AC(Water-LiBr) (water/lithium bromide absorption chiller) were used, respectively, to cool the compressor inlet air and preheat the RO intake seawater via waste heat recovery in the VCR condenser and AC(Water-LiBr) absorber. A parametric exergy-based analysis was conducted to evaluate the effects of the key thermodynamic parameters, including the compressor inlet air temperature and the fuel mass flow rate, on the system exergy efficiency. Parameter optimization was achieved using a GA (genetic algorithm) to reach the maximum exergy efficiency, whereby the thermodynamic improvement potentials of the systems were identified. The optimum values of performance for the three cogeneration systems were compared under the same conditions. The results showed that the cogeneration system with the AC is the best of the three systems, since it can increase exergy and energy efficiencies as well as net power generation by 3.79%, 4.21%, and 38%, respectively, compared to the base system. (C) 2013 Elsevier Ltd. All rights reserved. 2013
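Both desalination studies above close the loop with a genetic algorithm over a handful of plant parameters. Below is a minimal real-coded GA of that shape, where the smooth surrogate objective and the parameter bounds are invented stand-ins for the actual thermodynamic models:

#+begin_src python
import numpy as np

rng = np.random.default_rng(0)

def exergy_efficiency(x):
    """Placeholder objective standing in for the coupled GT/RO model;
    it peaks at an inlet temperature of 15 and a fuel flow rate of 2."""
    t_inlet, m_fuel = x
    return -((t_inlet - 15.0) / 10.0) ** 2 - ((m_fuel - 2.0) / 0.5) ** 2

lo, hi = np.array([5.0, 1.0]), np.array([40.0, 4.0])  # assumed bounds
pop = rng.uniform(lo, hi, (40, 2))                    # random initial population
for gen in range(100):
    fit = np.array([exergy_efficiency(p) for p in pop])
    # tournament selection: keep the fitter of random pairs
    i, j = rng.integers(0, 40, (2, 40))
    parents = pop[np.where(fit[i] > fit[j], i, j)]
    # arithmetic crossover with a random partner, then Gaussian mutation
    mates = parents[rng.permutation(40)]
    alpha = rng.random((40, 1))
    pop = alpha * parents + (1 - alpha) * mates
    pop += rng.normal(0, 0.02, pop.shape) * (hi - lo)
    pop = np.clip(pop, lo, hi)
best = pop[np.argmax([exergy_efficiency(p) for p in pop])]
print("best (compressor inlet T, fuel mass flow):", best.round(3))
#+end_src

The cited studies replace the toy objective with full exergy and cost models and, in the multi-objective case, keep the objectives separate rather than maximizing a single surrogate.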
* 97(<-357): Multi-objective optimization of HVAC system with an evolutionary computation algorithm A data-mining approach for the optimization of a HVAC (heating, ventilation, and air conditioning) system is presented. A predictive model of the HVAC system is derived by data-mining algorithms, using a dataset collected from an experiment conducted at a research facility. To minimize the energy while maintaining the corresponding IAQ (indoor air quality) within a user-defined range, a multi-objective optimization model is developed. The solutions of this model are set points of the control system derived with an evolutionary computation algorithm. The controllable input variables, the supply air temperature and supply air duct static pressure set points, are generated to reduce the energy use. The results produced by the evolutionary computation algorithm show that the control strategy saves energy by optimizing the operations of an HVAC system. (C) 2011 Elsevier Ltd. All rights reserved. 2011 * 98(<-470): Optimization of coal-fired boiler SCRs based on modified support vector machine models and genetic algorithms An integrated combustion optimization approach is presented that jointly considers the trade-offs in optimizing the coal-fired boiler and the selective catalytic reduction (SCR) system, to balance the unit thermal efficiency, SCR reagent consumption and NOx emissions. Field tests were performed at a 160 MW coal-fired unit to investigate the relationships between the controllable process variables and the optimization targets and constraints. Based on the test data, a modified on-line support vector regression model was proposed for characteristic function approximation, in which the model parameters can be continuously adapted to changes in coal quality and other conditions of the plant equipment. The optimization scheme was implemented by a genetic algorithm in two stages. Firstly, the multi-objective combustion optimization problem was solved to achieve an optimal Pareto front, which contains optimal solutions for the lowest unit heat rate and lowest NOx emissions. Secondly, the best operating settings for the boiler, SCR system and air preheater were obtained for the lowest operating cost under the constraints of the NOx emissions limit and the air preheater ammonium bisulfate deposition depth. (c) 2008 Elsevier Ltd. All rights reserved. 2009 * 99(<-413): Applying multiobjective RBFNNs optimization and feature selection to a mineral reduction problem The nickel reduction process is a complex task in which many dynamic optimization problems arise that, nowadays, require a human operator to take decisions based on experience and intuition. In order to help the operator optimize the reduction process in terms of the maximum amount of mineral extracted and minimum energy consumption, a control system integrating several modules is being designed. One of the modules has the task of predicting how much petroleum will be burned in the ovens where the raw material is processed. This paper proposes an algorithm to design Radial Basis Function Neural Networks that are able to predict accurately the amount of petroleum given a set of input parameters. The algorithm is also able to identify the most relevant input parameters for the network, so the dimensionality reduction problem is ameliorated. Hence, this paper, as shown in the experiments section, applies the synergy of different Soft Computing techniques to the industrial process, obtaining satisfactory results. (C) 2009 Elsevier Ltd. All rights reserved. 2010
* 100(<-637): Multi-criteria decision-making for optimization of product disassembly under multiple situations With growing interest in recovering materials and subassemblies from consumer products at the end of their useful life, there has been increasing interest in developing decision-making methodologies that determine how to maximize the environmental benefits of end-of-life (EOL) processing while minimizing costs under variable EOL situations. This paper describes a methodology to analyze how product designs and situational variables impact the Pareto set of optimal EOL strategies with the greatest environmental benefit for a given economic cost or profit. Since the determination of this Pareto set via enumeration of all disassembly sequences and EOL fates is prohibitively time-consuming even for relatively simple products, multi-objective genetic algorithms (GA) are utilized to rapidly approximate the Pareto set of optimal EOL trade-offs between cost and environmentally conscious actions. Such rapid calculations of the Pareto set are critical to better understand the influence of situational variables on how disassembly and recycling decisions change under different EOL scenarios (e.g., under variable regulatory, infrastructure, or market situations). To illustrate the methodology, a case study involving the EOL treatment of a coffee maker is described. Impacts of situational variables on trade-offs between recovered energy and cost in Aachen, Germany, and in Ann Arbor, MI, are elucidated, and a means of presenting the results in the form of a multi-situational EOL strategy graph is described. The impact of the European Union Directive regarding Waste Electric and Electronic Equipment (WEEE) on EOL trade-offs between energy recovery and cost was also considered for both locations. 2003 * 101(<-207): Zone Refining of Tin: Optimization of Zone Length by a Genetic Algorithm Zone refining comprises a number of techniques utilized to deal with the rearrangement of soluble impurities or phases along a bar in order to produce high-purity materials. The concentration curves can be predicted for given values of the segregation partition coefficient (k), molten zone length, and number of sequential zone passes. The combination of such process parameters can result in many possible experimental conditions, and optimization by trial-and-error methods is not suitable, even by numerical simulation, due to the computational time required. The purpose of this work is to evolve an interaction between a genetic algorithm (GA) and a predictive model for impurity distribution, permitting the best zone length in each pass to be determined in order to provide maximum purification, minimum bar length waste and the lowest number of zone passes. The proposed approach is validated against experimental results of zone refining of tin, for impurities having opposite segregation behaviour, i.e., k>1 and k<1. 2013 * 102(<-283): Optimization of End Milling Parameters under Minimum Quantity Lubrication Using Principal Component Analysis and Grey Relational Analysis Machining is the major reliable practice of the metal cutting industries. Rapidly growing competition demands superior-quality, high-volume, low-cost products. Metal-working fluids account for a significant fraction of manufacturing cost and cause ecological impacts and health problems. This work attempts to develop an efficient machining setup with no ecological impact.
The prediction of quality characteristics and the enhancement of machining performance consistently attract great interest in the machining sector as means of compressing production costs. In this paper, a GA-based ANN prediction model is proposed to predict the quality characteristics of surface roughness and tool wear. The comparison of predicted and experimental values confirms the precision of the model. The end milling experiments are conducted under minimum quantity lubrication. This paper also addresses multi-objective optimization using principal component analysis, grey relational analysis and the Taguchi method. ANOVA was carried out to determine each parameter's percentage contribution to the quality characteristics. The results show that cutting speed is the most influential parameter, followed by feed velocity, lubricant flow rate and depth of cut. The confirmation tests show that the proposed multi-objective methodology is capable of determining the optimum machining parameters for minimum surface roughness and tool wear. 2012 * 103(<-294): Optimum design of run-flat tire insert rubber by genetic algorithm A generalized multi-objective optimization method making use of a genetic algorithm (GA) is introduced in order to simultaneously improve the riding comfort and the durability of a run-flat tire by optimally tailoring the shape and stiffness of the sidewall insert rubber. The sensitivity analysis invoking CPU time-consuming finite element analyses is replaced with genetic evolution, and the fitness of each genome in the population is evaluated by utilizing response surfaces of the objective functions approximated by an ANN. It is confirmed through numerical experiments that a number of Pareto solutions of the shape and stiffness of the sidewall insert rubber for different combinations of weighting factors can be successfully obtained. The reliability of the Pareto solutions is also justified by comparison with direct finite element analysis. (C) 2011 Elsevier B.V. All rights reserved. 2012 * 104(<-336): Resource-Aware Compiler Prefetching for Fine-Grained Many-Cores Super-scalar, out-of-order processors that can have tens of read and write requests in the execution window place significant demands on Memory Level Parallelism (MLP). Multi- and many-cores with shared parallel caches further increase MLP demand. Current cache hierarchies, however, have been unable to keep up with this trend, with modern designs allowing only 4-16 concurrent cache misses. This disconnect is exacerbated by recent highly parallel architectures (e.g. GPUs) where per-core power and area budgets favor numerous lighter cores with fewer resources. Support for hardware and software prefetch increases MLP pressure, since these techniques overlap multiple memory requests with existing computation. In this paper, we propose and evaluate a novel Resource-Aware Prefetching (RAP) compiler algorithm that is aware of the number of simultaneous prefetches supported and is optimized for the same. We implemented our algorithm in a GCC-derived compiler and evaluated its performance using an emerging fine-grained many-core architecture. Our results show that the RAP algorithm outperforms a well-known loop prefetching algorithm by up to 40.15% in run-time on average across benchmarks, and the state-of-the-art GCC implementation by up to 34.79%, depending upon the hardware configuration.
Moreover, we compare the RAP algorithm with a simple hardware prefetching mechanism and show run-time improvements of up to 24.61%. To demonstrate the robustness of our approach, we conduct a design-space exploration (DSE) for the considered target architecture by varying (i) the amount of chip resources designated for per-core prefetch storage and (ii) the off-chip bandwidth. We show that the RAP algorithm is robust in that it improves performance across all design points considered. We also identify the Pareto-optimal hardware-software configuration, which delivers a 53.66% run-time improvement on average while using only 5.47% more chip area than the bare-bones design. 2011 * 105(<-408): De novo design: balancing novelty and confined chemical space Importance of the field: De novo drug design serves as a tool for the discovery of new ligands for macromolecular targets as well as the optimization of known ligands. Recently developed tools aim to address the multi-objective nature of drug design in an unprecedented manner. Areas covered in this review: This article discusses recent advances in de novo drug design programs and accessory programs used to evaluate compounds post-generation. What the reader will gain: The reader is introduced to the challenges inherent in de novo drug design and will become familiar with current trends in de novo design. Furthermore, the reader will be better prepared to assess the value of a tool, and be equipped to design more elegant tools in the future. Take home message: De novo drug design can assist in the efficient discovery of new compounds with a high affinity for a given target. The inclusion of existing chemoinformatic methods with current structure-based de novo design tools provides a means of enhancing the therapeutic value of these generated compounds. 2010 * 106(<- 22): A review on sustainable construction management strategies for monitoring, diagnosing, and retrofitting the building's dynamic energy performance: Focused on the operation and maintenance phase According to a press release, the building sector accounts for about 40% of the global primary energy consumption. Energy savings can be achieved in the building sector by improving the building's dynamic energy performance in terms of sustainable construction management in urban-based built environments (referred to as an "Urban Organism"). This study implements the concept of a "dynamic approach" to reflect unexpected changes in the climate and energy environments as well as in energy policies and technologies. Research in this area is very significant for the future of the building, energy, and environmental industries around the world. However, there is a lack of studies from the perspective of the dynamic approach and system integration, and thus this study is designed to fill the research gap. This study highlights the state of the art in the major phases of a building's dynamic energy performance (i.e., the monitoring, diagnosing, and retrofitting phases), focusing on the operation and maintenance phase. This study covers a wide range of research works and provides various illustrative examples of the monitoring, diagnosing, and retrofitting of a building's dynamic energy performance. Finally, this study proposes specific future developments and challenges by phase and suggests the future direction of system integration for the development of a carbon-integrated management system as a large complex system.
It is expected that researchers and practitioners can understand and adopt the holistic approach in the monitoring, diagnosing, and retrofitting of a building's dynamic energy performance under the new paradigm of an "Urban Organism". (C) 2015 Elsevier Ltd. All rights reserved. 2015 * 107(<-360): Optimization methods applied to renewable and sustainable energy: A review Energy is a vital input for social and economic development. As a result of the generalization of agricultural, industrial and domestic activities, the demand for energy has increased remarkably, especially in emergent countries. This has meant rapid growth in the level of greenhouse gas emissions and an increase in fuel prices, which are the main driving forces behind efforts to utilize renewable energy sources more effectively, i.e. energy which comes from natural resources and is also naturally replenished. Despite the obvious advantages of renewable energy, it presents important drawbacks, such as the discontinuity of generation, as most renewable energy resources depend on the climate, which is why their use requires complex design, planning and control optimization methods. Fortunately, the continuous advances in computer hardware and software are allowing researchers to deal with these optimization problems using computational resources, as can be seen in the large number of optimization methods that have been applied to the renewable and sustainable energy field. This paper presents a review of the current state of the art in computational optimization methods applied to renewable and sustainable energy, offering a clear vision of the latest research advances in this field. (C) 2010 Elsevier Ltd. All rights reserved. 2011 * 108(<-382): Realization of Non Linear Controllers in Batch Reactor using GA and SVM This paper presents the application of machine learning schemes, namely SVM and GA, for the realization of nonlinear control schemes and the optimization of a batch reactor. The batch reactor is an essential unit operation in almost all batch-processing industries, such as chemicals and pharmaceuticals. In this approach, the temperature profile of the batch reactor is optimized using a Genetic Algorithm (GA) with a view to maximizing the desired product and minimizing the waste product as a multi-objective function. Generic Model Control is implemented by using an SVM estimator, and it includes the nonlinear model of the process to determine the control action. The SVM estimator predicts the current value of the heat release, which makes the control performance more robust. The robust performance of GMC has been demonstrated. Other nonlinear control schemes, such as Direct Inverse Control and Internal Model Control, are also implemented. 2011 * 109(<-471): Functional organization of the major late transcriptional unit of canine adenovirus type 2 Vectors derived from canine adenovirus type 2 (CAV-2) are attractive candidates for gene therapy and live recombinant vaccines. CAV-2 vectors described thus far have been generated by modifying the virus genome, most notably early regions 1 and 3 or the fiber gene. Modification of these genes was underpinned by previous descriptions of their mRNA and protein-coding sequences. Similarly, the construction of new CAV-2 vectors bearing changes in other genomic regions, in particular many of those expressed late in the viral cycle, will require prior characterization of the corresponding transcriptional units.
In this study, we provide a detailed description of the late transcriptional organization of the CAV-2 genome. We examined the major late transcription unit (MLTU) and determined its six families of mRNAs controlled by the putative major late promoter (MLP). All mRNAs expressed from the MLTU had a common non-coding tripartite leader (224 nt) at their 5' end. In transient transfection assays, the predicted MLP sequence was able to direct luciferase gene expression, and the TPL sequence yielded a higher amount of transgene product. Identification of viral transcriptional products following in vitro infection confirmed most of the predicted protein-coding regions that were deduced from computer analysis of the CAV-2 genome. These findings contribute to a better understanding of gene expression in CAV-2 and lay the foundation required for genetic modifications aimed at vector optimization. 2009 * 110(<- 70): Multi-objective efficiency enhancement using workload spreading in an operational data center The cooling systems of rapidly growing Data Centers (DCs) consume a considerable amount of energy, which is one of the main concerns in designing and operating DCs. The main source of thermal inefficiency in a typical air-cooled DC is hot air recirculation from the outlets of servers into their inlets, causing hot spots and leading to performance reduction of the cooling system. In this study, a thermally aware workload spreading method is proposed for reducing the hot spots while the total allocated server workload is increased. The core of this methodology lies in developing an appropriate thermal DC model for the optimization process. Given the fact that utilizing a high-fidelity thermal model of a DC is highly time-consuming in the optimization process, a three-dimensional reduced-order model of a real DC is developed in this study. This model, whose boundary conditions are determined based on measurement data of an operational DC, is developed based on potential flow theory updated with the Rankine vortex to account for buoyancy and air recirculation effects inside the DC. Before evaluating the proposed method, this model is verified against a computational fluid dynamics (CFD) model simulated with the same boundary conditions. The efficient load spreading method is achieved by applying a multi-objective particle swarm optimization (MOPSO) algorithm whose objectives are to minimize hot spot occurrences and to maximize the total workload allocated to the servers. In this case study, by applying the proposed method, the Coefficient of Performance (COP) of the cooling system is increased by 17%, and the total allocated workload is increased by 10%. These results demonstrate the effectiveness of the proposed method for the energy efficiency enhancement of DCs. Crown Copyright (C) 2014 Published by Elsevier Ltd. All rights reserved. 2015 * 111(<-426): Fuzzy TOPSIS approach for assessing thermal-energy storage in concentrated solar power (CSP) systems The energy produced by thermal solar plants does not have to be limited solely to hours of sunlight: it is possible to conceive a storage system that extends the production of heat beyond the hours of full sunshine. The main aim of this paper is to test the validity and effectiveness of the proposed fuzzy multi-criteria method (fuzzy TOPSIS) for comparing different heat transfer fluids (HTF), in order to investigate the feasibility of utilizing a molten salt. The thermal processes involved in CSP are not analyzed.
The use of molten salt offers the potential to reduce the electricity production cost and to increase the energy performance in an eco-compatible way. Salt is less expensive and more environmentally benign than currently used HTFs, but unfortunately its high freezing point leads to significant O&M challenges and requires an innovative freeze protection system. (C) 2009 Elsevier Ltd. All rights reserved. 2010 * 112(<- 9): A new methodology for investigating the cost-optimality of energy retrofitting a building category According to the Energy Performance of Buildings Directive (EPBD) Recast, building energy retrofitting should aim "to achieving cost-optimal levels". However, the detection of cost-optimal levels for an entire building stock is a complex task. This paper tackles this issue by introducing a novel methodology aimed at supporting robust cost-optimal energy retrofit solutions for building categories. Since the members of one building category provide highly different energy performance, they cannot be correctly represented by only one reference design as stipulated by the EPBD Recast. Therefore, a representative building sample (RBS) is used here to consider potential variations in all parameters affecting energy performance. Simulation-based uncertainty analysis is employed to identify the optimal RBS size, followed by simulation-based sensitivity analysis to identify proper retrofit actions. Then post-processing is performed to investigate the cost-effectiveness of all possible retrofit packages, including energy-efficient HVAC systems, renewables, and energy saving measures. The methodology is denoted as SLABE, 'Simulation-based Large-scale uncertainty/sensitivity Analysis of Building Energy performance'. It employs EnergyPlus and MATLAB (R). For demonstration, SLABE is applied to office buildings built in South Italy during 1920-1970. The results show that the cost-optimal retrofit package includes the installation of a condensing gas boiler, a water-cooled chiller and a full-roof photovoltaic system. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 113(<- 84): Recent development and application of several high-efficiency surface heat exchangers for energy conversion and utilization In the present study, recent research on three kinds of surface heat exchangers, i.e., shell-and-tube heat exchangers with helical baffles, air-cooled heat exchangers used in large air-cooled systems, and primary surface heat exchangers, is reviewed. They are used in energy conversion and utilization for liquid-to-liquid, gas-to-gas and liquid-to-gas heat exchange, respectively. It can be concluded that helical-baffled shell-and-tube heat exchangers (STHXs) should be used to replace the conventional segmental-baffled STHXs in industry, although a great deal of research work remains to be done, especially on the novel combined helical baffles. The primary surface gas-to-gas heat exchangers are developing towards more complex 3D CC primary surfaces, such as the double-wave CC primary surface, the offset-bubble primary surface and the 3D anti-phase secondary corrugation. More attention should be paid to the overall performance of air-cooled heat exchangers in air cooling systems and to the multi-objective optimization of air-cooled heat exchangers, considering the heat transfer, pumping power, space usage and other economic factors. (C) 2014 Elsevier Ltd. All rights reserved. 2014
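Entry 111 above ranks heat transfer fluids with fuzzy TOPSIS. The crisp TOPSIS core, distance to the ideal and anti-ideal alternatives, fits in a dozen lines; the criteria, weights, and fluid values below are invented for illustration, and the fuzzy variant replaces the crisp entries with fuzzy numbers:

#+begin_src python
import numpy as np

def topsis(X, weights, benefit):
    """Closeness coefficient of each alternative (row of X) to the ideal."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion
    V = R * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti,  axis=1)
    return d_neg / (d_pos + d_neg)             # in [0, 1]; higher is better

# toy HTF comparison: rows = candidate fluids,
# columns = (cost, freezing point, heat capacity) -- illustrative numbers only
X = np.array([[30.0, 220.0, 1.5],
              [80.0,  12.0, 2.3],
              [60.0,   5.0, 1.9]])
w = np.array([0.4, 0.3, 0.3])
benefit = np.array([False, False, True])   # cost, freezing point: lower is better
print(topsis(X, w, benefit).round(3))
#+end_src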
* 114(<-432): Multi-criteria Axiomatic Design Approach to Evaluate Sites for Grid-connected Photovoltaic Power Plants: A Case Study in Turkey Global warming and climate change are the most serious problems for developing countries as well as the rest of the world. Therefore, the usage of renewable energy sources is gaining importance for sustainable energy development and environmental protection. Turkey is one of the developing countries whose demand for electricity is sharply increasing. In order to meet this demand by means of renewable sources, solar power is a suitable source due to Turkey's high solar energy potential. In this article, the multi-criteria axiomatic design (AD) approach is proposed for the evaluation of sites for a grid-connected photovoltaic power plant (GCPP) in Turkey. For this aim, four evaluation criteria, which have great influence on the determination of a GCPP site, are taken into account. 2010 * 115(<- 45): Cyclone optimization by COMPLEX method and CFD simulation The most important performance parameters in cyclone design are the pressure drop and the collection efficiency. In general, the best designs provide relatively high efficiency with a low pressure drop. In this study, the multiobjective optimization of cyclones operating with a low particle loading (15 g/m(3)) and small particle diameters (5 mu m to 15 mu m) is performed using the COMPLEX algorithm, a constrained derivative-free optimization method. The objective function is formulated to maximize the collection efficiency under a maximum pressure drop restriction. All objective function evaluations are carried out by CFD simulation with the code CYCLO-EE5, based on the Eulerian multi-fluid concept. An optimized cyclone design is obtained by applying the proposed methodology in a feasible time (15 days of computational effort). Also, in comparison with the Stairmand and Lapple cyclones, the collection efficiency was 3.5% and 9.2% higher and the pressure drop was 63% and 11.4% lower, respectively. The increase in collection efficiency with a lower pressure drop was due to the displacement of the tangential velocity peak toward the wall and an increase in the tangential velocity near the wall. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 116(<- 59): Design of a novel gas cyclone vortex finder using the adjoint method Gas cyclones have many industrial applications in solid-gas separation. The vortex finder is an essential part of a gas cyclone, whose shape and diameter highly affect the cyclone performance. Many optimization studies have been conducted to optimize the cylindrical vortex finder diameter. This study introduces a new vortex finder shape optimized for minimum pressure drop using the discrete adjoint method. The new optimum cyclone will save 66% of the driving power needed by the Stairmand cyclone. To efficiently perform the grid independence study for the new cyclone, a new framework using the adjoint solver and the grid convergence index is proposed and tested. The proposed framework relies on local mesh adaptation instead of the global mesh refinement approach. A comparison of numerical simulations of the new cyclone and the Stairmand cyclone confirms the superior performance of the new vortex finder shape with respect to the pressure drop and the cut-off diameter. The results of this study open a new era in gas cyclone geometry optimization through the use of the adjoint method instead of the traditional surrogate-based optimization technique.
Moreover, the computational cost of grid independence studies will be reduced via the application of adjoint methods. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 117(<- 73): A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we provide advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmony search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offer a survey on the applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms. 2015 * 118(<-144): Multiobjective Optimization Design of Heating System in Electric Heating Rapid Thermal Cycling Mold for Yielding High Gloss Parts 2014 * 119(<-212): Investigation of PWR core optimization using harmony search algorithms This work addresses applications of the classical harmony search (HS), the improved harmony search (IHS) and the harmony search with differential-mutation-based pitch adjustment (HSDM) to PWR core reloading pattern optimization problems. A proper loading pattern of fuel assemblies (FAs) depends on both neutronic and thermal-hydraulic aspects; obtaining an optimal arrangement of FAs in a core to meet particular objective functions is a complex problem. In this paper, in the first step, HS, IHS and HSDM are implemented and compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, to evaluate the proposed techniques in PWR cores, maximization of the multiplication factor k(eff), reduction of the power peaking factor (PPF) as much as possible, and power density flattening are chosen as the neutronic objective functions for two PWR test cases, although other variables can be taken into account. In the third step, obtaining the maximum core average critical heat flux (CHF) along with no void generation throughout the cores are two thermal-hydraulic objective functions which are added to the desired neutronic objective functions. For the neutronic and thermal-hydraulic computations, the PARCS (Purdue Advanced Reactor Core Simulator) and COBRA-EN codes are used, respectively. Coupling the harmony search with the PARCS and COBRA-EN codes, we developed a core reloading pattern optimization code. The results, convergence rate and reliability of the techniques are quite promising and show that the harmony algorithms perform very well. Furthermore, it is found that harmony search has potential for other optimization applications in other nuclear engineering fields. (C) 2013 Elsevier Ltd. All rights reserved. 2013
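Entry 119's harmony search loop is easy to sketch: each new candidate mixes values drawn from the harmony memory (with occasional pitch adjustment) and random re-initialization, replacing the worst stored harmony when it improves on it. The multimodal toy objective below is a stand-in for the neutronic and thermal-hydraulic figures of merit evaluated by PARCS/COBRA-EN:

#+begin_src python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Toy multimodal landscape (Rastrigin-like), minimized."""
    return float(np.sum(x ** 2) - 10.0 * np.sum(np.cos(2 * np.pi * x)))

dim, hms = 5, 20                     # dimensions, harmony memory size
hmcr, par, bw = 0.9, 0.3, 0.05       # memory rate, pitch rate, bandwidth
lo, hi = -5.12, 5.12
memory = rng.uniform(lo, hi, (hms, dim))
cost = np.array([objective(h) for h in memory])
for it in range(5000):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:                  # memory consideration
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < par:               # pitch adjustment
                new[d] += rng.uniform(-bw, bw)
        else:                                    # random re-initialization
            new[d] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    c = objective(new)
    worst = np.argmax(cost)
    if c < cost[worst]:                          # replace the worst harmony
        memory[worst], cost[worst] = new, c
print("best cost found:", round(cost.min(), 4))
#+end_src

IHS varies par and bw over the iterations, and HSDM replaces the fixed-bandwidth pitch adjustment with a differential-mutation step between stored harmonies.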
* 120(<-297): Multi-objective parametric optimization of powder mixed electro-discharge machining using response surface methodology and non-dominated sorting genetic algorithm Powder-mixed electro-discharge machining (EDM) is widely used in the modern metal working industry for producing complex cavities in dies and moulds which are otherwise difficult to create by the conventional machining route. It has been experimentally demonstrated that the presence of suspended particles in the dielectric fluid significantly increases the surface finish and machining efficiency of the EDM process. The concentration of powder (silicon) in the dielectric fluid, the pulse-on time, the duty cycle, and the peak current are taken as independent variables on which the machining performance is analysed in terms of material removal rate (MRR) and surface roughness (SR). Experiments have been conducted on an EZNC fuzzy logic die-sinking EDM machine manufactured by Electronica Machine Tools Ltd., India. A copper electrode having a diameter of 25 mm is used to cut EN 31 steel for one hour in each trial. Response surface methodology (RSM) is adopted to study the effect of the independent variables on the responses and to develop predictive models. It is desired to obtain the optimal parameter setting that decreases surface roughness while increasing the material removal rate. Since the responses are conflicting in nature, it is difficult to obtain a single combination of cutting parameters satisfying both objectives in any one solution. Therefore, it is essential to explore the optimization landscape to generate the set of dominant solutions. The non-dominated sorting genetic algorithm (NSGA) has been adopted to optimize the responses such that a set of mutually non-dominated solutions is found over a wide range of machining parameters. 2012 * 121(<-312): Selection of EDM Process Parameters Using Biogeography-Based Optimization Algorithm Amongst the nontraditional machining processes, electric discharge machining (EDM) is considered to be one of the most important processes for machining intricate and complex shapes in various electrically conductive materials, including high-strength, temperature-resistant (HSTR) alloys, especially in the aeronautical and automotive industries. For achieving the best performance of the EDM process, it is imperative to carry out parametric design, which involves characterization of multiple process responses, such as material removal rate, tool wear rate, surface finish and surface integrity, heat affected zone, etc., with respect to different machining parameters, like peak current, pulse-on time, duty factor, gap voltage, and dielectric flushing pressure, followed by parametric optimization of the process. This article focuses on the application of the biogeography-based optimization (BBO) algorithm for single and multiobjective optimization of the responses of two EDM processes. The optimization performance of the BBO algorithm is compared with that of other population-based algorithms, e.g., the genetic algorithm (GA), ant colony optimization (ACO), and the artificial bee colony (ABC) algorithm. It is observed that the BBO algorithm performs better than the others with respect to the optimal process response values. 2012 * 122(<-591): Optimization of a reverse osmosis system using genetic algorithm Reverse osmosis (RO) has found extensive application in industry as a highly efficient separation process.
In most cases, it is required to select the optimum set of operating variables such that the performance of the system is maximized. In this work, an attempt has been made to optimize the performance of an RO system with a cellulose acetate membrane to separate a NaCl-water system using a Genetic Algorithm (GA). GAs are faster and more efficient than conventional gradient-based optimization techniques. The optimization problem was to maximize the observed rejection of the solute by varying the feed flowrate and the overall permeate flux across the membrane for a constant feed concentration. To model the system, a well-established transport model for RO systems, the Spiegler-Kedem model, was used. It was found that the GA converged rapidly to the optimal solution set at the 8th generation. The effect of varying GA parameters like the size of the population, crossover probability, and mutation probability on the result was also studied; varying these computational parameters significantly affected the results. 2006 * 123(<-530): Multi-objective optimization in combinatorial chemistry applied to the selective catalytic reduction of NO with C3H6 A high-throughput approach, aided by multi-objective design of experiments based on a genetic algorithm, was used to optimize the combinations and concentrations of a noble-metal-free solid catalyst system active in the selective catalytic reduction of NO with C3H6. The optimization framework is based on PISA [S. Bleuler, M. Laumanns, L. Thiele, E. Zitzler, Proc. of EMO'03 (2003) 494], and two state-of-the-art evolutionary multi-objective algorithms-SPEA2 [E. Zitzler, M. Laumanns, L. Thiele, in: K.C. Giannakoglou, et al. (Eds.), Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems (EUROGEN 2001), International Center for Numerical Methods in Engineering (CIMNE), 2002, p. 95] and IBEA [E. Zitzler, S. Kunzli, Conference on Parallel Problem Solving from Nature (PPSN VIII), 2004, p. 832]-were used for optimization. Constraints were satisfied by using so-called "repair algorithms." The results show that evolutionary algorithms are valuable tools for the screening and optimization of huge search spaces and can be easily adapted to direct the search towards multiple objectives. The best noble-metal-free catalysts found by this method are combinations of Cu, Ni, and Al. Other catalysts active at low temperature include Co and Fe. (C) 2007 Elsevier Inc. All rights reserved. 2007 * 124(<-554): Implementation of the multi-channel monolith reactor in an optimisation procedure for heterogeneous oxidation catalysts based on genetic algorithms A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled out of 49 different metals and deposited on an Al2O3 support at up to nine loading levels. As an efficient tool for high-throughput screening, perfectly matched to the requirements of heterogeneous gas-phase catalysis-especially for applications technically run in honeycomb structures-the multi-channel monolith reactor is implemented to evaluate the catalyst performances.
Out of a multi-component feed-gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pretreatment, a primary screening can be conducted, promising to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals. 2007 * 125(<-205): Development of an effective data-driven model for hourly typhoon rainfall forecasting In this paper, we propose a new typhoon rainfall forecasting model to improve hourly typhoon rainfall forecasting. The proposed model integrates a multi-objective genetic algorithm with support vector machines. In addition to the rainfall data, the meteorological parameters are also considered. For each forecast lead time, the proposed model can determine the optimal combination of input variables including rainfall and meteorological parameters. For 1- to 6-h ahead forecasts, an application to high- and low-altitude meteorological stations has shown that the proposed model yields the best performance as compared to other models. It is found that meteorological parameters are useful. However, the use of the optimal combination of input variables determined by the proposed model yields more accurate forecasts than the use of all input variables. The proposed model can significantly improve hourly typhoon rainfall forecasting, especially for long lead times. (C) 2013 Elsevier B.V. All rights reserved. 2013 * 126(<-293): Predicting torsional strength of RC beams by using Evolutionary Polynomial Regression A new view of the analytical formulation of torsional ultimate strength for reinforced concrete (RC) beams from experimental data is explored by using a new hybrid regression method termed Evolutionary Polynomial Regression (EPR). In the case of torsion in RC elements, the poor assumptions in physical models often result in poor agreement with experimental results. Nonetheless, existing models have simple and compact mathematical expressions since they are used by practitioners as building-code provisions. EPR combines the best features of conventional numerical regression techniques with the effectiveness of genetic programming for constructing symbolic expressions of regression models. The EPR modeling paradigm makes it possible to identify existing patterns in recorded data in terms of compact mathematical expressions, according to the available physical knowledge on the phenomenon (if any). The procedure output is represented by different formulae to predict the torsional strength of RC beams. The multi-objective search paradigm used by EPR allows a set of formulae to be developed whose mathematical expressions differ in complexity and, as a result, in agreement with experimental data. The efficiency of such an approach is tested using experimental data of 64 rectangular RC beams reported in the technical literature. The input parameters affecting the torsional strength were selected as cross-sectional area of beams, cross-sectional area of one leg of a closed stirrup, spacing of stirrups, area of longitudinal reinforcement, yield strength of stirrup and longitudinal reinforcement, and concrete compressive strength. These results are finally compared with previous studies and existing building codes for a complete comparison considering formulation complexity and experimental data fitting.
(C) 2011 Elsevier Ltd. All rights reserved. 2012 * 127(<-451): Combining support vector regression and cellular genetic algorithm for multi-objective optimization of coal-fired utility boilers Support vector regression (SVR) was employed to establish mathematical models for the NOx emissions and carbon burnout of a 300 MW coal-fired utility boiler. Combined with the SVR models, the cellular genetic algorithm for multi-objective optimization (MOCell) was used for multi-objective optimization of the boiler combustion. Meanwhile, the comparison between MOCell and the improved non-dominated sorting genetic algorithm (NSGA-II) shows that MOCell has superior performance to NSGA-II regarding the problem. Field experiments were carried out to verify the accuracy of the results obtained by MOCell; the results were in good agreement with the measurement data. The proposed approach provides an effective tool for multi-objective optimization of coal combustion performance, whose feasibility and validity are experimentally validated. A single optimization run required less than 4 s on a PC, which makes the approach suitable for online application. (c) 2009 Elsevier Ltd. All rights reserved. 2009 * 128(<- 35): Optimization of gear blank preforms based on a new R-GPLVM model utilizing GA-ELM The determination of the key dimensions of gear blank preforms with complicated geometries is a highly nonlinear optimization task. To determine critical design dimensions, we propose a novel and efficient dimensionality reduction (DR) model that adapts Gaussian process regression (GPR) to construct a topological constraint between the design latent variables (LVs) and the regression space. This procedure is termed the regression-constrained Gaussian process latent variables model (R-GPLVM), which overcomes GPLVM's drawback of ignoring the regression constraints. To determine the appropriate sub-manifolds of the high-dimensional sample space, we combine the maximum a posteriori method with the scaled conjugate gradient (SCG) algorithm. This procedure can estimate the coordinates of preform samples in the space of LVs. Numerical experiments reveal that the R-GPLVM outperforms the pure GPR in various dimensional spaces, when the proper hyper-parameters and kernel functions are solved for. Results using an extreme learning machine (ELM) achieve better prediction precision than the back-propagation (BP) method, when the dimensions are reduced to seven and a Gaussian kernel function is adopted. After the seven key variables are screened out, the ELM model is constructed with realistic inputs and achieves improved prediction accuracy. However, since the validity of ELM predictions can be problematic, a genetic algorithm (GA) is exploited to optimize the connection parameters between each network layer to improve the reliability and generalization. In terms of prediction accuracy for testing datasets, GA performs better than the differential evolution (DE) approach, which motivates the choice to use the genetic algorithm-extreme learning machine (GA-ELM). Moreover, GA-ELM is employed to measure the aforementioned DR using engineering criteria. In the end, to obtain the optimal geometry, a parallel selection method of multi-objective optimization is proposed to obtain the Pareto-optimal solution, while the maximum finisher forming force (MFFF) and the maximum finisher die stress (MFDS) are both minimized.
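Returning briefly to entry 127, the surrogate idea is simple: fit an SVR model on a handful of operating points, then let the optimiser query the cheap model instead of the real plant. The following minimal sketch uses scikit-learn's SVR on synthetic placeholder data; none of the numbers come from the cited boiler study.

#+BEGIN_SRC python
# Minimal sketch of SVR as a surrogate model (cf. entry 127).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 3))          # e.g. normalised operating settings
y = X[:, 0]**2 + 0.5*X[:, 1] - X[:, 2] + rng.normal(0, 0.01, 50)  # stand-in response

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
candidate = np.array([[0.2, 0.7, 0.4]])
print(model.predict(candidate))                  # cheap surrogate evaluation
#+END_SRC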
Comparative analysis with other numerical models including finite element model (FEM) simulation is conducted using the GA-optimized preform. Results show that the values of MFFF and MFDS predicted by GA-ELM and R-GPLVM agree well with the experimental results, which validates the feasibility of our proposed methods. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 129(<-389): Optimization of Viscoelastic Systems Combining Robust Condensation and Metamodeling The effective design of viscoelastic dampers as applied to real-world complex engineering structures can be conveniently carried out by using modern multiobjective numerical optimization techniques. The large number of cost-function evaluations, normally combined with the typically high dimension of finite element models of industrial structures, makes multiobjective optimization very costly, sometimes unfeasible. These difficulties motivate the study reported in this paper, in which a strategy is proposed that consists in using evolutionary algorithms specially adapted to multiobjective optimization of viscoelastic systems, combined with robust condensation and metamodeling. After the discussion of various theoretical aspects, a numerical application is presented to illustrate the use and demonstrate the effectiveness of the methodology proposed for the optimal design of viscoelastic constrained layers. 2010 * 130(<-430): Response surface methodology using Gaussian processes: Towards optimizing the trans-stilbene epoxidation over Co2+-NaX catalysts Response surface methodology (RSM) relies on the design of experiments and empirical modelling techniques to find the optimum of a process when the underlying fundamental mechanism of the process is largely unknown. This paper proposes an iterative RSM framework, where Gaussian process (GP) regression models are applied for the approximation of the response surface. GP regression is flexible and capable of modelling complex functions, as opposed to the restrictive form of the polynomial models that are used in traditional RSM. As a result, GP models generally attain high accuracy in approximating the response surface, and thus provide a good chance of identifying the optimum. In addition, GP is capable of providing both the prediction mean and variance, the latter being a measure of the modelling uncertainty. Therefore, this uncertainty can be accounted for within the optimization problem, and thus the process optimal conditions are robust against the modelling uncertainty. The developed method is successfully applied to the optimization of trans-stilbene conversion in the epoxidation of trans-stilbene over cobalt ion-exchanged faujasite zeolites (Co2+-NaX) catalysts using molecular oxygen. (C) 2009 Elsevier B.V. All rights reserved. 2010 * 131(<-606): A hybrid numerical approach for multi-responses optimization of process parameters and catalyst compositions in CO2OCM process over CaO-MnO/CeO2 catalyst A new hybrid numerical approach, using the Weighted Sum of Squared Objective Functions (WSSOF) algorithm, was developed for multi-response optimization of carbon dioxide oxidative coupling of methane (CO2 OCM). The optimization aimed to obtain optimal process parameters and catalyst compositions with high catalytic performances. The hybrid numerical approach combined single-response modeling and optimization using Response Surface Methodology (RSM) with the WSSOF technique for multi-response optimization.
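Picking up entry 130's point about GP-based response surfaces: the GP returns both a predictive mean and a standard deviation, so candidate conditions can be ranked with the modelling uncertainty accounted for. A minimal sketch with scikit-learn follows; the one-dimensional response and all values are synthetic placeholders, not the epoxidation data.

#+BEGIN_SRC python
# Minimal sketch of GP regression as an uncertainty-aware response
# surface (cf. entry 130).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])      # design points
y = np.sin(3 * X).ravel()                              # stand-in response

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)
grid = np.linspace(0, 1, 5).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
# Conservative, uncertainty-penalised choice (lower confidence bound):
print(grid[np.argmax(mean - 1.96 * std)])
#+END_SRC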
The hybrid algorithm resulted in Pareto-optimal solutions, and an additional criterion was proposed over these solutions to obtain a final unique optimal solution. The simultaneous maximum responses of C2 selectivity and yield were obtained at the corresponding optimal independent variables. The results of the multi-response optimization could be used to help recommend suitable operating conditions and catalyst compositions for the CO2 OCM process. (c) 2004 Elsevier B.V. All rights reserved. 2005 * 132(<- 6): CF-Rank: Learning to rank by classifier fusion on click-through data Ranking, as a key functionality of Web search engines, is a user-centric process. However, click-through data, which are the source of users' implicit feedback, are not included in almost any of the datasets published for the task of ranking. This limitation is also observable in the majority of benchmark datasets prepared for learning to rank, which is a new and promising trend in the information retrieval literature. In this paper, inspired by the click-through data concept, the notion of click-through features is introduced. Click-through features could be derived from the given primitive dataset even in the absence of click-through data in the utilized benchmark dataset. These features are categorized into three different categories and relate to users' queries, search results, or users' clicks. With the use of click-through features, in this research, a novel learning to rank algorithm is proposed. By taking into account informativeness measures such as MAP, NDCG, InformationGain and OneR, at its first step, the proposed algorithm generates a classifier for each category of click-through features. Thereafter, these classifiers are fused together by using exponential ordered weighted averaging operators. Experimental results obtained from extensive investigations on the WCL2R and LETOR4.0 benchmark datasets demonstrate that the proposed method can substantially outperform well-known ranking methods in the presence of explicit click-through data based on MAP and NDCG criteria. Specifically, such an improvement is more noticeable at the top of ranked lists, which usually attract users' attention more than other parts of these lists. This improvement on the WCL2R dataset is about 20.25% for P@1 and 5.68% for P@3 in comparison with SVMRank, which is a well-known learning to rank algorithm. CF-Rank can also obtain performance comparable to or higher than baseline methods even in the absence of explicit click-through data in the utilized primitive datasets. In this regard, the proposed method on the LETOR4.0 dataset has achieved an improvement of about 2.7% on the MAP measure compared to the AdaRank-NDCG algorithm. (C) 2015 Elsevier Ltd. All rights reserved. 2015 * 133(<-135): Nonadditive similarity-based single-layer perceptron for multi-criteria collaborative filtering The main aim of the popular collaborative filtering approaches for recommender systems is to recommend items that users with similar preferences have liked in the past. Although single-criterion recommender systems have been successfully used in several applications, multi-criteria rating systems that allow users to specify ratings for various content attributes for individual items are gaining in importance.
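Stepping back to the weighted-sum scalarisation underlying WSSOF-style multi-response optimisation (entry 131): sweeping the weight between two objectives traces out candidate Pareto points. A minimal sketch follows; both objective functions are synthetic placeholders, not the CO2 OCM response models.

#+BEGIN_SRC python
# Minimal sketch of weighted-sum scalarisation (cf. entry 131).
import numpy as np

def f1(x):  # e.g. negated conversion, to be minimised (stand-in)
    return (x - 0.2)**2

def f2(x):  # e.g. negated selectivity, to be minimised (stand-in)
    return (x - 0.8)**2

xs = np.linspace(0, 1, 1001)
for w in (0.1, 0.3, 0.5, 0.7, 0.9):
    scalar = w * f1(xs) + (1 - w) * f2(xs)      # weighted sum of objectives
    x_star = xs[np.argmin(scalar)]
    print(f"w={w:.1f}  x*={x_star:.2f}  f1={f1(x_star):.3f}  f2={f2(x_star):.3f}")
#+END_SRC

Each weight yields one compromise solution; collecting them approximates the Pareto front, which is why a final selection criterion (as entry 131 proposes) is still needed.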
To measure the overall similarity between any two users for multi-criteria collaborative filtering, the indifference relation in outranking relation theory, which can justify discrimination between any two patterns, is suitable for multi-criteria decision making (MCDM). However, nonadditive indifference indices that address interactions among criteria should be taken into account. This paper proposes a novel similarity-based perceptron using nonadditive indifference indices to estimate an overall rating that a user would give to a specific item. The applicability of the proposed model to recommendation of initiators on a group-buying website was examined. Experimental results demonstrate that the proposed model performs well in terms of generalization ability compared to other multi-criteria collaborative filtering approaches. (C) 2013 Elsevier B.V. All rights reserved. 2014 * 134(<-180): Interest in intermediate soft-classified maps in land change model validation: suitability versus transition potential This study compares two types of intermediate soft-classified maps. The first type uses land use/cover suitability maps based on a multi-criteria evaluation (MCE). The second type focuses on the transition potential between land use/cover categories based on a multi-layer perceptron (MLP). The concepts and methodological approaches are illustrated in a comparable manner using a Corine data set from the Murcia region (2300 km2, Spain) in combination with maps of drivers that were created with two stochastic, discretely operating, commonly used tools (MCE in CA_MARKOV and MLP in Land Change Modeler). The importance of the different approaches and techniques for the obtained results is illustrated by comparing the specific characteristics of both approaches by validating the suitability versus transition potential maps to each other using a Spearman correlation matrix and, between the Corine maps, using classical ROC (receiver operating characteristic) statistics. Then, we propose a new use of ROC statistics to compare these intermediate soft-classified maps with their respective hard-classified maps of the models for each category. The validation of these results can be beneficial in choosing a suitable model and provide a better understanding of the implications of the different modeling steps and the advantages and limitations of the modeling tools. 2013 * 135(<-616): Multiobjective generation dispatch through a neuro-fuzzy technique The multiobjective generation dispatch in electric power systems treats economy and emission impact as competing objectives, which requires some form of conflict resolution to arrive at a solution. This paper presents an integrated approach combining a fuzzy coordination method and a radial basis function ANN along with a heuristic rule-based search algorithm to solve the multiobjective generation dispatch problem. The algorithm developed is simple to use and can effectively obtain the well-coordinated optimal solution while allowing more flexibility in operation. Adaptability of the performance indices composed of fuel cost and emission level is measured by the membership functions. Combining the adaptability indices, a fuzzy decision-making (FDM) function is obtained and the two-objective optimization is then solved by maximizing the FDM function. Then, a radial basis function ANN is developed to reach a preliminary schedule.
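Before entry 135's abstract continues, a minimal sketch of the max-min fuzzy decision making it describes: fuel cost and emission each get a membership value in [0, 1], the FDM value of a schedule is the minimum of the two, and the schedule with the largest FDM value wins. The membership shapes and candidate schedules below are invented for illustration.

#+BEGIN_SRC python
# Minimal sketch of max-min fuzzy decision making (cf. entry 135).
import numpy as np

def membership(value, best, worst):
    """Linear satisfaction: 1 at `best`, 0 at `worst` (invented shape)."""
    return np.clip((worst - value) / (worst - best), 0.0, 1.0)

# Hypothetical schedules: (fuel cost, emission level)
schedules = {"A": (1020.0, 260.0), "B": (980.0, 310.0), "C": (1100.0, 220.0)}
fdm = {
    name: min(membership(cost, 950.0, 1150.0), membership(emis, 200.0, 350.0))
    for name, (cost, emis) in schedules.items()
}
print(max(fdm, key=fdm.get), fdm)   # schedule "A" balances both objectives
#+END_SRC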
Since some practical constraints may be violated in the preliminary schedule, a heuristic rule-based search algorithm is developed to reach a feasible best-compromise generation schedule that satisfies all practical constraints. The proposed neuro-fuzzy technique has been applied to IEEE-14-bus and 30-bus test systems and the results are presented to illustrate the performance and applicability of the technique. 2004 * 136(<-183): Soft computing techniques in advancement of structural metals Current trends in the progress of technology demand availability of materials resources ahead of the advancing fronts of the application areas. During the last couple of decades, significant progress has been made in computational and experimental design of materials. Among the potential computational techniques, soft computing stands in distinction due to its inherent flexibility in capturing the complexity of the problem at a global scale. Since the 1990s remarkable success has been achieved in soft computing activities in different facets of materials science and engineering. Extensive efforts have been devoted to the design of metals and alloys based on composition-process-microstructure-property correlation. The present review aims to address the contribution of soft computing in the field of structural metals and alloys including processing and joining. The critical issues concerning the applicability of particular techniques to specific materials problems have been particularly emphasised, encompassing the scope of integrating the gradual progress in different techniques in hybrid and tandem frameworks to address greater complexities at larger length and time scales. An attempt has also been made to emphasise the evolution of newer knowledge and materials through soft computing activities. Finally, the potential of soft computing techniques in futuristic design approaches has been critically enumerated. 2013 * 137(<-275): A two-stage evolutionary algorithm based on sensitivity and accuracy for multi-class problems The machine learning community has traditionally used correct classification rates or accuracy (C) values to measure classifier performance and has generally avoided presenting classification levels of each class in the results, especially for problems with more than two classes. C values alone are insufficient because they cannot capture the myriad of contributing factors that differentiate the performance of two different classifiers. Receiver Operating Characteristic (ROC) analysis is an alternative to solve these difficulties, but it can only be used for two-class problems. For this reason, this paper proposes a new approach for analysing classifiers based on two measures: C and sensitivity (S) (i.e., the minimum of the accuracies obtained for each class). These measures are optimised through a two-stage evolutionary process. This was done by applying two sequential fitness functions in the evolutionary process, including entropy (E) for the first stage and a new fitness function, area (A), for the second stage. By using these fitness functions, the C level was optimised in the first stage, and the S value of the classifier was generally improved without significantly reducing C in the second stage. This two-stage approach improved S values in the generalisation set (whereas an evolutionary algorithm (EA) based only on the S measure obtains worse S levels) and obtained both high C values and good classification levels for each class.
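The two measures entry 137 optimises are easy to state concretely: both are read off a multi-class confusion matrix. A minimal sketch with an invented 3-class matrix follows.

#+BEGIN_SRC python
# Minimal sketch of accuracy C and sensitivity S (worst per-class accuracy),
# as defined in entry 137. Rows = true class, columns = predicted class.
import numpy as np

confusion = np.array([[48,  2,  0],
                      [ 5, 40,  5],
                      [ 0, 10, 40]])    # invented example counts

C = np.trace(confusion) / confusion.sum()                # overall correct rate
per_class = np.diag(confusion) / confusion.sum(axis=1)   # recall of each class
S = per_class.min()                                      # weakest class

print(f"C = {C:.3f}, S = {S:.3f}")   # C = 0.853, S = 0.800
#+END_SRC

A classifier can score a high C while S stays low, which is exactly the imbalance the two-stage approach targets.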
The methodology was applied to solve 16 benchmark classification problems and two complex real-world problems in analytical chemistry and predictive microbiology. It obtained promising results when compared to other competitive multiclass classification algorithms and a multi-objective alternative based on E and S. (C) 2012 Elsevier Inc. All rights reserved. 2012 * 138(<-320): A Multi-Objective Approach to Force Field Optimization: Structures and Spin State Energetics of d(6) Fe(II) Complexes The next generation of force fields (FFs), regardless of the accuracy of the potential energy representation, will always have parameters that must be fitted in order to reproduce experimental and/or ab initio data accurately. Single-objective methods have been used for many years to automate the fitting of parameters, but this leads to ambiguity. The solution depends on the chosen weights and is therefore not unique. There have been few advances in solving this problem, which thus remains a major hurdle for the development of empirical FF methods. We propose a solution based on multi-objective evolutionary algorithms (MOEAs). MOEAs allow the FF to be tuned against the desired objectives and offer a powerful, efficient, and automated means to reparameterize FFs, or even discover the parameters for a new potential. Here, we illustrate the application of MOEAs by reparameterizing the ligand field molecular mechanics (LFMM) FF recently reported for modeling spin crossover in iron(II)-amine complexes (Deeth et al. J. Am. Chem. Soc. 2010, 132, 6876). We quickly recover the performance of the original parameter set and then significantly improve it to reproduce the geometries and spin state energy differences of an extended series of complexes with RMSD errors in Fe-N and N-N distances reduced from 0.06 angstrom to 0.03 angstrom and spin state energy difference RMSDs reduced from 1.5 kcal mol(-1) to 0.2 kcal mol(-1). The new parameter sets highlight, and help resolve, shortcomings both in the non-LFMM FF parameters and in the interpretation of experimental data for several other Fe(II)N6 amine complexes not used in the FF optimization. 2012 * 139(<-322): Dynamic multi-criteria evaluation of co-evolution strategies for solving stock trading problems Risk and return are interdependent in a stock portfolio. To achieve the anticipated return, comparative risk should be considered simultaneously. However, complex investment environments and dynamic changes in decision-making criteria complicate forecasts of risk and return for various investment objects. Additionally, investors often fail to maximize their profits because of improper capital allocation. Although stock investment involves multi-criteria decision making (MCDM), traditional MCDM theory has two shortfalls: first, it is inappropriate for decisions that evolve with a changing environment; second, weight assignments for various criteria are often oversimplified and inconsistent with actual human thinking processes. In 1965, Rechenberg proposed evolution strategies for solving optimization problems involving real number parameters and addressed several flaws in traditional algorithms, such as their use of point search only and their high probability of falling into a local optimal solution area. In 1992, Hillis introduced the co-evolutionary concept that the evolution of living creatures is interactive with their environments (multi-criteria) and constantly improves the survivability of their genes, which then expedites evolutionary computation.
Therefore, this research aimed to solve multi-criteria decision-making problems in stock trading investment by integrating evolutionary strategies into the co-evolutionary criteria evaluation model. Since co-evolution strategies are self-calibrating, criteria evaluation can be based on changes in time and environment. Such changes not only correspond with human decision making patterns (i.e., evaluation of dynamic changes in criteria), but also address the weaknesses of multi-criteria decision making (i.e., simplified assignment of weights for various criteria). Co-evolutionary evolution strategies can identify the optimal capital portfolio and can help investors maximize their returns by optimizing the preoperational allocation of limited capital. This experimental study compared general evolution strategies with an artificial neural forecast model, and found that co-evolutionary evolution strategies outperform general evolution strategies and substantially outperform artificial neural forecast models. The co-evolutionary criteria evaluation model avoids the problem of oversimplified adaptive functions adopted by general algorithms and the problem of favoring weights but failing to adaptively adjust to environmental change, which is a major limitation of traditional multi-criteria decision making. Doing so allows adaptation of various criteria in response to changes in various capital allocation chromosomes. Capital allocation chromosomes in the proposed model also adapt to various criteria and evolve in ways that resemble thinking patterns. (C) 2011 Elsevier Inc. All rights reserved. 2011 * 140(<-384): Modeling and Optimal Design of Machining-Induced Residual Stresses in Aluminium Alloys Using a Fast Hierarchical Multiobjective Optimization Algorithm The residual stresses induced during shaping and machining play an important role in determining the integrity and durability of metal components. An important issue in producing safety-critical components is to find the machining parameters that create compressive surface stresses or minimize tensile surface stresses. In this article, a systematic data-driven fuzzy modeling methodology is proposed, which allows transparent fuzzy models to be constructed that consider both the accuracy and interpretability attributes of fuzzy systems. The new method employs a hierarchical optimization structure to improve the modeling efficiency, where two learning mechanisms cooperate together: the Nondominated Sorting Genetic Algorithm II (NSGA-II) is used to improve the model's structure, while the gradient descent method is used to optimize the numerical parameters. This hybrid approach is then successfully applied to the problem that concerns the prediction of machining-induced residual stresses in aerospace aluminium alloys. Based on the developed reliable prediction models, NSGA-II is further applied to the multiobjective optimal design of aluminium alloys in a "reverse-engineering" fashion. It is revealed that the optimal machining regimes to minimize the residual stress and the machining cost simultaneously can be successfully located. 2011 * 141(<-398): Brain-Computer Evolutionary Multiobjective Optimization: A Genetic Algorithm Adapting to the Decision Maker The centrality of the decision maker (DM) is widely recognized in the multiple criteria decision-making community. This translates into emphasis on seamless human-computer interaction, and adaptation of the solution technique to the knowledge which is progressively acquired from the DM.
This paper adopts the methodology of reactive search optimization (RSO) for evolutionary interactive multiobjective optimization. RSO follows the paradigm of "learning while optimizing," through the use of online machine learning techniques as an integral part of a self-tuning optimization scheme. User judgments of couples of solutions are used to build robust incremental models of the user utility function, with the objective of reducing the cognitive burden required from the DM to identify a satisficing solution. The technique of support vector ranking is used together with a k-fold cross-validation procedure to select the best kernel for the problem at hand, during the utility function training procedure. Experimental results are presented for a series of benchmark problems. 2010 * 142(<-496): A systematic decision criterion for the elimination of useless overpasses The Seoul Metropolitan Government (SMG) recently considered eliminating some useless overpasses that had once played a significant role in maintaining continuous traffic flow but soon lost their original, positive function and became an environmental burden. SMG is in the process of identifying such types of overpasses out of 22 installations. The aim of this study was to develop a definite criterion that can be used to identify overpasses to be eliminated. All of the 22 existing overpasses were investigated in terms of functionality, structural stability and conflicts with other sustainable policies of SMG. Functionality was interpreted based on traffic efficiency, environmental amenity and traffic safety. The weights of these functionality sub-factors were derived from a pair-wise comparison technique used in analytic hierarchy process (AHP) methodology. The remaining two aspects were not subdivided but evaluated directly. Final judgments were made based on the three aspects with the assistance of well-known classification methodologies such as K-means and a support vector machine (SVM). As a result, five overpasses in Seoul were identified to be eliminable. (C) 2008 Elsevier Ltd. All rights reserved. 2008 * 143(<-506): Combined effect of solvent content, temperature and pH on the chromatographic behaviour of ionisable compounds - II: Benefits of the simultaneous optimisation A previously reported eight-parameter mechanistic model [Part I of this work, J. Chromatogr. A 1163 (2007) 49] was applied to optimise the separation of 11 ionisable compounds (nine diuretics and two beta-blockers), considering solvent content, temperature and pH as experimental factors. The data from 21 experiments, arranged in a central composite design, were used to model the retention. Local models were used to predict efficiency and peak asymmetry. The optimisation strategy, based on the use of peak purity as chromatographic objective function and derived concepts, was able to find the most suitable experimental conditions yielding full resolution in reasonable analysis times. It also allowed a detailed inspection of the separation capability of the studied factors, and of the consequences of the shifts in the protonation constants caused by changes in solvent content and temperature. The size of the resolution structures suggested that the ranked importance of the factors was pH, organic solvent and temperature, giving rise to relatively narrow domains of full resolution. The three factors were found, however, worthwhile in the optimisation of selectivity.
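A quick illustration of the kernel-selection step described in entry 141: score each candidate kernel by k-fold cross-validation and keep the best. In the sketch below, an ordinary SVM classifier on synthetic data stands in for support vector ranking on user judgments; data and labels are invented.

#+BEGIN_SRC python
# Minimal sketch of kernel selection via k-fold cross-validation
# (cf. entry 141). The classification task is a synthetic stand-in.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))
y = (X[:, 0] + X[:, 1]**2 > 0.5).astype(int)   # stand-in preference labels

best = max(
    ("linear", "poly", "rbf"),
    key=lambda k: cross_val_score(SVC(kernel=k), X, y, cv=5).mean(),
)
print("selected kernel:", best)
#+END_SRC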
Predicted optimal conditions corresponding to two different optimal resolution regions were verified experimentally. In spite of the difficulties associated with the use of pH as an optimisation factor, satisfactory agreement was found in both cases. (c) 2008 Elsevier B.V. All rights reserved. 2008 * 144(<- 7): Multiple criteria decision aiding for finance: An updated bibliographic survey Finance is a popular field for applied and methodological research involving multiple criteria decision aiding (MCDA) techniques. In this study we present an up-to-date bibliographic survey of the contributions of MCDA in financial decision making, focusing on the developments during the past decade. The survey covers all main areas of financial modeling as well as the different methodological approaches in MCDA and its connections with other analytical fields. On the basis of the survey results, we discuss the contributions of MCDA in different areas of financial decision making and identify established and emerging research topics, as well as future opportunities and challenges. (C) 2015 Elsevier B.V. and Association of European Operational Research Societies (EURO) within the International Federation of Operational Research Societies (IFORS). All rights reserved. 2015 * 145(<-187): Software Effort Estimation as a Multiobjective Learning Problem Ensembles of learning machines are promising for software effort estimation (SEE), but need to be tailored for this task to have their potential exploited. A key issue when creating ensembles is to produce diverse and accurate base models. Depending on how differently various performance measures behave for SEE, they could be used as a natural way of creating SEE ensembles. We propose to view SEE model creation as a multiobjective learning problem. A multiobjective evolutionary algorithm (MOEA) is used to better understand the tradeoff among different performance measures by creating SEE models through the simultaneous optimisation of these measures. We show that the performance measures behave very differently, presenting sometimes even opposite trends. They are then used as a source of diversity for creating SEE ensembles. A good tradeoff among different measures can be obtained by using an ensemble of MOEA solutions. This ensemble performs similarly to or better than a model that does not consider these measures explicitly. Besides, the MOEA is also flexible, allowing emphasis of a particular measure if desired. In conclusion, MOEAs can be used to better understand the relationship among performance measures and have been shown to be very effective in creating SEE models. 2013 * 146(<-365): Preference disaggregation and statistical learning for multicriteria decision support: A review Disaggregation methods have become popular in multicriteria decision aiding (MCDA) for eliciting preferential information and constructing decision models from decision examples. From a statistical point of view, data mining and machine learning are also involved with similar problems, mainly with regard to identifying patterns and extracting knowledge from data. Recent research has also focused on the introduction of specific domain knowledge in machine learning algorithms. Thus, the connections between disaggregation methods in MCDA and traditional machine learning tools are becoming stronger. In this paper the relationships between the two fields are explored.
The differences and similarities between the two approaches are identified, and a review is given regarding the integration of the two fields. (C) 2010 Elsevier B.V. All rights reserved. 2011 * 147(<-402): A stochastic optimization approach for paper recycling reverse logistics network design under uncertainty One of the most important objectives of a manufacturing firm is the efficient design and operation of its supply chain to maximize profit. Paper is an example of a valuable material that can be recycled and recovered. Uncertainty is one of the characteristics of the real world. The methods that cope with uncertainty help researchers get realistic results. In this study, a two-stage stochastic programming model is proposed to determine a long-term strategy including optimal facility locations and optimal flow amounts for a large-scale reverse supply chain network design problem under uncertainty. This network design problem includes optimal recycling and collection center locations and optimal flow amounts between the nodes in the multi-facility environment. The proposed model is suitable for recycling/manufacturing-type systems in a reverse supply chain. All deterministic and stochastic models are mixed-integer programming models and are solved with the commercial software GAMS 21.6/CPLEX 9.0. 2010 * 148(<-461): Pairs selection and outranking: An application to the S&P 100 index Pairs trading is a popular quantitative speculation strategy. This article proposes a general and flexible framework for pairs selection. The method uses multiple return forecasts based on bivariate information sets and multi-criteria decision techniques. Our approach can be seen as a sort of forecast combination but the output of the method is a ranking. It helps to detect potentially under- and overvalued stocks. A first application with S&P 100 index stocks provides promising results in terms of excess return and directional forecasting. (C) 2008 Elsevier B.V. All rights reserved. 2009 * 149(<-478): A memetic model of evolutionary PSO for computational finance applications Motivated by the compensatory property of EA and PSO, where the latter can enhance solutions generated from the evolutionary operations by exploiting their individual memory and social knowledge of the swarm, this paper examines the implementation of PSO as a local optimizer for fine-tuning in evolutionary search. The proposed approach is evaluated on applications from the field of computational finance, namely portfolio optimization and time series forecasting. Exploiting the structural similarity between these two problems and the non-linear fractional knapsack problem, an instance of the latter is generalized and implemented as the preliminary test platform for the proposed EA-PSO hybrid model. The experimental results demonstrate the positive effects of this memetic synergy and reveal general design guidelines for the implementation of PSO as a local optimizer. Algorithmic performance improvements are similarly evident when extending to the real-world optimization problems under the appropriate integration of PSO with EA. (C) 2008 Elsevier Ltd. All rights reserved. 2009 * 150(<-660): Land-use suitability analysis in the United States: Historical development and promising technological achievements Various methods of spatial analysis are commonly used in land-use plans and site selection studies. A historical overview and discussion of contemporary developments of land-use suitability analysis are presented.
The paper begins with an exploration into the early 20th century with the infancy of documented applications of the technique. The article then travels through the 20th century, documenting significant milestones. Concluding with present explorations of advanced technologies such as neural computing and evolutionary programming, this work is meant to serve as a foundation for literature review and a premise for the exploration of new advancements as we enter into the 21st century. 2001 * 151(<-142): A new multicriteria approach for the analysis of efficiency in the Spanish olive oil sector by modelling decision maker preferences Efficiency in production is often analysed as technical efficiency using the production frontier function. Efficiency scores are usually based on distance computations to the frontier in an m + s-dimensional space, where m inputs produce s outputs. In addition, efficiency improvements consider the total consumption of each input. However, in many cases, the "consumption" of each input can be divided into input-consumption sections (ICSs), and trade-off among the ICSs is possible. This share framework can be used for computing efficiency. This analysis provides information about both the total optimal consumption of each input, as does data envelopment analysis, and the most efficient allocation of the "consumption" among the ICSs. This paper studies technical efficiency using this approach and applies it to the olive oil sector in Andalusia (Spain). A non-parametric methodology is presented, and an input-oriented Multi-Criteria Linear Programming model (MLP) is proposed. The analysis is developed at the global, input and ICS levels, defining the extent of satisfaction achieved at all these levels for each company, in accordance with their own preferences. The companies' preferences are modelled with their utility function and their set of weights. MLP offers more detailed information to assist decision makers than other models previously proposed in the literature. In addition to this application, it is concluded that there is room for improvement in the olive oil sector, particularly in the management of skilled labour. Additionally, the solutions with two opposite scenarios indicate that the model is suitable for the intended decision-making process. (C) 2013 Elsevier B.V. All rights reserved. 2014 * 152(<-155): Multi-Objective Genetic Algorithms and Genetic Programming Models for Minimizing Input Carbon Rates in a Blast Furnace Compared with a Conventional Analytic Approach Data-driven models were constructed for the productivity, CO2 emission, and Si content of an operational blast furnace using evolutionary approaches that involved two recent strategies based upon bi-objective Genetic Programming and neural nets evolving through Genetic Algorithms. The models were utilized to compute the optimum tradeoff between the level of CO2 emission and productivity at different Si levels, using a Predator-Prey Genetic Algorithm, well tested for computing the Pareto-optimality. The results were pitted against some similar calculations performed with commercial software and also compared with the results of thermodynamics-based analytical models. 2014 * 153(<-626): A fuzzy multi-criteria decision approach for software development strategy selection This study proposes a methodology to improve the quality of decision-making in the software development project under uncertain conditions.
To deal with the uncertainty and vagueness from subjective perception and experience of humans in the decision process, a methodology based on extent fuzzy analytic hierarchy process modeling to assess the adequate economic (tangible) and quality (intangible) balance is applied. The two key factors, economics and quality, are evaluated separately by fuzzy approaches and both factors' estimates are combined to obtain the preference degree associated with each software development project strategy alternative for selecting the most appropriate one. Using the proposed approach, the ambiguities involved in the assessment data can be effectively represented and processed to assure more convincing and effective decision-making. Finally, a real case-study is given to demonstrate the potential of the methodology. 2004 * 154(<-673): Multiobjective Linear Programming model on injection oilfield recovery system This paper proposes a Multiobjective Linear Programming (MLP) model for an injection oilfield recovery system. A modified interior-point algorithm for MLP problems has been constructed by using concepts of Karmarkar's interior point algorithm and the Analytic Hierarchy Process (AHP). This algorithm is shown to likely be more efficient than other MLP algorithms in decision-making applications in the petroleum industry through the demonstration of a numerical example. The MLP model's optimal solution allows decision makers to optimally design the development plan of the injection oilfield recovery system. (C) 1998 Elsevier Science Ltd. All rights reserved. 1998 * 155(<-680): PERCEPTRONS PLAY THE REPEATED PRISONERS-DILEMMA We examine the implications of bounded rationality in repeated games by modeling the repeated game strategies as perceptrons (F. Rosenblatt, ''Principles of Neurodynamics,'' Spartan Books, and M. Minsky and S. A. Papert, ''Perceptrons: An Introduction to Computational Geometry,'' MIT Press, Cambridge, MA, 1988). In the prisoner's dilemma game, if the cooperation outcome is Pareto efficient, then we can establish the folk theorem by perceptrons with single associative units (Minsky and Papert), whose computational capability barely exceeds what we would expect from players capable of fictitious plays (e.g., L. Shapley, Some topics in two-person games, Adv. Game Theory 5 (1964), 1-28). (C) 1995 Academic Press, Inc. 1995 * 156(<-206): Genetic Algorithms, a Nature-Inspired Tool: A Survey of Applications in Materials Science and Related Fields: Part II Genetic algorithms (GAs) are a helpful tool for optimization, simulation, modelling, design, and prediction purposes in various domains of science including materials science, medicine, technology, economy, industry, environment protection, etc. Reported uses of GAs have led to the solution of numerous complex computational tasks. In materials science and related fields of science and technology, GAs are routinely used for materials modeling and design and for optimization of material properties; the method is also useful for organizing the material or device production at the industrial scale. Here, the most recent (years 2008-2012) applications of GAs in materials science and in related fields (solid state physics and chemistry, crystallography, production, and engineering) are reviewed. The representative examples selected from recent literature show how broad the usefulness of this computational method is. 2013
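Entries 153 and 154 both lean on AHP pairwise comparisons, so a minimal sketch of the classical weight-derivation step may help: criteria weights are taken from the principal eigenvector of the pairwise-comparison matrix. The 3x3 matrix below is an invented example, not data from either paper.

#+BEGIN_SRC python
# Minimal sketch of AHP criteria weights via the principal eigenvector
# (cf. entries 153-154). The comparison matrix is invented.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # criterion 1 vs criteria 1, 2, 3
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
print(weights)        # roughly [0.65, 0.23, 0.12]
#+END_SRC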
* 157(<-300): Dynamic equivalence by an optimal strategy Due to the curse of dimensionality, dynamic equivalence remains a computational tool that helps to analyze large amounts of power system information. In this paper, a robust dynamic equivalence is proposed to reduce the computational burden and time consumption that transient stability studies of large power systems represent. The technique is based on a multi-objective optimal formulation solved by a genetic algorithm. A simplification of the Mexican interconnected power system is tested. An index is used to assess the proximity between simulations carried out using the full and the reduced model. Likewise, the use of information stemming from power measurement units (PMUs) is assumed, which gives certainty to such information and gives rise to better estimates. (C) 2011 Elsevier B.V. All rights reserved. 2012 * 158(<-394): A hybrid model using supporting vector machine and multi-objective genetic algorithm for processing parameters optimization in micro-EDM In micro-electrical discharge machining (EDM), processing parameters greatly affect processing efficiency and stability. However, the complexity of micro-EDM makes it difficult to determine optimal parameters for good processing performance. The important output objectives are processing time (PT) and electrode wear (EW). Since these parameters influence the output objectives in quite opposite ways, it is not easy to find an optimized combination of these processing parameters that makes both PT and EW minimal. To solve this problem, a support vector machine is adopted to establish a micro-EDM process model based on the orthogonal test. A new multi-objective optimization genetic algorithm (GA) based on the idea of non-dominated sorting is proposed to optimize the processing parameters. Experimental results demonstrate that the proposed multi-objective GA method is precise and effective in obtaining Pareto-optimal solutions of parameter settings. The optimized parameter combinations can greatly reduce PT while making EW relatively small. Therefore, the proposed method is suitable for parameter optimization of micro-EDM and can also enhance the efficiency and stability of the process. 2010 * 159(<-487): A Study on Uncertainty-Complexity Tradeoffs for Dynamic Nonlinear Sensor Compensation In this paper, we focus on the design of reduced-complexity sensor compensation modules based on learning-from-examples techniques. A multiobjective optimization design framework is proposed, where system complexity and compensation uncertainty are considered as two conflicting costs to be jointly minimized. In addition, suitable statistical techniques are applied to cope with the variability in the uncertainty estimation arising from the limited availability of data at design time. Numerical simulations are provided on a set of synthetic models to show the validity of the proposed methodology. 2009 * 160(<-558): Optimization strategies in ion chromatography The ion chromatographer is often concerned with the separation of complex mixtures with a variable behavior of their components, which makes good resolution and reasonable analysis time sometimes extremely difficult. Several optimization strategies have been proposed to solve this problem. The most reliable and least time-consuming strategies apply resolution criteria based on theoretical or empirical retention models to describe the retention of particular components.
This review focuses on optimization strategies in ion chromatography with a detailed description of the ion chromatographic retention model, objective functions, multi-criteria decision making, and peak modeling. 2007 * 161(<-625): Developing sorting models using preference disaggregation analysis: An experimental investigation Within the field of multicriteria decision aid, sorting refers to the assignment of a set of alternatives into predefined homogeneous groups defined in an ordinal way. The real-world applications of this type of problem extend to a wide range of decision-making fields. Preference disaggregation analysis provides the framework for developing sorting models through the analysis of the global judgment of the decision-maker using mathematical programming techniques. However, the automatic elicitation of preferential information through preference disaggregation analysis raises several issues regarding the impact of the parameters involved in the model development process on the performance and the stability of the developed models. The objective of this paper is to shed light on this issue. For this purpose the UTADIS preference disaggregation sorting method (UTilites Additives DIScriminantes) is considered. The conducted analysis is based on an extensive Monte Carlo simulation and useful findings are obtained on the aforementioned issues. (C) 2003 Elsevier B.V. All rights reserved. 2004 * 162(<-646): Decision support to assist environmental sedimentology modelling Attention is drawn to the importance of spatial aspects when adopting a modelling approach to predict the likely character of sediment. This requires an understanding of the processes controlling transport, deposition and remobilization, singly and in combination. The advantages of incorporating expert systems are examined alongside recently developed GIS techniques utilising multiple criteria and fuzzy sets. 2003 * 163(<- 88): Pareto Front Estimation for Decision Making The set of available multi-objective optimisation algorithms continues to grow. This fact can be partially attributed to their widespread use and applicability. However, this increase also suggests several issues remain to be addressed satisfactorily. One such issue is the diversity and the number of solutions available to the decision maker (DM). Even for algorithms very well suited for a particular problem, it is difficult-mainly due to the computational cost-to use a population large enough to ensure the likelihood of obtaining a solution close to the DM's preferences. In this paper we present a novel methodology that produces additional Pareto optimal solutions from a Pareto optimal set obtained at the end of a run of any multi-objective optimisation algorithm for two-objective and three-objective problem instances. 2014 * 164(<-306): Memetic algorithms and memetic computing optimization: A literature review Memetic computing is a subject in computer science which considers complex structures such as the combination of simple agents and memes, whose evolutionary interactions lead to intelligent complexes capable of problem-solving. The founding cornerstone of this subject has been the concept of memetic algorithms, that is, a class of optimization algorithms whose structure is characterized by an evolutionary framework and a list of local search components. This article presents a broad literature review on this subject focused on optimization problems.
Several classes of optimization problems, such as discrete, continuous, constrained, multi-objective and characterized by uncertainties, are addressed by indicating the memetic "recipes" proposed in the literature. In addition, this article focuses on implementation aspects and especially the coordination of memes, which is the most important and characterizing aspect of a memetic structure. Finally, some considerations about future trends in the subject are given. (C) 2011 Elsevier B.V. All rights reserved. 2012 * 165(<-511): Pareto-based multiobjective machine learning: An overview and case studies Machine learning is inherently a multiobjective task. Traditionally, however, either only one of the objectives is adopted as the cost function or multiple objectives are aggregated to a scalar cost function. This can be mainly attributed to the fact that most conventional learning algorithms can only deal with a scalar cost function. Over the last decade, efforts on solving machine learning problems using the Pareto-based multiobjective optimization methodology have gained increasing impetus, particularly due to the great success of multiobjective optimization using evolutionary algorithms and other population-based stochastic search methods. It has been shown that Pareto-based multiobjective learning approaches are more powerful compared to learning algorithms with a scalar cost function in addressing various topics of machine learning, such as clustering, feature selection, improvement of generalization ability, knowledge extraction, and ensemble generation. One common benefit of the different multiobjective learning approaches is that a deeper insight into the learning problem can be gained by analyzing the Pareto front composed of multiple Pareto-optimal solutions. This paper provides an overview of the existing research on multiobjective machine learning, focusing on supervised learning. In addition, a number of case studies are provided to illustrate the major benefits of the Pareto-based approach to machine learning, e.g., how to identify interpretable models and models that can generalize on unseen data from the obtained Pareto-optimal solutions. Three approaches to Pareto-based multiobjective ensemble generation are compared and discussed in detail. Finally, potentially interesting topics in multiobjective machine learning are suggested. 2008 * 166(<-628): Digital filter design using multiple pareto fronts Evolutionary approaches have been used in a large variety of design domains, from aircraft engineering to the design of analog filters. Many of these approaches use measures to improve the variety of solutions in the population. One such measure is clustering. In this paper, clustering and Pareto optimisation are combined into a single evolutionary design algorithm. The population is split into a number of clusters, and parent and offspring selection, as well as fitness calculation, are performed on a per-cluster basis. The objective of this is to prevent the system from converging prematurely to a local minimum and to encourage a number of different designs that fulfil the design criteria. Our approach is demonstrated in the domain of digital filter design. Using a polar-coordinate-based pole-zero representation, two different lowpass filter design problems are explored. The results are compared to designs created by a human expert.
They demonstrate that the evolutionary process is able to create designs that are competitive with those created using a conventional design process by a human expert. They also demonstrate that each evolutionary run can produce a number of different designs with similar fitness values, but very different characteristics. 2004 * 167(<-126): Parameter identification and calibration of the Xin'anjiang model using the surrogate modeling approach Practical experience has demonstrated that single objective functions, no matter how carefully chosen, prove to be inadequate in providing proper measurements for all of the characteristics of the observed data. One strategy to circumvent this problem is to define multiple fitting criteria that measure different aspects of system behavior, and to use multi-criteria optimization to identify non-dominated optimal solutions. Unfortunately, these analyses require running original simulation models thousands of times. As such, they demand prohibitively large computational budgets. As a result, surrogate models have been used in combination with a variety of multi-objective optimization algorithms to approximate the true Pareto-front within limited evaluations for the original model. In this study, multi-objective optimization based on surrogate modeling (multivariate adaptive regression splines, MARS) for a conceptual rainfall-runoff model (Xin'anjiang model, XAJ) was proposed. Taking the Yanduhe basin of Three Gorges in the upper stream of the Yangtze River in China as a case study, three evaluation criteria were selected to quantify the goodness-of-fit of observations against calculated values from the simulation model. The three criteria chosen were the Nash-Sutcliffe efficiency coefficient and the relative errors of peak flow and runoff volume (REPF and RERV). The efficacy of this method is demonstrated on the calibration of the XAJ model. Compared to the single-objective optimization results, it was indicated that the multi-objective optimization method can infer the most probable parameter set. The results also demonstrate that the use of surrogate modeling enables optimization that is much more efficient, and the total computational cost is reduced by about 92.5%, compared to optimization without using surrogate modeling. The results obtained with the proposed method support the feasibility of applying parameter optimization to computationally intensive simulation models, by considerably reducing the number of simulation runs required by the numerical model. 2014 * 168(<-129): A comparison study of three statistical downscaling methods and their model-averaging ensemble for precipitation downscaling in China This study evaluated the performance of three frequently applied statistical downscaling tools including SDSM, SVM, and LARS-WG, and their model-averaging ensembles under diverse moisture conditions with respect to the capability of reproducing the extremes as well as mean behaviors of precipitation. Daily observed precipitation and NCEP reanalysis data of 30 stations across China were collected for the period 1961-2000, and model parameters were calibrated for each season at each individual site with 1961-1990 as the calibration period and 1991-2000 as the validation period. A flexible framework of multi-criteria model averaging was established in which model weights were optimized by the shuffled complex evolution algorithm. Model performance was compared for the optimal objective and nine more specific metrics.
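The model-averaging step entry 168 just introduced boils down to finding convex weights over member forecasts that minimise an ensemble error. A minimal sketch follows; the observations and member forecasts are invented, and a coarse grid search stands in for the shuffled complex evolution algorithm used in the cited study.

#+BEGIN_SRC python
# Minimal sketch of multi-model weighted averaging (cf. entry 168).
import numpy as np

obs = np.array([1.0, 0.0, 2.0, 1.5, 0.5])          # stand-in observed series
members = np.array([                                # stand-in model forecasts
    [0.8, 0.2, 1.7, 1.4, 0.9],
    [1.3, 0.1, 2.4, 1.8, 0.2],
    [1.0, 0.4, 1.5, 1.1, 0.6],
])

def rmse(w):
    return np.sqrt(np.mean((w @ members - obs) ** 2))

# Enumerate convex weights on a 0.05 grid (sum to 1), keep the best.
grid = [np.array([i, j, 20 - i - j]) / 20
        for i in range(21) for j in range(21 - i)]
best = min(grid, key=rmse)
print(best, rmse(best))
#+END_SRC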
* 168(<-129): A comparison study of three statistical downscaling methods and their model-averaging ensemble for precipitation downscaling in China This study evaluated the performance of three frequently applied statistical downscaling tools including SDSM, SVM, and LARS-WG, and their model-averaging ensembles under diverse moisture conditions with respect to the capability of reproducing the extremes as well as mean behaviors of precipitation. Daily observed precipitation and NCEP reanalysis data of 30 stations across China were collected for the period 1961-2000, and model parameters were calibrated for each season at each individual site with 1961-1990 as the calibration period and 1991-2000 as the validation period. A flexible framework of multi-criteria model averaging was established in which model weights were optimized by the shuffled complex evolution algorithm. Model performance was compared for the optimal objective and nine more specific metrics. Results indicate that the different downscaling methods exhibit different strengths and weaknesses in simulating various precipitation characteristics under different circumstances. SDSM showed more adaptability by acquiring better overall performance at a majority of the stations, while LARS-WG revealed better accuracy in modeling most of the single metrics, especially extreme indices. SVM provided more usefulness under drier conditions, but it had less skill in capturing temporal patterns. Optimized model averaging, aiming at certain objective functions, can achieve a promising ensemble at the price of increased model complexity and computational cost. However, the variation of the different methods' performances highlighted the tradeoff among different criteria, which compromised the ensemble forecast in terms of single metrics. As superiority over single models cannot be guaranteed, the model averaging technique should be used cautiously in precipitation downscaling. 2014
* 169(<-268): Multiresponse Metamodeling in Simulation-Based Design Applications The optimal design of complex systems in engineering requires the availability of mathematical models of a system's behavior as a function of a set of design variables; such models allow the designer to search for the best solution to the design problem. However, system models (e.g., computational fluid dynamics (CFD) analysis, physical prototypes) are usually time-consuming and expensive to evaluate, and thus unsuited for systematic use during design. Approximate models of system behavior based on limited data, also known as metamodels, allow significant savings by reducing the resources devoted to modeling during the design process. In this work in engineering design based on multiple performance criteria, we propose the use of multi-response Bayesian surrogate models (MR-BSM) to model several aspects of system behavior jointly, instead of modeling each individually. To this end, we formulated a family of multiresponse correlation functions, suitable for prediction of several response variables that are observed simultaneously from the same computer simulation. Using a set of test functions with varying degrees of correlation, we compared the performance of MR-BSM against metamodels built individually for each response. Our results indicate that MR-BSM outperforms individual metamodels in 53% to 75% of the test cases, though the relative performance depends on the sample size, sampling scheme and the actual correlation among the observed response values. In addition, the relative performance of MR-BSM versus individual metamodels was contingent upon the ability to select an appropriate covariance/correlation function for each application, a task for which a modified version of Akaike's Information Criterion was observed to be inadequate. [DOI: 10.1115/1.4006996] 2012
* 170(<-428): Multiobjective global surrogate modeling, dealing with the 5-percent problem When dealing with computationally expensive simulation codes or process measurement data, surrogate modeling methods are firmly established as facilitators for design space exploration, sensitivity analysis, visualization, prototyping and optimization. Typically the model parameter (=hyperparameter) optimization problem as part of global surrogate modeling is formulated in a single objective way. Models are generated according to a single objective (accuracy).
However, this requires an engineer to determine a single accuracy target and measure upfront, which is hard to do if the behavior of the response is unknown. Likewise, the different outputs of a multi-output system are typically modeled separately by independent models. Again, a multiobjective approach would benefit the domain expert by giving information about output correlation and enabling automatic model type selection for each output dynamically. With this paper the authors attempt to increase awareness of the subtleties involved and discuss a number of solutions and applications. In particular, we present a multiobjective framework for global surrogate model generation to help tackle both problems and that is applicable in both the static and sequential design (adaptive sampling) case. 2010
* 171(<-469): Incorporating prior model into Gaussian processes regression for WEDM process modeling Sufficient sampling is usually time-consuming and expensive but also indispensable for supporting highly precise data-driven modeling of the wire-cut electrical discharge machining (WEDM) process. Considering the natural way of describing the behavior of a WEDM process by IF-THEN rules drawn from field experts, engineering knowledge and experimental work, in this paper the fuzzy logic model is chosen as prior knowledge to leverage the predictive performance. Focusing on the fusion between a rough fuzzy system and very scarce noisy samples, a simple but effective re-sampling algorithm based on piecewise relational transfer interpolation is presented and integrated with Gaussian processes regression (GPR) for WEDM process modeling. First, by using the re-sampling algorithm with encoded derivative regularization, the prior model is translated into a pseudo training dataset, and then the dataset is trained by the Gaussian processes. An empirical study on two benchmark datasets intuitively demonstrates the feasibility and effectiveness of this approach. Experiments on high-speed WEDM (DK7725B) are conducted for validation of the nonlinear relationship between the design variables (i.e., workpiece thickness, peak current, on-time and off-time) and the responses (i.e., material removal rate and surface roughness). The experimental result shows that combining a very rough fuzzy prior model with training examples still significantly improves the predictive performance of WEDM process modeling, even with a very limited training dataset. That is, given the generalized prior model, the samples needed by the GPR model can be reduced greatly while maintaining precision. (c) 2008 Elsevier Ltd. All rights reserved. 2009
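The prior-to-pseudo-data idea in the abstract above is easy to mock up. A minimal sketch using scikit-learn (the prior_model stand-in for a fuzzy rule base, the sample values, and the kernel choice are all illustrative assumptions, not the cited paper's actual setup):

#+BEGIN_SRC python
# Translate a rough prior model into pseudo training points, then fit a
# Gaussian process on the union of pseudo and measured samples.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def prior_model(x):                      # stand-in for a fuzzy rule base
    return 0.8 * x + 0.5

x_pseudo = np.linspace(0.0, 10.0, 15)    # dense pseudo samples from the prior
y_pseudo = prior_model(x_pseudo)
x_meas = np.array([1.0, 4.5, 8.0])       # very scarce real measurements
y_meas = np.array([1.6, 4.0, 7.3])

X = np.concatenate([x_pseudo, x_meas]).reshape(-1, 1)
y = np.concatenate([y_pseudo, y_meas])

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)
mean, std = gpr.predict(np.array([[6.0]]), return_std=True)
print(mean, std)
#+END_SRC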
* 172(<-556): Soft combination of local models in a multi-objective framework Conceptual hydrologic models are useful tools as they provide an interpretable representation of the hydrologic behaviour of a catchment. Their representation of a catchment's hydrological processes and physical characteristics, however, implies a simplification of the complexity and heterogeneity of reality. As a result, these models may show a lack of flexibility in reproducing the vast spectrum of catchment responses. Hence, the accuracy in reproducing certain aspects of the system behaviour may be paid in terms of a lack of accuracy in the representation of other aspects. By acknowledging the structural limitations of these models, we propose a modular approach to hydrological simulation. Instead of using a single model to reproduce the full range of catchment responses, multiple models are used, each of them assigned to a specific task. While a modular approach has been previously used in the development of data-driven models, in this study we show an application to conceptual models. The approach is here demonstrated in the case where the different models are associated with different parameter realizations within a fixed model structure. We show that using a 'composite' model, obtained by a combination of individual 'local' models, the accuracy of the simulation is improved. We argue that this approach can be useful because it partially overcomes the structural limitations that a conceptual model may exhibit. The approach is shown in application to the discharge simulation of the experimental Alzette River basin in Luxembourg, with a conceptual model that follows the structure of the HBV model. 2007
* 173(<- 53): OR models with stochastic components in disaster operations management: A literature survey The increasing number of people affected by disasters, the complexity and unpredictability of these phenomena and the different problems encountered in planning and response in different scenarios establish a need to find better measures and practices in order to reduce the human and economic loss in such events. However, this is not an easy task considering the great uncertainty these phenomena present, including the multiple number of possible scenarios in terms of location, probability of occurrence and impact, the difficulty in estimating the demand and supply, the complexity of determining the number and type of resources both available and needed, and the intricacy of establishing the exact location of the demand, the supply and the possible damaged infrastructure, among many others. Disaster Operations Management has become very popular and, considering the properties of disasters, the use of tools and methodologies such as OR has been given a lot of attention in recent years. The present work provides a literature review on OR models with some stochastic component applied to Disaster Operations Management (DOM), along with an analysis of these stochastic components and the techniques used by different authors to cope with them, as well as a detailed database on the consulted papers, which differentiates this research from other reviews developed during the same period, in order to give an insight into the state of the art in the topic and determine possible future research directions. (C) 2014 Elsevier Ltd. All rights reserved. 2015
* 174(<- 91): Multi-objective ecological reservoir operation based on water quality response models and improved genetic algorithm: A case study in Three Gorges Reservoir, China This study proposes a self-adaptive GA-aided multi-objective ecological reservoir operation model (SMEROM) and applies it to water quality management in the Xiangxi River near the Three Gorges Reservoir, China. The SMEROM integrates statistical water quality models, multi-objective reservoir operations, and a self-adaptive GA within a general framework. Among them, the statistical water quality models of the Xiangxi River are formulated to deal with the relationships between reservoir operation and water quality, which are embedded in constraints of the SMEROM.
The multiple objective functions, including maximizing hydropower generation, minimizing loss of flood control, minimizing the rate of flood risk, maximizing the average remaining capacity of flood control and maximizing the benefit of shipping, are considered simultaneously to balance environmental, social and economic benefits. The weighting method is employed to convert the multiple objectives to a single objective. To solve the complex SMEROM, an improved self-adaptive GA is employed, incorporating simulated binary crossover and self-adaptive mutation. To demonstrate the advantage of the developed SMEROM model, the solutions through ecological reservoir operation are compared with those through the traditional reservoir operation and the practical operation in 2011, in terms of water quality, reservoir operation and objective function values. The results show that most of the benefits under the ecological operation are better than those under the traditional or practical operations, except for the hydropower benefit and the loss benefit of flood control. This is because flood control and environmental protection are reasonably considered in the ecological operation. (C) 2014 Elsevier Ltd. All rights reserved. 2014
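The weighting method mentioned in the abstract above is the simplest scalarization of a multi-objective problem. A hedged sketch (objective names and weight values are illustrative, not the cited model's actual configuration):

#+BEGIN_SRC python
# Collapse a vector of normalized objective values into one scalar value
# that a single-objective optimizer such as a GA can maximize.

def weighted_sum(objectives, weights):
    """Scalarize normalized objectives (all cast as maximization)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, objectives))

# e.g. hydropower, flood-control loss (negated), flood risk (negated),
# remaining flood-control capacity, shipping benefit:
scores = [0.82, 0.64, 0.71, 0.55, 0.90]
weights = [0.30, 0.25, 0.20, 0.15, 0.10]
fitness = weighted_sum(scores, weights)   # single value for the GA
print(fitness)
#+END_SRC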
* 175(<-112): Simulation-optimization modeling for conjunctive water use management Good quality surface water and groundwater resources are limited; furthermore, they are shrinking because of urbanization, contamination, and climate change impacts. Against this backdrop, the proper allocation and management of these resources is a critical challenge for satisfying the rising water demands of the agricultural sector, because irrigated agriculture is the largest user of all the developed water resources and consumes over 70% of the abstracted freshwater globally. Computer-based models are useful tools for achieving the optimal allocation of limited water resources for conjunctive use planning and management in irrigated agriculture. Various simulation and optimization modeling approaches have been used to solve water allocation problems. Optimization models have been shown to be of great importance when used with simulation models, and the combined use of these two approaches gives the best results. This paper reviews the combined applications of simulation and optimization modeling for the conjunctive use planning and management of surface water and groundwater resources for sustainable irrigated agriculture. Conclusions that could be useful for all stakeholders are provided based on this review. (C) 2014 Elsevier B.V. All rights reserved. 2014
* 176(<-227): A Multiobjective Optimisation Model for Groundwater Remediation Design at Petroleum Contaminated Sites This study proposes a fuzzy multi-objective model for groundwater remediation in petroleum-contaminated aquifers. The optimisation system is designed based on the PAT technology, and includes two objectives (i.e. total pumping rate and average post-remedial contaminant concentration). The relationships between pumping rates and contaminant concentrations at all monitoring wells after remediation are determined by a proxy model, which integrates simulation, inference, and optimisation technologies and is composed of intercept, linear, interactive, and quadratic options. Fuzzy algorithms are used to solve the formulated multi-objective optimisation problem to find optimal solutions. The model is then applied to a petroleum-contaminated aquifer in western Canada. The trade-off and lambda analyses of the results indicate that the fuzzy multi-objective model has great potential in groundwater remediation applications as it can: (1) provide reliable groundwater remediation strategies, (2) reduce computational costs in the optimization processes, and (3) balance the trade-off between remediation costs and remediation outcomes. 2013
* 177(<-257): System optimization for eco-design by using monetization of environmental impacts: a strategy to convert bi-objective to single-objective problems Eco-design is an essential way to reduce the environmental impacts and economic cost of processes and systems, as well as products. Until now, the majority of eco-design approaches have employed multi-objective optimization methods to balance environmental and economic performance. However, these methods have limitations because multi-objective optimization requires decision makers to subjectively assign weighting factors for the objectives, i.e., environmental impacts and economic cost. This implies that, depending on decision makers' preference and knowledge, different design solutions can be engendered for the same design problem. Thus, this study proposes an eco-design method which can generate a single design solution by developing mathematical optimization models with a single-objective function for environmental impacts and economic cost. For the formulation of the single-objective function, environmental impacts are monetized to external cost by using the Environmental Priority Strategies. This enables trade-offs between environmental impacts and economic cost in the same unit, i.e., a monetary unit. As a case study, the proposed method is applied to the eco-design of a water reuse system in an industrial plant. This study can contribute to improving the eco-efficiency of various products, processes, and systems. (C) 2012 Elsevier Ltd. All rights reserved. 2013
* 178(<-364): Paradigm shift in urban energy systems through distributed generation: Methods and models The path towards energy sustainability commonly refers to the incremental adoption of available technologies, practices and policies that may help to decrease the environmental impact of the energy sector, while providing an adequate standard of energy services. The evaluation of trade-offs among technologies, practices and policies for the mitigation of environmental problems related to energy resource depletion requires a deep knowledge of the local and global effects of the proposed solutions. While attempting to calculate such effects for a large complex system like a city, an advanced multidisciplinary approach is needed to overcome difficulties in modeling real phenomena correctly while maintaining computational transparency, reliability, interoperability and efficiency across different levels of analysis. Further, a methodology that rationally integrates different computational models and techniques is necessary to enable collaborative research in the field of optimization of energy efficiency strategies and integration of renewable energy systems in urban areas. For these reasons, a selection of currently available models for distributed generation planning and design is presented and analyzed in the perspective of gathering their capabilities in an optimization framework to support a paradigm shift in urban energy systems.
This framework embodies the main concepts of a local energy management system and adopts a multicriteria perspective to determine optimal solutions for providing energy services through distributed generation. (C) 2010 Elsevier Ltd. All rights reserved. 2011
* 179(<-397): Tolerance design optimization on cost-quality trade-off using the Shapley value method Part tolerance design is important in the manufacturing process of many complex products because it directly affects manufacturing cost and product quality. It is important to develop a reasonable tolerance scheme that considers the demands of cost and quality, so as to reduce production risk and provide a guide for supplier management. Traditionally, some kinds of cost objective functions or variation propagation models are often applied in part tolerance design. Moreover, designers usually solve the tolerance design problem by constructing a single-objective model, dealing with several single-objective problems, or establishing a comprehensive evaluating function combining several optimization objectives with different weights. These approaches may not adequately consider the interdependent and interactional relations of various demands and balance them. This paper presents a tolerance design approach for the early design stage of automotive parts based on the Shapley value method (SVM) of coalitional game theory, considering the demands of manufacturing cost and product quality. First, the part tolerance design problem is defined. The measuring data in regular production is collected instead of working on specific objective functions or design models. Then how the SVM is adopted to solve the tolerance design problem is discussed. Lastly, a tolerance design example of a vehicle front lamp demonstrates the application and the performance of the proposed method. (C) 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved. 2010
* 180(<-605): Multi-criteria decision analysis for the optimal management of nitrate contamination of aquifers We present an integrated methodology for the optimal management of nitrate contamination of ground water combining environmental assessment and economic cost evaluation through multi-criteria decision analysis. The proposed methodology incorporates an integrated physical modeling framework accounting for on-ground nitrogen loading and losses, soil nitrogen dynamics, and fate and transport of nitrate in ground water to compute the sustainable on-ground nitrogen loading such that the maximum contaminant level is not violated. A number of protection alternatives to stipulate the predicted sustainable on-ground nitrogen loading are evaluated using the decision analysis that employs the importance order of criteria approach for ranking and selection of the protection alternatives. The methodology was successfully demonstrated for the Sumas-Blaine aquifer in Washington State. The results showed the importance of using this integrated approach, which predicts the sustainable on-ground nitrogen loadings and provides an insight into the economic consequences generated in satisfying the environmental constraints. The results also show that the proposed decision analysis framework, within certain limitations, is effective when selecting alternatives with competing demands. (c) 2004 Elsevier Ltd. All rights reserved. 2005
* 181(<-466): Parameter identification of a non-associative elastoplastic constitutive model using ANN and multi-objective optimization This paper deals with the identification of material parameters using a hybrid method of multi-objective optimization. This approach was used in a previous work to identify the Hill'48 criterion under the associative normality assumption and the Voce law hardening parameters of the Stainless Steel AISI 304. In this work, we apply the proposed method in order to identify the orthotropic criterion of Hill'48 under the non-associative normality assumption. The two models are compared and analysed using several experimental tests. 2009
* 182(<-284): Planning of Groundwater Supply Systems Subject to Uncertainty Using Stochastic Flow Reduced Models and Multi-Objective Evolutionary Optimization The typical modeling approach to groundwater management relies on the combination of optimization algorithms and subsurface simulation models. In the case of groundwater supply systems, the management problem may be structured into an optimization problem to identify the pumping scheme that minimizes the total cost of the system while complying with a series of technical, economical, and hydrological constraints. Since the lack of data on the subsurface system most often reflects upon the development of groundwater flow models that are inherently uncertain, the solution to the groundwater management problem should explicitly consider the tradeoff between cost optimality and the risk of not meeting the management constraints. This work addresses parameter uncertainty following a stochastic simulation (or Monte Carlo) approach, in which a sufficiently large ensemble of parameter scenarios is used to determine representative values selected from the statistical distribution of the management objectives, that is, minimizing cost while minimizing risk. In particular, the cost of the system is estimated as the expected value of the cost distribution sampled through stochastic simulation, while the risk of not meeting the management constraints is quantified as the expected value of the intensity of constraint violation. The solution to the multi-objective optimization problem is addressed by combining a multi-objective evolutionary algorithm with a stochastic model simulating groundwater flow in confined aquifers. Evolutionary algorithms are particularly appropriate in optimization problems characterized by non-linear and discontinuous objective functions and constraints, although they are also computationally demanding and require intensive analyses to tune input parameters that guarantee optimality to the solutions. In order to drastically reduce the otherwise overwhelming computational cost, a novel stochastic flow reduced model is thus developed, which practically allows for averting the direct inclusion of the full simulation model in the optimization loop. The computational efficiency of the proposed framework is such that it can be applied to problems characterized by large numbers of decision variables. 2012
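The two stochastic objectives described in the abstract above have a compact computational form. A schematic sketch (simulate_flow, cost_fn and violation_fn are hypothetical stand-ins for the reduced flow model and its post-processing, not the cited paper's code):

#+BEGIN_SRC python
# For a candidate pumping scheme, average cost and constraint-violation
# intensity over an ensemble of parameter scenarios; the two means would
# serve as the objectives handed to the multi-objective evolutionary search.
import numpy as np

def evaluate(pumping_scheme, scenarios, simulate_flow, cost_fn, violation_fn):
    costs, violations = [], []
    for theta in scenarios:                      # one Monte Carlo realization
        heads = simulate_flow(pumping_scheme, theta)
        costs.append(cost_fn(pumping_scheme, heads))
        violations.append(violation_fn(heads))   # intensity of violation
    return np.mean(costs), np.mean(violations)   # (expected cost, risk)
#+END_SRC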
* 183(<-618): Feasibility study of beam orientation class-solutions for prostate IMRT IMRT is being increasingly used for treatment of prostate cancer. In practice, however, the beam orientations used for the treatments are still selected empirically, without any guideline. The purpose of this work was to investigate interpatient variation of the optimal beam configuration and to facilitate intensity modulated radiation therapy (IMRT) prostate treatment planning by proposing a set of beam orientation class-solutions for a range of numbers of incident beams. We used fifteen prostate cases to generate the beam orientation class-solutions. For each patient and a given number of incident beams, a multiobjective optimization engine was employed to provide optimal beam directions. For the fifteen cases considered, the gantry angles of the optimized plans were all distributed within a certain range. The angular distributions of the optimal beams were analyzed and the most frequently selected directions were identified as optimal directions. The optimal directions for all patients were averaged to obtain the class-solution. The class-solution gantry angles for prostate IMRT were found to be: three beams (0°, 120°, 240°), five beams (35°, 110°, 180°, 250°, 325°), six beams (0°, 60°, 120°, 180°, 240°, 300°), seven beams (25°, 75°, 130°, 180°, 230°, 285°, 335°), eight beams (20°, 70°, 110°, 150°, 200°, 250°, 290°, 340°), and nine beams (20°, 60°, 100°, 140°, 180°, 220°, 260°, 300°, 340°). The level of validity of the class-solutions was tested using an additional clinical prostate case by comparing with the individually optimized beam configurations. The difference between the plans obtained with class-solutions and patient-specific optimizations was found to be clinically insignificant. (C) 2004 American Association of Physicists in Medicine. 2004
* 184(<-562): On proto-differentiability of generalized perturbation maps This paper is devoted to the sensitivity analysis in optimization problems and variational inequalities. The concept of proto-differentiability of set-valued maps (see [R.T. Rockafellar, Proto-differentiability of set-valued mappings and its applications in optimization, Ann. Inst. H. Poincare Anal. Non Lineaire 6 (1989) 449-482]) plays the key role in our investigation. It is proved that, under some suitable qualification conditions, the generalized perturbation maps (that is, the solution set map to a parameterized constraint system, to a parameterized variational inequality, or to a parameterized optimization problem) are proto-differentiable. (c) 2006 Elsevier Inc. All rights reserved. 2006
* 185(<-111): Concepts of efficiency for uncertain multi-objective optimization problems based on set order relations In this paper we present new concepts of efficiency for uncertain multi-objective optimization problems. We analyze the connection between the concept of minmax robust efficiency presented by Ehrgott et al. (Eur J Oper Res, 2014, doi:10.1016/j.ejor.2014.03.013) and the upper set less order relation $\preceq_s^u$ introduced by Kuroiwa (1998, 1999). From this connection we derive new concepts of efficiency for uncertain multi-objective optimization problems by replacing the set ordering with other set orderings. Those are namely the lower set less ordering (see Kuroiwa 1998, 1999), the set less ordering (see Nishnianidze in Soobshch Akad Nauk Gruzin SSR 114(3):489-491, 1984; Young in Math Ann 104(1):260-290, 1931, doi:10.1007/BF01457934; Eichfelder and Jahn in Vector Optimization.
Springer, Berlin, 2012), the certainly less ordering (see Eichfelder and Jahn in Vector Optimization. Springer, Berlin, 2012), and the alternative set less ordering (see Ide et al. in Fixed Point Theory Appl, 2014, doi:10.1186/1687-1812-2014-83; Kobis 2014). We analyze the resulting concepts of efficiency and present numerical results on the occurrence of the various concepts. We conclude the paper with a short comparison between the concepts, and an outlook to further work. 2014
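For readers unfamiliar with the set relations named in the abstract above, their commonly used forms can be written as follows (a sketch in standard notation, not quoted from the cited paper; $\leq$ denotes the componentwise order on the objective space and $A$, $B$ are nonempty sets of objective vectors):

#+BEGIN_SRC latex
% Upper set less, lower set less, set less, and certainly less orderings:
\[
A \preceq_s^u B \;:\Longleftrightarrow\; \forall\, b \in B \ \exists\, a \in A : a \leq b,
\qquad
A \preceq_s^l B \;:\Longleftrightarrow\; \forall\, a \in A \ \exists\, b \in B : a \leq b,
\]
\[
A \preceq_s B \;:\Longleftrightarrow\; A \preceq_s^u B \ \text{ and } \ A \preceq_s^l B,
\qquad
A \preceq_c B \;:\Longleftrightarrow\; \forall\, a \in A \ \forall\, b \in B : a \leq b .
\]
% Robust efficiency concepts arise by comparing, via such orderings, the sets
% of objective values a solution can attain across the uncertainty scenarios.
#+END_SRC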
* 186(<-570): A fiscal regime solving the incentive inconsistency problem This paper proposes a fiscal taxation/subsidy regime, which can mitigate the incentive inconsistency problem in the selection of a price policy (Yao and Lai, Ann Reg Sci, 2006). Through this regime, a Pareto improvement may be achieved. The results show that the efficacy of governmental intervention is more direct in the case of rectangular demand than in that of linear demand. The distortion due to the incentive inconsistency cannot be easily regulated. 2006
* 187(<-636): Geometrical properties of Pareto distribution The differential-geometrical framework for analyzing statistical problems related to the Pareto distribution is given. A classical and intuitive way of describing the relationship between differential geometry and statistics is introduced [Publicationes Mathematicae Debrecen, Hungary, vol. 61 (2002) 1-14; RAAG Mem. 4 (1968) 373; Ann. Statist. 10 (2) (1982) 357; Springer Lecture Notes in Statistics, 1985; Tensor, N.S. 57 (1996) 282; Commun. Statist. Theor. Meth. 29 (4) (2000) 859; Tensor, N.S. 33 (1979) 347; Int. J. Eng. Sci. 19 (1981) 1609; Tensor, N.S. 57 (1996) 300; Differential Geometry and Statistics, 1993], but in a slightly modified manner. This is in order to provide an easier introduction for readers not familiar with differential geometry. The parameter space of the Pareto distribution is defined using its Fisher matrix. The Riemannian and scalar curvatures of the parameter space are calculated. The differential equations of the geodesics are obtained and solved. The J-divergence, the geodesic distance and the relations between them in that space are found. A development of the relation between the J-divergence and the geodesic distance is illustrated. The scalar curvature of the J-space is represented. (C) 2002 Elsevier Inc. All rights reserved. 2003
* 188(<-491): Positive- versus zero-sum majoritarian ultimatum games: An experimental study Politics can involve a movement from a position off the Pareto frontier to a point on it (a positive-sum game as exemplified in the classic [Buchanan, J.M., Tullock, G., 1962. The Calculus of Consent. University of Michigan Press, Ann Arbor] work), or a movement along the Pareto frontier (a zero-sum game as exemplified in the classic [Riker, W., 1962. The theory of political coalitions. Yale University Press, New Haven] work). In this paper we shed light on their differentiation experimentally by making a comparison between a positive-sum and a zero-sum majoritarian ultimatum game. Our main findings include (i) the fraction of subjects who adopted minimum winning rather than oversized coalitions increases significantly as the game form varies from positive-sum to zero-sum, (ii) oversized coalitions are attributable to non-strategic considerations, and (iii) subjects who choose to adopt the minimum winning coalition have a tendency to seek cheaper responders as their partners in the zero-sum game, but there is no evidence of such a tendency in the positive-sum game. Overall, the weight of the evidence revealed by our experimental data indicates that relative scarcity (embodied in the zero-sum game) promotes behavior more in line with the predictions of economics. (C) 2008 Elsevier B.V. All rights reserved. 2008
* 189(<-650): Extremist vs. centrist decision behavior: quasi-convex utility functions for interactive multi-objective linear programming problems This paper presents the fundamental theory and algorithms for identifying the most preferred alternative for a decision maker (DM) having a non-centrist (or extremist) preferential behavior. The DM is requested to respond to a set of questions in the form of paired comparisons of alternatives. The approach is different from other methods that consider centrist preferential behavior. In this paper, an interactive approach is presented to solve the multiple objective linear programming (MOLP) problem. The DM's underlying preferential function is represented by a quasi-convex value (utility) function, which is to be maximized. The method presented in this paper solves MOLP problems with quasi-convex value (utility) functions by using paired comparison of alternatives in the objective space. From the mathematical point of view, maximizing a quasi-convex (or a convex) function over a convex set is considered a difficult problem to solve, while solutions for quasi-concave (or concave) functions are currently available. We prove that our proposed approach converges to the most preferred alternative. We demonstrate that the most preferred alternative is an extreme point of the MOLP problem, and we develop an interactive method that guarantees obtaining the global most preferred alternative for the MOLP problem. This method requires only a finite number of pivoting operations using a simplex-based method, and it asks only a limited number of paired comparison questions of alternatives in the objective space. We develop a branch and bound algorithm that extends a tree of solutions at each iteration until the MOLP problem is solved. At each iteration, the decision maker has to identify the most preferred alternatives from a given subset of efficient alternatives that are adjacent extreme points to the current basis. Through the branch and bound algorithm, without asking many questions of the decision maker, all branches of the tree are implicitly enumerated until the most preferred alternative is obtained. An example is provided to show the details of the algorithm. Some computational experiments are also presented. 2002
* 190(<-622): Estimating catastrophic quantile levels for heavy-tailed distributions Estimation of the occurrence of extreme events is of prime interest for actuaries. Heavy-tailed distributions are used to model large claims and losses. Within this setting we present a new extreme quantile estimator based on an exponential regression model that was introduced by Feuerverger and Hall [Ann. Stat. 27 (1999) 760] and Beirlant et al. [Extremes 2 (1999) 177]. We also discuss how this approach is to be adjusted in the presence of right censoring. This adaptation can also be linked to robust quantile estimation, as this solution is based on a Winsorized mean of extreme order statistics which replaces the classical Hill estimator. We also propose adaptive threshold selection procedures for Weissman's [J. Am. Stat. Assoc. 73 (1978) 812] quantile estimator which can be used both with and without censoring.
Finally, some asymptotic results are presented, while small sample properties are compared in a simulation study. (C) 2004 Elsevier B.V. All rights reserved. 2004
* 191(<-634): On a bivariate lack of memory property under binary associative operation A binary operation * over real numbers is said to be associative if (x * y) * z = x * (y * z), and it is said to be reducible if x * y = x * z or y * w = z * w holds if and only if z = y. The operation * is said to have an identity element $\tilde{e}$ if $x * \tilde{e} = x$. Roy [Roy, D. (2002). On bivariate lack of memory property and a new definition. Ann. Inst. Statist. Math. 54:404-410] introduced a new definition for the bivariate lack of memory property and characterized the bivariate exponential distribution introduced by Gumbel [Gumbel, E. (1960). Bivariate exponential distributions. J. Am. Statist. Assoc. 55:698-707] under the condition that each of the conditional distributions should have the univariate lack of memory property. We generalize this definition and characterize different classes of bivariate probability distributions under binary associative operations between random variables. 2004
* 192(<-395): Modulated Branching Processes, Origins of Power Laws, and Queueing Duality Power law distributions have been repeatedly observed in a wide variety of socioeconomic, biological, and technological areas. In many of the observations, e.g., city populations and sizes of living organisms, the objects of interest evolve because of the replication of their many independent components, e.g., births and deaths of individuals and replications of cells. Furthermore, the rates of replications are often controlled by exogenous parameters causing periods of expansion and contraction, e.g., baby booms and busts, economic booms and recessions, etc. In addition, the sizes of these objects often have reflective lower boundaries, e.g., cities do not fall below a certain size, low-income individuals are subsidized by the government, companies are protected by bankruptcy laws, etc. Hence, it is natural to propose reflected modulated branching processes as generic models for many of the preceding observations. Indeed, our main results show that the proposed mathematical models result in power law distributions under quite general polynomial Gartner-Ellis conditions, the generality of which could explain the ubiquitous nature of power law distributions. In addition, on a logarithmic scale, we establish an asymptotic equivalence between the reflected branching processes and the corresponding multiplicative ones. The latter, as recognized by Goldie [Goldie, C. M. 1991. Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab. 1(1) 126-166], is known to be dual to queueing/additive processes. We emphasize this duality further in the generality of stationary and ergodic processes. 2010
* 193(<-406): LAMPERTI-TYPE LAWS This paper explores various distributional aspects of random variables defined as the ratio of two independent positive random variables where one variable has an alpha-stable law, for 0 < alpha < 1, and the other variable has the law defined by polynomially tilting the density of an alpha-stable random variable by a factor theta > -alpha. When theta = 0, these variables equate with the ratio investigated by Lamperti [Trans. Amer. Math. Soc. 88 (1958) 380-387] which, remarkably, was shown to have a simple density.
This variable arises in a variety of areas and gains importance from a close connection to the stable laws. This rationale, and the connection to the PD(alpha, theta) distribution, motivates the investigation of its generalizations, which we refer to as Lamperti-type laws. We identify and exploit links to random variables that commonly appear in a variety of applications, namely the Linnik, generalized Pareto and z-distributions. In each case we obtain new results that are of potential interest. As some highlights, we then use these results to (i) obtain integral representations and other identities for a class of generalized Mittag-Leffler functions, (ii) identify explicitly the Levy density of the semigroup of stable continuous state branching processes (CSBP) and hence the corresponding limiting distributions derived in Slack and in Zolotarev [Z. Wahrsch. Verw. Gebiete 9 (1968) 139-145, Teor. Veroyatn. Primen. 2 (1957) 256-266], which are related to the recent work by Berestycki, Berestycki and Schweinsberg, and Bertoin and LeGall [Ann. Inst. H. Poincare Probab. Statist. 44 (2008) 214-238, Illinois J. Math. 50 (2006) 147-181] on beta coalescents, and (iii) obtain explicit results for the occupation time of generalized Bessel bridges and some interesting stochastic equations for PD(alpha, theta)-bridges. In particular we obtain the best known results for the density of the time spent positive of a Bessel bridge of dimension 2 - 2 alpha. 2010
* 194(<-455): Maximum likelihood estimation of extreme value index for irregular cases One method of analyzing extremes is to fit a generalized Pareto distribution to the exceedances over a high threshold. By varying the threshold according to the sample size, [Smith, R.L., 1987. Estimating tails of probability distributions. Ann. Statist. 15, 1174-1207] and [Drees, H., Ferreira, A., de Haan, L., 2004. On maximum likelihood estimation of the extreme value index. Ann. Appl. Probab. 14, 1179-1201] derived the asymptotic properties of the maximum likelihood estimates (MLE) when the extreme value index is larger than -1/2. Recently Zhou [2009. Existence and consistency of the maximum likelihood estimator for the extreme value index. J. Multivariate Anal. 100, 794-815] showed that the MLE is consistent when the extreme value index is larger than -1. In this paper, we study the asymptotic distributions of the MLE when the extreme value index is between -1 and -1/2 (including -1/2). In particular, we consider the MLE for the endpoint of the generalized Pareto distribution and the extreme value index, and show that the asymptotic limit for the endpoint estimate is non-normal, which connects with the results in Woodroofe [1974. Maximum likelihood estimation of translation parameter of truncated distribution II. Ann. Statist. 2, 474-488]. Moreover, we show that the same results hold for estimating the endpoint of the underlying distribution, which generalizes the results in Hall [1982. On estimating the endpoint of a distribution. Ann. Statist. 10, 556-568] to the irregular case, and the results in Woodroofe [1974. Maximum likelihood estimation of translation parameter of truncated distribution II. Ann. Statist. 2, 474-488] to the case of unknown extreme value index. (C) 2009 Elsevier B.V. All rights reserved. 2009
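The peaks-over-threshold fit underlying the abstract above can be reproduced in a few lines. A hedged sketch (threshold choice and synthetic data are illustrative; scipy's shape parameter c plays the role of the extreme value index, and the irregular cases discussed above concern c between -1 and -1/2):

#+BEGIN_SRC python
# Fit a generalized Pareto distribution to exceedances over a high threshold.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = rng.pareto(3.0, size=5000)          # synthetic heavy-tailed sample
u = np.quantile(data, 0.95)                # high threshold
excesses = data[data > u] - u
c, loc, scale = genpareto.fit(excesses, floc=0)  # MLE with location fixed at 0
print(f"xi = {c:.3f}, sigma = {scale:.3f}")
#+END_SRC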
* 195(<-516): Peaks-over-threshold stability of multivariate generalized Pareto distributions It is well-known that the univariate generalized Pareto distributions (GPD) are characterized by their peaks-over-threshold (POT) stability. We extend this result to multivariate GPDs. It is also shown that this POT stability is asymptotically shared by distributions which are in a certain neighborhood of a multivariate GPD. A multivariate extreme value distribution is a typical example. The usefulness of the results is demonstrated by various applications. We immediately obtain, for example, that the excess distribution of a linear portfolio $\sum_{i \le d} a_i U_i$ with positive weights $a_i$, $i \le d$, is independent of the weights, if $(U_1, \dots, U_d)$ follows a multivariate GPD with identical univariate polynomial or Pareto margins, which was established by Macke [On the distribution of linear combinations of multivariate EVD and GPD distributed random vectors with an application to the expected shortfall of portfolios, Diploma Thesis, University of Wurzburg, 2004, (in German)] and Falk and Michel [Testing for tail independence in extreme value models. Ann. Inst. Statist. Math. 58 (2006) 261-290]. This implies, for instance, that the expected shortfall as a measure of risk fails in this case. (c) 2007 Elsevier Inc. All rights reserved. 2008
* 196(<-517): Extreme value theory for space-time processes with heavy-tailed distributions Many real-life time series exhibit clusters of outlying observations that cannot be adequately modeled by a Gaussian distribution. Heavy-tailed distributions such as the Pareto distribution have proved useful in modeling a wide range of bursty phenomena that occur in areas as diverse as finance, insurance, telecommunications, meteorology, and hydrology. Regular variation provides a convenient and unified background for studying multivariate extremes when heavy tails are present. In this paper, we study the extreme value behavior of the space-time process given by $X_t(s) = \sum_{i=0}^{\infty} \psi_i(s) Z_{t-i}(s)$, $s \in [0,1]^d$, where $(Z_t)_{t \in \mathbb{Z}}$ is an iid sequence of random fields on $[0,1]^d$ with values in the Skorokhod space $D([0,1]^d)$ of cadlag functions on $[0,1]^d$ equipped with the $J_1$-topology. The coefficients $\psi_i$ are deterministic real-valued fields on $D([0,1]^d)$. The indices s and t refer to the observation of the process at location s and time t. For example, $X_t(s)$, $t = 1, 2, \dots$, could represent the time series of annual maxima of ozone levels at location s. The problem of interest is determining the probability that the maximum ozone level over the entire region $[0,1]^2$ does not exceed a given standard level $f \in D([0,1]^2)$ in n years. By establishing a limit theory for point processes based on $X_t(s)$, $t = 1, \dots, n$, we are able to provide approximations for probabilities of extremal events. This theory builds on earlier results of de Haan and Lin [L. de Haan, T. Lin, On convergence toward an extreme value distribution in C[0,1], Ann. Probab. 29 (2001) 467-483] and Hult and Lindskog [H. Hult, F. Lindskog, Extremal behavior of regularly varying stochastic processes, Stochastic Process. Appl. 115 (2) (2005) 249-274] for regular variation on $D([0,1]^d)$ and Davis and Resnick [R.A. Davis, S.I. Resnick, Limit theory for moving averages of random variables with regularly varying tail probabilities, Ann. Probab. 13 (1985) 179-195] for extremes of linear processes with heavy-tailed noise. (C) 2007 Elsevier B.V. All rights reserved. 2008
* 197(<- 40): Max-stable processes and the functional D-norm revisited Aulbach et al.
(Extremes 16, 255-283, 2013) introduced a max-domain of attraction approach for extreme value theory in C[0,1] based on functional distribution functions, which is more general than the approach based on weak convergence in de Haan and Lin (Ann. Probab. 29, 467-483, 2001). We characterize this new approach by decomposing a process into its univariate margins and its copula process. In particular, those processes with a polynomial rate of convergence towards a max-stable process are considered. Furthermore, we investigate the concept of differentiability in distribution of a max-stable process. 2015
* 198(<-643): The generalized extreme value distribution This paper determines the type of asymptotic distribution for the extreme changes in stock prices, foreign exchange rates and interest rates. To find the correct limiting distribution for the maximal and minimal changes in market variables, a more general extreme value distribution is introduced using the Box-Cox transformation. Both the generalized Pareto distribution of Pickands [Ann. Stat. 3 (1975) 119] and the generalized extreme value distribution of Jenkinson [Q. J. R. Meteorol. Soc. 87 (1955) 145] are strongly rejected in favor of the newly proposed Box-Cox-GEV distribution. (C) 2003 Elsevier Science B.V. All rights reserved. 2003
* 199(<- 50): Landslide susceptibility mapping using GIS-based statistical models and Remote sensing data in tropical environment This research presents the results of GIS-based statistical models for generating landslide susceptibility maps using geographic information system (GIS) and remote-sensing data for the Cameron Highlands area in Malaysia. Ten factors including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road were extracted from SAR data, SPOT 5 and WorldView-1 images. The relationships between the detected landslide locations and these ten related factors were identified by using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC) and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which has a total of 92 landslide locations, was created based on numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% was used for validation purposes. The validation results using the relative landslide density index (R-index) and receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy 96%) predicts better than the AHP (accuracy 91%) and WLC (accuracy 89%) models. These landslide susceptibility maps would be useful for hazard mitigation purposes and regional planning. 2015
* 200(<- 82): GIS-based multicriteria evaluation with multiscale analysis to characterize urban landslide susceptibility in data-scarce environments Landslides can have a severe negative impact on the socio-economic and environmental state of individuals and their communities. Minimizing these impacts is dependent on the effective identification of risk areas using a susceptibility analysis process. In such a process, output maps are generated to determine various levels of threat to human populations. However, the reliability of the process is controlled by critical factors such as data availability and data quality.
In data-scarce environments, susceptibility analysis done at multiple interlocking geographic scales can provide a convergence of evidence to reliably identify risk areas. In this study, multiscale analysis and fuzzy sets are combined with GIS-based multicriteria evaluation (MCE) to determine landslide susceptibility levels for areas of the Metro Vancouver region, British Columbia, Canada. Landslide-conditioning parameters are chosen based on their relevance and effect on a particular scale of analysis. These parameters are derived for three geographic scales using digital elevation models, drainage networks and road networks. An analytical hierarchy process (AHP) analysis provides relative weights of importance to combine variables. The landslide susceptibility analysis is done for regional, municipal and local scales at resolutions of 50 m, 10 m, and 1 m respectively. At each scale, susceptibility models are validated against real inventory data using the seed cell area index (SCAI) method. The strong inverse correlation between the map classes and the SCAI adds to confidence in the results. The developed approach can enable analysts in data-scarce environments to reliably identify susceptible areas, thereby improving hazard mitigation, emergency services targeting, and overall community planning. (C) 2014 Elsevier Ltd. All rights reserved. 2015
* 201(<-123): Landslide susceptibility mapping using GIS-based multi-criteria decision analysis, support vector machines, and logistic regression Identification of landslides and production of landslide susceptibility maps are crucial steps that can help planners, local administrations, and decision makers in disaster planning. Accuracy of the landslide susceptibility maps is important for reducing the losses of life and property. Models used for landslide susceptibility mapping require a combination of various factors describing features of the terrain and meteorological conditions. Many algorithms have been developed and applied in the literature to increase the accuracy of landslide susceptibility maps. In recent years, geographic information system-based multi-criteria decision analyses (MCDA) and support vector regression (SVR) have been successfully applied in the production of landslide susceptibility maps. In this study, the MCDA and SVR methods were employed to assess the shallow landslide susceptibility of Trabzon province (NE Turkey) using lithology, slope, land cover, aspect, topographic wetness index, drainage density, slope length, elevation, and distance to road as input data. Performances of the methods were compared with that of the widely used logistic regression model using ROC and success rate curves. Results showed that the MCDA and SVR outperformed the conventional logistic regression method in the mapping of shallow landslides. Therefore, the multi-criteria decision method and support vector regression were employed to determine potential landslide zones in the study area. 2014
* 202(<-128): GIS-based landslide susceptibility mapping with probabilistic likelihood ratio and spatial multi-criteria evaluation models (North of Tehran, Iran) The aim of this study is to produce landslide susceptibility mapping by probabilistic likelihood ratio (PLR) and spatial multi-criteria evaluation (SMCE) models based on geographic information system (GIS) in the north of the Tehran metropolis, Iran. The landslide locations in the study area were identified by interpretation of aerial photographs, satellite images, and field surveys.
In order to generate the necessary factors for the SMCE approach, remote sensing and GIS integrated techniques were applied in the study area. Conditioning factors such as slope degree, slope aspect, altitude, plan curvature, profile curvature, surface area ratio, topographic position index, topographic wetness index, stream power index, slope length, lithology, land use, normalized difference vegetation index, distance from faults, distance from rivers, distance from roads, and drainage density are used for landslide susceptibility mapping. Of 528 landslide locations, 70% were used in landslide susceptibility mapping, and the remaining 30% were used for validation of the maps. Using the above conditioning factors, landslide susceptibility was calculated using the SMCE and PLR models, and the results were plotted in ILWIS-GIS. Finally, the two landslide susceptibility maps were validated using receiver operating characteristic curves and seed cell area index methods. The validation results showed that the area under the curve for the SMCE and PLR models is 76.16 and 80.98%, respectively. The results obtained in this study also showed that the probabilistic likelihood ratio model performed slightly better than the spatial multi-criteria evaluation. These landslide susceptibility maps can be used for preliminary land use planning and hazard mitigation purposes. 2014
* 203(<-130): GIS-based groundwater spring potential assessment and mapping in the Birjand Township, southern Khorasan Province, Iran Three statistical models, frequency ratio (FR), weights-of-evidence (WofE) and logistic regression (LR), produced groundwater-spring potential maps for the Birjand Township, southern Khorasan Province, Iran. In total, 304 springs were identified in a field survey and mapped in a geographic information system (GIS), out of which 212 spring locations were randomly selected to be modeled and the remaining 92 were used for the model evaluation. The effective factors (slope angle, slope aspect, elevation, topographic wetness index (TWI), stream power index (SPI), slope length (LS), plan curvature, lithology, land use, and distance to river, road and fault) were derived from the spatial database. Using these effective factors, groundwater spring potential was calculated using the three models, and the results were plotted in ArcGIS. The receiver operating characteristic (ROC) curves were drawn for the spring potential maps and the area under the curve (AUC) was computed. The final results indicated that the FR model (AUC = 79.38%) performed better than the WofE (AUC = 75.69%) and LR (AUC = 63.71%) models. Sensitivity and factor analyses concluded that the bivariate statistical index model (i.e. FR) can be used as a simple tool in the assessment of groundwater spring potential when a sufficient amount of data is available. 2014
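The frequency ratio statistic used in the abstract above has a one-line definition: for each class of a conditioning factor, FR is the share of occurrences (springs or landslides) in the class divided by the share of all cells in the class, with FR > 1 marking positively associated classes. A small sketch with illustrative counts:

#+BEGIN_SRC python
# Frequency ratio per factor class; counts below are invented for the example.

def frequency_ratio(occ_in_class, occ_total, cells_in_class, cells_total):
    return (occ_in_class / occ_total) / (cells_in_class / cells_total)

# e.g. slope-angle classes: (occurrences in class, cells in class)
classes = {"0-5 deg": (10, 40000), "5-15 deg": (120, 50000), ">15 deg": (82, 10000)}
occ_total = sum(o for o, _ in classes.values())
cells_total = sum(n for _, n in classes.values())
for name, (o, n) in classes.items():
    print(name, round(frequency_ratio(o, occ_total, n, cells_total), 2))
#+END_SRC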
* 204(<-299): A comparison of landslide susceptibility maps produced by logistic regression, multi-criteria decision, and likelihood ratio methods: a case study at Izmir, Turkey The main purpose of this study is to compare the use of logistic regression, multi-criteria decision analysis, and a likelihood ratio model to map landslide susceptibility in and around the city of Izmir in western Turkey. Parameters such as lithology, slope gradient, slope aspect, faults, drainage lines, and roads were considered. Landslide susceptibility maps were produced using each of the three methods and then compared and validated. Before the modeling and validation, the observed landslides were separated into two groups, the first for training and the other for validation. The accuracy of the models was measured by fitting them to the validation set of observed landslides. For the validation process, the area under the curve (AUC) approach was applied. According to the AUC values of 0.810, 0.764, and 0.710 for logistic regression, likelihood ratio, and multi-criteria decision analysis, respectively, logistic regression was determined to be the most accurate of the landslide susceptibility mapping methods used. Based on these results, logistic regression and likelihood ratio models can be used to mitigate hazards related to landslides and to aid in land-use planning. 2012
* 205(<-407): Landslide susceptibility mapping for Ayvalik (Western Turkey) and its vicinity by multicriteria decision analysis This paper presents the results of geographical information system (GIS)-based landslide susceptibility mapping in Ayvalık, western Turkey, using multi-criteria decision analysis. The methodology followed in the study includes data production, standardization, and analysis stages. A landslide inventory of the study area was compiled from aerial photographs, satellite image interpretations, and detailed field surveys. In total, 45 landslides were recorded and mapped. The areal extent of the landslides is 1.75 km². The identified landslides are mostly shallow-seated, and generally exhibit a progressive character. They are mainly classified as rotational, planar, and toppling failures. In all, 51, 45, and 4% of the landslides mapped are rotational, planar, and toppling types, respectively. Morphological, geological, and land-use data were produced using existing topographical and relevant thematic maps in a GIS framework. The considered landslide-conditioning parameters were slope gradient, slope aspect, lithology, weathering state of the rocks, stream power index, topographical wetness index, distance from drainage, lineament density, and land-cover and vegetation density. These landslide parameters were standardized on a common data scale by fuzzy membership functions. Then, the degree to which each parameter contributed to landslides was determined using the analytical hierarchy process method, and the weight values of these parameters were calculated. The weight values obtained were assigned to the corresponding parameters, and then the weighted parameters were combined to produce a landslide susceptibility map. The results obtained from the susceptibility map were evaluated against the landslide location data to assess the reliability of the map. Based on the findings obtained in this study, it was found that 5.19% of the total area was prone to landsliding due to the existence of highly and completely weathered lithologic units and due to the adverse effects of topography and improper land use. 2010
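The analytical hierarchy process weighting step described in the abstract above is commonly computed as the principal eigenvector of a pairwise comparison matrix, with a consistency ratio guarding the expert judgements. A minimal sketch (the 3x3 matrix, comparing, say, slope vs. lithology vs. land use, is purely illustrative):

#+BEGIN_SRC python
# AHP: derive criterion weights from a pairwise comparison matrix and check
# the consistency of the judgements.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 3.0],
              [1/5., 1/3., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58                    # 0.58 = Saaty's random index for n = 3
print(w, cr)                      # judgements acceptable if cr < 0.1
#+END_SRC

A weighted linear combination then overlays the standardized factor maps with these weights to produce the susceptibility surface.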
In the western part of the region, many studies have been carried out, especially in the last decade, for landslide susceptibility mapping using different evaluation methods such as deterministic approaches, landslide distribution, qualitative, statistical and distribution-free analyses. The purpose of this study is to produce landslide susceptibility maps of a landslide-prone area (Findikli district, Rize) located at the eastern part of the Black Sea Region of Turkey by the likelihood-frequency ratio (LRM) model and the weighted linear combination (WLC) model, and to compare the results obtained. For this purpose, landslide inventory maps of the area were prepared for the years 1983 and 1995 by detailed field surveys and aerial-photography studies. Slope angle, slope aspect, lithology, distance from drainage lines, distance from roads and the land-cover of the study area are considered as the landslide-conditioning parameters. The differences between the susceptibility maps derived by the LRM and the WLC models are relatively minor when broad-based classifications are taken into account. However, the WLC map showed more detail, whereas the map produced by the LRM model gave weaker results. The reason for this is considered to be the fact that the majority of pixels in the LRM map have higher values than in the WLC-derived susceptibility map. In order to validate the two susceptibility maps, both were compared with the landslide inventory map. Although no landslides fall in the very high susceptibility class of either map, 79% of the landslides fall into the high and very high susceptibility zones of the WLC map, while this figure is 49% for the LRM map. This shows that the WLC model exhibited higher performance than the LRM model. 2008 * 207(<-629): A new extreme quantile estimator for heavy-tailed distributions The classical estimation method for extreme quantiles of heavy-tailed distributions was presented by Weissman (J. Amer. Statist. Assoc. 73 (1978) 812-815) and makes use of the Hill estimator (Ann. Statist. 3 (1975) 1163-1174) for the positive extreme value index. This index estimator can be interpreted as an estimator of the slope in the Pareto quantile plot in case one considers regression lines passing through a fixed anchor point. In this Note we propose a new extreme quantile estimator based on an unconstrained least squares estimator of the index, introduced by Kratz and Resnick (Comm. Statist. Stochastic Models 12 (1996) 699-724) and Schultze and Steinebach (Statist. Decisions 14 (1996) 353-372), and we study its asymptotic behavior. (C) 2004 Academie des sciences. Published by Elsevier SAS. All rights reserved. 2004 * 208(<- 69): Improvement in estimation of soil water retention using fractal parameters and multiobjective group method of data handling The soil water retention characteristic is required for modeling of water and substance movement in unsaturated soils and needs to be estimated using indirect methods. Point pedotransfer functions (PTFs) for prediction of soil water content at matric suctions of 1, 5, 25, 50, and 1500 kPa were developed and validated using a data-set of 148 soil samples from Hamedan and Guilan provinces, Iran, by multiobjective group method of data handling (mGMDH). In addition to textural and structural properties, fractal parameters of the power-law fractal models for both particle and aggregate distributions were also included as predictors. Their inclusion significantly improved the PTFs' accuracy and reliability.
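Entry 207's starting point, the Hill index estimator plugged into Weissman's extreme quantile formula, is compact enough to sketch directly; NumPy assumed, with k the number of upper order statistics:

#+BEGIN_SRC python
# Hill estimator of the extreme value index and the Weissman (1978) quantile.
import numpy as np

def hill_weissman(sample, k, p):
    x = np.sort(sample)
    n = x.size
    gamma = np.mean(np.log(x[n - k:]) - np.log(x[n - k - 1]))  # slope in the Pareto quantile plot
    quantile = x[n - k - 1] * (k / (n * p)) ** gamma           # extreme quantile at exceedance p
    return gamma, quantile

rng = np.random.default_rng(2)
pareto = rng.pareto(2.0, size=5000) + 1.0      # heavy-tailed sample, true index 0.5
print(hill_weissman(pareto, k=200, p=0.001))
#+END_SRC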
The aggregate size distribution fractal parameters ranked next to the particle size distribution (PSD) in terms of prediction accuracy. The mGMDH-derived PTFs were significantly more reliable than those by artificial neural networks, but their accuracies were practically the same. Similarity between the fractal behavior of particle and void size distributions may contribute to the improvement of the derived PTFs using PSD fractal parameters. This means that both the pore and particle size distributions exhibit fractal behavior and can be described by fractal models. 2015 * 209(<-251): Artificial Bee Colony approach to information granulation-based fuzzy radial basis function neural networks for image fusion This paper proposes a novel method of Artificial Bee Colony (ABC)-optimized fuzzy radial basis function neural networks with information granulation (IG-FRBFNNs) for solving the image fusion problem. Image fusion is the process of combining relevant information from two or more images into a single image. The fuzzy RBF neural networks exploit Fuzzy C-Means (FCM) clustering to form the premise part of the rules. As the consequent part of the model (being the local model representing the input-output relation in the corresponding sub-space), four types of polynomials are considered, with ordinary least squares (OLS) learning being exploited to estimate the values of the coefficients of the polynomial. Since the performance of the IG-FRBFNN model is directly affected by parameters such as the fuzzification coefficient used in the FCM, the position of the centers and the values of the widths, the ABC algorithm is exploited to carry out the structural and parametric optimization of the model, while the optimization is of multi-objective character as it is aimed at the simultaneous minimization of complexity and maximization of accuracy. Subsequently, the proposed approach can dynamically obtain optimal image fusion weights based on regional features, so as to optimize the performance of image fusion. A series of experimental results is presented to verify the feasibility and effectiveness of the proposed approach. (C) 2012 Elsevier GmbH. All rights reserved. 2013 * 210(<-260): A Neural Network Based Intelligent Predictive Sensor for Cloudiness, Solar Radiation and Air Temperature Accurate measurements of global solar radiation and atmospheric temperature, as well as the availability of predictions of their evolution over time, are important for different areas of application, such as agriculture, renewable energy and energy management, or thermal comfort in buildings. For this reason, an intelligent, light-weight and portable sensor was developed, using artificial neural network models as the time-series predictor mechanisms. These have been identified with the aid of a procedure based on the multi-objective genetic algorithm. As cloudiness is the most significant factor affecting the solar radiation reaching a particular location on the Earth's surface, it has great impact on the performance of predictive solar radiation models for that location. This work also represents one step towards the improvement of such models by using ground-to-sky hemispherical colour digital images as a means to estimate cloudiness by the fraction of visible sky corresponding to clouds and to clear sky.
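A stripped-down version of the RBF-network construction in entry 209, with KMeans standing in for the fuzzy C-means premise and ordinary least squares fitting the consequent weights; the cluster count and kernel width are stand-ins for exactly the hyperparameters the ABC algorithm tunes:

#+BEGIN_SRC python
# RBF network: clustered centers form the premise, OLS fits the output weights.
import numpy as np
from sklearn.cluster import KMeans

def rbf_features(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_features(X, centers, width=1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # ordinary least squares
print("train RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
#+END_SRC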
The implementation of predictive models in the prototype has been validated and the system is able to function reliably, providing measurements and four-hour forecasts of cloudiness, solar radiation and air temperature. 2012 * 211(<-324): Mobility Timing for Agent Communities, a Cue for Advanced Connectionist Systems We introduce a wait-and-chase scheme that models the contact times between moving agents within a connectionist construct. The idea that elementary processors move within a network to get a proper position is borne out both by biological neurons in brain morphogenesis and by agents within social networks. From the former, we take inspiration to devise a medium-term project for new artificial neural network training procedures where mobile neurons exchange data only when they are close to one another in a proper space (are in contact). From the latter, we draw on accumulated mobility-track experience. We focus on the preliminary step of characterizing the elapsed time between neuron contacts, which results from a spatial process fitting in the family of random processes with memory, where chasing neurons are stochastically driven by the goal of hitting target neurons. Thus, we add an unprecedented mobility model to the literature in the field, introducing a distribution law of the intercontact times that merges features of both negative exponential and Pareto distribution laws. We give a constructive description and implementation of our model, as well as a short analytical form whose parameters are suitably estimated in terms of confidence intervals from experimental data. Numerical experiments show the model and related inference tools to be sufficiently robust to cope with two main requisites for its exploitation in a neural network: the nonindependence of the observed intercontact times and the feasibility of the model inversion problem to infer suitable mobility parameters. 2011 * 212(<-557): A neural stochastic multiscale optimization framework for sensor-based parameter estimation This work presents a novel neural stochastic optimization framework for reservoir parameter estimation that combines two independent sources of spatial and temporal data: oil production data and dynamic sensor data of flow pressures and concentrations. A parameter estimation procedure is realized by minimizing a multi-objective mismatch function between observed and predicted data. In order to be able to efficiently perform large-scale parameter estimations, the parameter space is decomposed in different resolution levels by means of the singular value decomposition (SVD) and a wavelet upscaling process. The estimation is carried out incrementally from low to higher resolution levels by means of a neural stochastic multilevel optimization approach. At a given resolution level, the parameter space is globally explored and sampled by the simultaneous perturbation stochastic approximation (SPSA) algorithm. The sampling yielded by SPSA serves as training points for an artificial neural network that allows for evaluating the sensitivity of different multi-objective function components with respect to the model parameters. The proposed approach may be suitable for different engineering and scientific applications wherever the parameter space results from discretizing a set of partial differential equations on a given spatial domain.
2007 * 213(<-167): Spatial modelling of site suitability assessment for hospitals using geographical information system-based multicriteria approach at Qazvin city, Iran Due to population growth and the continuous migration of people from rural to urban areas, it is important to identify suitable locations for future development and to find suitable sites for various kinds of facilities, such as schools, hospitals and fire stations, in new and existing urban areas. Site suitability modelling is a complex process involving various kinds of objectives and issues. Such a complex process includes spatial analysis, the use of several decision support tools such as high-spatial-resolution remotely sensed data, geographical information systems (GIS) and multi-criteria analysis (MCA) such as the analytical hierarchy process (AHP), and in some cases, prediction techniques like cellular automata (CA) or artificial neural networks (ANN). This paper presents a comparison between the results of AHP and the ordinary least square (OLS) evaluation model, based on various criteria, to select suitable sites for new hospitals in Qazvin city, Iran. Based on the obtained results, proximity to populated areas (0.3) and distance to air-polluted areas (0.23-0.26) were the two most important criteria, with the highest weight values. The results show that these two techniques not only have similarity in size (in m(2)) for each suitability class but also have similarity in the spatial distribution of each class in the entire study area. Based on calculations of both techniques, 1-2%, 25%, 40-43%, 16-20% and 14% of the study area are assigned as 'not suitable', 'less suitable', 'moderately suitable', 'suitable' and 'most suitable' areas for construction of new hospitals. Results revealed that a 75% similarity was found in the distribution of suitability classes in Qazvin city using both techniques. Nineteen per cent (19%) of the study area is assigned as 'suitable' and 'most suitable' by both methods, so these areas can be considered as safe or secure areas for clinical purposes. Moreover, almost all (99.8%) suitable areas are located in district 3, because of its higher population, fewer existing hospitals and large number of barren land plots of acceptable size. 2014 * 214(<-230): Evaluation of several rainfall products used for hydrological applications over West Africa using two high-resolution gauge networks The evaluation of rainfall products over the West African region will be an important component of the Megha-Tropiques (MT) Ground Validation (GV) plan. In this paper, two dense research gauge networks from Benin and Niger, integrated in the MT GV plan, are presented and are used to evaluate several currently available global or regional satellite-based rainfall products. Eight products are compared to the ground reference: the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), Climate Prediction Center Morphing method (CMORPH), Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 real-time and gauge-adjusted versions, Global Satellite Mapping of Precipitation (GSMaP), Climate Prediction Center (CPC) African Rainfall Estimate (RFE), Estimation des Precipitation par SATellite (EPSAT), and Global Precipitation Climatology Project One Degree Daily estimate (GPCP-1DD). The comparisons are carried out at daily, 1-degree resolution, over the rainy season (June-September), between the years 2003 and 2010.
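The AHP step in entry 213 derives criterion weights from a pairwise comparison matrix via its principal eigenvector, with a consistency check; a minimal sketch with hypothetical judgements (NumPy assumed; Saaty's random index for a 3x3 matrix is 0.58):

#+BEGIN_SRC python
# AHP: weights from the principal eigenvector of a pairwise comparison matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],    # hypothetical judgements for three criteria,
              [1/3, 1.0, 2.0],    # e.g. population proximity, air pollution,
              [1/5, 1/2, 1.0]])   # and land availability

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
cr = ci / 0.58                         # CR < 0.1 is conventionally acceptable
print("weights:", weights.round(3), "CR:", round(cr, 3))
#+END_SRC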
The work focuses on the ability of the various products to reproduce salient features of the rainfall regime that impact the hydrological response. The products are analysed on a multi-criteria basis, focusing in particular on the way they distribute the rainfall within the season and by rain rate class. Standard statistical diagnoses such as the correlation coefficient, bias, root mean square error and Nash skill score are computed, and the inter-annual variability is documented. Two simplified hydrological models are used to illustrate how the nature and structure of the product error impact the model output in terms of runoff (calculated using the Soil Conservation Service method, SCS, in Niger) or outflow (calculated with the 'modele du Genie Rural a 4 parametres Journalier', GR4J model, in Benin). Copyright (c) 2013 Royal Meteorological Society 2013 * 215(<-503): Regionalisation of hydrological model parameters under parameter uncertainty: A case study involving TOPMODEL and basins across the globe In this paper, we present a method to account for modeling uncertainties while regionalising model parameters. Linking model parameters to physical catchment attributes is a popular approach that enables the application of a conceptual model to an ungauged site. The functional relationship can be derived either from the calibrated model parameters (direct calibration method) or by calibrating the functional relationship itself (regional calibration method). Both of these approaches are explored through a case study involving TOPMODEL and a number of small- to medium-sized humid basins located in various geographic and climatic regions around the globe. The predictive performance of the functional relationship derived using the direct calibration method (e.g., multiple regression, artificial neural network and partial least squares regression) varied among the different schemes. However, the average of the model parameters estimated from regionalisation schemes based on direct calibration is found to be a better surrogate. Even with the use of a parsimonious hydrological model and with posing model calibration as a multi-objective problem, the model parameter uncertainty and its effect on model prediction were observed to be high and varied among the basins. Therefore, to avoid the effect of model parameter uncertainty on regionalisation results, a regional calibration method that skips direct calibration of the hydrological model was implemented. This method was improved in order to take into account multiple objective criteria while calibrating regional parameters. The predictive performance of the improved regional calibration method was found to be superior to the direct calibration method, indicating that the identifiability of model parameters has an apparent effect on deriving predictive models for regionalisation. However, the regional calibration method was unable to uniquely identify the regional relationship, and the modeling uncertainties quantified using Pareto optimal regional relationships were considerable. Regionalisation schemes that are based on direct calibration do not explicitly account for the modeling uncertainties. Therefore, to account for these uncertainties in model parameters and regionalisation schemes, methods based on regionalisation of vectors of model parameters (i.e. regionalising the vectors of equally likely values of model parameters) and posterior probability distribution of model parameters (i.e.
estimating the posterior probability distribution of model parameters at ungauged sites by linking the entries of the model parameters' covariance matrix and the posterior mean of the model parameters to the catchment attributes) are introduced. The uncertainties in model prediction as quantified from both methods closely followed the prediction uncertainties quantified from calibrated posterior probability distributions of model parameters. Moreover, though the prediction uncertainties associated with the regional calibration method as quantified from the Pareto optimal regional relationship were comparatively higher than those obtained from the direct calibration schemes, they were in close agreement with the prediction uncertainties quantified from the calibrated posterior probability distribution. The ensemble of simulated flows realized from the model parameters sampled from regionalized posterior probability distributions for five ungauged basins is also presented as validation of the proposed methodology. (c) 2008 Elsevier B.V. All rights reserved. 2008 * 216(<- 19): Neural network river forecasting through baseflow separation and binary-coded swarm optimization The inclusion of expert knowledge in data-driven streamflow modeling is expected to yield more accurate estimates of river quantities. Modular models (MMs) designed to work on different parts of the hydrograph are preferred ways to implement such an approach. Previous studies have suggested that better predictions of total streamflow could be obtained via modular Artificial Neural Networks (ANNs) trained to perform an implicit baseflow separation. These MMs fit separately the baseflow and excess flow components as produced by a digital filter, and reconstruct the total flow by adding these two signals at the output. The optimization of the filter parameters and ANN architectures is carried out through global search techniques. Despite the favorable premises, the real effectiveness of such MMs has been tested only on a few case studies, and the quality of the baseflow separation they perform has never been thoroughly assessed. In this work, we compare the performance of MMs against global models (GMs) for nine different gaging stations in the northern United States. Binary-coded swarm optimization is employed for the identification of filter parameters and model structure, while Extreme Learning Machines, instead of ANN, are used to drastically reduce the large computational times required to perform the experiments. The results show that there is no evidence that MMs outperform GMs for predicting the total flow. In addition, the baseflow produced by the MMs largely underestimates the actual baseflow component expected for most of the considered gages. This occurs because the values of the filter parameters maximizing overall accuracy do not reflect the geological characteristics of the river basins. The results indeed show that setting the filter parameters according to expert knowledge results in accurate baseflow separation but lower accuracy of total flow predictions, suggesting that these two objectives are intrinsically conflicting rather than compatible. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 217(<-449): Multi-objective turbomachinery optimization using a gradient-enhanced multi-layer perceptron Response surface models (RSMs) have found widespread use in reducing the overall computational cost of turbomachinery blading design optimization.
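The abstract of entry 216 does not spell out its digital filter, but a common one-parameter recursive form (the Lyne-Hollick filter; alpha = 0.925 is a commonly quoted default, not necessarily the study's value) serves as a sketch of the baseflow separation being optimized:

#+BEGIN_SRC python
# One-parameter recursive digital filter (Lyne-Hollick form) for baseflow separation.
import numpy as np

def baseflow_separation(q, alpha=0.925):
    quick = np.zeros_like(q)
    for t in range(1, q.size):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])   # constrain quickflow to [0, Q]
    return q - quick                               # baseflow = total flow - quickflow

q = np.array([5.0, 5.2, 9.0, 20.0, 14.0, 9.5, 7.0, 6.0, 5.5])
print(baseflow_separation(q).round(2))
#+END_SRC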
Recent developments have seen the successful use of gradient information alongside sampled response values in building accurate response surfaces. This paper describes the use of gradients to enhance the performance of the RSM provided by a multi-layer perceptron. Gradient information is included in the perceptron by modifying the error function such that the perceptron is trained to fit the gradients as well as the response values. As a consequence, the back-propagation scheme that assists the training is also changed. The paper formulates the gradient-enhanced multi-layer perceptron using algebraic notation, with an emphasis on the ease of use and efficiency of computer code implementation. To illustrate the benefit of using gradient information, the enhanced neural network model is used in a multi-objective transonic fan blade optimization exercise of engineering relevance. Copyright (C) 2008 John Wiley & Sons, Ltd. 2009 * 218(<-472): Stochastic sampling design using a multi-objective genetic algorithm and adaptive neural networks This paper presents a novel multi-objective genetic algorithm (MOGA) based on the NSGA-II algorithm, which uses metamodels to determine optimal sampling locations for installing pressure loggers in a water distribution system (WDS) when parameter uncertainty is considered. The new algorithm combines the multi-objective genetic algorithm with adaptive neural networks (MOGA-ANN) to locate pressure loggers. The purpose of pressure logger installation is to collect data for hydraulic model calibration. Sampling design is formulated as a two-objective optimization problem in this study. The objectives are to maximize the calibrated model accuracy and to minimize the number of sampling devices as a surrogate of sampling design cost. Calibrated model accuracy is defined as the average of normalized traces of model prediction covariance matrices, each of which is constructed from a randomly generated sampling set of calibration parameter values. This method of calculating model accuracy is called the 'full' fitness model. Within the genetic algorithm search process, the full fitness model is progressively replaced with the periodically (re)trained adaptive neural network metamodel, where (re)training is done using the data collected by calling the full model. The methodology was first tested on a hypothetical (benchmark) problem to configure the setting requirements. Then the model was applied to a real case study. The results show that significant computational savings can be achieved by using the MOGA-ANN when compared to the approach where MOGA is linked to the full fitness model. When applied to the real case study, optimal solutions identified by MOGA-ANN are obtained 25 times faster than those identified by the full model, without significant decrease in the accuracy of the final solution. (C) 2008 Elsevier Ltd. All rights reserved. 2009 * 219(<-483): Improved irrigation water demand forecasting using a soft-computing hybrid model Recently, Computational Neural Networks (CNNs) and fuzzy inference systems have been successfully applied to time series forecasting. In this study, the performance of a hybrid methodology combining feed-forward CNNs, fuzzy logic and a genetic algorithm to forecast one-day-ahead daily water demands at irrigation districts was analysed, considering that only flows in previous days are available for the calibration of the models.
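The error-function modification in entry 217 amounts to adding a gradient-mismatch term to the usual squared-error loss; a toy PyTorch sketch (the 1-D target, network size and 0.5 weighting are illustrative assumptions, not the paper's setup):

#+BEGIN_SRC python
# Gradient-enhanced training: fit responses and their sampled gradients jointly.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-2, 2, 64).unsqueeze(1).requires_grad_(True)
y_true = torch.sin(x).detach()       # sampled response values
dy_true = torch.cos(x).detach()      # sampled gradients

for _ in range(500):
    opt.zero_grad()
    y = net(x)
    dy = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
    loss = ((y - y_true) ** 2).mean() + 0.5 * ((dy - dy_true) ** 2).mean()
    loss.backward()
    opt.step()
print("final combined loss:", float(loss))
#+END_SRC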
Individual forecasting models were developed using historical time series data from the Fuente Palmera irrigation district located in Andalucia, southern Spain. These models included univariate autoregressive CNNs trained with the Levenberg-Marquardt algorithm (LM). The individual models' forecasts were then corrected via a fuzzy logic approach whose parameters were adjusted using a genetic algorithm in order to improve the forecasting accuracy. For the purpose of comparison, this hybrid methodology was also applied with univariate autoregressive CNN models trained with the Extended-Delta-Bar-Delta algorithm (EDBD) and calibrated in a previous study in the same irrigation district. A multicriteria evaluation with several statistics and absolute error measures showed that the hybrid model performed significantly better than univariate and multivariate autoregressive CNNs. (C) 2008 IAgrE. Published by Elsevier Ltd. All rights reserved. 2009 * 220(<-537): GIS and neural network method for potential road identification Global positioning system (GPS)-based vehicle tracking systems were used to track 20 vehicles involved in an 8-day field training exercise at Yakima Training Center, Washington. A 3-layer feed-forward artificial neural network (NN) with a backpropagation learning algorithm was developed to identify potential roads. The NN was trained using a subset of the GPS data that was supplemented with field observations documenting newly formed road segments resulting from concentrated vehicle traffic during the military training exercise. The NN was subsequently applied to the full vehicle movement data set to predict potential roads for the entire training exercise. Model predictions were validated using additional installation and site visit data. The first validation used the NN to identify the existing road network as represented in the Yakima Training Center GIS roads data layer. Next, the NN was used to predict emerging road networks that had not previously existed. The NN method accurately classified approximately 94% of the training data, 85% of the on-road movement data, and 78% of potential roads. The proposed NN method more accurately classified potential roads than the previously used multicriteria method, which was able to identify 10 out of 17 potential road segments across the entire training center. 2007 * 221(<-442): On the possibility of non-invasive multilayer temperature estimation using soft-computing methods Objective and motivation: This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the security and efficacy of thermal therapies, which would lead to a broader acceptance of this kind of therapy. Several approaches based on medical imaging technologies were proposed, with magnetic resonance imaging (MRI) regarded as the only one achieving the temperature resolution acceptable for hyperthermia purposes. However, intrinsic characteristics of MRI (e.g., high instrumentation cost) led us to use backscattered ultrasound (BSU). Among the different BSU features, temporal echo-shifts have received major attention. These shifts are due to changes of speed-of-sound and expansion of the medium.
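Entry 220's road classifier is a plain three-layer feed-forward network trained by backpropagation; a compact stand-in on synthetic per-cell traffic features (scikit-learn assumed, feature names hypothetical):

#+BEGIN_SRC python
# Feed-forward backprop classifier for 'potential road' vs 'no road' cells.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 3))        # e.g. pass count, mean speed, heading spread
y = (1.5 * X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
#+END_SRC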
Novelty aspects: The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods), and the application to temperature estimation in a three-layer phantom is also not reported in the literature. Materials and methods: In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder in the amount of 2% of the water weight to the above composition. The phantom was developed to have attenuation and speed-of-sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and the past temperature values were then considered as possible estimator inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial basis function neural networks (RBFNN) with structure optimized by the multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were considered, i.e. five 5-mm-spaced spatial points and eight therapeutic intensities (I(SATA)): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0 W/cm(2). Models were trained and selected to estimate temperature at only four intensities; then, during the validation phase, the best-fitted models were analyzed on data collected at all eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. Results and discussion: At the end of the identification phase, 82 (preferable) estimator models were achieved. The majority of them present an average maximum absolute error (MAE) below 0.5 degrees C. The best-fitted estimator presents a MAE of only 0.4 degrees C across all 40 operating conditions. This means that the gold-standard maximum error (0.5 degrees C) specified for hyperthermia was met independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. Like the majority of the preferable estimator models, the best one has 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models also present reduced computational complexity and thus the possibility of being applied in real time. Conclusions: A non-invasive temperature estimation model, based on a soft-computing technique, was proposed for a three-layered phantom. The best-achieved estimator models presented an appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the necessity of a priori knowledge of the temperature behavior. Data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment. (C) 2009 Elsevier B.V. All rights reserved. 2010 * 222(<-577): Non-invasive temperature prediction of in vitro therapeutic ultrasound signals using neural networks In this paper, a novel black-box modelling scheme applied to non-invasive temperature prediction in a homogeneous medium subjected to therapeutic ultrasound is presented.
It is assumed that the temperature at a point of the medium is non-linearly related to some spectral features and one temporal feature, extracted from the collected RF-lines. The black-box models used are radial basis function neural networks (RBFNNs), where the best-fitted models were selected from the space of model structures using a genetic multi-objective strategy. The best-fitted predictive model presents a maximum absolute error of less than 0.4 degrees C over a prediction horizon of approximately 2 h, in an unseen data sequence. This work demonstrates that this type of black-box model is well-suited for punctual and non-invasive temperature estimation, achieving, for a single-point estimation, better results than the ones presented in the literature, encouraging research on multi-point non-invasive temperature estimation. 2006 * 223(<-225): Comparison of Artificial Neural Network Methods with L-moments for Estimating Flood Flow at Ungauged Sites: the Case of East Mediterranean River Basin, Turkey A regional flood frequency analysis based on the index flood method is applied using probability distributions commonly utilized for this purpose. The distribution parameters are calculated by the method of L-moments with the data of the annual flood peaks series recorded at gauging sections of 13 unregulated natural streams in the East Mediterranean River Basin in Turkey. The artificial neural network (ANN) models of (1) multi-layer perceptron (MLP) neural networks, (2) radial basis function based neural networks (RBNN), and (3) generalized regression neural networks (GRNN) are developed as alternatives to the L-moments method. Multiple-linear and multiple-nonlinear regression models (MLR and MNLR) are also used in the study. The L-moments analysis on these 13 annual flood peaks series indicates that the East Mediterranean River Basin is hydrologically homogeneous as a whole. Among the distributions tried, which are the Generalized Logistic, Generalized Extreme Values, Generalized Normal, Pearson Type III, Wakeby, and Generalized Pareto, the Generalized Logistic and Generalized Extreme Values distributions pass the Z-statistic goodness-of-fit test of the L-moments method for the East Mediterranean River Basin, the former performing better than the latter. Hence, as the outcome of the L-moments method applied with the Generalized Logistic distribution, two equations are developed to estimate flood peaks of any return period for any ungauged site in the study region. The ANN, MLR and MNLR models are trained and tested using the data of these 13 gauged sites. The results show that the predictive performance of the MLP model is superior to the others. The application of the MLP model is performed by a special Matlab code, which yields the logarithm of the flood peak, Ln(Q(T)), versus a desired return period, T. 2013 * 224(<-240): Prediction of the baseline toxicity of non-polar narcotic chemical mixtures by QSAR approach Environmental contaminants are frequently encountered as mixtures, and research on mixture toxicity remains a hot topic. In the present study, the mixture toxicity of non-polar narcotic chemicals was modeled by linear and nonlinear statistical methods, that is to say, by forward stepwise multilinear regression (MLR) and radial basis function neural networks (RBFNNs), from molecular descriptors that are calculated and defined as composite descriptors according to the fractional concentrations of the mixture components.
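The sample L-moments at the core of entry 223's regional analysis follow from probability-weighted moments of the ordered sample; a sketch (NumPy assumed; the Gumbel sample stands in for an annual flood peak series):

#+BEGIN_SRC python
# Sample L-moments l1..l4 from probability-weighted moments b0..b3.
import numpy as np
from math import comb

def l_moments(sample):
    x = np.sort(sample)
    n = x.size
    b = [sum(comb(j, r) / comb(n - 1, r) * x[j] for j in range(r, n)) / n
         for r in range(4)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4

peaks = np.random.default_rng(5).gumbel(loc=100, scale=30, size=60)
l1, l2, l3, l4 = l_moments(peaks)
print("L-CV:", round(l2 / l1, 3), "L-skewness:", round(l3 / l2, 3))
#+END_SRC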
The statistical parameters provided by the MLR model were R-2 = 0.9512, RMS = 0.3792, F = 1402.214 and LOOq(2) = 0.9462 for the training set, and R-2 = 0.9453, RMS = 0.3458, F = 276.671 and q(ext)(2) = 0.9450 for the external test set. The RBFNN model gave the following statistical results, namely: R-2 = 0.9779, RMS = 0.2498, F = 3188.202 and LOOq(2) = 0.9746 for the training set, and R-2 = 0.9763, RMS = 0.2358, F = 660.631 and q(ext)(2) = 0.9745 for the external test set. Overall, these results suggest that the QSAR MLR-based model is a simple, reliable, credible and fast tool for the prediction of the mixture toxicity of non-polar narcotic chemicals. The RBFNN model gave even better results. In addition, epsilon(LUMO+1) (the energy of the second lowest unoccupied molecular orbital) and PPSA (total charge weighted partial positive surface area) were found to have high correlation with the mixture toxicity. (c) 2012 Elsevier Ltd. All rights reserved. 2013 * 225(<-249): Landslide susceptibility estimation by random forests technique: sensitivity and scaling issues Despite the large number of recent advances and developments in landslide susceptibility mapping (LSM), there is still a lack of studies focusing on specific aspects of LSM model sensitivity. For example, the influence of factors such as the survey scale of the landslide conditioning variables (LCVs), the resolution of the mapping unit (MUR) and the optimal number and ranking of LCVs has never been investigated analytically, especially on large data sets. In this paper we attempt this experimentation, concentrating on the impact of model tuning choices on the final result, rather than on the comparison of methodologies. To this end, we adopt a simple implementation of the random forest (RF), a machine learning technique, to produce an ensemble of landslide susceptibility maps for a set of different model settings, input data types and scales. Random forest is a combination of Bayesian trees that relates a set of predictors to the actual landslide occurrence. Being a nonparametric model, it makes it possible to incorporate a range of numerical or categorical data layers, and there is no need to select unimodal training data as, for example, in linear discriminant analysis. Many widely acknowledged landslide predisposing factors are taken into account, mainly related to the lithology, the land use, the geomorphology, and the structural and anthropogenic constraints. In addition, for each factor we also include in the predictors set a measure of the standard deviation (for numerical variables) or the variety (for categorical ones) over the map unit. As in other systems, the use of RF enables one to estimate the relative importance of the single input parameters and to select the optimal configuration of the classification model. The model is initially applied using the complete set of input variables; then an iterative process is implemented and progressively smaller subsets of the parameter space are considered. The impact of scale and accuracy of input variables, as well as the effect of the random component of the RF model on the susceptibility results, are also examined. The model is tested in the Arno River basin (central Italy). We find that the dimension of the parameter space, the mapping unit (scale) and the training process strongly influence the classification accuracy and the prediction process.
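The LOO q(2) statistic quoted above is one minus the leave-one-out PRESS over the total sum of squares; a sketch with synthetic descriptors (scikit-learn assumed):

#+BEGIN_SRC python
# Leave-one-out q^2 for a linear QSAR-style model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 3))                    # hypothetical composite descriptors
y = X @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=40)

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print("LOO q2:", round(q2, 4))
#+END_SRC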
This, in turn, implies that a careful sensitivity analysis making use of traditional and new tools should always be performed before producing final susceptibility maps at all levels and scales. 2013 * 226(<-423): Neural Networks for the Prediction of Species-Specific Plot Volumes Using Airborne Laser Scanning and Aerial Photographs Parametric and nonparametric modeling methods have been widely used for the estimation of forest attributes from airborne laser-scanning data and aerial photographs. However, the methods adopted suffered from complex remote-sensed data structures involving high dimensions, nonlinear relationships, different statistical distributions, and outliers. In this context, artificial neural networks (ANNs) are of interest as they have many clear benefits over conventional modeling methods and could thus enhance the accuracy of current forest-inventory methods. This paper examines the ability of common ANN modeling techniques for the prediction of species-specific forest attributes, as exemplified here with the prediction of stem volumes (cubic meters per hectare) at the field plot and forest stand levels. Three modeling methods were evaluated, namely, the multilayer perceptron (MLP), support vector regression (SVR), and self-organizing map, and intercompared with the corresponding nonparametric k most similar neighbor method using cross-validated statistical performance indexes. To decrease the number of model-input variables, a multiobjective input-selection method based on a genetic algorithm is adopted. The numerical results obtained in the study suggest that ANNs are appropriate and accurate methods for the assessment of species-specific forest attributes, which can be used as alternatives to multivariate linear regression and nonparametric nearest neighbor models. Among the ANN models, SVR and MLP provide the best choices for prediction purposes as they yielded high prediction accuracies for species-specific tree volumes throughout. 2010 * 227(<-505): Learning based brain emotional intelligence as a new aspect for development of an alarm system The multi-criteria and purposeful prediction approach is introduced and implemented by the fast and efficient behavior-based brain emotional learning method. On the other hand, the brain emotional learning model has shown good performance and is characterized by a high generalization capability. The new approach is developed to deal with low computational and memory resources and can be used with the largest available data sets. The scope of the paper is to reveal the advantages of emotional learning interpretations of the brain as a purposeful forecasting system designed for warning, and to make a fair comparison between the successful neural (MLP) and neurofuzzy (ANFIS) approaches in their best structures according to prediction accuracy, generalization, and computational complexity. The auroral electrojet (AE) index is used as a practical example of a chaotic time series, and the introduced method is used to make predictions and warnings of geomagnetic disturbances and geomagnetic storms based on the AE index. 2008 * 228(<-523): Incorporating anthropogenic variables into a species distribution model to map gypsy moth risk This paper presents a novel methodology for multi-scale and multi-type spatial data integration in support of insect pest risk/vulnerability assessment in the contiguous United States. Probability of gypsy moth (Lymantria dispar L.) establishment is used as a case study.
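Entry 225's progressive shrinking of the predictor space can be mimicked with random-forest importances driving the elimination; scikit-learn assumed, data synthetic:

#+BEGIN_SRC python
# Iteratively drop the least important predictor, re-fitting and re-scoring.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.normal(size=500) > 0).astype(int)

keep = list(range(X.shape[1]))
while len(keep) > 2:
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)
    acc = cross_val_score(rf, X[:, keep], y, cv=5).mean()
    print(len(keep), "predictors, CV accuracy:", round(acc, 3))
    order = np.argsort(rf.feature_importances_)   # least important first
    keep = [keep[i] for i in order[1:]]           # drop the weakest predictor
#+END_SRC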
A neural network facilitates the integration of variables representing dynamic anthropogenic interaction and ecological characteristics. Neural network model (back-propagation network [BPN]) results are compared to logistic regression and multi-criteria evaluation via weighted linear combination, using the receiver operating characteristic area under the curve (AUC) and a simple threshold assessment. The BPN provided the most accurate infestation-forecast predictions, producing an AUC of 0.93, followed by multi-criteria evaluation (AUC = 0.92) and logistic regression (AUC = 0.86), when independently validated using post-model infestation data. Results suggest that BPN can provide valuable insight into factors contributing to the introduction of invasive species whose propagation and establishment requirements are not fully understood. The integration of anthropogenic and ecological variables allowed production of an accurate risk model and provided insight into the impact of human activities. (C) 2007 Elsevier B.V. All rights reserved. 2008 * 229(<-525): Study of the potential of alternative crops by integration of multisource data using a neuro-fuzzy technique This work proposes a neuro-fuzzy method for suggesting alternative crop production over a region using integrated data obtained from land-survey maps as well as satellite imagery. The methodology proposed here uses an artificial neural network (multilayer perceptron, MLP) to predict alternative crop production. For each pixel, the MLP takes vector input comprising elevation, rainfall and goodness values of different existing crops. The first two components of the aforementioned input, that is, elevation and rainfall, are determined from contour information of land-survey maps. The other components, such as goodness values of different existing crops, are based on the productivity estimates of soil determined by fuzzification and expert opinion (on soil), along with production quality given by the Normalized Difference Vegetation Index (NDVI) obtained from satellite imagery. The methodology attempts to ensure that the suggested crop will also be a high-productivity crop for that region. 2008 * 230(<-595): Evaluation of an integrated modelling system containing a multi-layer perceptron model and the numerical weather prediction model HIRLAM for the forecasting of urban airborne pollutant concentrations In this paper, a multi-layer perceptron (MLP) model and the Finnish variant of the numerical weather prediction model HIRLAM (High Resolution Limited Area Model) were integrated and evaluated for the forecasting in time of urban pollutant concentrations. The forecasts of the combination of the MLP and HIRLAM models are compared with the corresponding forecasts of the MLP models that utilise meteorologically pre-processed input data. A novel input selection method based on the use of a multi-objective genetic algorithm (MOGA) is applied in conjunction with the sensitivity analysis to reduce the excessively large number of potential meteorological input variables; its use improves the performance of the MLP model. The computed air quality forecasts contain the sequential hourly time series of the concentrations of nitrogen dioxide (NO2) and fine particulate matter (PM2.5) from May 2000 to April 2003; the corresponding concentrations have also been measured at two urban air quality stations in Helsinki. The results obtained with the MLP models that use HIRLAM forecasts show fairly good overall agreement for both pollutants.
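Entry 229 folds NDVI into its crop-goodness inputs; the index itself is a per-pixel one-liner (NumPy assumed, reflectance values hypothetical):

#+BEGIN_SRC python
# NDVI = (NIR - Red) / (NIR + Red), per pixel.
import numpy as np

nir = np.array([[0.55, 0.60], [0.42, 0.30]])   # near-infrared reflectance
red = np.array([[0.10, 0.08], [0.20, 0.25]])   # red reflectance
ndvi = (nir - red) / (nir + red)
print(ndvi.round(3))                           # values near +1: dense green vegetation
#+END_SRC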
The model performance is substantially better when the HIRLAM forecasts are used, compared with that obtained using either HIRLAM analysis data or a meteorological pre-processor, for both pollutants. The performance of the currently widely used statistical forecasting methods (such as those based on neural networks) could therefore be significantly improved by using the forecasts of NWP models, instead of the conventionally utilised directly measured or meteorologically pre-processed input data. However, the performance of all operational models considered is relatively worse in the course of air pollution episodes. (c) 2005 Elsevier Ltd. All rights reserved. 2005 * 231(<- 18): Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single-objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a MBFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 232(<- 34): Improving event-based rainfall-runoff simulation using an ensemble artificial neural network based hybrid data-driven model An ensemble artificial neural network (ENN) based hybrid function approximator (named PEK), integrating a partial mutual information (PMI) based separate input variable selection (IVS) scheme, ENN-based output estimation, and K-nearest neighbor regression based output error estimation, has been proposed to improve event-based rainfall-runoff (RR) simulation. A hybrid data-driven RR model, named non-updating PEK (NU-PEK), is also developed on the basis of the PEK approximator. The rainfall and simulated antecedent discharge input variables for the NU-PEK model are selected separately by using a PMI-based IVS algorithm.
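Entries 231 and 232 both rest on information-theoretic input variable selection; a simplified stand-in ranks candidate rainfall lags by plain mutual information (true PMI additionally conditions on already-selected inputs via residuals; scikit-learn assumed):

#+BEGIN_SRC python
# Rank candidate rainfall lags by mutual information with runoff.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(8)
rain = rng.gamma(2.0, 2.0, size=600)
runoff = 0.6 * np.roll(rain, 1) + 0.3 * np.roll(rain, 2) + 0.1 * rng.normal(size=600)

lags = np.column_stack([np.roll(rain, k) for k in range(1, 7)])   # candidate inputs
mi = mutual_info_regression(lags[10:], runoff[10:], random_state=0)
print({f"lag {k}": round(v, 3) for k, v in enumerate(mi, start=1)})  # lags 1-2 dominate
#+END_SRC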
A new candidate rainfall input set, sliding-window cumulative rainfall, is also proposed. These two methods are integrated to make a good compromise between the adequacy and parsimony of the input information and contribute to the understanding of the hydrologic responses to the regional precipitation. The number of component networks and the topology and parameter settings of each component network are optimized simultaneously by using the multi-objective NSGA-II optimization algorithm and the early-stopping Levenberg-Marquardt algorithm. The optimal combination weights of the ENN are obtained according to the Akaike information criteria of the component networks. By combining all these methods, the simulation accuracy and generalization property of the PEK approximator are much better than those of a traditional artificial neural network. The NU-PEK model is constructed by combining the PEK approximator with a newly proposed non-updating modeling approach to improve event-based RR simulation. The NU-PEK model was applied to three Chinese catchments for RR simulation and compared with two popular RR models, including the conceptual Xinanjiang model and the conceptual-data-driven IHACRES model. The results of simulation and sensitivity analysis indicate that the developed model generally outperforms the other two models. The NU-PEK model is capable of producing high-accuracy non-updating RR simulation without the use of real-time information, e.g. the observed discharges at previous time steps. 2015 * 233(<-103): Improved Neural Network Model and Its Application in Hydrological Simulation When applying a back-propagation neural network (BPNN) model in hydrological simulation, researchers generally face three problems. The first one is that a real-time correction mode must be adopted when forecasting basin outlet flow, i.e., observed antecedent outlet flows must be utilized as part of the inputs of the BPNN model. Under this mode, outlet flow can only be forecasted one time step ahead, i.e., continuous simulation cannot be implemented. The second one is that the topology, weights, and biases of a BPNN cannot be optimized simultaneously by traditional training methods. Topology designed by the trial-and-error method and weights and biases trained by the back-propagation (BP) algorithm are not always globally optimal, and the optimizations are experience-based. The third one is that simulation accuracy for the validation period is usually much lower than that for the calibration period, i.e., the generalization property of the BPNN is not good. To solve these problems, a novel coupled black-box model named BK (BP-KNN) and a new methodology of calibration are proposed in this paper. The BK model was developed by coupling the BPNN model with the K-nearest neighbor (KNN) algorithm. Unlike the traditional BPNN model previously reported, the BK model implemented continuous simulation under a non-real-time correction mode. Observed antecedent outlet flows were substituted by simulated values. The simulated values were calculated by the BPNN model first and then corrected based on the KNN algorithm, historical simulation error, and other relevant factors. According to the calculation process, parameters of the BK model were divided into three hierarchies and each hierarchy was calibrated in turn by the NSGA-II multi-objective optimization algorithm. This new methodology of calibration ensured higher accuracy and efficiency, and enhanced the generalization property of the BPNN.
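The BK coupling described above, where KNN corrects the network output from historical simulation errors, can be sketched as follows; scikit-learn assumed, with the base learner and neighbour count as illustrative choices:

#+BEGIN_SRC python
# Correct a base model's predictions by the mean error of its K nearest neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=400)

base = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
base.fit(X[:300], y[:300])
resid = y[:300] - base.predict(X[:300])          # historical simulation errors

nn = NearestNeighbors(n_neighbors=5).fit(X[:300])
idx = nn.kneighbors(X[300:], return_distance=False)
corrected = base.predict(X[300:]) + resid[idx].mean(axis=1)
print("RMSE raw:      ", np.sqrt(np.mean((y[300:] - base.predict(X[300:])) ** 2)))
print("RMSE corrected:", np.sqrt(np.mean((y[300:] - corrected) ** 2)))
#+END_SRC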
The accuracy of the flow concentration module of the Xinanjiang model is not always high enough; in order to combine the advantages of conceptual and black-box models, XBK and XSBK models were proposed. The XBK model was constituted by coupling the runoff generation module of the Xinanjiang model with the BK flow concentration model, and the XSBK model by coupling the runoff generation and separation module of the Xinanjiang model with the BK flow concentration model. The BK, XBK, XSBK, and Xinanjiang models were applied in the Chengcun, Dongwan, and Dage watersheds. The simulation results indicated that the improved models obtained higher accuracies than the Xinanjiang model and overcame the limitations of the traditional BPNN model. (C) 2014 American Society of Civil Engineers. 2014 * 234(<-349): Evaluation of modelling techniques for forest site productivity prediction in contrasting ecoregions using stochastic multicriteria acceptability analysis (SMAA) Accurate estimation of site productivity is crucial for sustainable forest resource management. In recent years, a variety of modelling approaches have been developed and applied to predict site index from a wide range of environmental variables, with varying success. The selection, application and comparison of suitable modelling techniques therefore remains a meticulous task, subject to ongoing research and debate. In this study, the performance of five modelling techniques was compared for the prediction of forest site index in two contrasting ecoregions: the temperate lowland of Flanders, Belgium, and the Mediterranean mountains in SW Turkey. The modelling techniques include statistical (multiple linear regression - MLR, classification and regression trees - CART, generalized additive models - GAM), as well as machine-learning (artificial neural networks - ANN) and hybrid techniques (boosted regression trees - BRT). Although the selected predictor variables differed largely, with mainly topographic predictor variables in the mountain area versus soil and humus variables in the lowland area, the techniques performed comparably in both ecoregions. Stochastic multicriteria acceptability analysis (SMAA) was found to be a well-suited multicriteria evaluation method for assessing the performance of the modelling techniques. It was applied to the individual species models of Flanders, as well as in a species-independent evaluation combining all developed models from the two contrasting ecoregions. We came to the conclusion that non-parametric models are better suited for predicting site index than traditional MLR. GAM and BRT are the preferred alternatives for a wide range of weight preferences. CART is preferred when very high weight is given to user-friendliness, whereas ANN is recommended when most weight is given to pure predictive performance. (C) 2011 Elsevier Ltd. All rights reserved. 2011 * 235(<-416): Comparison and ranking of different modelling techniques for prediction of site index in Mediterranean mountain forests Forestry science has a long tradition of studying the relationship between stand productivity and abiotic and biotic site characteristics, such as climate, topography, soil and vegetation. Many of the early site quality modelling studies related site index to environmental variables using basic statistical methods such as linear regression. Because most ecological variables show a typical non-linear course and a non-constant variance distribution, a large fraction of the variation remained unexplained by these linear models.
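Several calibrations above (entries 218, 232, 233) rest on NSGA-II; its core screening step, retaining the nondominated parameter sets, is easy to state on its own (NumPy assumed; both objectives minimised):

#+BEGIN_SRC python
# Pareto screening: keep rows no other row dominates on every objective.
import numpy as np

def nondominated(F):
    keep = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominated_by.any()
    return keep

F = np.random.default_rng(10).uniform(size=(50, 2))   # (error, complexity) pairs
print(int(nondominated(F).sum()), "nondominated parameter sets out of", len(F))
#+END_SRC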
More recently, the development of more advanced non-parametric and machine learning methods provided opportunities to overcome these limitations. Nevertheless, these methods also have drawbacks. Due to their increasing complexity they are not only more difficult to implement and interpret, but also more vulnerable to overfitting. Especially in a context of regionalisation, this may prove to be problematic. Although many non-parametric and machine learning methods are increasingly used in applications related to forest site quality assessment, their predictive performance has only been assessed for a limited number of methods and ecosystems. In this study, five different modelling techniques are compared and evaluated, i.e. multiple linear regression (MLR), classification and regression trees (CART), boosted regression trees (BRT), generalized additive models (GAM), and artificial neural networks (ANN). Each method is used to model site index of homogeneous stands of three important tree species of the Taurus Mountains (Turkey): Pinus brutia, Pinus nigra and Cedrus libani. Site index is related to soil, vegetation and topographical variables, which are available for 167 sample plots covering all important environmental gradients in the research area. The five techniques are compared in a multi-criteria decision analysis in which different model performance measures, ecological interpretability and user-friendliness are considered as criteria. When combining these criteria, in most cases GAM is found to outperform all other techniques for modelling site index for the three species. BRT is a good alternative in case the ecological interpretability of the technique is of higher importance. When user-friendliness is more important, MLR and CART are the preferred alternatives. Despite its good predictive performance, ANN is penalized for its complex, non-transparent models and large training effort. (C) 2010 Elsevier B.V. All rights reserved. 2010 * 236(<-371): Two-dimensional fingerprinting approach for comparison of complex substances analysed by HPLC-UV and fluorescence detection This work is concerned with the research and development of methodology for analysis of complex mixtures such as pharmaceutical or food samples, which contain many analytes. Variously treated samples (swill-washed, fried and scorched) of the Rhizoma atractylodis macrocephalae (RAM) traditional Chinese medicine (TCM), as well as the common substitute, the Rhizoma atractylodis (RA) TCM, were chosen as examples for analysis. A combined data matrix of chromatographic 2-D HPLC-DAD-FLD (two-dimensional high performance liquid chromatography with diode array and fluorescence detectors) fingerprint profiles was constructed with the use of the HPLC-DAD and HPLC-FLD individual data matrices; the purpose was to collect maximum information and to interpret this complex data with the use of various chemometrics methods, e.g. the rank-ordering multi-criteria decision making (MCDM) PROMETHEE and GAIA, K-nearest neighbours (KNN), partial least squares (PLS), and back-propagation artificial neural networks (BP-ANN) methods. The chemometrics analysis demonstrated that the combined 2-D HPLC-DAD-FLD data matrix does indeed provide more information and facilitates better-performing classification/prediction models for the analysis of such complex samples as the RAM and RA ones noted above. It is suggested that this fingerprint approach is suitable for analysis of other complex, multi-analyte substances.
2011 * 237(<-445): Photochemistry and chemometrics - An overview Photochemistry has made significant contributions to our understanding of many important natural processes as well as the scientific discoveries of the man-made world. The measurements from such studies are often complex and may require advanced data interpretation with the use of multivariate or chemometrics methods. In general, such methods have been applied successfully for data display, classification, multivariate curve resolution and prediction in analytical chemistry, environmental chemistry, engineering, medical research and industry. However, in photochemistry, by comparison, applications of such multivariate approaches were found to be less frequent, although a variety of methods have been used, especially with spectroscopic photochemical applications. The methods include Principal Component Analysis (PCA; data display), Partial Least Squares (PLS; prediction), Artificial Neural Networks (ANN; prediction) and several models for multivariate curve resolution related to Parallel Factor Analysis (PARAFAC; decomposition of complex responses). Applications of such methods are discussed in this overview, and typical examples include photodegradation of herbicides, prediction of antibiotics in human fluids (fluorescence spectroscopy), non-destructive in- and on-line monitoring (near infrared spectroscopy) and fast-time resolution of spectroscopic signals from photochemical reactions. It is also quite clear from the literature that the scope of spectroscopic photochemistry was enhanced by the application of chemometrics. To highlight and encourage further applications of chemometrics in photochemistry, several additional chemometrics approaches are discussed using data collected by the authors. The use of a PCA biplot is illustrated with an analysis of a matrix containing data on the performance of photocatalysts developed for water splitting and hydrogen production. In addition, the applications of the Multi-Criteria Decision Making (MCDM) ranking methods and Fuzzy Clustering are demonstrated with an analysis of a water quality data matrix. Other examples of topics include the application of simultaneous kinetic spectroscopic methods for prediction of pesticides, and the use of the response fingerprinting approach for classification of medicinal preparations. In general, the overview endeavours to emphasise the advantages of chemometrics' interpretation of multivariate photochemical data, and an Appendix of references and summaries of common and less usual chemometrics methods noted in this work is provided. Crown Copyright (C) 2010 Published by Elsevier B.V. All rights reserved. 2009 * 238(<-568): Authentication of vegetable oils on the basis of their physico-chemical properties with the aid of chemometrics In food production, reliable analytical methods for confirmation of purity or degree of spoilage are required by growers, food quality assessors, processors, and consumers. Seven physico-chemical properties, namely acid number, colority, density, refractive index, moisture and volatility, saponification value and peroxide value, were measured for good-quality and adulterated soybean oils, as well as good-quality and rancid rapeseed oils.
Chemometrics methods were then applied for qualitative and quantitative discrimination and prediction of the oils by methods such as exploratory principal component analysis (PCA), partial least squares (PLS), radial basis function-artificial neural networks (RBF-ANN), and the multi-criteria decision making (MCDM) methods PROMETHEE and GAIA. In general, the soybean and rapeseed oils were discriminated by PCA, and the two spoilt oils behaved differently, with the rancid rapeseed samples exhibiting more object scatter on the PC-scores plot than the adulterated soybean oil. For the PLS and RBF-ANN prediction methods, suitable training models were devised, which were able to satisfactorily predict the category of the four different oil samples in the verification set. Rank ordering with the use of MCDM models indicated that the oil types can be discriminated on the PROMETHEE II scale. For the first time, it was demonstrated how ranking of oil objects with the use of PROMETHEE and GAIA could be utilized as a versatile indicator of quality performance of products on the basis of a standard selected by the stakeholder. In principle, this approach provides a very flexible method for assessment of product quality directly from the measured data. (c) 2006 Elsevier B.V. All rights reserved. 2006 * 239(<-145): [Genetic algorithm based multi-objective least square support vector machine for simultaneous determination of multiple components by near infrared spectroscopy]. The near infrared (NIR) spectrum contains a global signature of composition, and enables the prediction of different properties of the material. In the present paper, a genetic algorithm and an adaptive modeling technique were applied to build a multi-objective least square support vector machine (MLS-SVM), which was intended to simultaneously determine the concentrations of multiple components by NIR spectroscopy. Both the benchmark corn dataset and a self-made Forsythia suspensa dataset were used to test the proposed approach. Results show that a genetic algorithm combined with adaptive modeling allows efficient search of the LS-SVM hyperparameter space. For the corn data, the performance of the multi-objective LS-SVM was significantly better than models built with the PLS1 and PLS2 algorithms. As for the Forsythia suspensa data, the performance of the multi-objective LS-SVM was equivalent to the PLS1 and PLS2 models. In both datasets, overfitting was observed in the RBFNN models. The single-objective LS-SVM and MLS-SVM did not show much difference, but the one-time modeling convenience allows the potential application of MLS-SVM to multicomponent NIR analysis. 2014 * 240(<-146): Genetic Algorithm Based Multi-Objective Least Square Support Vector Machine for Simultaneous Determination of Multiple Components by Near Infrared Spectroscopy The near infrared (NIR) spectrum contains a global signature of composition, and enables the prediction of different properties of the material. In the present paper, a genetic algorithm and an adaptive modeling technique were applied to build a multi-objective least square support vector machine (MLS-SVM), which was intended to simultaneously determine the concentrations of multiple components by NIR spectroscopy. Both the benchmark corn dataset and a self-made Forsythia suspensa dataset were used to test the proposed approach. Results show that a genetic algorithm combined with adaptive modeling allows efficient search of the LS-SVM hyperparameter space.
For the corn data, the performance of the multi-objective LS-SVM was significantly better than models built with the PLS1 and PLS2 algorithms. As for the Forsythia suspensa data, the performance of the multi-objective LS-SVM was equivalent to the PLS1 and PLS2 models. In both datasets, overfitting was observed in the RBFNN models. The single-objective LS-SVM and MLS-SVM did not show much difference, but the one-time modeling convenience allows the potential application of MLS-SVM to multicomponent NIR analysis. 2014 * 241(<-405): Neural network ensembles: immune-inspired approaches to the diversity of components This work applies two immune-inspired algorithms, namely opt-aiNet and omni-aiNet, to train multi-layer perceptrons (MLPs) to be used in the construction of ensembles of classifiers. The main goal is to investigate the influence of the diversity of the set of solutions generated by each of these algorithms, and whether these solutions lead to improvements in performance when combined in ensembles. omni-aiNet is a multi-objective optimization algorithm and, thus, explicitly maximizes the components' diversity at the same time as it minimizes their output errors. The opt-aiNet algorithm, by contrast, was originally designed to solve single-objective optimization problems, focusing on the minimization of the output error of the classifiers. However, an implicit diversity maintenance mechanism stimulates the generation of MLPs with different weights, which may result in diverse classifiers. The performances of opt-aiNet and omni-aiNet are compared with each other and with that of a second-order gradient-based algorithm, named MSCG. The results obtained show how the different diversity maintenance mechanisms presented by each algorithm influence the gain in performance obtained with the use of ensembles. 2010 * 242(<-504): The Q-norm complexity measure and the minimum gradient method: A novel approach to the machine learning structural risk minimization problem This paper presents a novel approach for dealing with structural risk minimization (SRM) applied to a general setting of the machine learning problem. The formulation is based on the fundamental concept that supervised learning is a bi-objective optimization problem in which two conflicting objectives should be minimized. The objectives are related to the empirical training error and the machine complexity. In this paper, one general Q-norm method to compute the machine complexity is presented, and, as a particular practical case, the minimum gradient method (MGM) is derived relying on the definition of the fat-shattering dimension. A practical mechanism for parallel layer perceptron (PLP) network training, involving only quasi-convex functions, is generated using the aforementioned definitions. Experimental results on 15 different benchmarks are presented, which show the potential of the proposed ideas. 2008 * 243(<-543): Controlling the parallel layer perceptron complexity using a multiobjective learning algorithm This paper deals with parallel layer perceptron (PLP) complexity control and the bias-variance dilemma, using a multiobjective (MOBJ) training algorithm. To control the bias and variance, the training process is rewritten as a bi-objective problem, considering the minimization of both the training error and the norm of the weight vector, which is a measure of the network complexity. This method is applied to regression and classification problems and compared with several other training procedures and topologies.
The results show that the PLP MOBJ training algorithm presents good generalization results, outperforming traditional methods in the tested examples. 2007 * 244(<-548): Improving generalization of MLPs with sliding mode control and the Levenberg-Marquardt algorithm A variation of the well-known Levenberg-Marquardt algorithm for training neural networks is proposed in this work. The algorithm presented restricts the norm of the weights vector to a preestablished norm value and finds the minimum error solution for that norm value. The norm constraint controls the neural network's degrees of freedom. The more the norm increases, the more flexible the neural model becomes and, therefore, the more closely it fits the training set. A range of different norm solutions is generated and the best generalization solution is selected according to the validation set error. The results show the efficiency of the algorithm in terms of generalization performance. (c) 2006 Elsevier B.V. All rights reserved. 2007 * 245(<-553): A genetic algorithms based multi-objective neural net applied to noisy blast furnace data A genetic algorithms based multi-objective optimization technique was utilized in the training process of a feed forward neural network, using noisy data from an industrial iron blast furnace. The number of nodes in the hidden layer, the architecture of the lower part of the network, as well as the weights used in them were kept as variables, and a Pareto front was effectively constructed by minimizing the training error along with the network size. A predator-prey algorithm efficiently performed the optimization task and several important trends were observed. (C) 2005 Elsevier B.V. All rights reserved. 2007 * 246(<-560): Many-objective training of a multi-layer perceptron In this paper, a many-objective training scheme for a multi-layer feed-forward neural network is studied. In this scheme, each training data set, or the average over sub-sets of the training data, provides a single objective. A recently proposed group of evolutionary many-objective optimization algorithms based on the NSGA-II algorithm have been examined with respect to the handling of such problem cases. A modified NSGA-II algorithm, using the norm of an individual as a secondary ranking assignment method, appeared to give the best results, even for a large number of objectives (up to 50 in this study). However, there was no notable increase in performance against the standard backpropagation algorithm, and a remarkable drop in performance for higher-dimensional feature spaces (dimension 30 in this study). 2007 * 247(<-645): Training neural networks with a multi-objective sliding mode control algorithm This paper presents a new sliding mode control algorithm that is able to guide the trajectory of a multi-layer perceptron within the plane formed by the two objective functions: training set error and norm of the weight vectors. The results show that the neural networks obtained are able to generate an approximation to the Pareto set, from which an improved generalization performance model is selected. (C) 2002 Elsevier Science B.V. All rights reserved. 2003 * 248(<-661): Recent advances in the MOBJ algorithm for training artificial neural networks. This paper presents a new scheme for training MLPs which employs a relaxation method for multi-objective optimization. The algorithm works by obtaining a reduced set of solutions, from which the one with the best generalization is selected.
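Entries 243-248 (and 249 below) all revolve around the same pair of objectives: the training-set error and the norm of the weight vector. A minimal sketch of that trade-off, assuming a simple scalarized formulation (sweeping a regularization weight, rather than the papers' sliding-mode or relaxation MOBJ algorithms) and synthetic data:

#+begin_src python
# Not the MOBJ algorithms themselves: a scalarization sketch of the two
# objectives recurring in entries 243-249 (training error vs the norm of
# the weight vector). Sweeping the regularization weight alpha traces an
# approximation of the Pareto set, from which the model with the best
# validation score is kept.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

candidates = []
for alpha in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:   # trade-off weight
    net = MLPRegressor(hidden_layer_sizes=(10,), alpha=alpha,
                       max_iter=2000, random_state=0).fit(X_tr, y_tr)
    w_norm = sum(np.linalg.norm(W) for W in net.coefs_)  # weight-vector norm
    candidates.append((net.score(X_va, y_va), alpha, w_norm))

best = max(candidates)  # keep the solution that generalizes best
print(best)
#+end_src

Each alpha value yields one (error, norm) point; selecting the best validation model from the resulting set mirrors the selection step these entries describe.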
This approach allows balancing between the training error and the norm of the network weight vectors, which are the two objective functions of the multi-objective optimization problem. The method is applied to classification and regression problems and compared with Weight Decay (WD), Support Vector Machines (SVMs) and standard Backpropagation (BP). It is shown that the proposed systematic training procedure results in neural models with good generalization, outperforming traditional methods. 2001 * 249(<-665): Improving generalization of MLPs with multi-objective optimization This paper presents a new learning scheme for improving generalization of multilayer perceptrons. The algorithm uses a multi-objective optimization approach to balance between the error on the training data and the norm of the network weight vectors to avoid overfitting. The results are compared with support vector machines and standard backpropagation. (C) 2000 Elsevier Science B.V. All rights reserved. 2000 * 250(<- 14): An Interval-Valued Neural Network Approach for Uncertainty Quantification in Short-Term Wind Speed Prediction We consider the task of performing prediction with neural networks (NNs) on the basis of uncertain input data expressed in the form of intervals. We aim at quantifying the uncertainty in the prediction arising from both the input data and the prediction model. A multilayer perceptron NN is trained to map interval-valued input data onto interval outputs, representing the prediction intervals (PIs) of the real target values. The NN training is performed by the nondominated sorting genetic algorithm-II, so that the PIs are optimized both in terms of accuracy (coverage probability) and dimension (width). Demonstration of the proposed method is given in two case studies: 1) a synthetic case study, in which the data have been generated with a 5-min time frequency from an autoregressive moving average model with either Gaussian or Chi-squared innovation distribution, and 2) a real case study, in which experimental data consist of wind speed measurements with a time step of 1 h. Comparisons are given with a crisp (single-valued) approach. The results show that the crisp approach is less reliable than the interval-valued input approach in terms of capturing the variability in input. 2015 * 251(<- 85): Time series forecasting by neural networks: A knee point-based multiobjective evolutionary algorithm approach In this paper, we investigate the problem of time series forecasting using single hidden layer feedforward neural networks (SLFNs), which are optimized via multiobjective evolutionary algorithms. By utilizing adaptive differential evolution (JADE) and the knee point strategy, a nondominated sorting adaptive differential evolution (NSJADE) and its improved version, knee point-based NSJADE (KP-NSJADE), are developed for optimizing SLFNs. JADE, which aims at refining the search area, is introduced into the nondominated sorting genetic algorithm II (NSGA-II). The presented NSJADE shows superiority on multimodal problems when compared with NSGA-II. Then NSJADE is applied to train SLFNs for time series forecasting. It is revealed that individuals with better forecasting performance in the whole population gather around the knee point. Therefore, KP-NSJADE is proposed to explore the neighborhood of the knee point in the objective space. The simulation results on eight popular time series databases illustrate the effectiveness of our proposed algorithm in comparison with several popular algorithms. (C) 2014 Elsevier Ltd.
All rights reserved. 2014 * 252(<-124): An analysis of accuracy-diversity trade-off for hybrid combined system with multiobjective predictor selection This study examines the contribution of diversity under a multi-objective context for the promotion of learners in an evolutionary system that generates combinations of partially trained learners. The examined system uses grammar-driven genetic programming to evolve hierarchical, multi-component combinations of multilayer perceptrons and support vector machines for regression. Two advances are studied. First, a ranking formula is developed for the selection probability of the base learners. This formula incorporates both a diversity measure and the performance of learners, and it is tried over a series of artificial and real-world problems. Results show that when the diversity of a learner is incorporated with equal weight to the learner performance in the evolutionary selection process, the system is able to provide statistically significantly better generalization. The second advance examined is a substitution phase for learners that are over-dominated, under a multi-objective Pareto domination assessment scheme. Results here show that the substitution does not significantly improve the system performance; thus the exclusion of very weak learners is not a compelling task for the examined framework. 2014 * 253(<-502): Multiobjective training of artificial neural networks for rainfall-runoff modeling This paper presents results on the application of various optimization algorithms for the training of artificial neural network rainfall-runoff models. Multilayered feed-forward networks for forecasting discharge from two mesoscale catchments in different climatic regions have been developed for this purpose. The performances of the multiobjective algorithms Multi Objective Shuffled Complex Evolution Metropolis-University of Arizona (MOSCEM-UA) and Nondominated Sorting Genetic Algorithm II (NSGA-II) have been compared to the single-objective Levenberg-Marquardt and Genetic Algorithm for training of these models. Performance has been evaluated by means of a number of commonly applied objective functions and also by investigating the internal weights of the networks. Additionally, the effectiveness of a new objective function called mean squared derivative error, which penalizes models for timing errors and noisy signals, has been explored. The results show that the multiobjective algorithms give competitive results compared to the single-objective ones. Performance measures and posterior weight distributions of the various algorithms suggest that multiobjective algorithms are more consistent in finding good optima than are single-objective algorithms. However, results also show that it is difficult to conclude if any of the algorithms is superior in terms of accuracy, consistency, and reliability. Besides the training algorithm, network performance is also shown to be sensitive to the choice of objective function(s), and including more than one objective function proves to be helpful in constraining the neural network training. 2008 * 254(<-612): Evolutionary multiobjective optimization approach for evolving ensemble of intelligent paradigms for stock market modeling The use of intelligent systems for stock market prediction has been widely established. This paper introduces a genetic programming technique (called Multi-Expression Programming) for the prediction of two stock indices.
The performance is then compared with an artificial neural network trained using the Levenberg-Marquardt algorithm, a support vector machine, a Takagi-Sugeno neuro-fuzzy model and a difference boosting neural network. As evident from the empirical results, none of the five considered techniques could find an optimal solution for all four performance measures. Further, the results obtained by these five techniques are combined using an ensemble and two well-known Evolutionary Multiobjective Optimization (EMO) algorithms, namely the Non-dominated Sorting Genetic Algorithm II (NSGA II) and the Pareto Archive Evolution Strategy (PAES), in order to obtain an optimal ensemble combination which could also optimize the four different performance measures (objectives). We considered the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index as test data. Empirical results reveal that the resulting ensemble obtains the best results. 2005 * 255(<-649): Designing a phenotypic distance index for radial basis function neural networks MultiObjective Evolutionary Algorithms (MOEAs) may cause premature convergence if the selective pressure is too large; hence, MOEAs usually incorporate a niche-formation procedure to distribute the population over the optimal solutions and let the population evolve until the Pareto-optimal region is completely explored. This niche-formation scheme is based on a distance index that measures the similarity between two solutions in order to decide whether both may share the same niche or not. The similarity criterion is usually based on a Euclidean norm (given that the two solutions are represented with a vector); nevertheless, as this paper will explain, this kind of metric is not adequate for RBFNNs, making a more suitable distance index necessary. The experimental results obtained show that a MOEA including the proposed distance index is able to explore the Pareto-optimal region sufficiently and provide the user with a wide variety of Pareto-optimal solutions. 2003 * 256(<-658): Hierarchical genetic algorithm for near optimal feedforward neural network design. In this paper, we propose a genetic algorithm based design procedure for a multi-layer feed-forward neural network. A hierarchical genetic algorithm is used to evolve both the neural network's topology and weighting parameters. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies, including a feasibility check highlighted in the literature. A multi-objective cost function is used herein to optimize the performance and topology of the evolved neural network simultaneously. In the prediction of the Mackey-Glass chaotic time series, the networks designed by the proposed approach prove to be competitive, or even superior, to traditional learning algorithms for multi-layer perceptron networks and radial basis function networks. Based upon the chosen cost function, a linear weight combination decision-making approach has been applied to derive an approximated Pareto-optimal solution set. Therefore, designing a set of neural networks can be considered as solving a two-objective optimization problem. 2002 * 257(<-125): Robust parameter design optimization using Kriging, RBF and RBFNN with gradient-based and evolutionary optimization techniques The dual response surface methodology is one of the most commonly used approaches in robust parameter design to simultaneously optimize the mean value and keep the variance minimum.
The commonly used meta-model is quadratic polynomial regression. For highly nonlinear input/output relationships, the accuracy of the fitted model is limited. Many researchers have recommended using more complicated surrogate models. In this study, three surrogate models replace the second-order polynomial regression, namely ordinary Kriging, radial basis function approximation (RBF) and radial basis function artificial neural network (RBFNN). The results show that the three surrogate models present superior accuracy in comparison with the quadratic polynomial regression. The mean squared error (MSE) approach is widely used to link the mean and variance in one cost function. In this study, a new approach has been proposed using multi-objective optimization. The new approach has two main advantages over the classical method. First, the conflicting nature of the two objectives can be efficiently handled. Second, the decision maker will have a set of Pareto-front design points to select from. (C) 2014 Elsevier Inc. All rights reserved. 2014 * 258(<-214): Modeling and optimization of biodiesel engine performance using advanced machine learning methods This study aims to determine the optimal biodiesel ratio that can achieve the goals of fewer emissions, reasonable fuel economy and a wide engine operating range. Different advanced machine learning techniques, namely ELM (extreme learning machine), LS-SVM (least-squares support vector machine) and RBFNN (radial-basis function neural network), are used to create engine models based on experimental data. Logarithmic transformation of dependent variables is used to alleviate the problems of data scarcity and data exponentiality simultaneously. Based on the engine models, two optimization methods, namely SA (simulated annealing) and PSO (particle swarm optimization), are employed and a flexible objective function is designed to determine the optimal biodiesel ratio subject to various user-defined constraints. A case study is presented to verify the modeling and optimization framework. Moreover, two comparisons are conducted, where one is among the modeling techniques and the other is among the optimization techniques. Experimental results show that, in terms of the model accuracy and training time, ELM with the logarithmic transformation is better than LS-SVM and RBFNN with/without the logarithmic transformation. The results also show that PSO outperforms SA in terms of fitness and standard deviation, with an acceptable computational time. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 259(<-252): Optimization and experimental test of a miniature permanent magnet structure for a microfluidic magnetic resonance chip We propose a general global optimization algorithm to optimize the miniature permanent magnet structure of a micro magnetic resonance chip (μNMR chip). For this purpose, we analyze the sensitivity of the permanent magnet structure to the design variables and determine the optimization variables. After this, radial basis function neural networks (RBFNNs) are constructed to model the objective functions, and the nondominated sorting genetic algorithm II (NSGA II) is improved by introducing a different weighting factor for each objective function in calculating the crowding distance. Combining the RBFNN with the improved NSGA II optimizes the miniature permanent magnet structure. Through comparison, the optimization solutions are proven effective. Finally, the optimized permanent magnet structure is manufactured and tested experimentally.
After optimization, the volume of the permanent magnet block is reduced by 39%, and the permanent magnet becomes easier to manufacture. 2013 * 260(<-453): Parallel multiobjective memetic RBFNNs design and feature selection for function approximation problems The design of radial basis function neural networks (RBFNNs) still remains a difficult task when they are applied to classification or to regression problems. The difficulty arises when the parameters that define an RBFNN have to be set; these are: the number of RBFs, the position of their centers and the length of their radii. Another issue that has to be faced when applying these models to real-world applications is selecting the variables that the RBFNN will use as inputs. The literature presents several methodologies to perform these two tasks separately; however, due to the intrinsic parallelism of genetic algorithms, a parallel implementation allows the algorithm proposed in this paper to evolve solutions for both problems at the same time. The parallelization of the algorithm consists not only in the evolution of the two problems but also in the specialization of the crossover and mutation operators in order to evolve the different elements to be optimized when designing RBFNNs. The subjacent genetic algorithm is the non-dominated sorting genetic algorithm II (NSGA-II), which helps to keep a balance between the size of the network and its approximation accuracy in order to avoid overfitted networks. Another of the novelties of the proposed algorithm is the incorporation of local search algorithms in three stages of the algorithm: initialization of the population, evolution of the individuals and final optimization of the Pareto front. The initialization of the individuals is performed by hybridizing clustering techniques with mutual information (MI) theory to select the input variables. As the experiments will show, the synergy of the different paradigms and techniques combined by the presented algorithm allows very accurate models to be obtained using the most significant input variables. (C) 2009 Published by Elsevier B.V. 2009 * 261(<-544): A new hybrid methodology for cooperative-coevolutionary optimization of radial basis function networks This paper presents a new multiobjective cooperative-coevolutionary hybrid algorithm for the design of a Radial Basis Function Network (RBFN). This approach codifies a population of Radial Basis Functions (RBFs) (hidden neurons), which evolve by means of cooperation and competition to obtain a compact and accurate RBFN. To evaluate the significance of a given RBF in the whole network, three factors have been proposed: the basis function's contribution to the network's output, the error produced in the basis function radius, and the overlapping among RBFs. To achieve an RBFN composed of RBFs with proper values for these quality factors, our algorithm follows a multiobjective approach in the selection process. In the design process, a Fuzzy Rule Based System (FRBS) is used to determine the possibility of applying operators to a certain RBF. As the time required by our evolutionary algorithm to converge is relatively small, it is possible to get a further improvement of the solution found by using a local minimization algorithm (for example, the Levenberg-Marquardt method). In this paper the results of applying our methodology to function approximation and time series prediction problems are also presented and compared with other alternatives proposed in the bibliography.
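A minimal sketch of the quantities that the RBFN design methods of entries 260-261 evolve, assuming Gaussian basis functions and synthetic data: the centers and radii stand in for one candidate individual, the output weights are fitted linearly, and the network size is reported as the competing complexity objective.

#+begin_src python
# Gaussian RBF network forward pass, naming the design parameters that
# entries 260-261 evolve: the number of RBFs, their centers and radii.
import numpy as np

def rbf_design_matrix(X, centers, radii):
    # phi[i, j] = exp(-||x_i - c_j||^2 / (2 r_j^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * radii ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])           # synthetic target function

centers = rng.uniform(-1, 1, size=(8, 2))       # one candidate: 8 RBFs
radii = np.full(8, 0.5)
Phi = rbf_design_matrix(X, centers, radii)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # linear output weights

mse = ((Phi @ w - y) ** 2).mean()               # objective 1: accuracy
complexity = len(centers)                       # objective 2: network size
print(mse, complexity)
#+end_src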
2007 * 262(<-638): Multiobjective evolutionary optimization of the size, shape, and position parameters of radial basis function networks for function approximation This paper presents a multiobjective evolutionary algorithm to optimize radial basis function neural networks (RBFNNs) in order to approximate target functions from a set of input-output pairs. The procedure allows the application of heuristics to improve the solution of the problem at hand by including some new genetic operators in the evolutionary process. These new operators are based on two well-known matrix transformations: singular value decomposition (SVD) and orthogonal least squares (OLS), which have been used to define new mutation operators that produce local or global modifications in the radial basis functions (RBFs) of the networks (the individuals in the population in the evolutionary procedure). After analyzing the efficiency of the different operators, we have shown that the global mutation operators yield an improved procedure to adjust the parameters of the RBFNNs. 2003 * 263(<-204): Memetic multiobjective particle swarm optimization-based radial basis function network for classification problems This paper presents a new multiobjective evolutionary algorithm applied to a radial basis function (RBF) network design based on multiobjective particle swarm optimization augmented with local search features. The algorithm is named the memetic multiobjective particle swarm optimization RBF network (MPSON) because it integrates the accuracy and structure of an RBF network. The proposed algorithm is implemented on two-class and multiclass pattern classification problems with one complex real problem. The experimental results indicate that the proposed algorithm is viable, and provides an effective means to design multiobjective RBF networks with good generalization capability and compact network structure. The accuracy and complexity of the networks obtained by the proposed algorithm are compared with the memetic non-dominated sorting genetic algorithm based RBF network (MGAN) through statistical tests. This study shows that MPSON generates RBF networks coming with an appropriate balance between accuracy and simplicity, outperforming the other algorithms considered. (C) 2013 Elsevier Inc. All rights reserved. 2013 * 264(<-221): Predicting patient survival after liver transplantation using evolutionary multi-objective artificial neural networks Objective: The optimal allocation of organs in liver transplantation is a problem that can be resolved using machine-learning techniques. Classical methods of allocation included the assignment of an organ to the first patient on the waiting list without taking into account the characteristics of the donor and/or recipient. In this study, characteristics of the donor, recipient and transplant organ were used to determine graft survival. We utilised a dataset of liver transplants collected by eleven Spanish hospitals that provides data on the survival of patients three months after their operations. Methods and material: To address the problem of organ allocation, the memetic Pareto evolutionary non-dominated sorting genetic algorithm 2 (MPENSGA2), a multi-objective evolutionary algorithm, was used to train radial basis function neural networks, where accuracy was the measure used to evaluate model performance, along with the minimum sensitivity measurement. The neural network models obtained from the Pareto fronts were used to develop a rule-based system.
This system will help medical experts allocate organs. Results: The models obtained with the MPENSGA2 algorithm generally yielded competitive results for all performance metrics considered in this work, namely the correct classification rate (C), minimum sensitivity (MS), area under the receiver operating characteristic curve (AUC), root mean squared error (RMSE) and Cohen's kappa (Kappa). In general, the multi-objective evolutionary algorithm demonstrated a better performance than the mono-objective algorithm, especially with regard to the MS extreme of the Pareto front, which yielded the best values of MS (48.98) and AUC (0.5659). The rule-based system efficiently complements the current allocation system (model for end-stage liver disease, MELD) based on the principles of efficiency and equity. This complementary effect occurred in 55% of the cases used in the simulation. The proposed rule-based system minimises the prediction probability error produced by two sets of models (one of them formed by models guided by one of the objectives (entropy) and the other composed of models guided by the other objective (MS)), such that it maximises the probability of success in liver transplants, with success based on graft survival three months post-transplant. Conclusion: The proposed rule-based system is objective, because it does not involve medical experts (the expert's decision may be biased by several factors, such as his/her state of mind or familiarity with the patient). This system is a useful tool that aids medical experts in the allocation of organs; however, the final allocation decision must be made by an expert. (C) 2013 Elsevier B.V. All rights reserved. 2013 * 265(<-325): Memetic Elitist Pareto Differential Evolution algorithm based Radial Basis Function Networks for classification problems This paper presents a new multi-objective evolutionary hybrid algorithm for the design of Radial Basis Function Networks (RBFNs) for classification problems. The algorithm, MEPDEN, is a Memetic Elitist Pareto evolutionary approach based on the Non-dominated Sorting Differential Evolution (NSDE) multiobjective evolutionary algorithm, which has been adapted to design RBFNs and augmented with a local search that uses the Back-propagation algorithm. MEPDEN is tested on two-class and multiclass pattern classification problems. The results obtained in terms of Mean Square Error (MSE), number of hidden nodes, accuracy (ACC), sensitivity (SEN), specificity (SPE) and Area Under the receiver operating characteristics Curve (AUC) show that the proposed approach is able to produce higher prediction accuracies with much simpler network structures. The accuracy and complexity of the networks obtained by the proposed algorithm are compared with the Memetic Elitist Pareto Non-dominated Sorting Genetic Algorithm based RBFN (MEPGAN) through statistical tests. This study showed that MEPDEN obtains RBFNs with an appropriate balance between accuracy and simplicity, outperforming the other method considered. (C) 2011 Elsevier B.V. All rights reserved. 2011 * 266(<-378): Memetic Pareto Evolutionary Artificial Neural Networks to determine growth/no-growth in predictive microbiology The main objective of this work is to automatically design neural network models with sigmoid basis units for binary classification tasks. The classifiers that are obtained achieve a double objective: a high classification level in the dataset and a high classification level for each class.
We present MPENSGA2, a Memetic Pareto Evolutionary approach based on the NSGA2 multiobjective evolutionary algorithm, which has been adapted to design Artificial Neural Network models and augmented with a local search that uses the improved Resilient Backpropagation with backtracking (iRprop+) algorithm. To analyze the robustness of this methodology, it was applied to four complex classification problems in predictive microbiology to describe the growth/no-growth interface of food-borne microorganisms such as Listeria monocytogenes, Escherichia coli R31, Staphylococcus aureus and Shigella flexneri. The results obtained in Correct Classification Rate (CCR), Sensitivity (S) as the minimum of the sensitivities for each class, Area Under the receiver operating characteristic Curve (AUC), and Root Mean Squared Error (RMSE) show that the generalization ability and the classification rate in each class can be improved more efficiently within a multiobjective framework than within a single-objective framework. (C) 2009 Elsevier B.V. All rights reserved. 2011 * 267(<-263): Multi-objective evolutionary algorithm for donor-recipient decision system in liver transplants This paper reports on a decision support system for assigning a liver from a donor to a recipient on a waiting-list that maximises the probability of belonging to the survival graft class after a year of transplant and/or minimises the probability of belonging to the non-survival graft class in a two-objective framework. This is done with two models of neural networks for classification obtained from the Pareto front built by a multi-objective evolutionary algorithm - called MPENSGA2. This type of neural network is a new model of the generalised radial basis functions for obtaining optimal values in C (Correctly Classified Rate) and MS (Minimum Sensitivity) in the classifier, and is compared to other competitive classifiers. The decision support system has been proposed using, as simply as possible, those models which lead to making the correct decision about recipient choice based on efficient and impartial criteria. (c) 2012 Elsevier B.V. All rights reserved. 2012 * 268(<-414): Sensitivity Versus Accuracy in Multiclass Problems Using Memetic Pareto Evolutionary Neural Networks This paper proposes a multiclassification algorithm using multilayer perceptron neural network models. It tries to boost two conflicting main objectives of multiclassifiers: a high correct classification rate level and a high classification rate for each class. This last objective is not usually optimized in classification, but is considered here given the need to obtain high precision in each class in real problems. To solve this machine learning problem, we use a Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm. We consider a memetic Pareto evolutionary approach based on the NSGA2 evolutionary algorithm (MPENSGA2). Once the Pareto front is built, two strategies for automatic individual selection are used: the best model in accuracy and the best model in sensitivity (the extremes of the Pareto front). These methodologies are applied to solve 17 classification benchmark problems obtained from the University of California at Irvine (UCI) repository and one complex real classification problem. The models obtained show high accuracy and a high classification rate for each class.
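The accuracy/minimum-sensitivity pair used throughout entries 264-268 can be computed directly from a confusion matrix. A minimal sketch with a hypothetical 3-class matrix; the definitions follow the entries above (C is the fraction of correct classifications, MS the worst per-class recall), while the matrix values are invented for illustration.

#+begin_src python
# The two objectives traded off by the MPENSGA2-style methods in entries
# 264-268: C (correct classification rate) and MS (minimum sensitivity,
# i.e. the minimum of the per-class recalls).
import numpy as np

def c_and_ms(conf):
    conf = np.asarray(conf, dtype=float)
    c = np.trace(conf) / conf.sum()          # overall accuracy
    sens = np.diag(conf) / conf.sum(axis=1)  # recall of each class
    return c, sens.min()

# Hypothetical 3-class confusion matrix (rows = true class).
conf = [[50, 3, 2],
        [4, 40, 6],
        [1, 9, 10]]
print(c_and_ms(conf))
# On a Pareto front, one would keep both the model with the best C and
# the model with the best MS, as in entry 268.
#+end_src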
2010 * 269(<-379): Radial basis function network based on time variant multi-objective particle swarm optimization for medical diseases diagnosis This paper proposes an adaptive evolutionary radial basis function (RBF) network algorithm to evolve accuracy and connections (centers and weights) of RBF networks simultaneously. The problem of hybrid learning of RBF networks is discussed with the multi-objective optimization methods to improve classification accuracy for medical disease diagnosis. In this paper, we introduce a time variant multi-objective particle swarm optimization (TVMOPSO) of radial basis function (RBF) networks for diagnosing medical diseases. This study applied RBF network training to determine whether RBF networks can be developed using TVMOPSO, and the performance is validated based on accuracy and complexity. Our approach is tested on three standard data sets from the UCI machine learning repository. The results show that our approach is a viable alternative and provides an effective means of designing multi-objective RBF networks for medical disease diagnosis. It outperforms RBF networks based on MOPSO and NSGA-II, and is also competitive with other methods in the literature. (C) 2010 Elsevier B.V. All rights reserved. 2011 * 270(<-418): An Adaptive Multiobjective Approach to Evolving ART Architectures In this paper, we present the evolution of adaptive resonance theory (ART) neural network architectures (classifiers) using a multiobjective optimization approach. In particular, we propose the use of a multiobjective evolutionary approach to simultaneously evolve the weights and the topology of three well-known ART architectures: fuzzy ARTMAP (FAM), ellipsoidal ARTMAP (EAM), and Gaussian ARTMAP (GAM). We refer to the resulting architectures as MO-GFAM, MO-GEAM, and MO-GGAM, and collectively as MO-GART. The major advantage of MO-GART is that it produces a number of solutions for the classification problem at hand that have different levels of merit [accuracy on unseen data (generalization) and size (number of categories created)]. MO-GART is shown to be more elegant (does not require user intervention to define the network parameters), more effective (of better accuracy and smaller size), and more efficient (faster to produce the solution networks) than other ART neural network architectures that have appeared in the literature. Furthermore, MO-GART is shown to be competitive with other popular classifiers, such as classification and regression trees (CART) and support vector machines (SVMs). 2010 * 271(<-578): Applications of multi-objective structure optimization We present applications of multi-objective evolutionary optimization of feed-forward neural networks (NN) to two real-world problems, car and face classification. The possibly conflicting requirements on the NNs are speed and classification accuracy, both of which can enhance the embedding systems as a whole. We compare the results to the outcome of a greedy optimization heuristic (magnitude-based pruning) coupled with a multi-objective performance evaluation. For the car classification problem, magnitude-based pruning yields competitive results, whereas for the more difficult face classification, we find that the evolutionary approach to NN design is clearly preferable. (c) 2006 Elsevier B.V. All rights reserved. 2006 * 272(<-583): Differentiation of syndromes with SVM Differentiation of syndromes is the core theory of Traditional Chinese Medicine (TCM).
How to diagnose syndromes correctly from symptoms by scientific means is a central problem in TCM. Several modern approaches have been applied, but no satisfactory results have been obtained because of the complexity of the diagnosis procedure. Support Vector Machine (SVM) is a new classification technique that has drawn much attention on this topic in recent years. In this paper, we combine a non-linear Principal Component Analysis (PCA) neural network with a multi-class SVM to realize differentiation of syndromes. Non-linear PCA is used to preprocess clinical data to save computational cost and reduce noise. The multi-class SVM takes the non-linear principal components as its inputs and determines a corresponding syndrome. Analysis of a TCM example shows its effectiveness. 2006 * 273(<- 72): Multi-Criteria Decision Making: The Best Choice for the Modeling of Chemicals Against Hyper-Pigmentation? Classifier ensembles have appeared to be a powerful alternative for handling difficult problems. The field is growing rapidly and enjoying much attention from the pattern recognition and machine learning communities. In the present report, the potential of multi-criteria decision making via multiclassifier approaches is assessed by applying them in the modeling of chemicals against hyper-pigmentation. TOMOCOMD-CARDD atom-based quadratic indices are used as descriptors to parameterize the molecular structures. Support vector machine, artificial neural network, Bayesian network, binary logistic regression, instance-based learning and tree classification applied to two collected datasets are explored as standalone classifiers. Prediction sets (PSs) are used to assess the performance of multiclassifier systems (MCSs). A strategy exploiting principal component analysis together with pairwise diversity measures is designed to select the most diverse base classifiers to combine. Various trainable and nontrainable systems are developed that aggregate, at the abstract and continuous levels, the outputs of base classifiers. The obtained results are rather encouraging since the MCSs generally enhance the performance of the base classifiers; e.g. the best MCSs obtain global accuracies of 95.51% and 88.89% in the PSs for datasets I and II, compared with 94.12% and 85.93% for the best individual classifiers, respectively. Our results suggest that MCSs could be the best choice at the moment for obtaining suitable QSAR models for the prediction of depigmenting agents. Finally, we consider that these approaches will aid in improving virtual screening procedures and increasing the practicality of data mining of chemical datasets for the discovery of novel lead compounds. 2015 * 274(<-113): Metrics to guide a multi-objective evolutionary algorithm for ordinal classification Ordinal classification or ordinal regression is a classification problem in which the labels have an ordered arrangement between them. Due to this order, alternative performance evaluation metrics need to be used in order to consider the magnitude of errors. This paper presents a study of the use of a multi-objective optimization approach in the context of ordinal classification. We contribute a study of ordinal classification performance metrics, and propose a new performance metric, the maximum mean absolute error (MMAE). MMAE considers the per-class distribution of patterns and the magnitude of the errors, both issues being crucial for ordinal regression problems.
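A minimal sketch of the metric pair from entry 274, assuming ordinal labels coded as consecutive integers; the example data are invented to show how MMAE exposes a poorly served class that the global MAE averages away.

#+begin_src python
# MAE averages the error magnitude over all patterns, while MMAE (the
# metric proposed in entry 274) takes the worst per-class mean absolute
# error, so a badly classified minority class cannot hide behind a good
# global average.
import numpy as np

def mae(y_true, y_pred):
    return np.abs(y_true - y_pred).mean()

def mmae(y_true, y_pred):
    return max(np.abs(y_true[y_true == c] - y_pred[y_true == c]).mean()
               for c in np.unique(y_true))

y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 3])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 1, 2, 1])
print(mae(y_true, y_pred), mmae(y_true, y_pred))  # 0.4 versus 2.0
#+end_src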
In addition, we empirically show that some of the performance metrics are competitive objectives, which justifies the use of multi-objective optimization strategies. In our case, a multi-objective evolutionary algorithm optimizes an artificial neural network ordinal model with different pairs of metric combinations, and we conclude that the pair of the mean absolute error (MAE) and the proposed MMAE is the most favourable. A study of the relationship between the metrics of this proposal is performed, and the graphical representation in the two-dimensional space where the search of the evolutionary algorithm takes place is analysed. The results obtained show a good classification performance, opening new lines of research in the evaluation and model selection of ordinal classifiers. (C) 2014 Elsevier B.V. All rights reserved. 2014 * 275(<- 89): A group decision classifier with particle swarm optimization and decision tree for analyzing achievements in mathematics and science Group decision making is a multi-criteria decision-making method applied in many fields. However, the use of group decision-making techniques in multi-class classification problems and rule generation is not explored widely. This investigation developed a group decision classifier with particle swarm optimization (PSO) and decision tree (GDCPSODT) for analyzing students' achievements in mathematics and science, which is a multi-class classification problem involving rule generation. The PSO technique is employed to determine the weights of condition attributes; the decision tree is used to generate rules. To demonstrate the performance of the developed GDCPSODT model, other classifiers such as the Bayesian classifier, the k-nearest neighbor (KNN) classifier, the back propagation neural networks classifier with particle swarm optimization (BPNNPSO) and the radial basis function neural networks classifier with PSO (RBFNNPSO) are used to cope with the same data. Experimental results indicated the testing accuracy of GDCPSODT is higher than that of the other four classifiers. Furthermore, the GDCPSODT model provides rules and some directions for improving academic achievement. Therefore, the GDCPSODT model is a feasible and promising alternative for analyzing student-related mathematics and science achievement data. 2014 * 276(<-334): Weighting Efficient Accuracy and Minimum Sensitivity for Evolving Multi-Class Classifiers Recently, a multi-objective Sensitivity-Accuracy based methodology has been proposed for building classifiers for multi-class problems. This technique is especially suitable for imbalanced and multi-class datasets. Moreover, the high computational cost of multi-objective approaches is well known, so more efficient alternatives must be explored. This paper presents an efficient alternative to the Pareto-based solution when considering both Minimum Sensitivity and Accuracy in multi-class classifiers. The alternatives are implemented by extending the Evolutionary Extreme Learning Machine algorithm for training artificial neural networks. Experiments were performed to select the best option after considering alternative proposals and related methods. Based on the experiments, this methodology is competitive in Accuracy, Minimum Sensitivity and efficiency. 2011 * 277(<-561): A cooperative constructive method for neural networks for pattern recognition In this paper, we propose a new constructive method, based on cooperative coevolution, for automatically designing the structure of a neural network for classification.
Our approach is based on a modular construction of the neural network by means of a cooperative evolutionary process. This process benefits from the advantages of coevolutionary computation as well as the advantages of constructive methods. The proposed methodology can be easily extended to work with almost any kind of classifier. The evaluation of each module that constitutes the network is made using a multiobjective method. Thus, each new module can be evaluated in a comprehensive way, considering different aspects, such as performance, complexity, or degree of cooperation with the previous modules of the network. In this way, the method has the advantage of considering not only the performance of the networks, but also other features. The method is tested on 40 classification problems from the UCI machine learning repository with very good performance. The method is thoroughly compared with two other constructive methods, cascade correlation and GMDH networks, and other classification methods, namely SVM, C4.5, and k-nearest neighbours, and an ensemble of neural networks constructed using four different methods. (c) 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved. 2007 * 278(<-599): Cooperative coevolution of artificial neural network ensembles for pattern classification This paper presents a cooperative coevolutive approach for designing neural network ensembles. Cooperative coevolution is a recent paradigm in evolutionary computation that allows the effective modeling of cooperative environments. Although, theoretically, a single neural network with a sufficient number of neurons in the hidden layer would suffice to solve any problem, in practice many real-world problems are too hard for the appropriate network that solves them to be constructed. In such problems, neural network ensembles are a successful alternative. Nevertheless, the design of neural network ensembles is a complex task. In this paper, we propose a general framework for designing neural network ensembles by means of cooperative coevolution. The proposed model has two main objectives: first, the improvement of the combination of the trained individual networks; second, the cooperative evolution of such networks, encouraging collaboration among them, instead of a separate training of each network. In order to favor the cooperation of the networks, each network is evaluated throughout the evolutionary process using a multiobjective method. For each network, different objectives are defined, considering not only its performance in the given problem, but also its cooperation with the rest of the networks. In addition, a population of ensembles is evolved, improving the combination of networks and obtaining subsets of networks to form ensembles that perform better than the combination of all the evolved networks. The proposed model is applied to ten real-world classification problems of a very different nature from the UCI machine learning repository and the Proben1 benchmark set. In all of them the performance of the model is better than the performance of standard ensembles in terms of generalization error. Moreover, the size of the obtained ensembles is also smaller. 2005 * 279(<-396): Towards the selection of best neural network system for intrusion detection Currently, network security is a critical issue because a single attack can inflict catastrophic damage on computers and network systems.
Various intrusion detection approaches are available to address this severe issue, but the dilemma is which one is more suitable. Motivated by this situation, in this paper we evaluate and compare different neural networks (NNs). The current work presents an evaluation of different neural networks such as the Self-Organizing Map (SOM), Adaptive Resonance Theory (ART), Online Backpropagation (OBPROP), Resilient Backpropagation (RPROP) and Support Vector Machine (SVM) towards intrusion detection mechanisms using a Multi-Criteria Decision Making (MCDM) technique. The results indicate that in terms of performance, supervised NNs are better, while unsupervised NNs are better regarding training overhead and aptitude for handling varied and coordinated intrusions. Consequently, a combined, that is, hybrid approach of NNs is the optimal solution in the area of intrusion detection. The outcome of this work may help and guide security implementers in two possible ways, either by using the results directly obtained in this paper or by extracting the results using another similar mechanism, but on different intrusion detection systems or approaches. 2010 * 280(<-448): Multi-objective simultaneous prediction of waterborne coating properties Multi-objective simultaneous prediction of waterborne coating properties was studied using a neural network combined with programming. A network configuration with one input layer, three hidden layers and one output layer was confirmed. The monomer masses of BA, MMA and St and the pigment masses of TiO2 and CaCO3 were used as input data. Four properties, namely hardness, adhesion, impact resistance and reflectivity, were used as the network output. After examining the number of hidden-layer neurons, the learning rate and the number of hidden layers, the best network parameters were confirmed. The experimental results show that multiple hidden layers were advantageous for improving the accuracy of multi-objective simultaneous prediction. 36 coating formulations were used as the training subset and 9 acrylate waterborne coatings were used as the testing subset in order to predict the performance. The forecast error was 8.02% for hardness and 0.16% for reflectivity, and the forecast accuracy for both adhesion and impact resistance was 100%. 2009 * 281(<-464): A hybrid multi-objective genetic algorithm for evaluation of essential sets of medical diagnostic factors A hybrid algorithm that incorporates two biologically inspired computational intelligence methods was used for the assessment of abdominal pain. Namely, Genetic Algorithms (GA) were used in search for the optimal subset of clinical diagnostic factors that can be given as inputs to Probabilistic Neural Networks (PNN) to perform medical diagnosis based on the clinical data. Thus, the implemented GA was a two-objective one. The first objective was to minimize the number of diagnostic factors that were considered for medical diagnosis. The second objective was to minimize the Mean Square Error of the constructed PNN at the testing phase. The obtained results of the proposed hybrid algorithm compare favorably with the corresponding ones derived by applying Receiver Operating Characteristic analysis. Eventually, it was found that up to 60% of the diagnostic factors that are recorded in a patient's history may be omitted without any loss in clinical assessment validity, while, at the same time, the performance of the genetically pruned PNN is improved in terms of execution speed and prediction accuracy.
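A minimal sketch of the two-objective chromosome evaluation described in entry 281, with synthetic data and a Gaussian naive Bayes stand-in for the paper's Probabilistic Neural Network (PNN); all names and dimensions are illustrative assumptions.

#+begin_src python
# Two-objective fitness for feature-subset selection (cf. entry 281):
# each bit of the chromosome keeps or drops one diagnostic factor, and
# the objectives are (number of kept factors, test error of the model).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 20))                  # 20 hypothetical factors
y = (X[:, 0] + X[:, 3] > 0).astype(int)         # only two factors matter
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def objectives(mask):
    kept = np.flatnonzero(mask)
    if kept.size == 0:
        return (0, 1.0)                         # empty subset: maximal error
    clf = GaussianNB().fit(X_tr[:, kept], y_tr)
    err = 1.0 - clf.score(X_te[:, kept], y_te)  # test misclassification
    return (kept.size, err)                     # both to be minimized

mask = rng.integers(0, 2, size=20)              # one random chromosome
print(objectives(mask))
#+end_src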
* 282(<- 57): Modelling commodity value at risk with Psi Sigma neural networks using open-high-low-close data The motivation for this paper is to investigate the use of a promising class of neural network models, Psi Sigma (PSI), when applied to the task of forecasting the one-day-ahead value at risk (VaR) of the Brent oil and gold bullion series using open-high-low-close data. In order to benchmark our results, we also consider VaR forecasts from two different neural network designs, the multilayer perceptron and the recurrent neural network, a genetic programming algorithm, and an extreme value theory model, along with some traditional techniques such as an ARMA-Glosten, Jagannathan, and Runkle (1,1) model and the RiskMetrics volatility. The forecasting performance of all models for computing the VaR of Brent oil and gold bullion is examined over the period September 2001-August 2010, using the last year and a half of data for out-of-sample testing. The evaluation of our models is done using a series of backtesting algorithms such as the Christoffersen tests, the violation ratio and our proposed loss function, which considers not only the number of violations but also their magnitude. Our results show that the PSI outperforms all other models in forecasting the VaR of gold and oil at both the 5% and 1% confidence levels, providing an accurate number of independent violations with small magnitude. 2015 * 283(<-569): Feature selection for ensembles applied to handwriting recognition Feature selection for ensembles has been shown to be an effective strategy for ensemble creation due to its ability to produce good subsets of features, which make the classifiers of the ensemble disagree on difficult cases. In this paper we present an ensemble feature selection approach based on a hierarchical multi-objective genetic algorithm. The underpinning paradigm is "overproduce and choose". The algorithm operates on two levels: first, it performs feature selection in order to generate a set of classifiers, and then it chooses the best team of classifiers. In order to show its robustness, the method is evaluated in two different contexts: supervised and unsupervised feature selection. In the former, we considered the problem of handwritten digit recognition and used three different feature sets and multi-layer perceptron neural networks as classifiers. In the latter, we took into account the problem of handwritten month word recognition and used three different feature sets and hidden Markov models as classifiers. Experiments and comparisons with classical methods, such as Bagging and Boosting, demonstrated that the proposed methodology brings compelling improvements when classifiers have to work with very low error rates. Comparisons have been done by considering the recognition rates only. 2006 * 284(<-210): A new approach to radial basis function-based polynomial neural networks: analysis and design In this study, we introduce a new topology of radial basis function-based polynomial neural networks (RPNNs) that is based on a genetically optimized multi-layer perceptron with radial polynomial neurons (RPNs). This paper offers a comprehensive design methodology involving various mechanisms of optimization, especially fuzzy C-means (FCM) clustering and particle swarm optimization (PSO).
In contrast to the typical architectures encountered in polynomial neural networks (PNNs), our main objective is to develop a topology and establish a comprehensive design strategy for RPNNs: (a) The architecture of the proposed network consists of radial polynomial neurons (RPNs). These neurons fully reflect the structure encountered in the numeric data, which are granulated with the aid of FCM clustering. RPNs combine a collection of radial basis functions with function-based nonlinear polynomial processing. (b) The PSO-based design procedure applied to each layer of the RPNN leads to the selection of preferred nodes of the network whose local parameters (such as the number of input variables, the specific subset of input variables, the order of the polynomial, the number of FCM clusters, and the fuzzification coefficient of the FCM method) are properly adjusted. The performance of the RPNN is quantified through a series of experiments using several modeling benchmarks, namely synthetic three-dimensional data and machine learning data sets (computer hardware, abalone, MPG, and Boston housing data) already used in neuro-fuzzy modeling. A comparative analysis shows that the proposed RPNN exhibits higher accuracy in comparison with some previous models available in the literature. 2013 * 285(<-341): Neuro-simulation modeling of chemical flooding Chemical flooding has been proved to enhance oil recovery of reservoirs considerably. Development strategies for this method are more efficient when they consider both the operational aspect (recovery factor, RF) and the economic aspect (net present value, NPV). In this study, a multi-layer perceptron (MLP) neural network is developed for modeling of chemical flooding using surfactant and polymer via prediction of both RF and NPV in a single model. The modeling algorithm is divided into three processes: training, generalization, and operation. In the training process, the initial structure of the network is trained, and then the architecture of the trained network is optimized to reduce prediction errors in the generalization process. Furthermore, the optimum structure is compared with other methods such as a Radial Basis Function (RBF) neural network, quadratic regression and multi-objective regression. The optimum architecture of the network contains one hidden layer with 8 neurons and the Bayesian regularization training function. In the operation process, a sensitivity analysis is conducted to evaluate the effect of the parameters (inputs) on the performance of chemical flooding. The error is always less than 5% during the implementation of all processes. The results demonstrate that neuro-simulation of chemical flooding is reliable, inexpensive, fast in computational effort, and capable of accurate prediction of both RF and NPV in one model. (C) 2011 Elsevier B.V. All rights reserved. 2011
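Entry 284 above granulates the numeric data with fuzzy C-means before building the radial polynomial neurons. A compact FCM written from the standard update equations (not from the paper's code), with a small synthetic demonstration:

#+BEGIN_SRC python
import numpy as np

def fcm(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Fuzzy C-means: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m                                   # fuzzified memberships
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        Unew = 1.0 / (d ** (2 / (m - 1)) *
                      np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.max(np.abs(Unew - U)) < tol:
            U = Unew
            break
        U = Unew
    return centres, U

X = np.vstack([np.random.default_rng(i).normal(loc, 0.3, size=(50, 2))
               for i, loc in enumerate((-2.0, 0.0, 2.0))])
centres, U = fcm(X, c=3)
print(np.round(centres, 2))      # three recovered cluster centres
#+END_SRC

The fuzzification coefficient m is one of the local parameters the paper's PSO step tunes.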
* 286(<-576): Using a multi-objective genetic algorithm for SVM construction Support Vector Machines are kernel machines useful for classification and regression problems. In this paper, they are used for non-linear regression of environmental data. From a structural point of view, Support Vector Machines are particular Artificial Neural Networks and their training paradigm has some positive implications. In fact, the original training approach is useful for overcoming the curse of dimensionality and overly strict assumptions on the statistics of the errors in the data. Support Vector Machines and Radial Basis Function Regularised Networks are presented within a common structural framework for non-linear regression in order to emphasise the training strategy for support vector machines and to better explain the multi-objective approach in support vector machines' construction. A support vector machine's performance depends on the kernel parameter, input selection and the optimal dimension of the epsilon-tube. These are used as decision variables for the evolutionary strategy based on a Genetic Algorithm, whose objective functions are the number of support vectors (for the capacity of the machine) and the fitness to a validation subset (for the model's accuracy in mapping the underlying physical phenomena). The strategy is tested on a case study dealing with groundwater modelling, based on time series (past measured rainfalls and levels) for level predictions at variable time horizons. 2006 * 287(<-209): A multi-objective micro genetic ELM algorithm The extreme learning machine (ELM) is a methodology for learning single-hidden-layer feedforward neural networks (SLFN) which has been proved to be extremely fast and to provide very good generalization performance. ELM works by randomly choosing the weights and biases of the hidden nodes and then analytically obtaining the output weights and biases for an SLFN with the number of hidden nodes previously fixed. In this work, we develop a multi-objective micro genetic ELM (mu G-ELM) which provides the appropriate number of hidden nodes for the problem being solved as well as the weights and biases which minimize the MSE. The multi-objective algorithm is conducted by two criteria: the number of hidden nodes and the mean square error (MSE). Furthermore, as a novelty, mu G-ELM incorporates a regression device in order to decide whether the number of hidden nodes of the individuals of the population should be increased, decreased or left unchanged. In general, the proposed algorithm achieves lower errors while also requiring a smaller number of hidden nodes for the data sets and competitors considered. (C) 2013 Elsevier B.V. All rights reserved. 2013 * 288(<-424): A multi-objective memetic and hybrid methodology for optimizing the parameters and performance of artificial neural networks The use of artificial neural networks implies considerable time spent choosing a set of parameters that contribute toward improving the final performance. Initial weights, the number of hidden nodes and layers, training algorithm rates and transfer functions are normally selected through a manual process of trial and error that often fails to find the best possible set of neural network parameters for a specific problem. This paper proposes an automatic search methodology for the optimization of the parameters and performance of neural networks, relying on the use of Evolution Strategies, Particle Swarm Optimization and concepts from Genetic Algorithms in the hybrid and global search module. There is also a module for local searches, including the well-known Multilayer Perceptron, Back-propagation and Levenberg-Marquardt training algorithms. The proposed methodology performs the search using the aforementioned parameters in an attempt to optimize the networks and their performance. Experiments were performed and the results proved the proposed method to be better than trial and error and other methods found in the literature. Crown Copyright (C) 2009 Published by Elsevier B.V. All rights reserved. 2010
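The ELM of entry 287 has a very small core: random hidden weights and biases, then a single least-squares solve for the output layer. A sketch of that core (the mu G-ELM itself would wrap this in a genetic search over the number of hidden nodes, which is not reproduced here):

#+BEGIN_SRC python
import numpy as np

def elm_fit(X, y, n_hidden, seed=0):
    """Random hidden layer, analytic least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
W, b, beta = elm_fit(X, y, n_hidden=25)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
print(f"training MSE: {mse:.2e}")   # one objective; n_hidden is the other
#+END_SRC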
* 289(<-248): A Clinical Decision Support System for Femoral Peripheral Arterial Disease Treatment One of the major challenges of providing reliable healthcare services is to diagnose and treat diseases in an accurate and timely manner. Recently, many researchers have successfully used artificial neural networks as a diagnostic assessment tool. In this study, such an assessment tool has been developed and validated for the treatment of femoral peripheral arterial disease using a radial basis function neural network (RBFNN). A data set for training the RBFNN has been prepared by analyzing records of patients who had been treated by the thoracic and cardiovascular surgery clinic of a university hospital. The data set includes 186 patient records having 16 characteristic features associated with a binary treatment decision, namely, a medical or a surgical one. The k-means clustering algorithm has been used to determine the parameters of the radial basis functions, and the number of hidden nodes of the RBFNN is determined experimentally. For performance evaluation, the proposed RBFNN was compared to three different multilayer perceptron models having Pareto-optimal hidden layer combinations using various performance indicators. The results of the comparison indicate that the RBFNN can be used as an effective assessment tool for femoral peripheral arterial disease treatment. 2013 * 290(<-457): Optimal construction of a fast and accurate polarisable water potential based on multipole moments trained by machine learning Modelling liquid water correctly and reproducing its structural, dynamic and thermodynamic properties warrants models that account accurately for electronic polarisation. We have previously demonstrated that polarisation can be represented by fluctuating multipole moments (derived by quantum chemical topology) predicted by multilayer perceptrons (MLPs) in response to the local structure of the cluster. Here we further develop this methodology of modelling polarisation, enabling control of the balance between accuracy (in terms of errors in the Coulomb energy) and computing time. First, the predictive ability and speed of two additional machine learning methods, radial basis function neural networks (RBFNN) and Kriging, are assessed with respect to our previous MLP-based polarisable water models, for water dimer, trimer, tetramer, pentamer and hexamer clusters. Compared to MLPs, we find that RBFNNs achieve a 14-26% decrease in median Coulomb energy error, with a factor 2.5-3 slowdown in speed, whilst Kriging achieves a 40-67% decrease in median energy error with a factor 6.5-8.5 slowdown in speed. Then, these compromises between accuracy and speed are improved upon through a simple multi-objective optimisation to identify Pareto-optimal combinations. Compared to the Kriging results, combinations are found that are no less accurate (at the 90th energy error percentile), yet are 58% faster for the dimer and 26% faster for the pentamer. 2009
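Entry 289's pipeline (k-means to place the RBF centres, an experimentally chosen number of hidden nodes, linear output weights) fits in a few lines. The data below are random stand-ins with the same 186 x 16 shape as the clinical records, which are of course not available here:

#+BEGIN_SRC python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means, used only to place the RBF centres."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        centres = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres

def rbf_design(X, centres, width):
    d2 = ((X[:, None] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(186, 16))                    # stand-in patient features
y = (X[:, 0] + X[:, 3] > 0).astype(float)         # stand-in binary decision
C = kmeans(X, k=8)                                # hidden-node count fixed by hand
H = rbf_design(X, C, width=2.0)
w, *_ = np.linalg.lstsq(H, y, rcond=None)         # linear output weights
pred = (H @ w > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
#+END_SRC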
* 291(<-440): Application of Different Artificial Neural Networks Retention Models for Multi-Criteria Decision-Making Optimization in Gradient Ion Chromatography In this work, the principles of multi-criteria decision-making were used to develop an efficient optimization strategy for gradient elution ion chromatographic analysis. Two different artificial neural network retention models (multi-layer perceptron and radial basis function), three different separation criterion functions (chromatography response function, separation factor product and normalized retention difference product), and four different robustness criterion functions (CR1-CR4) were examined. The shape of the calculated separation vs. robustness response surface was used as the principal criterion. Analysis time and minimum separation of adjacent peaks were additional criteria. The results showed that the radial basis function artificial neural network retention model, in combination with the normalized retention difference product separation criterion function and the CR3 robustness criterion function, provided the optimal gradient ion chromatographic analysis. 2010 * 292(<-641): Nonlinear identification of aircraft gas-turbine dynamics Identification results for the shaft-speed dynamics of an aircraft gas turbine, under normal operation, are presented. As it has been found that the dynamics vary with the operating point, nonlinear models are employed. Two different approaches are considered: NARX models, and neural network models, namely multilayer perceptrons, radial basis function networks and B-spline networks. Special attention is given to genetic programming, in a multiobjective fashion, to determine the structure of NARMAX and B-spline models. (C) 2003 Elsevier B.V. All rights reserved. 2003 * 293(<-170): MULTI-OBJECTIVE OPTIMIZATION BY MEANS OF MULTI-DIMENSIONAL MLP NEURAL NETWORKS In this paper, a multi-layer perceptron (MLP) neural network (NN) is put forward as an efficient tool for performing two tasks: 1) optimization of multi-objective problems and 2) solving a non-linear system of equations. In both cases, mathematical functions which are continuous and partially bounded are involved. Previously, these two tasks were performed by recurrent neural networks and also by powerful algorithms such as evolutionary ones. In this study, as an innovative method, a multi-dimensional structure in the output layer of the MLP-NN is utilized to implicitly optimize the multivariate functions under the network's energy optimization mechanism. To this end, the activation functions in the output layer are replaced with the multivariate functions intended to be optimized. The effective training parameters in the global search are surveyed. Also, it is demonstrated that the MLP-NN with a proper dynamic learning rate is able to find globally optimal solutions. Finally, the efficiency of the MLP-NN in both aspects of speed and power is investigated through some well-known experimental examples. In some of these examples, the proposed method gives explicitly better globally optimal solutions compared to those of the other references, and it shows completely satisfactory results in the other experiments. 2014 * 294(<-340): Nonlinear Modeling Method Applied to Prediction of Hot Metal Silicon in the Ironmaking Blast Furnace Feedforward neural networks have been established as versatile tools for nonlinear black-box modeling, but in many data-mining tasks the choice of relevant inputs and of network complexity still constitutes a major challenge. Statistical tests for detecting relations between inputs and outputs proposed in the literature are largely based on the theory of linear systems, and laborious retraining combined with the risk of getting stuck in local minima makes exhaustive search through all possible network configurations impossible for all but toy problems.
This paper proposes a systematic method to tackle the problem where an output is to be estimated on the basis of a (large) set of potential inputs. Feedforward neural networks of multilayer perceptron type are used in the three-stage approach: first, starting from sufficiently large networks, an efficient pruning method is applied to detect potential model candidates. Next, the best results of the pruning runs are extracted by forming a Pareto frontier, with the contradictory objectives of minimizing network complexity and estimation error. The networks on this frontier are considered to contain promising hidden nodes with their specific connections to relevant input variables. These hidden nodes are therefore optimally combined by mixed-integer linear programming to form a final set of neural network models, from which the user can select a model of suitable complexity. The modeling method is applied to an illustrative test example as well as to a complex modeling problem in the metallurgical industry, i.e., prediction of the silicon content of hot metal produced in a blast furnace. It is demonstrated to find relevant inputs and to yield parsimonious, sparsely connected neural models of the output. 2011 * 295(<-659): Improving neural networks generalization with new constructive and pruning methods This paper presents a new constructive method and pruning approaches to control the design of Multi-Layer Perceptrons (MLP) without loss in performance. The proposed methods use a multi-objective approach to guarantee generalization. The constructive approach searches for an optimal solution according to the Pareto set shape with an increasing number of hidden nodes. The pruning methods are able to simplify the network topology and to identify linear connections between the inputs and outputs of the neural model. Topology information and validation sets are used. 2002 * 296(<- 58): A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns The aim of this study was to present a new training algorithm for artificial neural networks, called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO), applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics of the three components of the ground reaction force, which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than the MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders. 2015
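Entries 294 and 295 both reduce candidate networks to points in a (complexity, error) plane and keep only the Pareto frontier. A generic non-dominated filter over hypothetical pruning-run results:

#+BEGIN_SRC python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when every column is minimised."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# (number of weights, validation error) from five hypothetical pruning runs
runs = [(40, 0.08), (12, 0.15), (25, 0.09), (12, 0.11), (60, 0.08)]
print(pareto_front(runs))   # -> [0, 2, 3], the trade-off frontier
#+END_SRC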
* 297(<-136): Improving the Accuracy of Urban Land Cover Classification Using Radarsat-2 PolSAR Data Land cover classification is one of the most important applications of polarimetric SAR images, especially in urban areas. There are numerous features that can be extracted from these images; hence, feature selection plays an important role in PolSAR image classification. In this study, three main steps are used to address this task: 1) feature extraction in the form of three categories, namely original data features, decomposition features, and SAR discriminators; 2) feature selection in the framework of single- and multi-objective optimization; and 3) image classification using the best subset of features. In the single-objective methods, we employ genetic algorithms (GAs) and support vector machines (SVMs) or a multi-layer perceptron (MLP) neural network in order to maximize classification accuracy. Then a new method is proposed to perform an efficient land cover classification of the San Francisco Bay urban area based on the multi-objective optimization approach. The objectives are to minimize the classification error and the number of selected PolSAR parameters. The experimental results on Radarsat-2 fine-quad data show that the proposed method outperforms the single-objective approaches tested against it, while saving computational complexity. Finally, we show that our method performs better than the SVM with the full set of features and the Wishart classifier, which is based on the covariance matrix. 2014 * 298(<-189): A genetic algorithm-based multi-objective optimization of an artificial neural network classifier for breast cancer diagnosis The conventional technique for diagnosing breast cancer relies on human experience to identify the presence of certain patterns in the database. It is time-consuming and places an unnecessary burden on radiologists. This work proposes a genetic algorithm-based multi-objective optimization of an Artificial Neural Network classifier, namely GA-MOO-NN, for automatic breast cancer diagnosis. It performs a simultaneous search for the significant feature subsets and the optimum architecture of the network. The combination of the ANN's parameters with feature selection, jointly optimized by the Genetic Algorithm, is novel. Pareto-optimality with a new ranking approach is applied for simultaneous minimization of two competing objectives: the number of network connections and the squared error percentage on the validation data. Results show that the algorithm with the proposed combination of objectives achieved best and average classification accuracies of 98.85% and 98.10%, respectively, on the breast cancer dataset, which outperforms most systems reported in other works in the literature. 2013
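Entry 299 below trains each classifier in the ensemble against a different objective, one of which is the area under the ROC curve. AUC can be computed without tracing the curve at all, via its equivalence to the Mann-Whitney U statistic (score ties are ignored in this sketch):

#+BEGIN_SRC python
import numpy as np

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) formulation."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

print(auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))   # 0.75
#+END_SRC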
* 299(<-280): AMS 4.0: consensus prediction of post-translational modifications in protein sequences We present here the 2011 update of the AutoMotif Service (AMS 4.0), which predicts a wide selection of 88 different types of single amino acid post-translational modifications (PTM) in protein sequences. The selection of experimentally confirmed modifications is acquired from the latest UniProt and Phospho.ELM databases for training. The sequence vicinity of each modified residue is represented using amino acid physico-chemical features encoded using high quality indices (HQI) obtained by automatic clustering of known indices extracted from the AAindex database. For each type of numerical representation, the method builds an ensemble of Multi-Layer Perceptron (MLP) pattern classifiers, each optimising a different objective during training (for example the recall, precision or area under the ROC curve (AUC)). The consensus is built using brainstorming technology, which combines multi-objective instances of the machine learning algorithm with data fusion of the different training object representations, in order to boost the overall prediction accuracy of conserved short sequence motifs. The performance of AMS 4.0 is compared with the accuracy of previous versions, which were constructed using single machine learning methods (artificial neural networks, support vector machine). Our software improves the average AUC score of the earlier version by close to 7% as calculated on the test datasets of all 88 PTM types. Moreover, for the most difficult sequence motif types it is able to improve the prediction performance by almost 32% when compared with the previously used single machine learning methods. Summarising, the brainstorming consensus meta-learning methodology boosts the AUC score to around 89% on average over all 88 PTM types. Detailed results for the single machine learning methods and the consensus methodology are also provided, together with a comparison to previously published methods and state-of-the-art software tools. The source code and precompiled binaries of the brainstorming tool are available at http://code.google.com/p/automotifserver/ under Apache 2.0 licensing. 2012 * 300(<-534): Functional-link net with fuzzy integral for bankruptcy prediction The classification ability of a single-layer perceptron can be improved by considering some enhanced features; in particular, this form of neural network is called a functional-link net. In the output neuron's activation function, such as the sigmoid function, an inner product of a connection weight vector with an input vector is computed. However, since the input features are not independent of each other for the enhanced pattern, an assumption of additivity is not reasonable. This paper employs a non-additive technique, namely the fuzzy integral, to aggregate performance values for an input pattern by interpreting each of the connection weights as a fuzzy measure of the corresponding feature. A learning algorithm based on the genetic algorithm is then designed to automatically find the connection weights. Sample data for bankruptcy analysis obtained from Moody's Industrial Manuals are considered to examine the classification ability of the proposed method. The results demonstrate that the proposed method performs well in comparison with the traditional functional-link net and multivariate techniques. (c) 2006 Elsevier B.V. All rights reserved. 2007 * 301(<-536): Learning multicriteria fuzzy classification method PROAFTN from data In this paper, we present a new methodology for learning the parameters of the multiple criteria classification method PROAFTN from data. There are numerous representations and techniques available for data mining, for example decision trees, rule bases, artificial neural networks, density estimation, regression and clustering. The PROAFTN method constitutes another approach to data mining. It belongs to the class of supervised learning algorithms and assigns a membership degree of the alternatives to the classes. The PROAFTN method requires the elicitation of its parameters for the purpose of classification. Therefore, we need an automatic method that helps us establish these parameters from the given data with minimum classification errors. Here, we propose a variable neighborhood search metaheuristic for obtaining these parameters.
The performance of the newly proposed method was evaluated using the 10-fold cross-validation technique. The results are compared with those obtained by other classification methods previously reported on the same data. It appears that solutions of substantially better quality are obtained with the proposed method than with the former ones. Crown Copyright (c) 2005 Published by Elsevier Ltd. All rights reserved. 2007 * 302(<-546): Fuzzy integral-based perceptron for two-class pattern classification problems The single-layer perceptron with a single output node is a well-known neural network for two-class classification problems. Furthermore, the sigmoid or logistic function is usually used as the activation function in the output neuron. A critical step is to compute the sum of the products of the connection weights with the corresponding inputs, which indicates the assumption of additivity among individual variables. Unfortunately, because the input variables are not always independent of each other, an assumption of additivity may not be reasonable. In this paper, the inner product is replaced with an aggregation value obtained by a fuzzy integral, by viewing each of the connection weights as a value of a lambda-fuzzy measure for the corresponding variable. A genetic algorithm is then employed to obtain the connection weights by maximizing the number of correctly classified training patterns and minimizing the errors between the actual and desired outputs of individual training patterns. The experimental results further demonstrate that the proposed method outperforms the traditional single-layer perceptron and performs well in comparison with other fuzzy or non-fuzzy classification methods. (c) 2006 Elsevier Inc. All rights reserved. 2007 * 303(<-590): Training of multilayer perceptron neural networks by using cellular genetic algorithms This paper deals with a method for training neural networks by using cellular genetic algorithms (CGA). The method was implemented as software, CGANN-Trainer, which was used to generate binary classifiers for the recognition of patterns associated with breast cancer images in a multi-objective optimization problem. The results reached by the CGA on the Wisconsin Breast Cancer Database and the Wisconsin Diagnostic Breast Cancer Database were compared with some other methods previously reported on the same databases, proving it to be an interesting alternative. 2006 * 304(<-632): Multicriteria fuzzy classification procedure PROCFTN: methodology and medical application In this paper, we introduce a new classification procedure for assigning objects to predefined classes, named PROCFTN. This procedure is based on a fuzzy scoring function for choosing a subset of prototypes that bear the closest resemblance to an object to be assigned. It then applies the majority-voting rule to assign an object to a class. We also present a medical application of this procedure as an aid to the diagnosis of central nervous system tumours. The results are compared with those obtained by other classification methods reported on the same data set, including decision tree, production rules, neural network, k-nearest neighbor, multilayer perceptron and logistic regression. Our results are very encouraging and show that the multicriteria decision analysis approach can be successfully used to help medical diagnosis. Crown Copyright (C) 2003 Published by Elsevier B.V. All rights reserved. 2004
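Entries 300 and 302 above replace the perceptron's inner product with a fuzzy integral over a lambda-fuzzy measure. A sketch of that aggregation step, solving for lambda by bisection and using the Choquet integral as the concrete integral form (one standard choice; the connection weights g play the role of the densities):

#+BEGIN_SRC python
import numpy as np

def lambda_measure(g):
    """Solve prod(1 + lam*g_i) = 1 + lam for the non-trivial root lam."""
    g = np.asarray(g, dtype=float)
    f = lambda lam: np.prod(1 + lam * g) - 1 - lam
    if abs(g.sum() - 1) < 1e-12:
        return 0.0                          # densities already additive
    if g.sum() < 1:                         # root lies in (0, inf)
        lo, hi = 1e-9, 1.0
        while f(hi) < 0:
            hi *= 2
    else:                                   # root lies in (-1, 0)
        lo, hi = -1 + 1e-9, -1e-9
    for _ in range(200):                    # plain bisection
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def choquet(x, g):
    """Choquet integral of inputs x w.r.t. the lambda-measure densities g."""
    lam = lambda_measure(g)
    G_prev, total = 0.0, 0.0
    for i in np.argsort(x)[::-1]:           # visit inputs in descending order
        G = G_prev + g[i] + lam * G_prev * g[i]   # grow the coalition measure
        total += x[i] * (G - G_prev)
        G_prev = G
    return total

# perceptron-style output: sigmoid of the fuzzy-integral aggregation
x, g = np.array([0.7, 0.2, 0.9]), [0.3, 0.4, 0.5]
print(1 / (1 + np.exp(-choquet(x, g))))
#+END_SRC

In the papers, a genetic algorithm searches for the densities g; here they are fixed by hand.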
* 305(<-351): Donor-Recipient Matching Based on a Rule-System Built on a Multiobjective Artificial Neural Network 2011 * 306(<-361): Donor-Recipient Matching in Liver Transplantation Based on a Rule-System Built on a Multiobjective Artificial Neural Network 2011 * 307(<-417): Multiobjective scheduling for semiconductor manufacturing plants Scheduling of a semiconductor wafer manufacturing system is identified as a complex problem involving multiple and conflicting objectives (maximization of facility average utilization and minimization of waiting time and storage, for instance) to be satisfied simultaneously. In this study, we propose an efficient approach based on an artificial neural network technique embedded into a multiobjective genetic algorithm for multi-decision scheduling problems in a semiconductor wafer fabrication environment. (C) 2010 Elsevier Ltd. All rights reserved. 2010 * 308(<-447): A Neural Network and a Genetic Algorithm for Multiobjective Scheduling of Semiconductor Manufacturing Plants Scheduling of a semiconductor wafer fabrication system is identified as a complex problem involving multiple objectives to be satisfied simultaneously (maximization of workstation utilization and minimization of waiting time and storage, for instance). In this study, we propose a methodology based on an artificial neural network technique, for computing the various objective functions, embedded into a multiobjective genetic algorithm for multidecision scheduling problems in a semiconductor wafer fabrication environment. A discrete event simulator, developed and validated in our previous works, serves here to feed the neural network. Six criteria related to both equipment (facility average utilization) and products (average cycle time (ACT), standard deviation of ACT, average waiting time, work in process, and total storage) are chosen as significant performance indexes of the workshop. The optimization variables are the time between campaigns and the release time of batches into the plant. An industrial-size example is taken as a test bench to validate the approach. 2009 * 309(<-679): Genetic neuro-scheduler: A new approach for job shop scheduling In this paper, a hybrid approach between two new techniques, genetic algorithms and artificial neural networks, is described for generating job shop schedules in a discrete manufacturing environment based on a nonlinear multiobjective function. A genetic algorithm (GA) is used as an effective search technique for finding an optimal schedule via a population of gene strings which represent alternative feasible schedules. The GA propagates new populations of genes through a number of cycles, called generations, by implementing the natural genetic mechanism. Specifically, gene strings should have a structure that imposes the most common restrictive constraint: a precedence constraint. The other technique is an artificial neural network that performs multiobjective schedule evaluation. The intention is to establish an effective model that maps a complex set of scheduling criteria (e.g. flowtime, lateness) to appropriate values provided by experienced schedulers. The proposed approach is prototyped and tested on four different job shop scheduling problems based on problem size, namely small, medium, large, and a sample problem provided by a company. The comparative results indicate that the proposed approach is consistently better than the heuristic algorithms used extensively in industry. 1995
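A toy version of the genetic scheduler of entries 309/310, assuming a single machine with made-up processing times and due dates; the trained neural evaluator is stood in for by a fixed linear weighting of the two criteria:

#+BEGIN_SRC python
import numpy as np

rng = np.random.default_rng(1)
proc = np.array([4, 2, 7, 3, 5])     # hypothetical processing times
due = np.array([6, 4, 18, 9, 12])    # hypothetical due dates

def criteria(perm):
    """Scheduling criteria for one gene string: total flowtime and lateness."""
    completion = np.cumsum(proc[perm])
    return completion.sum(), np.maximum(completion - due[perm], 0).sum()

def evaluate(perm, w=(0.5, 0.5)):
    # stand-in for the trained ANN evaluator: weighted sum of the criteria
    f, l = criteria(perm)
    return w[0] * f + w[1] * l

pop = [rng.permutation(5) for _ in range(30)]
for _ in range(50):
    pop.sort(key=evaluate)
    survivors = pop[:10]
    children = []
    for p in survivors:
        c = p.copy()
        i, j = rng.choice(5, size=2, replace=False)
        c[i], c[j] = c[j], c[i]      # swap mutation keeps a valid permutation
        children.append(c)
    pop = survivors + children + [rng.permutation(5) for _ in range(10)]

print(pop[0], criteria(pop[0]))      # best sequence and its raw criteria
#+END_SRC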
* 310(<-686): GENETIC NEURO-SCHEDULER FOR JOB-SHOP SCHEDULING This paper describes a hybrid approach between two new techniques, Genetic Algorithms and Artificial Neural Networks, for generating Job Shop Schedules (JSS) in a discrete manufacturing environment based on a non-linear multi-criteria objective function. A Genetic Algorithm (GA) is used as a search technique for an optimal schedule via a uniformly randomly generated population of gene strings which represent alternative feasible schedules. The GA propagates this gene population through a number of cycles, or generations, by implementing the natural genetic mechanism (i.e. reproduction and crossover operators). It is important to design an appropriate format of genes for JSS problems. Specifically, gene strings should have a structure that imposes the most common restrictive constraint: a precedence constraint. The other technique is an Artificial Neural Network, which uses its highly connected neuron network to act as a multi-criteria evaluator. The basic idea is a neural network evaluator which maps a complex set of scheduling criteria (e.g. flowtime, lateness) to evaluation values provided by experienced experts. Once the network is fully trained, it is used as an evaluator to assess the fitness, or performance, of the simulated gene strings. The proposed approach was prototyped and implemented on JSS problems of different model sizes, namely small, medium, and large. The results are compared to the Shortest Processing Time heuristic used extensively in industry. 1993 * 311(<-617): Process optimisation of transfer moulding for electronic packages using artificial neural networks and multiobjective optimisation techniques Transfer moulding is the most common process for the encapsulation of electronic packages in semiconductor manufacturing. The quality of the moulding is affected by a large number of mould design parameters and process parameters. Currently, parameter setting is performed by experienced engineers in a trial-and-error manner, and often the optimal setting cannot be obtained. In the face of global competition, the current practice is inadequate. In this research, a process optimisation system for transfer moulding of electronic packages is described which involves design of experiments (DOE) techniques, artificial neural networks (ANNs), multiple regression analysis and the minimax method. The system is aimed at determining the optimal mould design parameters and process parameter settings of transfer moulding of electronic packages for a multiobjective problem. Implementation of the optimisation system has demonstrated that the time for determining the optimal mould design parameters and process parameter settings can be greatly reduced, and that the parameter settings recommended by the system can contribute to the good quality of moulded packages without relying on experienced engineers. 2004 * 312(<-624): Intelligent process design system for the transfer moulding of electronic packages Currently, mould design and the setting of the process parameters of transfer moulding for electronic packages are done manually in a trial-and-error manner. The effectiveness of the parameter settings is largely dependent on the experience of engineers.
The paper describes an intelligent process design system for transfer moulding of electronic packages that is used to determine optimal mould design parameters and process parameter settings, based mainly on case-based reasoning, artificial neural networks and a multiobjective optimization scheme. The system consists of two modules: a case-based reasoning module and a process optimization module. The former is used to determine initial mould design parameters and process parameter settings, while the latter is used to determine the optimal ones. Implementation of the intelligent system has demonstrated that the time for determining the optimal mould design parameters and process parameter settings can be greatly reduced, and that the parameter settings recommended by the system can contribute to the good quality of moulded packages. 2004 * 313(<-468): Signal denoising in engineering problems through the minimum gradient method This paper applies the minimum gradient method (MGM) to denoise signals in engineering problems. The MGM is a novel technique based on complexity control, which defines learning as a bi-objective problem in such a way as to find the best trade-off between the empirical risk and the machine complexity. A neural network trained with this method can be used to pre-process data with the aim of increasing the signal-to-noise ratio (SNR). After training, the neural network behaves as an adaptive filter which minimizes the cross-validation error. By applying the generalized singular value decomposition (GSVD), we show the relation between the proposed approach and the Wiener filter. Some results are presented, including a toy example and two complex engineering problems, which prove the effectiveness of the proposed approach. (C) 2009 Elsevier B.V. All rights reserved. 2009 * 314(<-480): Noise Reduction in a Non-Homogenous Ground Penetrating Radar Problem by Multiobjective Neural Networks This paper applies artificial neural networks (ANNs) trained with a multiobjective algorithm to preprocess ground penetrating radar data obtained from a finite-difference time-domain (FDTD) model. This preprocessing aims at improving the signal-to-noise ratio (SNR) of the target's reflected wave. Once trained, the NN behaves as an adaptive filter which minimizes the cross-validation error. Results considering both white and colored Gaussian noise, with many different SNRs, are presented, and they show the effectiveness of the proposed approach. 2009 * 315(<-685): ARTIFICIAL NEURAL NETWORKS VERSUS NATURAL NEURAL NETWORKS - A CONNECTIONIST PARADIGM FOR PREFERENCE ASSESSMENT Preference is an essential ingredient in all decision processes. This paper presents a new connectionist paradigm for preference assessment in a general multicriteria decision setting. A general structure of an artificial neural network for representing two specified prototypes of preference structures is discussed. An interactive preference assessment procedure and an autonomous learning algorithm based on a novel scheme of supervised learning are proposed. Operating characteristics of the proposed paradigm are also illustrated through detailed results of numerical simulations. 1994
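Entries 313 and 314 above frame denoising as a trade-off between empirical risk and model complexity. The same trade-off can be illustrated with ridge regression on a noisy signal, where sweeping the regularisation weight traces an error/complexity curve of the kind those papers navigate (a stand-in illustration, not the MGM itself):

#+BEGIN_SRC python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)

# polynomial basis; the weight norm plays the role of machine complexity
Phi = np.vander(t, 15, increasing=True)
for lam in (1e-6, 1e-3, 1e-1):       # sweep traces the risk/complexity trade-off
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(15), Phi.T @ noisy)
    fit = Phi @ w
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((fit - clean) ** 2))
    print(f"lam={lam:g}  ||w||={np.linalg.norm(w):7.2f}  SNR={snr:5.1f} dB")
#+END_SRC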
* 316(<- 52): Multiple Actor-Critic Structures for Continuous-Time Optimal Control Using Input-Output Data In industrial process control, there may be multiple performance objectives, depending on salient features of the input-output data. Aiming at this situation, this paper proposes multiple actor-critic structures to obtain the optimal control via input-output data for unknown nonlinear systems. The shunting inhibitory artificial neural network (SIANN) is used to classify the input-output data into one of several categories. Different performance measure functions may be defined for disparate categories. The approximate dynamic programming algorithm, which contains a model module, a critic network, and an action network, is used to establish the optimal control in each category. A recurrent neural network (RNN) model is used to reconstruct the unknown system dynamics using input-output data. NNs are used to approximate the critic and action networks, respectively. It is proven that the model error and the closed-loop unknown system are uniformly ultimately bounded. Simulation results demonstrate the performance of the proposed optimal control scheme for the unknown nonlinear system. 2015 * 317(<-250): Evaluation of environmental impacts using backpropagation neural network Purpose - The purpose of the present study is to find a scientific method for the evaluation of environmental impacts according to requirement 4.3.1. Design/methodology/approach - To realize the objectives, the authors worked with a representative sample of certified ISO 14001 organizations. The data aim to identify and evaluate (according to the organization's methodology) significant environmental impacts. In this study, the authors created two models for the evaluation of environmental impacts based on an artificial neural network applied in the pilot organization, and compared the results obtained from these models with those obtained by applying an analytic hierarchy process (AHP) method. AHP is a multi-criteria decision-making method and provides good multi-criteria support for decision making for problems that can be structured hierarchically. Findings - This paper presents a new approach that uses a backpropagation neural network to evaluate environmental impacts regardless of the organization type. Originality/value - This paper presents a unique approach for the reliable and objective evaluation of environmental impacts. 2013
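Entry 317 benchmarks its neural models against the analytic hierarchy process. The AHP priority vector is the principal eigenvector of a pairwise comparison matrix; a sketch with a hypothetical three-criteria matrix on the Saaty scale:

#+BEGIN_SRC python
import numpy as np

# hypothetical pairwise comparisons of three environmental criteria
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = vecs[:, k].real
w = w / w.sum()                                  # normalised priority vector
ci = (vals[k].real - len(A)) / (len(A) - 1)      # consistency index
print(np.round(w, 3), round(float(ci), 4))
#+END_SRC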
* 318(<-386): Artificial neural network-based resistance spot welding quality assessment system On-line quality assessment has become one of the most critical requirements for improving the efficiency and the autonomy of automatic resistance spot welding (RSW) processes. An accurate and efficient model to perform non-destructive quality estimation is an essential part of the assessment process. This paper presents a structured and systematic approach developed to design an effective ANN-based model for on-line quality assessment in RSW. The proposed approach examines welding parameters and conditions known to have an influence on weld quality, and builds a quality estimation model step by step. The modeling procedure begins by examining, through a structured experimental design, the effect of the welding parameters (welding time, welding current, electrode force and sheet metal thickness) and of the welding conditions represented by typical characteristics of the dynamic resistance curves on multiple welding quality indicators (indentation depth, nugget diameter and nugget penetration), and by analyzing their interactions and their sensitivity to the variation of the dynamic process conditions. Using these results, and by combining an efficient modeling planning method, the neural network paradigm, multi-criteria optimization and various statistical tools, the identification of the model form and of the variables to be included in the model is achieved by executing a systematic model optimization procedure. The results demonstrate that the proposed approach can lead to a general ANN-based model able to accurately and reliably provide an appropriate assessment of the weld quality under diverse and variable welding conditions. 2011 * 319(<-243): Memetic Pareto differential evolutionary neural network used to solve an unbalanced liver transplantation problem Donor-recipient matching constitutes a complex scenario that is difficult to model. The risk of subjectivity and the likelihood of falling into error must not be underestimated. Computational tools for the decision-making process in liver transplantation can be useful, despite the inherent complexity involved. Therefore, a multi-objective evolutionary algorithm and various techniques to select individuals from the Pareto front are used in this paper to obtain artificial neural network models to aid decision making. Moreover, a combination of two pre-processing methods has been applied to the dataset to offset the existing imbalance: one of them is a resampling method and the other is an outlier deletion method. The best model obtained with these procedures (with AUC = 0.66) gives medical experts a probability of graft survival at 3 months after the operation. This probability can help medical experts reach the best possible decision without forgetting the principles of fairness, efficiency and equity. 2013 * 320(<-409): Artificial neural networks for machining processes surface roughness modeling In recent years, several papers on machining processes have focused on the use of artificial neural networks for modeling surface roughness. Even in such a specific niche of the engineering literature, the papers differ considerably in terms of how they define network architectures and validate results, as well as in their training algorithms, error measures, and the like. Furthermore, a perusal of the individual papers leaves a researcher without a clear, sweeping view of what the field's cutting edge is. Hence, this work reviews a number of these papers, providing a summary and analysis of the findings. Based on recommendations made by scholars of neurocomputing and statistics, the review includes a set of comparison criteria and assesses how the research findings were validated. This work also identifies trends in the literature and highlights their main differences. Ultimately, it points to underexplored issues for future research and shows ways to improve how the results are validated. 2010
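Entry 319 above offsets class imbalance with a resampling step before training, without spelling out the sampler; random oversampling of the minority class is one simple form it could take:

#+BEGIN_SRC python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples until all classes are balanced."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    X_out, y_out = [X], [y]
    for c, n in zip(classes, counts):
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=n_max - n, replace=True)
        X_out.append(X[extra])
        y_out.append(y[extra])
    return np.vstack(X_out), np.concatenate(y_out)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (rng.random(100) < 0.1).astype(int)          # roughly 10% minority class
Xb, yb = random_oversample(X, y)
print(np.bincount(y), "->", np.bincount(yb))
#+END_SRC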
* 321(<-610): Evolutionary multi-objective optimization for simultaneous generation of signal-type and symbol-type representations It has been a controversial issue in the research of cognitive science and artificial intelligence whether signal-type representations (typically connectionist networks) or symbol-type representations (e.g., semantic networks, production systems) should be used. Meanwhile, it has also been recognized that both types of information representation might exist in the human brain. In addition, symbol-type representations are often very helpful in gaining insights into unknown systems. For these reasons, comprehensible symbolic rules need to be extracted from trained neural networks. In this paper, an evolutionary multi-objective algorithm is employed to generate multiple models that facilitate the generation of signal-type and symbol-type representations simultaneously. It is argued that one main difference between signal-type and symbol-type representations lies in the fact that signal-type representations are models of a higher complexity (fine representation), whereas symbol-type representations are models of a lower complexity (coarse representation). Thus, by generating models with a spectrum of model complexity, we are able to obtain a population of models of both signal-type and symbol-type quality, although certain post-processing is needed to obtain a fully symbol-type representation. An illustrative example is given on generating neural networks for the breast cancer diagnosis benchmark problem. 2005 * 322(<-315): A dynamic feedback model for partner selection in agile supply chains Purpose - The purpose of this paper is to present a four-phase dynamic feedback model for supply partner selection in agile supply chains (ASCs). ASCs are commonly used as a response to increasingly dynamic markets. However, partner selection in ASCs is inherently more complex and difficult under conditions of uncertainty and ambiguity as supply chains form and re-form. Design/methodology/approach - The model draws on both quantitative and qualitative techniques, including the Dempster-Shafer and optimisation theories, radial basis function artificial neural networks (RBF-ANN), analytic network process-mixed integer multi-objective programming (ANP-MIMOP), Kraljic's supplier classification matrix and principles of continuous improvement. It incorporates modern computer programming techniques to overcome the information processing difficulties inherent in selecting from among large numbers of potential suppliers against multiple criteria in conditions of uncertainty. Findings - The model enables decision makers to make efficient and effective use of the vastly increased amount of data that is available in today's information-driven society, and it offers a comprehensive, systematic and rigorous approach to a complex problem. Research limitations/implications - The model has two main drawbacks. First, practitioners may find it difficult to match supplier evaluation criteria with the strategic objectives for an ASC. Second, they may perceive the model to be too complex for use when speed is of the essence. Originality/value - The main contribution of this paper is that, for the first time, it draws together work from previous articles that have described each of the four stages of the model in detail to present a comprehensive overview of the model. 2012 * 323(<-342): A multiple kernel learning-based decision support model for contractor pre-qualification Due to the complex nature of contractor pre-qualification, involving subjectivity, non-linearity and multiple criteria, an advanced model is required to achieve high accuracy in this decision-making process. Previous studies have been conducted to build quantitative decision models for contractor pre-qualification; among them, artificial neural networks (ANN) and support vector machines (SVM) have proved desirable in solving the pre-qualification problem with regard to their higher accuracy and efficiency in solving non-linear classification problems. Based on the algorithm of SVM, the multiple kernel learning (MKL) method was developed, and it has been proved to perform better than SVM in other areas.
Hence, MKL is proposed in this research, and its capability was compared with that of SVM through a case study. From the results, it was shown that both SVM and MKL perform well in classification, and that MKL is preferable to SVM given a proper parameter setting. Therefore, MKL can enhance the decision making of contractor pre-qualification. (C) 2010 Elsevier B.V. All rights reserved. 2011 * 324(<-377): A multi-objective artificial immune algorithm for parameter optimization in support vector machine The support vector machine (SVM) is a classification method based on the structural risk minimization principle. The penalty parameter C and the kernel parameter sigma of an SVM must be carefully selected to establish an efficient SVM model. These parameters are usually selected by trial and error or from the user's experience. An artificial immune system (AIS) can be defined as a soft computing method inspired by the theoretical immune system for solving science and engineering problems. In this paper, a multi-objective artificial immune algorithm is used to optimize the kernel and penalty parameters of the SVM. In the training stage of the SVM, multiple solutions are found by using the multi-objective artificial immune algorithm, and these parameters are then evaluated in the test stage. The proposed algorithm is applied to the fault diagnosis of induction motors and to anomaly detection problems, and successful results are obtained. (c) 2009 Elsevier B.V. All rights reserved. 2011 * 325(<-529): Springback Compensation of Sheet Metal Bending Process Based on DOE & ANN Nowadays, the trend towards lightweight design is accelerating the use of advanced high strength steel (AHSS) in the automotive industry. The springback phenomenon is a hot issue in sheet metal forming, especially in bending processes using AHSS. Several analytical methods for springback compensation have been proposed in recent years, each with its own advantages and disadvantages, and only a few solutions can minimize the two objectives simultaneously. In this study, an effective method for optimizing the multiple objectives, based on design of experiments (DOE) and an artificial neural network (ANN), is presented to compensate for the springback of bending parts. The method was applied to L- and V-bending processes and could be optimized for multiple objectives. It was confirmed that the proposed method is more efficient than the traditional manual FEA procedure and the trial-and-error approach to springback compensation. 2008 * 326(<-687): USING GENETIC ALGORITHMS FOR AN ARTIFICIAL NEURAL-NETWORK MODEL INVERSION Genetic algorithms (GAs) and artificial neural networks (ANNs) are techniques for optimization and learning, respectively, which have both been adopted from nature. Their main advantage over traditional techniques is their relatively better performance when applied to complex relations. GAs and ANNs are both self-learning systems, i.e., they do not require any background knowledge from the creator. In this paper, we describe the performance of a GA that finds hypothetical physical structures of poly(ethylene terephthalate) (PET) yarns corresponding to a certain combination of mechanical and shrinkage properties. This GA uses a validated ANN that has been trained for the complex relation between the structure and properties of PET. The technique was tested by comparing the optimal points found by the GA with known experimental data under a variety of multi-criteria conditions. 1993
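Entry 324 tunes the SVM penalty C and kernel width sigma (scikit-learn's gamma corresponds to 1/(2*sigma^2)), while entry 286 treats the support-vector count as a capacity objective. A grid-search sketch that keeps the non-dominated (error, #SV) pairs, using a plain grid in place of the immune algorithm:

#+BEGIN_SRC python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

cands = []
for C in (0.1, 1, 10, 100):
    for gamma in (0.01, 0.1, 1):
        clf = SVC(C=C, gamma=gamma).fit(Xtr, ytr)
        err = 1 - clf.score(Xte, yte)            # objective 1: test error
        nsv = int(clf.n_support_.sum())          # objective 2: capacity proxy
        cands.append(((err, nsv), (C, gamma)))

# keep the non-dominated (error, #SV) pairs
front = [c for c in cands
         if not any(o[0] <= c[0][0] and o[1] <= c[0][1] and o != c[0]
                    for o, _ in cands)]
for (err, nsv), (C, gamma) in front:
    print(f"C={C}, gamma={gamma}: error={err:.3f}, #SV={nsv}")
#+END_SRC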
* 327(<-323): Modelling, optimization and decision making techniques in designing of functional clothing Functional clothing is actually engineered textiles, as it is required to meet stringent performance characteristics rather than aesthetic properties. Therefore, the trial-and-error approach to product design does not seem to be viable for functional clothing. It needs more potent approaches of modelling, optimization and decision making so that the design and functional requirements of clothing can be met within acceptable tolerance. This paper provides a brief outline of various techniques of modelling, optimization and decision making intended for the design of functional clothing. In the modelling part, regression and artificial neural network approaches are discussed with examples of thermal property and water repellency modelling. Subsequently, linear programming and genetic algorithm techniques are invoked in the optimization part. Optimization of ultraviolet radiation protective clothing is taken up as a case study. Finally, multi-criteria decision-making techniques are explained with the hypothetical example of selecting the best body armour vest for defence applications. 2011 * 328(<-489): On the modeling of car passenger ferryship design parameters with respect to selected sea-keeping qualities and additional resistance in waves This paper presents the modeling of car passenger ferryship design parameters with respect to design criteria such as selected sea-keeping qualities and additional resistance in waves. In the first part of the investigation, approximations of selected statistical parameters of the ferryship design criteria were elaborated with respect to ship design parameters. The approximation functions were obtained with the use of artificial neural networks. In the second part of the investigation, design solutions were sought by applying single- and multi-criteria optimization methods. The multi-criteria optimization was performed using the Pareto method. This approach made it possible to present solutions in such a form as to allow decision makers (shipowner, designer) to select the most favourable solutions in each individual case. 2009 * 329(<-271): Convergence analysis of sliding mode trajectories in multi-objective neural networks learning The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that are able to trade off the two main objective functions involved in neural network supervised learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of convergence conditions are therefore presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories and the states along a trajectory can be assessed individually against an additional objective function. (c) 2012 Elsevier Ltd. All rights reserved.
* 330(<-281): Learning and training techniques in fuzzy control for energy efficiency in buildings A novel procedure for learning Fuzzy Controllers (FC) is proposed that addresses energy efficiency in distributing electrical energy to heaters in an electrical heating system. Energy rationalization together with temperature control can significantly improve energy efficiency by efficiently controlling electrical heating systems and electrical energy consumption. The novel procedure, which improves the training process, is designed to train the FC, as well as to run the control algorithm and to carry out energy distribution. Firstly, the dynamic thermal performance of different variables is mathematically modelled for each specific building type and climate zone. Secondly, an exploratory projection pursuit method is used to extract the relevant features. Finally, a supervised dynamic neural network model and identification techniques are applied to FC learning and training. The FC rule-set and parameter-set learning process is a multi-objective problem that minimizes both the indoor temperature error and the energy deficit in the house. The reliability of the proposed procedure is validated for a city in a winter climate zone in Spain. 2012 * 331(<-359): A New and Efficient Intelligent Collaboration Scheme for Fashion Design Technology-mediated collaboration processes have been extensively studied for over a decade. Most applications with collaboration concepts reported in the literature focus on enhancing the efficiency and effectiveness of decision-making processes in objective and well-structured workflows. However, relatively few previous studies have investigated the application of collaboration schemes to problems of a subjective and unstructured nature. In this paper, we explore a new intelligent collaboration scheme for fashion design which, by nature, relies heavily on human judgment and creativity. Techniques such as multicriteria decision making, fuzzy logic, and artificial neural network (ANN) models are employed, and industrial data sets are used for the analysis. Our experimental results suggest that the proposed scheme exhibits significant improvement over the traditional method in terms of time-cost effectiveness, and a company interview with design professionals has confirmed its effectiveness and significance. 2011 * 332(<-597): Intelligent interactive multiobjective optimization method and its application to reliability optimization In most practical situations involving reliability optimization, there are several mutually conflicting goals such as maximizing the system reliability and minimizing the cost, weight and volume. This paper develops an effective multiobjective optimization method, the Intelligent Interactive Multiobjective Optimization Method (IIMOM). In IIMOM, the general concept of the model parameter vector is proposed. From a practical point of view, a designer's preference structure model is built using Artificial Neural Networks (ANNs), with the model parameter vector as the input and the preference information articulated by the designer over representative samples from the Pareto frontier as the desired output. Then, with the ANN model of the designer's preference structure as the objective, an optimization problem is solved to search for improved solutions and to guide the interactive optimization process intelligently.
IIMOM is applied to the reliability optimization problem of a multi-stage mixed system, with five different value functions simulating the designer in the solution evaluation process. The results illustrate that IIMOM is effective in capturing different kinds of designer preference structures, and that it provides a complete and effective solution for medium- and small-scale multiobjective optimization problems. 2005 * 333(<- 46): Implementation and testing of a soft computing based model predictive control on an industrial controller This work presents a real-time testing approach for an Intelligent Multiobjective Nonlinear-Model Predictive Control strategy (iMO-NMPC). The goal is to test and analyse the feasibility and reliability of soft computing (SC) techniques running on a real-time industrial controller. In this predictive control strategy, a multiobjective genetic algorithm is used together with a recurrent artificial neural network to obtain the control action at each sampling time. The entire development process, from numeric simulation of the control scheme to its implementation and testing on a PC-based industrial controller, is also presented, and the computational time requirements are discussed. The obtained results show that SC techniques can also be considered for tackling highly nonlinear and strongly coupled complex control problems in real time, thus optimising and enhancing the response of the control loop. This work is therefore a contribution to spreading SC techniques in on-line control applications, where they are currently relegated mainly to off-line use, as in the optimal tuning of control strategies. (C) 2014 Elsevier B.V. All rights reserved. 2015 * 334(<-139): Framework for Creating Digital Representations of Structural Components Using Computational Intelligence Techniques A framework for creating a digital representation of physical structural components is investigated. A model updating scheme is used with an artificial neural network to map updating parameters to the error observed between simulated experimental data and an analytical model of a turbine-engine fan blade. The simulated experimental airfoil has as-manufactured geometric deviations from the nominal, design-intent geometry on which the analytical model is based. The manufacturing geometric deviations are reduced through principal component analysis, where the scores of the principal components are the unknown updating parameters. A range of acceptable scores is used to devise a design of computer experiments that provides training and testing data for the neural network. The training data are composed of principal component scores as inputs; the outputs are the calculated errors between the analytical and experimental predictions of modal properties and frequency-response functions. Minimizing these errors results in an updated analytical model whose predictions are closer to the simulated experimental data. This minimization is carried out by two multiobjective evolutionary algorithms. The goal is to determine whether the updating process can identify the principal components used in simulating the experimental data. 2014
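The updating pipeline of the last entry (PC scores in, model-experiment errors out, then multiobjective minimization) can be sketched compactly. The version below trains a small neural surrogate on hypothetical score/error pairs and keeps the nondominated score vectors under two error objectives; the data, dimensions, and the random-search stand-in for the evolutionary algorithms are all assumptions made for brevity.

#+BEGIN_SRC python
# Sketch: surrogate-based model updating. PC scores -> (modal error, FRF error)
# via a neural network, then a random search keeps nondominated score vectors.
# All data are synthetic stand-ins; a real study would use FE-model evaluations.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
scores = rng.uniform(-1, 1, size=(200, 3))                # PC-score DOE samples
true_scores = np.array([0.3, -0.5, 0.1])                  # "as-manufactured" blade
modal_err = np.linalg.norm(scores - true_scores, axis=1)  # hypothetical error metrics
frf_err = np.abs(scores - true_scores).sum(axis=1)
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(scores, np.column_stack([modal_err, frf_err]))    # surrogate of both errors

candidates = rng.uniform(-1, 1, size=(500, 3))            # stand-in for the MOEA search
pred = net.predict(candidates)
keep = [i for i in range(len(pred))
        if not any(np.all(pred[j] <= pred[i]) and np.any(pred[j] < pred[i])
                   for j in range(len(pred)))]
print("nondominated score vectors (first 5):", candidates[keep][:5])
#+END_SRC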
* 335(<-154): Optimal Management of a Freshwater Lens in a Small Island Using Surrogate Models and Evolutionary Algorithms This paper examines a linked simulation-optimization procedure based on the combined application of an artificial neural network (ANN) and a genetic algorithm (GA), with the aim of developing an efficient model for the multiobjective management of groundwater lenses in small islands. The simulation-optimization methodology is applied to a real aquifer on Kish Island in the Persian Gulf to determine the optimal groundwater extraction while protecting the freshwater lens from seawater intrusion. The initial simulations are based on the application of SUTRA, a variable-density groundwater numerical model. The numerical model parameters are calibrated through automated parameter estimation. To make the optimization process computationally feasible, the numerical model is subsequently replaced by a trained ANN model as an approximate simulator. Even with a moderate number of input data sets based on the numerical simulations, the ANN metamodel can be efficiently trained. The ANN model is subsequently linked with the GA to identify the nondominated, or Pareto-optimal, solutions. To provide flexibility in the implementation of the management plan, the model is built upon optimizing extraction from a number of zones instead of point-well locations. Two issues of particular interest to the research reported in this paper are: (1) how the general idea of minimizing seawater intrusion can be effectively represented by objective functions within the framework of the simulation-optimization paradigm, and (2) the implications of applying the methodology to a real-world small-island groundwater lens. Four different models have been compared within the framework of multiobjective optimization: (1) minimization of the maximum salinity at observation wells, (2) minimization of the root mean square (RMS) change in concentrations over the planning period, (3) minimization of the arithmetic mean, and (4) minimization of the trimmed arithmetic mean of concentration in the observation wells. The latter model provides a more effective framework to incorporate the general objective of minimizing seawater intrusion. This paper shows that integration of the latest innovative tools can provide the ability to solve complex real-world optimization problems in an effective way. (C) 2014 American Society of Civil Engineers. 2014 * 336(<-644): Simulation metamodeling through artificial neural networks Simulation metamodeling has been a major research field during the last decade. The main objective has been to provide robust, fast decision support aids to enhance the overall effectiveness of decision-making processes. This paper discusses the importance of simulation metamodeling through artificial neural networks (ANNs), and provides general guidelines for the development of ANN-based simulation metamodels. Such guidelines were successfully applied in the development of two ANNs trained to estimate the manufacturing lead times (MLT) for orders simultaneously processed in a four-machine job shop. The design of intelligent systems such as ANNs may help to avoid some of the drawbacks of traditional computer simulation. Metamodels offer significant advantages regarding time consumption and simplicity in evaluating multi-criteria situations; their operation is remarkably fast compared to the time required to operate conventional simulation packages. (C) 2003 Elsevier Ltd. All rights reserved. 2003
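The central idea of the metamodeling entry above, replacing an expensive simulation with a trained ANN, is easy to state in code. The sketch below uses a cheap stand-in function for the simulator and sklearn's MLPRegressor; in a real job-shop study the training pairs would come from the simulation package, and the speed gap would be far larger than this toy suggests.

#+BEGIN_SRC python
# Sketch: ANN-based simulation metamodel. A slow simulator is sampled once to
# build training data; afterwards the network answers what-if queries instantly.
# `simulate_mlt` is a hypothetical stand-in for a discrete-event simulation.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_mlt(order_mix):
    time.sleep(0.01)  # pretend this is an expensive simulation run
    return 2.0 + order_mix @ np.array([1.5, 0.7, 2.2, 0.4])  # lead time

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(300, 4))        # order mixes, four-machine job shop
y = np.array([simulate_mlt(x) for x in X])  # one-off sampling of the simulator

meta = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
meta.fit(X, y)

query = rng.uniform(0, 1, size=(1000, 4))
t0 = time.perf_counter(); meta.predict(query); t_meta = time.perf_counter() - t0
print(f"1000 metamodel evaluations took {t_meta:.4f}s "
      f"vs ~{1000 * 0.01:.0f}s for the simulator")
#+END_SRC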
* 337(<-208): Evolutionary Surrogate Optimization of an Industrial Sintering Process Despite showing immense potential as an optimization technique for solving complex industrial problems, the use of evolutionary algorithms, especially genetic algorithms (GAs), is restricted to offline applications in many industrial cases due to their computationally expensive nature. This problem becomes even more severe when the underlying function and constraint evaluations are themselves computationally expensive. To reduce the overall application time under such a scenario, a combined usage of the original expensive model and a relatively less expensive surrogate model, built from the data provided by the original model in the course of optimization, is proposed in this work. The use of surrogates provides quickness in the application, thereby saving execution time, while the use of the original model keeps the optimization tool on the right path of the search process. Switching to the surrogate model happens if the predictive accuracy of the model is acceptable (to be decided by the decision maker), and the optimization time is thereby saved without compromising solution quality. This concept of successive use of a surrogate (artificial neural network [ANN]) and the original expensive model is applied to an industrial two-layer sintering process, where optimization decides the individual thickness and coke content of each layer to simultaneously maximize sinter quality and minimize coke consumption. The use of surrogates reduced the execution time by 60% and thereby improved decision support system utilization without compromising solution quality. 2013 * 338(<-344): Successive approximate model based multi-objective optimization for an industrial straight grate iron ore induration process using evolutionary algorithm Multi-objective optimization of any complex industrial process using first-principle, computationally expensive models often demands substantially higher computation time for evolutionary algorithms, making it less amenable to real-time implementation. A combination of the above-mentioned first-principle model and approximate models based on an artificial neural network (ANN), successively learnt in due course of the optimization using data obtained from the first-principle model, can be intelligently used for function evaluation and thereby reduce the aforementioned computational burden to a large extent. In this work, a multi-objective optimization task (simultaneous maximization of throughput and Tumble index) of an industrial iron ore induration process has been studied to improve the operation of the process using the above-mentioned metamodeling approach. Different pressure and temperature values at different points of the furnace bed, grate speed and bed height have been used as decision variables, whereas the bounds on cold compression strength, abrasion index, maximum pellet temperature and burn-through point temperature have been treated as constraints. A popular evolutionary multi-objective algorithm, NSGA-II, amalgamated with the first-principle model of the induration process and its successively improving ANN-based approximation model, has been adopted to carry out the task.
The optimization results show that, compared to the PO solutions obtained using only the first-principle model, (i) similar or better quality PO solutions can be achieved by this metamodeling procedure with close to 50% savings in function evaluations and thereby computation time, and (ii) keeping the total number of function evaluations the same, better quality PO solutions can be obtained. (C) 2011 Elsevier Ltd. All rights reserved. 2011 * 339(<-619): Joint application of artificial neural networks and evolutionary algorithms to watershed management Artificial neural networks (ANNs) have become common data-driven tools for modeling complex, nonlinear problems in science and engineering. Many previous applications have relied on gradient-based search techniques, such as the back propagation (BP) algorithm, for ANN training. Such techniques, however, are highly susceptible to premature convergence to local optima and require a trial-and-error process for effective design of ANN architecture and connection weights. This paper investigates the use of evolutionary programming (EP), a robust search technique, and a hybrid EP-BP training algorithm for improved ANN design. Application results indicate that the EP-BP algorithm may limit the drawbacks of using local search algorithms alone and that the hybrid performs better than EP from the perspective of both training accuracy and efficiency. In addition, the resulting ANN is used to replace the hydrologic simulation component of a previously developed multiobjective decision support model for watershed management. Due to the efficiency of the trained ANN with respect to the traditional simulation model, the replacement reduced the overall computational time required to generate preferred watershed management policies by 75%. The reduction is likely to improve the practical utility of the management model from a typical user perspective. Moreover, the results reveal the potential role of properly trained ANNs in addressing the computational demands of various problems without sacrificing the accuracy of solutions. 2004 * 340(<-640): Speeding up backpropagation using multiobjective evolutionary algorithms The use of backpropagation for training artificial neural networks (ANNs) is usually associated with a long training process. The user needs to experiment with a number of network architectures; with larger networks, more computational cost in terms of training time is required. The objective of this letter is to present an optimization algorithm, comprising a multiobjective evolutionary algorithm and a gradient-based local search; in the rest of the letter, this is referred to as the memetic Pareto artificial neural network algorithm for training ANNs. The evolutionary approach is used to train the network and simultaneously optimize its architecture. The result is a set of networks, with each network in the set attempting to optimize both the training error and the architecture. We also present a self-adaptive version with lower computational cost. We show empirically that the proposed method is capable of reducing the training time compared to gradient-based techniques. 2003 * 341(<-302): An integration methodology based on fuzzy inference systems and neural approaches for multi-stage supply-chains This paper proposes a methodology for supply chain (SC) integration from customers to suppliers through warehouses, retailers, and plants, via both adaptive-network-based fuzzy inference system and artificial neural network approaches.
The methodology presented provides this integration by finding the requested supplier capacities using the demand and order lead time information across the whole SC in an uncertain environment. The SC structure is investigated stage by stage. A sensitivity analysis is made by comparing the obtained results with traditional statistical techniques. A company serving the durable consumer goods industry, producing consumer electronics in Istanbul, Turkey, was examined to demonstrate the applicability of the proposed methodology. (C) 2011 Elsevier Ltd. All rights reserved. 2012 * 342(<-463): Entropy-based optimal sensor networks for structural health monitoring of a cable-stayed bridge The sudden collapse of the Interstate 35 Bridge in Minneapolis gave a wake-up call to US municipalities to re-evaluate aging bridges. In this situation, structural health monitoring (SHM) technology can provide the essential help needed for monitoring and maintaining the nation's infrastructure. Effectively monitoring long-span bridges such as cable-stayed bridges requires the use of a large number of sensors. In this article, we introduce a probabilistic approach to identify optimal locations of sensors to enhance damage detection. Probability distribution functions are established using an artificial neural network trained using a priori knowledge of damage locations. The optimal number of sensors is identified using multi-objective optimization that simultaneously considers information entropy and sensor cost objective functions. The Luling Bridge, a cable-stayed bridge over the Mississippi River, is selected as a case study to demonstrate the efficiency of the proposed approach. 2009 * 343(<-675): Pattern classification by linear goal programming and its extensions Pattern classification is one of the main themes in pattern recognition, and has been tackled by several methods such as statistics, artificial neural networks, mathematical programming and so on. Among them, the multi-surface method proposed by Mangasarian is very attractive, because it can provide an exact discrimination function even for highly nonlinear problems without any assumption on the data distribution. However, the method often causes many slits on the discrimination curve; in other words, the piecewise linear discrimination curve is sometimes too complex, resulting in poor generalization ability. In this paper, several attempts to overcome the difficulties of the multi-surface method are suggested. One of them is the utilization of goal programming, in which the auxiliary linear programming problem is formulated as a goal program in order to obtain discrimination curves that are as simple as possible. Another is to apply fuzzy programming, by which we can get fuzzy discrimination curves with gray zones. In addition, it is shown that additional learning can easily be made using the suggested methods. These features make the discrimination more realistic. The effectiveness of the methods is shown on the basis of some applications. 1998 * 344(<-404): Multiple criteria optimization-based data mining methods and applications: a systematic survey Support Vector Machine, an optimization technique, is well known in the data mining community. In fact, many other optimization techniques have been effectively used in dealing with data separation and analysis.
For the last 10 years, the author and his colleagues have proposed and extended a series of optimization-based classification models via Multiple Criteria Linear Programming (MCLP) and Multiple Criteria Quadratic Programming (MCQP). These methods are different from statistics, decision tree induction, and neural networks. The purpose of this paper is to review the basic concepts and frameworks of these methods and to promote research interest in the data mining community. Following the evolution of multiple criteria programming, the paper starts with the bases of MCLP; it then discusses penalized MCLP, MCQP, Multiple Criteria Fuzzy Linear Programming (MCFLP), Multi-Class Multiple Criteria Programming (MCMCP), and the kernel-based Multiple Criteria Linear Program, as well as MCLP-based regression. The paper also outlines several applications of multiple criteria optimization-based data mining methods, such as credit card risk analysis, classification of HIV-1-mediated neuronal dendritic and synaptic damage, network intrusion detection, firm bankruptcy prediction, and VIP e-mail behavior analysis. 2010 * 345(<-620): Classification of HIV-1-mediated neuronal dendritic and synaptic damage using multiple criteria linear programming The ability to identify neuronal damage in the dendritic arbor during HIV-1-associated dementia (HAD) is crucial for designing specific therapies for the treatment of HAD. To study this process, we utilized a computer-based image analysis method to quantitatively assess HIV-1 viral protein gp120 and glutamate-mediated individual neuronal damage in cultured cortical neurons. Changes in the number of neurites, arbors, branch nodes, cell body area, and average arbor lengths were determined, and a database was formed (http://dm.st.unomaha.edu/database.htm). We further proposed a two-class model of multiple criteria linear programming (MCLP) to classify such HIV-1-mediated neuronal dendritic and synaptic damage. Given certain classes, including treatments with brain-derived neurotrophic factor (BDNF), glutamate, gp120, or non-treatment controls from our in vitro experimental systems, we used the two-class MCLP model to determine the data patterns between classes in order to gain insight into neuronal dendritic damage. This knowledge can be applied in principle to the design and study of specific therapies for the prevention or reversal of neuronal damage associated with HAD. Finally, the MCLP method was compared with a well-known artificial neural network algorithm to test the relative potential of different data mining applications in HAD research. 2004
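The two-class MCLP model in the last two entries has a compact linear-programming form: with alpha_i measuring the overlap (misclassification deviation) of point i and beta_i its distance from the separating score b when correctly classified, one compromise formulation minimizes sum(alpha) - c*sum(beta) subject to y_i*(x_i.w) + alpha_i - beta_i = y_i*b. The sketch below solves that LP with scipy under assumed synthetic data, a fixed boundary b = 1, and box bounds on w to keep the problem bounded; it illustrates the model family, not the authors' exact formulation.

#+BEGIN_SRC python
# Sketch of a two-class Multiple Criteria Linear Programming (MCLP) classifier:
# minimize sum(alpha) - c*sum(beta)  s.t.  y_i*(x_i.w) + alpha_i - beta_i = y_i*b.
# Synthetic data, b fixed at 1, w box-bounded; illustrative assumptions only.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, d, b, c = 60, 2, 1.0, 0.1
X = np.vstack([rng.normal(+1, 0.8, size=(n // 2, d)),
               rng.normal(-1, 0.8, size=(n // 2, d))])
y = np.array([+1] * (n // 2) + [-1] * (n // 2))

# decision variables, in order: [w (d), alpha (n), beta (n)]
cost = np.concatenate([np.zeros(d), np.ones(n), -c * np.ones(n)])
A_eq = np.hstack([y[:, None] * X, np.eye(n), -np.eye(n)])
b_eq = y * b
bounds = [(-10, 10)] * d + [(0, None)] * n + [(0, None)] * n
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)

w = res.x[:d]
pred = np.sign(X @ w - b)  # score above b -> class +1, below b -> class -1
print("training accuracy:", np.mean(pred == y))
#+END_SRC

With c < 1 and bounded w the LP stays bounded; the two criteria (minimize overlap, maximize correct-side distance) are exactly the compromise that distinguishes MCLP from a single-objective separating model.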
* 346(<-652): Multi-objective cooperative coevolution of artificial neural networks (multi-objective cooperative networks) In this paper we present a cooperative coevolutionary model for the evolution of neural network topology and weights, called MOBNET. MOBNET evolves subcomponents that must be combined to form a network, instead of whole networks. The problem of assigning credit to the subcomponents is approached as a multi-objective optimization task: the subcomponents in a cooperative coevolutionary model must fulfil different criteria to be useful, and these criteria usually conflict with each other. Evaluating the fitness of an individual against many criteria that must be optimized together is a multi-criteria optimization problem, so methods from multi-objective optimization offer the most natural way to solve it. In this work we show how, by using several objectives for every subcomponent and evaluating its fitness as a multi-objective optimization problem, the performance of the model becomes highly competitive. MOBNET is compared with several standard classification methods and with other neural network models on four real-world problems, and it shows the best overall performance of all the classification methods applied, while also producing smaller networks than the other models. The basic idea underlying MOBNET extends to a more general model of coevolutionary computation, as none of its features are exclusive to neural network design; many applications of cooperative coevolution could benefit from the multi-objective optimization approach proposed in this paper. (C) 2002 Elsevier Science Ltd. All rights reserved. 2002 * 347(<-362): Learning in the feed-forward random neural network: A critical review The Random Neural Network (RNN) has received, since its inception in 1989, considerable attention and has been successfully used in a number of applications. In this critical review paper we focus on the feed-forward RNN model and its ability to solve classification problems. In particular, we pay special attention to the RNN literature related to learning algorithms that discover the RNN interconnection weights, suggest other potential algorithms that can be used to find the RNN interconnection weights, and compare the RNN model with other neural-network-based and non-neural-network-based classifier models. In review, the extensive literature review and experimentation with the RNN feed-forward model provided us with the necessary guidance to introduce six critical review comments that identify gaps in the RNN-related literature and suggest directions for future research. (C) 2010 Elsevier B.V. All rights reserved. 2011 * 348(<-607): An evolutionary artificial neural networks approach for BF hot metal silicon content prediction This paper presents an evolutionary artificial neural network (EANN) for the prediction of the BF hot metal silicon content. The Pareto differential evolution (PDE) algorithm is used to optimize the connection weights and the network's architecture (number of hidden nodes) simultaneously to improve the prediction precision. The application results show that the prediction of hot metal silicon content is successful. The data used in this paper were collected from No. 1 BF at Laiwu Iron and Steel Group Co. 2005 * 349(<- 54): Coupled Data-Driven Evolutionary Algorithm for Toxic Cyanobacteria (Blue-Green Algae) Forecasting in Lake Kinneret Cyanobacteria blooming in surface waters has become a major concern worldwide, as the blooms are unsightly and produce a variety of toxins, undesirable tastes, and odors. Mathematical process-based (deterministic), statistically based, rule-based (heuristic), and artificial neural network approaches have been the subject of extensive research for cyanobacteria forecasting. This study suggests a new framework linking an evolutionary computational method (a genetic algorithm) with a data-driven modeling engine (model trees) for external loading, physical, chemical, and biological parameter selection, all coupled with their associated time lags as decision variables for cyanobacteria prediction in surface waters. The methodology is demonstrated through trial runs and sensitivity analyses on Lake Kinneret (the Sea of Galilee), Israel.
Model trials produced good matching, as depicted by the correlation coefficients on verification data sets. Temperature was reconfirmed as a predominant parameter for cyanobacteria prediction. Optimal model input variables and forecast horizons differed between solutions; this in turn raises the problem of best variable selection, pointing towards the need for a multiobjective optimization model in future extensions of the proposed methodology. (C) 2014 American Society of Civil Engineers. 2015 * 350(<-507): Neuro-genetic non-invasive temperature estimation: Intensity and spatial prediction Objectives: The existence of proper non-invasive temperature estimators is an essential aspect when thermal therapy applications are envisaged. These estimators must be good predictors to enable temperature estimation in different operational situations, providing better control of the therapeutic instrumentation. In this work, radial basis function artificial neural networks were constructed to assess temperature evolution in an ultrasound-insonated medium. Methods: The employed models were radial basis function neural networks with external dynamics induced by their inputs. Both the most suitable set of model inputs and the number of neurons in the network were found using a multi-objective genetic algorithm. The neural models were validated in two situations: the operating situations used in the construction of the network, and 11 unseen situations. The new data addressed two new spatial locations and a new intensity level, assessing the intensity and spatial prediction capacity of the proposed model. Results: Good performance was obtained during the validation process, both at the spatial points considered and whenever the new intensity level was within the range of applied intensities. A maximum absolute error of 0.5 degrees C +/- 10% (0.5 degrees C is the gold-standard threshold in hyperthermia/diathermia) was attained with computationally low-complexity models. Conclusion: The results confirm that the proposed neuro-genetic approach enables foreseeing temperature propagation, in connection with intensity and space parameters, thus enabling the assessment of different operating situations with proper temperature resolution. (C) 2008 Elsevier B.V. All rights reserved. 2008 * 351(<-559): Genetic algorithms in optimization of strength and ductility of low-carbon steels A comparative study between the conventional goal attainment strategy and an evolutionary approach using a genetic algorithm has been conducted for the multiobjective optimization of the strength and ductility of low-carbon ferrite-pearlite steels. The optimization is based upon the composition and microstructural relations of the mechanical properties suggested earlier through regression analyses. After finding that a genetic algorithm is more suitable for such a problem, Pareto fronts have been developed which give a range of strength and ductility useful in alloy design. An effort has been made to optimize the strength-ductility balance of thermomechanically processed high-strength multiphase steels. The objective functions are developed from empirical relations using regression and neural network modeling; the latter has the capacity to correlate a large number of compositional and process variables and works better than conventional regression analyses. 2007 * 352(<-662): Evolutionary optimization of RBF networks.
One of the main obstacles to the widespread use of artificial neural networks is the difficulty of adequately defining values for their free parameters. This article discusses how Radial Basis Function (RBF) networks can have their parameters defined by genetic algorithms. To this end, it presents an overall view of the problems involved and the different approaches used to genetically optimize RBF networks. A new strategy to optimize RBF networks using genetic algorithms is proposed, which includes a new representation, a new crossover operator, and the use of a multiobjective optimization criterion. Experiments using a benchmark problem are performed, and the results achieved using this model are compared to those achieved by other approaches. 2001 * 353(<-194): Neural Networks Applied in Chemistry. II. Neuro-Evolutionary Techniques in Process Modeling and Optimization Artificial neural networks are widely used in data analysis and to control dynamic processes. These tools are powerful and versatile, but the way in which they are constructed, in particular their architecture, strongly affects their value and reliability. We review here some key techniques for optimizing artificial neural networks and comment on their use in process modeling and optimization. Neuro-evolutionary techniques are described and compared, with the goal of providing efficient modeling methodologies which employ an optimal neural model. We also discuss how neural networks and evolutionary algorithms can be combined. Applications from chemical engineering illustrate the effectiveness and reliability of the hybrid neuro-evolutionary methods. 2013 * 354(<-512): Prediction of the solar radiation evolution using computational intelligence techniques and cloudiness indices In this paper, artificial neural networks are applied to multi-step long-term solar radiation prediction. The input-output structure of the neural network models is selected using evolutionary computation methods. The networks are trained as one-step-ahead predictors and iterated over time to obtain multi-step longer-term predictions. Auto-regressive and auto-regressive-with-exogenous-inputs models are compared, considering cloudiness indices as inputs in the latter case. These indices are obtained through pixel classification of ground-to-sky images captured by a CCD camera. 2008 * 355(<-535): Multi-objective evolutionary optimization of subsonic airfoils by meta-modelling and evolution control The current work concerns the application of multi-objective evolutionary optimization by approximation functions to aerodynamic design. A new general technique, named evolution control (EC), is used to manage the on-line enrichment of the database of correct solutions, which is the basis of the learning procedure for the approximators. In essence, this approach provides that the database, initially quite small and enabling only a very inaccurate approximation, is extended during the optimization. Such extension is done by means of choice criteria that decide which individuals of the current population should be verified. The technique proved efficacious and very efficient for the considered problem, whose dimensionality is 5. Even if the general principle of EC is valid independently of the kind of approximator adopted, the approximator strongly affects the application. The obtained results are used to show how the adoption of artificial neural networks and kriging can differently influence the whole optimization process.
Moreover, first results, achieved after reformulating the same problem with seven parameters, support the idea that the performance of the method scales well with dimensionality. 2007 * 356(<-594): Stopping criteria for ensemble of evolutionary artificial neural networks The formation of ensembles of artificial neural networks has attracted the attention of researchers in the machine learning and statistical inference domains. It has been shown that combining different neural networks can improve the generalization ability of the learning machine. One challenge is when to stop the training or evolution of the neural networks to avoid overfitting. In this paper, we show that different early stopping criteria, based on (i) the minimum validation fitness of the ensemble and (ii) the minimum of the average population validation fitness, can generalize better than the surviving population in the last generation. The proposition was tested on four different ensemble methods: (i) a simple ensemble method, where each individual of the population (created and maintained by the evolutionary process) is used as a committee member; (ii) an ensemble with the island model as a diversity promotion mechanism; (iii) a recent successful ensemble method, namely an ensemble with negative correlation learning; and (iv) an ensemble formed by applying multi-objective optimization. The experimental results suggest that using the minimum validation fitness of the ensemble as an early stopping criterion is beneficial. (C) 2005 Elsevier B.V. All rights reserved. 2005
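The first stopping criterion above is straightforward to operationalize: snapshot the evolving population each generation, score the committee on a held-out set, and return the snapshot with the lowest ensemble validation error. Below is a minimal sketch with a toy mutation-only evolutionary loop and an averaging committee of linear models; every modelling choice is an assumption made for brevity.

#+BEGIN_SRC python
# Sketch: early stopping for an evolved ensemble by minimum ensemble validation
# fitness. Toy setup: population of linear models, mutation-only evolution,
# committee = average prediction (for linear models, the mean weight vector).
import numpy as np

rng = np.random.default_rng(5)
Xtr, Xva = rng.normal(size=(80, 3)), rng.normal(size=(40, 3))
w_true = np.array([1.0, -1.0, 2.0])
ytr = Xtr @ w_true + rng.normal(scale=0.3, size=80)
yva = Xva @ w_true + rng.normal(scale=0.3, size=40)

def mse(w, X, y):
    return np.mean((X @ w - y) ** 2)

pop = rng.normal(size=(20, 3))  # population of weight vectors
best_val, best_snapshot, best_gen = np.inf, None, -1
for gen in range(100):
    children = pop + rng.normal(scale=0.1, size=pop.shape)  # mutate
    both = np.vstack([pop, children])
    # select survivors on training fitness only
    pop = both[np.argsort([mse(w, Xtr, ytr) for w in both])[:20]]
    ens_val = mse(pop.mean(axis=0), Xva, yva)  # committee validation fitness
    if ens_val < best_val:                     # keep the best snapshot so far
        best_val, best_snapshot, best_gen = ens_val, pop.copy(), gen
print(f"stop at generation {best_gen}, ensemble validation MSE {best_val:.4f}")
#+END_SRC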
* 357(<-231): Manganese waste mud immobilization in cement natural zeolite lime blend: Process optimization using artificial neural networks and multi-criteria functions In this study, the stabilization/solidification process of manganese-contaminated mud using portland cement was optimized. For that purpose, the immobilization process was modeled by artificial neural networks with a radial basis activation function. The optimal model presented satisfactory prediction characteristics (the R2 value for manganese leaching was 0.9615, while that for concrete flexural strength was 0.8748). It was therefore used, in combination with seven in-house developed multi-criteria optimization functions applied separately, to optimize the concrete formulation. The approach proved an efficient and cost-effective alternative in the ecological material formulation process. The stabilization/solidification matrix with the best properties (i.e., high flexural strength and lowest manganese leaching) consisted of 350 g of portland cement, 20 g of lime, 70 g of natural zeolite, 10 g of manganese waste mud and 180 g of water. 2013 * 358(<-318): OPTIMIZATION OF ARSENIC SLUDGE IMMOBILIZATION PROCESS IN CEMENT - NATURAL ZEOLITE - LIME BLENDS USING ARTIFICIAL NEURAL NETWORKS AND MULTI-OBJECTIVE CRITERIA FUNCTIONS This work focuses on optimization of the arsenic sludge immobilization process in cement natural zeolite lime blends using artificial neural networks and multi-objective criteria functions. The developed artificial neural network model describes the relations between the solidified/stabilized cement formulation and its mechanical (compressive strength) and ecological properties (arsenic and iron release). The developed artificial neural network solidified/stabilized model is shown to have satisfactory performance characteristics (R2 > 0.9031, without systematic error, based on an external validation experimental data set). Four multi-objective optimization criteria functions, differing in mathematical formulation and ecological interpretation, were developed. These criteria functions were used in combination with the artificial neural network solidified/stabilized model, providing the optimal cement formulation. Finally, this study describes an efficient and cost-effective alternative in the ecological material formulation process. 2012 * 359(<-552): Evolutionary artificial neural network for selecting flexible manufacturing systems under disparate level-of-satisfaction of decision maker This paper proposes the application of a Meta-Learning Evolutionary Artificial Neural Network (MLEANN) to selecting the best flexible manufacturing system (FMS) from a group of candidate FMSs. A multi-criteria decision-making (MCDM) methodology using an improved S-shaped membership function has been developed for finding the "best candidate FMS alternative" from a set of candidate FMSs. The MCDM model trades off various parameters, viz., design parameters, economic considerations, etc., affecting the FMS selection process in a multiple, conflicting-in-nature criteria environment. The selection of the FMS is made according to the error output of the results found from the proposed MCDM model. 2007 * 360(<-584): Meta-learning evolutionary artificial neural network for selecting flexible manufacturing systems This paper proposes the application of a Meta-Learning Evolutionary Artificial Neural Network (MLEANN) in selecting flexible manufacturing systems (FMS) from a group of candidate FMSs. First, a multi-criteria decision-making (MCDM) methodology using an improved S-shaped membership function is developed for finding the 'best candidate FMS alternative' from a set of candidate FMSs. The MCDM model trades off various parameters, namely design parameters, economic considerations, etc., affecting the FMS selection process in a multi-criteria decision-making environment. A genetic algorithm is used to evolve the architecture and weights of the proposed neural network method, and a back-propagation (BP) algorithm is used as the local search algorithm. The selection of the FMS is made according to the error output of the results found from the MCDM model. 2006 * 361(<- 95): Fuzzy reliability analysis of repairable industrial systems using soft-computing based hybridized techniques The purpose of the present study is to analyze the fuzzy reliability of a repairable industrial system utilizing historical vague, imprecise and uncertain data reflecting its components' failure and repair patterns. Two soft-computing-based hybridized techniques, Genetic Algorithms Based Lambda-Tau (GABLT) and Neural Network and Genetic Algorithms Based Lambda-Tau (NGABLT), along with the traditional Fuzzy Lambda-Tau (FLT) technique, are used to evaluate some important reliability indices of the system in the form of fuzzy membership functions. As a case study, all three techniques are applied to analyse the fuzzy reliability of the washing system in a paper mill and the results are compared. Sensitivity analysis has also been performed to analyze the effect of variation of different reliability parameters on system performance. The analysis can help maintenance personnel understand and plan a suitable maintenance strategy to improve the overall performance of the system. Based on the results, some important suggestions are given for the future course of action in maintenance planning. (C) 2014 Elsevier B.V. All rights reserved. 2014
* 362(<- 71): Multi-criteria Optimization of Hole Geometry for the Laser Trepanning of the Titanium Alloy Ti-6Al-4V Titanium and its alloys are extensively used materials in the aerospace industry due to their remarkable metallurgical and mechanical characteristics; however, their superior mechanical properties, poor thermal conductivity and high chemical affinity make titanium alloys difficult to cut by conventional machining methods. The present research investigates the possibility of machining quality improvement in Ti-6Al-4V by applying laser trepan drilling. The drilled hole quality in terms of hole taper and circularity at the top and bottom sides has been considered for multi-criteria optimization. The authors have applied a new hybrid approach to modelling and multi-objective optimization of hole geometry in laser trepan drilling of this difficult-to-cut Ti-alloy. The hole quality comprising taper and circularities is represented by a common performance index in mathematical form using artificial neural network modelling coupled with a grey-entropy measurement technique. Optimization of the performance index function using genetic algorithms shows considerable improvements in hole quality. The parametric effects show that the high chemical reactivity and low thermal conductivity of the Ti-alloy play an important role in deteriorating the hole geometry. 2015 * 363(<- 98): Modeling and optimization of turning duplex stainless steels The attractive combination of high mechanical strength, good corrosion resistance and relatively low cost has contributed to making duplex stainless steels (DSSs) one of the fastest growing groups of stainless steels. As the importance of DSSs is increasing, practical information about their successful machining is expected to be crucial. To address this industrial need, standard EN 1.4462 and super EN 1.4410 DSSs are machined under constant cutting speed multi-pass facing operations. A systematic approach which employs different modeling and optimization tools under a three-phase investigation scheme has been adopted. In phase I, the effects of design variables such as cutting parameters, cutting fluids and axial length of cut are investigated using the D-Optimal method. Mathematical models for performance characteristics such as percentage increase in radial cutting force (%F-r), effective cutting power (P-e), maximum tool flank wear (VBmax) and chip volume ratio (R) are developed using response surface methodology (RSM). The adequacy of the derived models for each cutting scenario is checked using analysis of variance (ANOVA). Parametric meta-heuristic optimization using the Cuckoo search (CS) algorithm is then performed to determine the optimum design variable set for each performance characteristic. In phase II, comprehensive experiment-based production cost and production rate models are developed. To overcome the conflict between the desire to minimize production cost and to maximize production rate, compromise solutions are suggested using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS); the alternatives are ranked according to their relative closeness to the ideal solution. In phase III, expert systems based on a fuzzy rule modeling approach are adopted to derive measures of machining operational sustainability, called the operational sustainability index (OSI). Artificial neural network (ANN) based models are developed to study the effect of design variables on computed OSIs.
Cuckoo search neural network systems (CSNNS) are finally utilized to optimize the cutting process under constraints for each cutting scenario. The most appropriate cutting setup to ensure successful turning of standard EN 1.4462 and super EN 1.4410 for each scenario is selected in accordance with the conditions giving the maximum OSI. (C) 2014 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved. 2014 * 364(<-114): Multicriteria decision analysis: a multifaceted approach to medical equipment management Selecting medical equipment is a complex multidisciplinary task requiring mathematical tools that account for the associated uncertainties. This paper offers an in-depth study of multiple-criteria decision analysis (MCDA) methods to identify the most appropriate ones for performing management tasks in resource-limited settings. The chosen articles were divided into three topics: evaluation of projects and equipment, selection of projects and equipment, and development of medical devices. Three methods (the analytic hierarchy process [AHP], multi-attribute utility theory, and elimination and choice expressing reality [ELECTRE]) were selected for detailed analyses of their application to medical equipment management. Twenty-one works using MCDA, artificial neural networks, human factors engineering, and value analysis were analysed in the framework of medical equipment management. The important aspects of the procedure were described, highlighting their advantages and disadvantages. It was determined that the AHP approach corresponds to all defined criteria for selecting large medical equipment. Managing large medical equipment using MCDA will reduce uncertainties and provide a rational selection and purchase of the most efficient equipment in resource-limited settings. The direction for improving the AHP method was also determined. 2014 * 365(<-245): Gene expression rule discovery and multi-objective ROC analysis using a neural-genetic hybrid Microarray data allow an unprecedented view of the biochemical mechanisms contained within a cell, although deriving useful information from the data is still proving to be a difficult task. In this paper, a novel method based on a multi-objective genetic algorithm is investigated that evolves a near-optimal trade-off between artificial neural network (ANN) classifier accuracy (sensitivity and specificity) and size (number of genes). This hybrid method is shown to work on four well-established gene expression data sets taken from the literature. The results provide evidence for the rule discovery ability of the hybrid method and indicate that the approach can return biologically intelligible as well as plausible results and requires no pre-filtering or pre-selection of genes. 2013 * 366(<-600): A hybrid promoter analysis methodology for prokaryotic genomes One of the big challenges of the post-genomic era is identifying regulatory systems and integrating them into genetic networks. Gene expression is determined by protein-protein interactions among regulatory proteins and with RNA polymerase(s), and by protein-DNA interactions of these trans-acting factors with cis-acting DNA sequences in the promoter regions of the regulated genes. Therefore, identifying these protein-DNA interactions, by means of the DNA motifs that characterize the regulatory factors operating in the transcription of a gene, becomes crucial for determining which genes participate in a regulation process, how they behave, and how they are connected to build genetic networks.
In this paper, we propose a hybrid promoter analysis methodology (HPAM) to discover complex promoter motifs that combines: the efficiency of neural networks and their ability to represent imprecise and incomplete patterns; the flexibility and interpretability of fuzzy models; and the capability of multi-objective evolutionary algorithms to identify optimal instances of a model by searching according to multiple criteria. We test our methodology by learning and predicting the RNA polymerase motif in prokaryotic genomes. This constitutes a special challenge due to the multiplicity of RNA polymerase targets and its connectivity with other transcription factors, which sometimes requires multiple functional binding sites even in closely located regulatory regions, and due to the uncertainty of its motif, which allows sites with low specificity (i.e., differing from the best alignment or consensus) to still be functional. HPAM is available for public use at http://soar-tools.wustl.edu. (c) 2004 Elsevier B.V. All rights reserved. 2005 * 367(<-663): A systemic self-modelling method and its application to material design and optimization A self-modelling system for material research has been developed based on discriminant analysis, artificial neural networks, classification mapping and genetic algorithms. It provides systemic methodologies for nonlinear multivariate modelling and multi-objective optimization. It is designed to unveil connotative information from a limited experimental data set and gives qualitative, quantitative and geometric models of the object being researched. In addition, optimized research schemes can be derived from these models by genetic algorithms and classification mapping. The technique is suitable for subjects that have some original study results but face difficulties in further research for the following reasons: (i) the object researched has too many controlling factors and is too complex to analyse; (ii) the object is controlled by unexplainable mechanisms and is difficult to analyse; (iii) the mathematical expression has strong nonlinearity and is difficult to solve strictly. 2001 * 368(<-476): Multi-objective optimization of material selection for sustainable products: Artificial neural networks and genetic algorithm approach Material properties and selection are very important in product design. To obtain more sustainable products, not only technical and economic factors but also environmental factors should be considered. To satisfy these requirements, evaluation indicators of materials are presented. Environmental impacts were calculated by the Life Cycle Assessment (LCA) method. An integration of artificial neural networks (ANN) with genetic algorithms (GAs) is proposed to optimize the multiple objectives of material selection. It was validated by an example that the system can select suitable materials for developing sustainable products. (C) 2008 Elsevier Ltd. All rights reserved. 2009 * 369(<-499): Hybrid multiobjective evolutionary design for artificial neural networks Evolutionary algorithms are a class of stochastic search methods that attempt to emulate the biological process of evolution, incorporating concepts of selection, reproduction, and mutation. In recent years, there has been an increase in the use of evolutionary approaches in the training of artificial neural networks (ANNs).
While evolutionary techniques for neural networks have been shown to provide superior performance over conventional training approaches, the simultaneous optimization of network performance and architecture will almost always result in a slow training process due to the added algorithmic complexity. In this paper, we present a geometrical measure based on the singular value decomposition (SVD) to estimate the necessary number of neurons to be used in training a single-hidden-layer feedforward neural network (SLFN). In addition, we develop a new hybrid multiobjective evolutionary approach that includes the features of a variable-length representation that allows for easy adaptation of neural network structures, an architectural recombination procedure based on the geometrical measure that adapts the number of necessary hidden neurons and facilitates the exchange of neuronal information between candidate designs, and a micro-hybrid genetic algorithm (muHGA) with an adaptive local search intensity scheme for local fine-tuning. In addition, the performances of well-known algorithms, as well as the effectiveness and contributions of the proposed approach, are analyzed and validated through a variety of data set types. 2008 * 370(<-603): Application of genetic algorithms for process integration and optimization considering environmental factors A systematic methodology for pollution prevention based on process integration is presented in this report. In this methodology, process simulation was carried out to provide mass and energy information for the chemical process. An artificial neural network (ANN) was used to replace rigorous process simulation in the optimization process to improve computational efficiency. Multiobjective optimization was performed to coordinate the optimization of process performance in both economic and environmental respects, and a multiobjective genetic algorithm was used to solve the resulting multiobjective optimization problems. Mass and energy use were considered simultaneously in this program. A case study of a wastewater recovery system in an ammonia production process is discussed to illustrate the effectiveness of this pollution prevention methodology. (c) 2004 American Institute of Chemical Engineers Environ Prog, 2005 * 371(<-611): Decision support for watershed management using evolutionary algorithms An integrative computational methodology is developed for the management of nonpoint source pollution from watersheds. The associated decision support system is based on an interface between evolutionary algorithms (EAs) and a comprehensive watershed simulation model, and is capable of identifying optimal or near-optimal land use patterns to satisfy objectives. Specifically, a genetic algorithm (GA) is linked with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT) for single-objective evaluations, and a Strength Pareto Evolutionary Algorithm has been integrated with SWAT for multiobjective optimization. The model can be operated at a small spatial scale, such as a farm field, or on a larger watershed scale. A secondary model that also uses a GA is developed for calibration of the simulation model. Sensitivity analysis and parameterization are carried out in a preliminary step to identify the model parameters that need to be calibrated.
However, the model is found to be computationally demanding as a direct consequence of the repeated SWAT simulations during the search for favorable solutions. An artificial neural network (ANN) has been developed to mimic SWAT outputs and ultimately replace it during the search process. Replacement of SWAT by the ANN results in an 84% reduction in the computational time required to identify final land use patterns. The ANN model is trained using a hybrid of the evolutionary programming (EP) and back propagation (BP) algorithms. The hybrid algorithm was found to be more effective and efficient than either EP or BP alone. Overall, this study demonstrates the powerful and multifaceted role that EAs and artificial intelligence techniques can play in solving the complex and realistic problems of environmental and water resources systems. 2005 * 372(<-433): Multi-objective scheduling of dynamic job shop using variable neighborhood search Dynamic job shop scheduling that considers random job arrivals and machine breakdowns is studied in this paper. Under an event-driven policy, rescheduling is triggered in response to dynamic events by variable neighborhood search (VNS). A trained artificial neural network (ANN) updates the parameters of VNS at each rescheduling point. Also, a multi-objective performance measure consisting of makespan and tardiness is applied as the objective function. The proposed method is compared with some common dispatching rules that have been widely used in the literature for the dynamic job shop scheduling problem. Results illustrate the high effectiveness and efficiency of the proposed method in a variety of shop floor conditions. (C) 2009 Published by Elsevier Ltd. 2010 * 373(<-454): [Artificial neural network parameters optimization software and its application in the design of sustained release tablets]. Artificial neural network (ANN) is a multi-objective optimization method that requires mathematical and statistical knowledge, which restricts its application in the pharmaceutical research area. An artificial neural network parameter optimization software (ANNPOS), programmed in the Visual Basic language, was developed to overcome this shortcoming. In the design of a sustained release formulation, the suitable parameters of the ANN were estimated by the ANNPOS, and the Matlab 5.0 Neural Network Toolbox was then used to determine the optimal formulation. This shows that the ANNPOS reduces the complexity and difficulty of applying ANNs. 2009 * 374(<-621): A neural network approach to multiobjective and multilevel programming problems This study aims at utilizing the dynamic behavior of artificial neural networks (ANNs) to solve multiobjective programming (MOP) and multilevel programming (MLP) problems. The traditional and nontraditional approaches to MLP are first classified into five categories. Then, based on the approach proposed by Hopfield and Tank [1], the optimization problem is converted into a system of nonlinear differential equations through the use of an energy function and Lagrange multipliers. Finally, the procedure is extended to MOP and MLP problems. To solve the resulting differential equations, a steepest descent search technique is used. The proposed nontraditional algorithm is efficient for solving complex problems, and is especially suitable for implementation on large-scale VLSI, on which MOP and MLP problems can be solved in real time. To illustrate the approach, several numerical examples are solved and compared. (C) 2004 Elsevier Ltd. All rights reserved. 2004
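The Hopfield-Tank style conversion in the last entry can be illustrated on a tiny biobjective problem: scalarize the objectives into an energy function, append a penalty term (a simple stand-in for the Lagrange-multiplier machinery), and integrate the steepest-descent dynamics dx/dt = -grad E with Euler steps. The problem, weights and step size below are all assumed for illustration.

#+BEGIN_SRC python
# Sketch: energy-function dynamics for a small multiobjective program.
# E(x) = w1*f1 + w2*f2 + penalty(g), descended via dx/dt = -grad E (Euler steps).
# A quadratic penalty stands in for the paper's Lagrange-multiplier treatment.
import numpy as np

w1, w2, rho = 0.5, 0.5, 10.0  # objective weights and penalty strength

def grad_E(x):
    g1 = 2 * (x - np.array([1.0, 0.0]))           # grad of f1 = ||x - (1,0)||^2
    g2 = 2 * (x - np.array([0.0, 1.0]))           # grad of f2 = ||x - (0,1)||^2
    viol = max(x[0] + x[1] - 1.0, 0.0)            # constraint g: x1 + x2 <= 1
    gpen = rho * 2 * viol * np.array([1.0, 1.0])  # grad of rho * max(g,0)^2
    return w1 * g1 + w2 * g2 + gpen

x = np.array([2.0, 2.0])           # arbitrary initial state
for _ in range(2000):              # Euler integration of dx/dt = -grad E
    x = x - 0.01 * grad_E(x)
print("equilibrium point:", x)     # one compromise solution
#+END_SRC

Sweeping (w1, w2) over the simplex would drive the dynamics to different equilibria and hence trace different Pareto-optimal compromises.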
* 375(<-655): An evolutionary artificial neural networks approach for breast cancer diagnosis This paper presents an evolutionary artificial neural network (EANN) approach based on the Pareto-differential evolution (PDE) algorithm augmented with local search for the prediction of breast cancer. The approach is named memetic Pareto artificial neural network (MPANN). Artificial neural networks (ANNs) could be used to improve the work of medical practitioners in the diagnosis of breast cancer. Their abilities to approximate nonlinear functions and capture complex relationships in the data are instrumental abilities which could support the medical domain. We compare our results against an evolutionary programming approach and standard backpropagation (BP), and we show experimentally that MPANN has better generalization and much lower computational cost. (C) 2002 Elsevier Science B.V. All rights reserved. 2002
* 376(<-328): Food processing optimization using evolutionary algorithms Evolutionary algorithms are widely used in single and multi-objective optimization. They are easy to use and provide solution(s) in one simulation run. They are used in food processing industries for decision making. Food processing presents constrained and unconstrained optimization problems. This paper reviews the development of evolutionary algorithm techniques as used in the food processing industries. Some evolutionary algorithms like genetic algorithm, differential evolution, artificial neural networks and fuzzy logic were studied with reference to their applications in food processing. Several processes involved in food processing, which include thermal processing, food quality, process design, drying, fermentation and hydrogenation processes, are discussed with reference to evolutionary optimization techniques. We compared the performances of different types of evolutionary algorithm techniques and suggested further areas of application of the techniques in food processing optimization. 2011
* 377(<-513): Soft computing in engineering design - A review The present paper surveys the application of soft computing (SC) techniques in engineering design. Within this context, fuzzy logic (FL), genetic algorithms (GA) and artificial neural networks (ANN), as well as their fusion, are reviewed in order to examine the capability of soft computing methods and techniques to effectively address various hard-to-solve design tasks and issues. Both these tasks and issues are studied in the first part of the paper, accompanied by references to some results extracted from a survey performed in some industrial enterprises. The second part of the paper makes an extensive review of the literature regarding the application of soft computing (SC) techniques in engineering design. Although this review cannot be collectively exhaustive, it may be considered as a valuable guide for researchers who are interested in the domain of engineering design and wish to explore the opportunities offered by fuzzy logic, artificial neural networks and genetic algorithms for further improvement of both the design outcome and the design process itself. An arithmetic method is used in order to evaluate the review results, to locate the research areas where SC has already given considerable results and to reveal new research opportunities. (C) 2007 Elsevier Ltd. All rights reserved.
2008
* 378(<-182): Application of computational intelligence techniques for load shedding in power systems: A review Recent blackouts around the world question the reliability of conventional and adaptive load shedding techniques in avoiding such power outages. To address this issue, reliable techniques are required to provide fast and accurate load shedding to prevent collapse in the power system. Computational intelligence techniques, due to their robustness and flexibility in dealing with complex non-linear systems, could be an option in addressing this problem. Computational intelligence includes techniques like artificial neural networks, genetic algorithms, fuzzy logic control, adaptive neuro-fuzzy inference systems, and particle swarm optimization. Research in these techniques is being undertaken in order to discover means for more efficient and reliable load shedding. This paper provides an overview of these techniques as applied to load shedding in a power system. This paper also compares the advantages of computational intelligence techniques over conventional load shedding techniques. Finally, this paper discusses the limitations of computational intelligence techniques, which restrict their usage in load shedding in real time. (C) 2013 Elsevier Ltd. All rights reserved. 2013
* 379(<-274): Computational algorithms inspired by biological processes and evolution In recent times computational algorithms inspired by biological processes and evolution are gaining much popularity for solving science and engineering problems. These algorithms are broadly classified into evolutionary computation and swarm intelligence algorithms, which are derived based on the analogy of natural evolution and biological activities. These include genetic algorithms, genetic programming, differential evolution, particle swarm optimization, ant colony optimization, artificial neural networks, etc. The algorithms, being random-search techniques, use heuristics to guide the search towards the optimal solution and speed up convergence to obtain the global optimal solutions. The bio-inspired methods have several attractive features and advantages compared to conventional optimization solvers. They also offer the advantage of a simultaneous simulation and optimization environment for solving real-world problems that are hard to define in simple expressions. These biologically inspired methods have provided novel ways of problem-solving for practical problems in traffic routing, networking, games, industry, robotics, economics, mechanical, chemical, electrical, civil, water resources and other fields. This article discusses the key features and development of bio-inspired computational algorithms, and their scope for application in science and engineering fields. 2012
* 380(<-368): Machine scheduling in custom furniture industry through neuro-evolutionary hybridization Machine scheduling is a critical problem in industries where products are custom-designed. The wide range of products, the lack of previous manufacturing experience, and the several conflicting criteria used to evaluate the quality of the schedules define a huge search space. Furthermore, production complexity and human influence in each manufacturing step make time estimations difficult to obtain, thus reducing the accuracy of schedules. The solution described in this paper combines evolutionary computing and neural networks to reduce the impact of (i) the huge search space that the multi-objective optimization must deal with and (ii) the inherent problem of computing the processing times in a domain like custom manufacturing. Our hybrid approach obtains near-optimal schedules through the Non-dominated Sorting Genetic Algorithm II (NSGA-II) combined with time estimations based on multilayer perceptron neural networks. (C) 2010 Elsevier B.V. All rights reserved. 2011
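The NSGA-II-plus-MLP pattern in the entry above is easy to sketch: a network learns processing times from historical jobs, and schedule objectives are evaluated from its estimates. Everything below (the features, the data, and the single-machine setting) is an invented stand-in, not the paper's setup.

#+BEGIN_SRC python
# Sketch: an MLP estimates uncertain processing times, and a schedule's two
# objectives (makespan, total flow time) are computed from those estimates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 3))      # hypothetical job features: size, complexity, novelty
y = 2.0 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)  # "true" times

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, y)

def evaluate_schedule(job_order, features):
    """Single-machine objectives (makespan, total flow time) from MLP estimates."""
    finish, total = 0.0, 0.0
    for d in mlp.predict(features[job_order]):
        finish += d
        total += finish
    return finish, total

print(evaluate_schedule(np.arange(5), X[:5]))  # objectives an NSGA-II would minimize
#+END_SRC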
* 381(<-541): Multi-objective particle swarm optimization hybrid algorithm: An application on industrial cracking furnace In this paper, a new multi-objective particle swarm optimization (MOPSO) procedure, based on the Pareto dominance hybrid algorithm, is proposed and applied to a naphtha industrial cracking furnace for the first time. Pareto dominance is incorporated into particle swarm optimization (PSO). Our algorithm takes the Pareto set as a repository of particles that is later used by other particles to guide their own flight. In addition, an MOPSO and artificial neural network (ANN) hybrid model is applied to the operation optimization of a naphtha industrial cracking furnace. Therein, a sensitivity analysis is carried out and taken as the basis for selecting the decision variables of the multi-objective problem. The validity and reliability of the proposed algorithm are verified both theoretically, on two test functions, and practically, through the optimization of the operating parameters of the cracking process. Moreover, the yields of ethylene and propylene are improved. 2007
* 382(<-598): Global optimization of a feature-based process sequence using GA and ANN techniques Operation sequencing has been a key area of research and development for computer-aided process planning (CAPP). An optimal process sequence can largely increase the efficiency and decrease the cost of production. Genetic algorithms (GAs) are a technique for seeking to `breed' good solutions to complex problems by survival of the fittest. Some attempts using GAs have been made at operation sequencing optimization, but few systems have attempted to provide a globally optimized fitness function definition. In addition, most of the systems lack adaptability or the ability to learn. This paper presents an optimization strategy for process sequencing based on multi-objective fitness: minimum manufacturing cost, shortest manufacturing time and best satisfaction of manufacturing sequence rules. A hybrid approach is proposed to incorporate a genetic algorithm, neural network and analytic hierarchy process (AHP) for process sequencing. After a brief study of the current research, relevant issues of process planning are described. A globally optimized fitness function is then defined, including the evaluation of manufacturing rules using AHP, calculation of cost and time, and determination of relative weights using neural network techniques. The proposed GA-based process sequencing, the implementation and test results are discussed. Finally, conclusions and future work are summarized. 2005
* 383(<-633): An intelligent simulation method based on artificial neural network for container yard operation This paper presents an intelligent simulation method for the regulation of container yard operation at a container terminal.
This method includes the functions of system status evaluation, operation rule and stack height regulation, and operation scheduling. In order to realize optimal operation regulation, a control architecture based on a fuzzy artificial neural network is established. The regulation process includes two phases: the prediction phase forecasts the incoming container quantity; the inference phase makes decisions on the operation rule and stack height. The operation scheduling is a fuzzy multi-objective programming problem with operation criteria such as minimum ship waiting time and operation time. An algorithm combining a genetic algorithm with simulation is developed. A case study is presented to verify the validity and usefulness of the method in a simulation environment. 2004
* 384(<-237): Optimization of the Activated Sludge Process This paper presents a multiobjective model for optimization of the activated sludge process (ASP) in a wastewater-treatment plant (WWTP). To minimize the energy consumption of the activated sludge process and maximize the quality of the effluent, three different objective functions are modeled [i.e., the airflow rate, the carbonaceous biochemical oxygen demand (CBOD) of the effluent, and the total suspended solids (TSS) of the effluent]. These models are developed using a multilayer perceptron (MLP) neural network based on industrial data. Dissolved oxygen (DO) is the controlled variable in these objectives. A multiobjective model that includes these objectives is solved with a multiobjective particle swarm optimization (MOPSO) algorithm. Computation results are reported for three trade-offs between energy savings and the quality of the effluent. A 15% reduction in airflow can be achieved by optimal settings of dissolved oxygen, provided that energy savings take precedence over the quality of the effluent. DOI: 10.1061/(ASCE)EY.1943-7897.0000092. (C) 2013 American Society of Civil Engineers. 2013
* 385(<-563): Learning-based automated negotiation between shipper and forwarder This paper studies an automated negotiation system by means of a learning-based approach. Negotiation between shipper and forwarder is used as an example in which the issues of negotiation are unit shipping price, delay penalty, due date, and shipping quantity. A data-ratios method is proposed as the input of the neural network technique to explore learning in automated negotiation with the negotiation decision functions (NDFs) developed by [Faratin, P., Sierra, C., & Jennings, N.R. (1998). Negotiation Decision Functions for Autonomous Agents. Robotics and Autonomous Systems, 24 (3), 159-182]. The concession tactic and weight of every issue offered by the opponent can be learned exactly from this process. After learning, a trade-off mechanism can be applied to achieve a better negotiation result in terms of the distance to the Pareto optimal solution. Based on the results of this study, we believe that our findings can provide more insight into agent-based negotiation and can be applied to improve negotiation processes. (c) 2006 Elsevier Ltd. All rights reserved. 2006
* 386(<-683): FUZZY THRESHOLD FUNCTIONS AND APPLICATIONS The set of fuzzy threshold functions is defined to be a fuzzy set over the set of functions. All threshold functions have full membership in this fuzzy set. Defines and investigates a distance measure between a non-linearly separable function and the set of all threshold functions.
Defines an explicit expression for the membership function of a fuzzy threshold function through the use of this distance measure and finds three upper bounds for this measure. Presents a general method to compute the distance, an algorithm to generate the representation automatically, and a procedure to determine the proper weights and thresholds automatically. Presents the relationships among threshold gate networks, artificial neural networks and fuzzy neural networks. The results may have useful applications in logic design, pattern recognition, fuzzy logic, multi-objective fuzzy optimization and related areas. 1995
* 387(<- 63): Thermochromic sensor design based on Fe(II) spin crossover/polymers hybrid materials and artificial neural networks as a tool in modelling This article explores the use of multi-objective evolutionary machine learning techniques to find the minimum number of sensors from a pool of 6 sensors, as well as the minimum number of analytical signals belonging to each selected sensor, for the design of an optimal colourimetric temperature sensor. The analytical information was obtained with a calibrated neural network that provides the best temperature estimation with respect to the selected colourimetric sensor responses from a previously developed sensor array. The sensor array was developed by embedding the linear spin crossover material [Fe-(NH(2)trz)(3)](BF4)(2) into polymers with different polarity, offering different thermochromic responses related to different morphologies of the spin crossover particles when embedded in each polymer. The different thermochromic responses are tracked by the green component of the RGB colour space and the a* from CIE L*a*b*, obtained with a conventional photographic digital camera. These two colour signals are used as analytical parameters for the subsequent computer processing and model calibration. The use of multi-objective optimization techniques for neural network calibration demonstrated that only 3 signals coming from 3 of the 6 sensors studied are sufficient to provide optimal temperature estimation. The optimized selection was the green channel from polyurethane hydrogel D6 and PVC prepared in THF, and a* from PMMA prepared in toluene. (C) 2014 Elsevier B.V. All rights reserved. 2015
* 388(<-104): Neuro-genetic multi-objective optimization and computer-aided design of pantoprazole molecularly imprinted polypyrrole sensor A molecularly imprinted polymer (MIP) of pantoprazole (PNZ) was prepared through electropolymerization of pyrrole on a functionalized multi-walled carbon nanotube modified pencil graphite electrode. The preparation of the MIP and the quantitative measurements were performed by cyclic voltammetry and differential pulse voltammetry (DPV), respectively. Several important parameters control the performance of the polypyrrole film: the pH of the buffer solution, the cyclic voltammetric scan rate in the polymerization step, the number of cyclic voltammetric scans, the monomer and template concentrations in the prepolymerization mixture, the nanotube concentration in the functionalized multi-walled carbon nanotube coating step, the uptake time after MIP preparation and the uptake-step stirring rate were all expected to affect MIP preparation and the voltammetric measurements. The optimization of these parameters was performed using Plackett-Burman design, central composite design, an artificial neural network and a genetic algorithm. The Pareto plot showed that the effects of monomer concentration and pH are the most important to the process. The best MIP to NIP response ratio obtained was 17.4. The selection of the monomer was performed computationally using ab initio calculations. The calibration curve demonstrated linearity over a concentration range of 5-700 mu M with a correlation coefficient (r) of 0.9980. The detection limit of PNZ was 3.75 x 10^-7 M. The minimum and maximum recoveries (%) from spiking 0.1-0.4 mM PNZ into a biological and some pharmaceutical matrices were 95.9% (human blood serum) and 106% (PNZ tablet), respectively. (C) 2014 Elsevier B.V. All rights reserved. 2014
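The neuro-genetic pattern in the two sensor entries above (fit an ANN to designed experiments, then let a GA search the fitted response surface) can be sketched in a few lines. The factors, data and GA settings below are invented for illustration only.

#+BEGIN_SRC python
# Sketch of ANN + GA optimization: an MLP is fitted to synthetic designed
# experiments, then a tiny GA searches the fitted surface for the best settings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (60, 2))                      # two coded factors (e.g. pH, scan rate)
y = -((X[:, 0] - 0.3) ** 2 + (X[:, 1] - 0.7) ** 2)  # synthetic response, peak at (0.3, 0.7)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=1).fit(X, y)

pop = rng.uniform(0, 1, (40, 2))                    # initial GA population
for _ in range(50):
    fitness = model.predict(pop)
    parents = pop[np.argsort(fitness)[-20:]]        # truncation selection
    children = parents[rng.integers(0, 20, 40)]     # cloning ...
    pop = np.clip(children + 0.05 * rng.standard_normal((40, 2)), 0, 1)  # ... + mutation
print(pop[np.argmax(model.predict(pop))])           # close to the true optimum (0.3, 0.7)
#+END_SRC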
* 389(<-372): A SELF ORGANIZING MAP BASED HYBRID MULTI-OBJECTIVE OPTIMIZATION OF WATER DISTRIBUTION NETWORKS Water Distribution Networks (WDNs) are an essential infrastructure of every civilization. In the past decades, there has been a lot of work on the optimization of WDNs. This paper presents a hybrid NSGA-II for multi-objective optimization of combinatorial WDN design, utilizing the SOM network as a tool to find genotypic or phenotypic similarities. SOM is a versatile unsupervised Artificial Neural Network (ANN) that can be used to extract similarities and find related vectors with the use of a proper similarity measure. The proposed method, SOM-NSGA-II, derives subpopulations or virtual islands for inbreeding similar individuals to speed up the convergence process of the optimization. The crossover operation between similar individuals of the subpopulations in the constraint-dominated region of the solution space showed faster convergence and a wider Pareto front for the test problems considered. An added advantage of the method is the application of genotypic sorting of the population by SOM for visual representation of the structure of the Pareto front. The resulting maps showed the extent of variation of the decision variables and their relative importance. This method may be utilized to speed up the optimization of large-scale WDNs and as an important visual aid for decision makers and designers of WDNs. 2011
* 390(<-388): Pairs trading and outranking: The multi-step-ahead forecasting case Pairs trading is a popular speculation strategy. Several implementation methods are proposed in the literature: they can be based on a distance criterion or on co-integration. This article extends previous research in another direction: the combination of forecasting techniques (Neural Networks) and multi-criteria decision making methods (Electre III). The key contribution of this paper is the introduction of multi-step-ahead forecasts. It leads to major changes in the trading system and raises new empirical and methodological questions. The results of an application based on S&P 100 Index stocks are promising: this methodology could be a powerful tool for pairs selection in a highly non-linear environment. (C) 2010 Elsevier B.V. All rights reserved. 2010
* 391(<-579): Nonessential objectives within network approaches for MCDM In Gal and Hanne [Eur. J. Oper. Res. 119 (1999) 373] the problem of using several methods to solve a multiple criteria decision making (MCDM) problem with linear objective functions after dropping nonessential objectives is analyzed. It turned out that the solution need not be the same when using various methods for solving the system containing the nonessential objectives or not. In this paper we consider the application of network approaches for multicriteria decision making such as neural networks and an approach for combining MCDM methods (called MCDM networks). We discuss questions of comparing the results obtained with several methods as applied to the problem with or without nonessential objectives. Especially, we argue for considering redundancies such as nonessential objectives as a native feature in complex information processing. In contrast to previous results on nonessential objectives, the current paper focuses on discrete MCDM problems which are also denoted as multiple attribute decision making (MADM). (c) 2004 Elsevier B.V. All rights reserved. 2006
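The multi-step-ahead forecasts in the pairs-trading entry above (390) are commonly produced by applying a one-step neural model recursively, feeding its own predictions back in. A minimal sketch of that recursion, on an invented series with made-up model settings:

#+BEGIN_SRC python
# Recursive multi-step-ahead forecasting: a one-step MLP is iterated, with
# each prediction appended to the input window for the next step.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
series = np.sin(np.arange(300) / 10.0) + 0.05 * rng.standard_normal(300)

lag = 5                                            # autoregressive window length
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=2)
model.fit(X, series[lag:])

window = list(series[-lag:])
forecasts = []
for _ in range(10):                                # 10 steps ahead
    nxt = model.predict([window])[0]
    forecasts.append(nxt)
    window = window[1:] + [nxt]                    # prediction re-enters the window
print(np.round(forecasts, 3))
#+END_SRC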
* 392(<-666): Clustering and selection of multiple criteria alternatives using unsupervised and supervised neural networks There are decision-making problems that involve grouping and selecting a set of alternatives. Traditional decision-making approaches treat different sets of alternatives with the same method of analysis and selection. In this paper, we propose clustering alternatives into different sets so that different methods of analysis, selection, and implementation can be applied to each set. We consider multiple criteria decision-making alternatives where the decision-maker is faced with several conflicting and non-commensurate objectives (or criteria). For example, consider buying a set of computers for a company that vary in terms of their functions, prices, and computing powers. In this paper, we develop theories and procedures for clustering and selecting discrete multiple criteria alternatives. The sets of alternatives clustered are mutually exclusive and are based on (1) similar features among alternatives, and (2) the preferential structure of the decision-maker. The decision-making process can be broken down into three steps: (1) generating alternatives; (2) grouping or clustering alternatives based on the similarity of their features; and (3) choosing one or more alternatives from each cluster of alternatives. We utilize unsupervised-learning clustering artificial neural networks (ANNs) with variable weights for the clustering of alternatives, and feedforward ANNs for the selection of the best alternatives from each cluster. The decision-maker is interactively involved by comparing and contrasting alternatives within each group so that the best alternative can be selected from each group. For the learning mechanism of the ANN, we propose using a generalized Euclidean distance, whereby changing its coefficients yields new clusterings of the alternatives. The algorithm is interactive and the results are independent of the initial set-up information. Some examples and computational results are presented. 2000
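The generalized Euclidean distance in the entry above is simply a criterion-weighted metric: changing the weights regroups the same alternatives. A self-contained sketch (the alternatives, the weights, and the k-means stand-in for the unsupervised ANN are all invented):

#+BEGIN_SRC python
# Weighted-metric clustering: d(a, b)^2 = sum_i w_i * (a_i - b_i)^2.
# Re-weighting the criteria changes which alternatives end up together.
import numpy as np

def weighted_kmeans(X, w, k=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d2 = (((X[:, None, :] - centers) ** 2) * w).sum(axis=2)  # weighted distances
        labels = d2.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# four alternatives scored on (cost, quality)
X = np.array([[1.0, 9.0], [1.2, 2.0], [8.0, 8.5], [8.3, 1.5]])
print(weighted_kmeans(X, w=np.array([1.0, 0.0])))  # cost only: groups {0,1} and {2,3}
print(weighted_kmeans(X, w=np.array([0.0, 1.0])))  # quality only: groups {0,2} and {1,3}
#+END_SRC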
* 393(<-681): IN-PROCESS REGRESSIONS AND ADAPTIVE MULTICRITERIA NEURAL NETWORKS FOR MONITORING AND SUPERVISING MACHINING OPERATIONS The authors develop a monitoring and supervising system for machining operations using in-process regressions (for monitoring) and adaptive feedforward artificial neural networks (for supervising). The system is designed for: (1) in-process tool life measurement and prediction; (2) supervision of machining operations in terms of the best machining setup; and (3) catastrophic tool failure monitoring. The monitoring system predicts tool life by using different sensors for gathering information, based on a regression model that allows for the variations between tools and different machine setups. The regression model makes its prediction by using the history of other tools and combining it with the information obtained about the tool under consideration. The supervision system identifies the best parameters for the machine setup problem within the framework of multiple criteria decision making. The decision maker (operator) considers several criteria, such as cutting quality, production rate and tool life. To make the optimal decision with several criteria, an adaptive feedforward artificial neural network is used to assess the decision maker's preferences. The authors' neural network approach learns from the decision maker's complex behavior and hence, in automatic mode, can make decisions for the decision maker. The approach is not computationally demanding, and experiments demonstrate that its predictions are accurate. 1995
* 394(<- 65): Decision support for management of urban transport projects The planning phase of urban transport project management is a complex process from both the managerial and techno-economic aspects. The focus of this research is on decision-making processes related to the planning phase during the management of urban road infrastructure projects. The proposed concept is based on multicriteria methods and Artificial Neural Networks. The decision-support concept presented in this paper is tested on the road infrastructure of the city of Split, and it shows how urban road infrastructure planning can be improved. 2015
* 395(<-215): A novel artificial immune clonal selection classification and rule mining with swarm learning model Metaheuristic optimisation algorithms have become a popular choice for solving complex problems. By integrating the Artificial Immune clonal selection algorithm (CSA) and the particle swarm optimisation (PSO) algorithm, a novel hybrid Clonal Selection Classification and Rule Mining with Swarm Learning Algorithm (CS2) is proposed. The main goal of the approach is to exploit and explore the parallel computation merit of Clonal Selection and the speed and self-organisation merits of Particle Swarm by sharing information between the clonal selection population and the particle swarm. Hence, we employed the advantages of PSO to improve the mutation mechanism of the artificial immune CSA and to mine classification rules within datasets. Consequently, our proposed algorithm required less training time and fewer memory cells in comparison to other AIS algorithms. In this paper, classification rule mining has been modelled as a multiobjective optimisation problem with predictive accuracy. The multiobjective approach is intended to allow the PSO algorithm to return an approximation to the accuracy and comprehensibility border, containing solutions that are spread across the border. We compared the classification accuracy of our proposed CS2 algorithm with five commonly used CSAs, namely: AIRS1, AIRS2, AIRS-Parallel, CLONALG, and CSCA, using eight benchmark datasets. We also compared the classification accuracy of CS2 with five other methods, namely: Naive Bayes, SVM, MLP, CART, and RFB. The results show that the proposed algorithm is comparable to the 10 studied algorithms. As a result, the hybridisation of CSA and PSO develops their respective merits, compensates for each other's defects, and improves both search quality and speed. 2013
* 396(<-232): Optimization of mechanical property and shape recovery behavior of Ti-(similar to 49 at.%) Ni alloy using artificial neural network and genetic algorithm Multi-objective genetic algorithm based searching is used for designing the process schedule of Ti-(similar to 49 at.%) Ni alloy, to achieve optimum mechanical property and shape recovery behavior. Artificial neural network based data-driven models are developed to empirically describe the relationship between the processing conditions and the properties. The models are used as objective functions for the optimization process. The optimization search was found to be helpful for designing the decision space variables to improve the shape recovery behavior without sacrificing the mechanical properties of the alloy. The Pareto solutions have been used as a guideline to find the process schedules, which is validated by suitable experimentation. (C) 2012 Elsevier Ltd. All rights reserved. 2013
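Several entries here follow the same recipe: ANN property models serve as the objective functions, and a GA returns the nondominated (Pareto) set as the design guideline. The core filtering step is short enough to sketch; the candidate points below are random stand-ins for ANN-predicted property pairs.

#+BEGIN_SRC python
# Minimal Pareto filter (both objectives maximized): keep the candidates not
# dominated by any other candidate.
import numpy as np

def pareto_front(F):
    """Indices of nondominated rows of F, assuming maximization."""
    keep = []
    for i, fi in enumerate(F):
        dominated = any(np.all(fj >= fi) and np.any(fj > fi)
                        for j, fj in enumerate(F) if j != i)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(3)
F = rng.random((100, 2))          # e.g. predicted (mechanical property, shape recovery)
front = pareto_front(F)
print(len(front), F[front][:3])   # the trade-off set handed to the decision maker
#+END_SRC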
* 397(<-509): An integrated multi-objective immune algorithm for optimizing the wire bonding process of integrated circuits Optimization of the wire bonding process of an integrated circuit (IC) is a multi-objective optimization problem (MOOP). In this research, an integrated multi-objective immune algorithm (MOIA) that combines an artificial immune algorithm (IA) with an artificial neural network (ANN) and a generalized Pareto-based scale-independent fitness function (GPSIFF) is developed to find the optimal process parameters for the first bond of IC wire bonding. The back-propagation ANN is used to establish the nonlinear multivariate relationships between the wire bonding parameters and the multi-responses, and is applied to generate the multiple response values for each antibody generated by the IA. The GPSIFF is then used to evaluate the affinity of each antibody and to find the non-dominated solutions. The "Error Ratio" is then applied to measure the convergence of the integrated approach. The "Spread Metric" is used to measure the diversity of the proposed approach. Implementation results show that the integrated MOIA approach does generate the Pareto-optimal solutions for the decision maker, and the Pareto-optimal solutions have good convergence and diversity performance. 2008
* 398(<-521): Design of electroceramic materials using artificial neural networks and multiobjective evolutionary algorithms We describe the computational design of electroceramic materials with optimal permittivity for application as electronic components. Given the difficulty of large-scale manufacture and characterization of these materials, including the theoretical prediction of their materials properties by conventional means, our approach is based on a recently established database containing composition and property information for a wide range of ceramic compounds. The electroceramic materials composition-function relationship is encapsulated by an artificial neural network which is used as one of the objectives in a multiobjective evolutionary algorithm. Evolutionary algorithms are stochastic optimization techniques which we employ to search for optimal materials based on chemical composition. The other objectives optimized include the reliability of the neural network prediction and the overall electrostatic charge of the material. The evolutionary algorithm searches for materials which simultaneously have high relative permittivity, minimum overall charge, and good prediction reliability. We find that we are able to predict a range of new electroceramic materials with varying degrees of reliability. In some cases the materials are similar to those contained in the database; in others, completely new materials are predicted. 2008
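The "Error Ratio" and "Spread Metric" named in the wire-bonding entry (397) are standard front-quality measures, although their exact definitions vary across the literature. The sketch below uses one common form of each, on made-up fronts:

#+BEGIN_SRC python
# Error ratio: fraction of obtained points that are not on the true front
# (convergence). Spread: variability of gaps between neighbouring points
# along the front (diversity); 0 means perfectly even spacing.
import numpy as np

def error_ratio(obtained, true_front):
    true_set = {tuple(p) for p in true_front}
    return sum(tuple(p) not in true_set for p in obtained) / len(obtained)

def spread(front):
    f = front[np.argsort(front[:, 0])]              # order along first objective
    gaps = np.linalg.norm(np.diff(f, axis=0), axis=1)
    return gaps.std() / gaps.mean()

true_front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
obtained = np.array([[0.0, 1.0], [0.5, 0.5], [0.9, 0.2]])
print(error_ratio(obtained, true_front), round(spread(obtained), 3))
#+END_SRC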
* 399(<-574): Approach to optimization of cutting conditions by using artificial neural networks Optimum selection of cutting conditions contributes importantly to increased productivity and reduced costs; therefore, utmost attention is paid to this problem in this contribution. In this paper, a neural network-based approach to complex optimization of cutting parameters is proposed. It describes a multi-objective technique for the optimization of cutting conditions by means of neural networks, taking into consideration the technological, economic and organizational limitations. To reach higher precision of the predicted results, a neural optimization algorithm is developed and presented to ensure simple, fast and efficient optimization of all important turning parameters. The approach is suitable for the fast determination of optimum cutting parameters during machining, where there is not enough time for deep analysis. To demonstrate the procedure and performance of the neural network approach, an illustrative example is discussed in detail. (c) 2005 Elsevier B.V. All rights reserved. 2006
* 400(<-581): Prediction of human behaviour using artificial neural networks This paper contributes to the analysis and prediction of deviate intentional behaviour of human operators in Human-Machine Systems using Artificial Neural Networks that take uncertainty into account. Such deviate intentional behaviour is a particular violation, called Barrier Removal. The objective of the paper is to propose a predictive Benefit-Cost-Deficit model that allows a multi-reference, multi-factor and multi-criterion evaluation. Human operator evaluations can be uncertain. The uncertainty of their subjective judgements is therefore integrated into the prediction of the Barrier Removal. The proposed approach is validated on a railway application, and the prediction convergence of the uncertainty-integrating model is demonstrated. 2006
* 401(<-674): Evaluation of factors and current approaches related to computerized design of tillage tools: a review The objectives of this paper are to evaluate the factors that are involved in the tillage process, and to explore the potential approaches for the computer-aided design of tillage tools. An overview related to the dynamic effect on the performance of tillage operations has been conducted. Compared with analytical methods, the finite element method (FEM) has some advantages for the computerized design of tillage tools. Artificial neural networks (ANN) may be useful for the integrated evaluation of tillage performance with multiple objectives. ANN can be employed for the simulation of a dynamic constitutive model and the identification of soil conditions for agricultural soils. The integrated approach of ANN analysis with FEM is found to be promising for optimizing the design of tillage tools. (C) 1998 ISTVS. All rights reserved. 1998
* 402(<-682): COMPARING BP AND ART-II NEURAL-NETWORK CLASSIFIERS FOR FACILITY LOCATION This paper compares the performance of Artificial Neural Networks (ANNs) as classifiers in the facility location domain. The ART II (Adaptive Resonance Theory) and BP (Back Propagation) paradigms are used as exemplars of ANNs developed using supervised and unsupervised learning. Their performances are compared with that obtained using a linear multi-attribute utility model (MAUM) to classify the 48 states in the continental U.S.A. based on location profiles developed from government publications. In this paper, the models are used to classify the U.S.
states based on their suitability for accommodating new manufacturing facilities. For this data set, the BP ANN model displayed robust performance and showed better convergence with the MAUM. 1995
* 403(<-267): Application of a linearly decreasing weight particle swarm to optimize the process conditions of al matrix nanocomposites In this paper, SiC nanoparticles were added into the commercial casting Al-Si aluminum alloy to fabricate metal matrix nanocomposites (MMNCs) with uniform reinforcement distribution. Experimental results revealed that the presence of nano-SiC reinforcement led to significant improvement in hardness and UTS while the ductility of the aluminum matrix is retained. An integrated optimization approach using an artificial neural network and a modified particle swarm is proposed to solve a process parameter design problem in casting. The artificial neural network is used to obtain the relationships between decision variables and the performance measures of interest, while the particle swarm is used to perform the optimization with multiple objectives. The results showed that the particle swarm is an effective method for solving multi-objective optimization problems, and that an integrated approach using an artificial neural network and a modified particle swarm can be used to solve complex process parameter design problems. 2012
* 404(<-353): Optimization of tile manufacturing process using particle swarm optimization In this paper, an integrated optimization approach using an artificial neural network and a bidirectional particle swarm is proposed. The artificial neural network is used to obtain the relationships between decision variables and the performance measures of interest, while the bidirectional particle swarm is used to perform the optimization with multiple objectives. Finally, the proposed approach is used to solve a process parameter design problem in cement roof-tile manufacturing. The results showed that the bidirectional particle swarm is an effective method for solving multi-objective optimization problems, and that an integrated approach using an artificial neural network and a bidirectional particle swarm can be used to solve complex process parameter design problems. (C) 2011 Elsevier B.V. All rights reserved. 2011
* 405(<- 27): EFFECTS OF PROJECT UNCERTAINTIES ON NONLINEAR TIME-COST TRADEOFF PROFILE This study presents the effects of project uncertainties on the nonlinear time-cost tradeoff (TCT) profile of real-life engineering projects by the fusion of fuzzy logic and artificial neural network (ANN) models with a hybrid meta-heuristic (HMH) technique, abridged as Fuzzy-ANN-HMH. The nonlinear time-cost relationship of project activities is dealt with through ANN models. The ANN models are then integrated with the HMH technique to search for the Pareto-optimal nonlinear TCT profile. The HMH technique incorporates simulated annealing in the selection process of a multiobjective genetic algorithm. Moreover, in real-life engineering projects, uncertainties like management experience, labor skills, and weather conditions are commonly involved, which affect the duration and cost of the project activities. Fuzzy-ANN-HMH analyses the responsiveness of the nonlinear TCT profile with respect to these uncertainties. A comparison of Fuzzy-ANN-HMH is made with another method in the literature for solving the nonlinear TCT problem, and the superiority of Fuzzy-ANN-HMH is demonstrated by the results. The study enables project planners to carry out the best plan that optimizes time and cost to complete a project under an uncertain environment. 2015
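The time-cost tradeoff (TCT) problem behind this and the next two entries has a simple structure: each activity can run in a normal or a crashed mode, and the methods search for the Pareto front of (project duration, total cost). A toy sketch with serial activities and invented numbers; exhaustive enumeration stands in for the GA:

#+BEGIN_SRC python
# Toy TCT instance: three serial activities, each with (duration, cost) modes.
# Enumerate all mode combinations and keep the nondominated (time, cost) pairs.
from itertools import product

activities = [
    [(4, 100), (2, 180)],   # (normal, crashed) modes per activity
    [(6, 150), (3, 260)],
    [(5, 120), (4, 160)],
]

points = []
for modes in product(*activities):
    time = sum(d for d, _ in modes)
    cost = sum(c for _, c in modes)
    points.append((time, cost))

front = [p for p in points
         if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print(sorted(front))        # the tradeoff profile, from cheap-and-slow to dear-and-fast
#+END_SRC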
* 406(<-169): Integrated ANN-HMH Approach for Nonlinear Time-Cost Tradeoff Problem This paper presents an integrated Artificial Neural Network - Hybrid Meta-Heuristic (ANN-HMH) method to solve the nonlinear time-cost tradeoff (TCT) problem of real-life engineering projects. ANN models help to capture the existing nonlinear time-cost relationship in project activities. The ANN models are then integrated with the HMH technique to search for the optimal TCT profile. HMH is a proven evolutionary multiobjective optimization technique for solving TCT problems. The study has implications for real-time monitoring and control of project scheduling processes. 2014
* 407(<-522): Neural network embedded multiobjective genetic algorithm to solve non-linear time-cost tradeoff problems of project scheduling This paper presents a novel method to solve the non-linear time-cost tradeoff (TCT) problem of real-world engineering projects. A multiobjective genetic algorithm (MOGA) is employed to search for the optimal TCT profile. The applicability of ANN-based models for rapid estimation of the time-cost relationship, by invoking their function approximation capability, is investigated. ANN models are then integrated with the MOGA so as to develop a comprehensive approach to solving non-linear TCT problems of project scheduling. The study has implications for real-time monitoring and control of the project scheduling process. 2008
* 408(<-277): Defining a nonlinear control problem to reduce particulate matter population exposure In this paper a multi-objective nonlinear approach to control air quality at a regional scale is presented. Both the economic and the air quality sides of the problem are modeled through artificial neural network models. Simulating the complex nonlinear atmospheric phenomena, they can be used in an optimization routine to identify the efficient solutions of a decision problem for air quality planning. The methodology is applied over Northern Italy, an area in Europe known for its high concentrations of particulate matter. Results illustrate the effectiveness of the approach in assessing the nonlinear chemical reactions in an air quality decision problem. (C) 2012 Elsevier Ltd. All rights reserved. 2012
* 409(<-288): Surrogate models to compute optimal air quality planning policies at a regional scale Secondary pollutants (such as PM10) derive from complex non-linear reactions involving precursor emissions, namely VOC, NOx, NH3, primary PM and SO2. Given the difficulty of coping with this complexity, Decision Support Systems (DSSs) are essential tools to help Environmental Authorities to plan air quality policies that fulfill EU Directive 2008/50 requirements in a cost-efficient way. To implement these DSSs, the common approach is to describe the air quality indices using linear models, derived through model reduction techniques starting from deterministic Chemical Transport Model simulations. This linear approach limits the applicability of these surrogate models, and while they may work properly at coarse spatial resolutions (continental/national), where average values over large areas are of interest, they often prove inadequate at sub-national scales, where the impact of non-linearities on air quality is usually higher. The objective of this work is to identify air quality models able to properly describe the relation between emissions and air quality indices at a sub-national scale.
In this context, artificial neural networks, identified by processing long-term simulation output of a 3D deterministic multi-phase modelling system, are used to describe the non-linear relations between the control variables (precursor emission reductions) and a pollution index. These models can then be used with a reasonable computing effort to solve a multi-objective (air quality and emission reduction costs) optimization problem that requires thousands of model runs and would thus be unfeasible using the original process-based model. A case study of Northern Italy is presented. (C) 2011 Elsevier Ltd. All rights reserved. 2012
* 410(<-508): A multi-objective nonlinear optimization approach to designing effective air quality control policies This paper presents the implementation of a two-objective optimization methodology to select effective tropospheric ozone pollution control strategies on a mesoscale domain. The objectives considered are (a) the emission reduction cost and (b) the Air Quality Index. The control variables are the precursor emission reductions due to available technologies. The nonlinear relationship linking the air quality objective and precursor emissions is described by artificial neural networks, identified by processing deterministic Chemical Transport Modeling system simulations. Pareto optimal solutions are calculated with the Weighted Sum Strategy. The two-objective problem has been applied to a complex domain in Northern Italy, including the Milan metropolitan area, a region characterized by frequent and persistent ozone episodes. (c) 2008 Elsevier Ltd. All rights reserved. 2008
* 411(<- 11): AI techniques for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands Flow regime is the key driver of riverine ecology. This study proposes a novel hybrid methodology based on artificial intelligence (AI) techniques for quantifying riverine ecosystem requirements and delivering suitable flow regimes that sustain river and floodplain ecology through optimizing reservoir operation. This approach addresses the issue of better fitting riverine ecosystem requirements alongside existing human demands. We first explored and characterized the relationship between flow regimes and fish communities through a hybrid artificial neural network (ANN). Then the non-dominated sorting genetic algorithm II (NSGA-II) was established for river flow management over the Shihmen Reservoir in northern Taiwan. The ecosystem requirement took the form of maximizing fish diversity, which could be estimated by the hybrid ANN. The human requirement was to provide a higher satisfaction degree of water supply. The results demonstrated that the proposed methodology could offer a number of diversified alternative strategies for reservoir operation and improve reservoir operational strategies, producing downstream flows that could meet both human and ecosystem needs. Applications that make this methodology attractive to water resources managers benefit from the wide spread of Pareto-front (optimal) solutions, allowing decision makers to easily determine the best compromise through the trade-off between reservoir operational strategies for human and ecosystem needs. (C) 2015 Elsevier B.V. All rights reserved. 2015
* 412(<-140): Simulation and Optimization Modeling for the Management of Groundwater Resources. II: Combined Applications The world population is increasing continuously and is expected to reach the 9.5 billion mark in 2050 from the current 7.1 billion.
The importance of groundwater resources is also increasing with the increase in population, because the quality and quantity of water resources are continuously declining due to urbanization, contamination, and climate change impacts. Thus, under the current environment, the conservation and management of groundwater resources is a critical challenge for fulfilling the rising water demand for agricultural, industrial, and domestic uses. Various simulation and optimization approaches have been used to solve groundwater management problems. Optimization methods have proved to be of great importance when used with simulation models, and the combined use of these two approaches gives the best results. The main objective of this review is to analyze the combined use of simulation and optimization modeling approaches and to provide an impression of their applications reported in the literature. In addition to traditional optimization techniques, this paper highlights the application of computational intelligence techniques, such as artificial neural networks, the response matrix approach, and the multiobjective approach. Conclusions are drawn based on this review, which could be useful for system managers and planners in selecting the most suitable technique for their specific uses. 2014
* 413(<-160): Optimization modelling for seawater intrusion management The coastal aquifers of the world are facing the environmental problem of seawater intrusion. This problem is the result of indiscriminate and unplanned groundwater exploitation for fulfilling the growing need for freshwater of the burgeoning global population. There is a need to develop appropriate management models for assessing the maximum feasible pumping rates which protect coastal aquifers against seawater intrusion. A comprehensive review of the use of various programming techniques for the solution of the seawater intrusion management problem of coastal aquifers is provided in this paper. The literature review revealed that the management models used in the past mainly considered the objectives of maximization of pumping rate, minimization of drawdown, minimization of pumped water, minimization of seawater volume into the aquifer, and/or minimization of pumping cost. The reviewed studies are grouped into five sections based on the programming techniques adopted. The sections include: linear programming, nonlinear programming, genetic algorithms, artificial neural networks, and multiobjective optimization models. Conclusions are drawn about where gaps exist and where more research needs to be focused. This review provides the basis for the selection of an appropriate methodology for the management of seawater intrusion problems of coastal aquifers. (C) 2013 Elsevier B.V. All rights reserved. 2014
* 414(<-171): Multi-Objective Quantity-Quality Reservoir Operation in Sudden Pollution Damage caused by pollution entering reservoirs can affect a water resource system in two ways: (1) damage caused by the consumption of polluted water, and (2) damage caused by insufficient water allocation. These damages conflict with each other. Thus, the crisis should be managed in a way that the least damage occurs in the water resource system. This paper investigates crisis management due to the sudden entrance of a 30 m^3 methyl tert-butyl ether (MTBE) load into the Karaj dam in Iran, which supplies municipal water to the cities of Tehran and Karaj.
To simulate MTBE advection, dispersion, and vaporization, the latter process is added to the CE-QUAL-W2 model. After that, the multi-objective NSGAII-ALANN algorithm, a combination of the NSGA-II optimization method and a multilayer perceptron (MLP), one of the most widely used artificial neural network (ANN) structures, is employed to extract the best set of decisions in which the two aforementioned damages are minimized. By assigning a specific importance to each objective function after extracting the optimal solutions, it is possible to choose one of the solutions with the least damage. Four scenarios of pollution entering the Karaj reservoir on the first day of each season are considered, resulting in a Pareto set of operation policies for each scenario. Results of the proposed methodology indicate that if the pollution enters the reservoir in summer, by using one of the optimal policies extracted from the Pareto set of the 2nd Scenario, with a 36% reduction in meeting the demand, allocated pollution decreases to about 60%. In other seasons, there is a significant decrease in allocated pollution with a smaller reduction in the met demand. 2014
* 415(<-347): Characterizing the Socio-Economic Driving Forces of Groundwater Abstraction with Artificial Neural Networks and Multivariate Techniques Integrated groundwater quantity management in the Middle East region must consider appropriate control measures for the socio-economic needs. Hence, there is a need for better knowledge and understanding of the socio-economic variables influencing groundwater quantity. The Gaza Strip was chosen as the study area and real data were collected from twenty-five municipalities for the reference year 2001. In this paper, the effective variables have been characterized and prioritized using multi-criteria analysis with artificial neural networks (ANN) and expert opinion and judgment. The selected variables were classified and organized using the multivariate techniques of cluster analysis, factor analysis, principal components and classification analysis. There are significant discrepancies between the results of the ANN analysis and expert opinion and judgment in terms of ranking and prioritizing the socio-economic variables. Characterization of the priority effective socio-economic driving forces indicates that water managers and planners can introduce demand-based groundwater management in place of the existing supply-based groundwater management. This ensures the success of undertaking responsive technical, managerial and regulatory measures. Income per capita has the highest priority. Efficiency of revenue collection is not a significant socio-economic factor. The models strengthen the integration of a preventive approach into groundwater quantity management. In addition, they assist decision makers to better assess the socio-economic needs and undertake proactive measures to protect the coastal aquifer. 2011
* 416(<- 25): Multi-Objective Operations of Multi-Wetland Ecosystem: iModel Applied to the Everglades Restoration The Everglades is a complex, multiwetland ecosystem that is heavily managed to meet often-competing flood control, water supply, and environmental demands. Using objective measures to balance these demands through operational protocols has always been a challenge in the multibillion-dollar restoration plans for the ecosystem.
Physically based models have been the primary tools for planning efforts, but for such a complex system they are laborious and computationally intensive. Development of optimal operations based on iterative runs of these models is a great challenge. This paper presents an inverse modeling framework for formal optimization suited for wetland system operations that helps overcome such limitations. Labor-intensive and computation-intensive physically representative models are emulated in each individual wetland area using an autoregressive artificial neural network with exogenous variables. Using prescribed inflow, outflow, and meteorological input data, such hydrologic model emulators, aided by a dimension-reduction technique, provide targeted spatial and temporal predictions of water level (stage) within each area of the Everglades, while excluding computation processes that are intensive but insignificant to the predictions. This computer software uses the augmented Lagrangian genetic algorithm technique (subject to linear and nonlinear constraints) to steer predictions of stage spatial variability within individual wetlands towards corresponding desired goals (including restoration targets). In the augmented Lagrangian genetic algorithm, flow releases are coded as the decision variables to be optimized subject to budget, intrahydraulic conveyance, flow capacity, and upstream storage constraints. Optimization is performed by dividing and solving a sequence of subproblems using the genetic algorithm procedures of initialization, selection, elitism, crossover, and mutation. As part of the process, Lagrangian and penalty parameters are updated, and optimization terminates when certain stopping criteria are met. Applying the technique reported in this paper to a specific Everglades restoration plan (the River of Grass Project) showed sound hydrologic model emulator predictions when compared to the physical model for all wetland areas. Feeding optimal releases predicted by the computer software into the physical model showed equal or better matching of the restoration target, with different release patterns, compared to that of the physical model base run scenario. Results show that hydraulic conveyance limitations play a significant role in Everglades restoration. Also, results show that employing an adversity tradeoff matrix presented multiple so-called optimal solutions with different optimization weights and a powerful negotiation matrix. (C) 2015 American Society of Civil Engineers. 2015
* 417(<- 42): Multi-Objective Optimal Operation Model of Cascade Reservoirs and Its Application on Water and Sediment Regulation Recently, suspended rivers, which have formed both in tributaries and in the main stream of the Ningxia-Inner Mongolia reaches of the Upper Yellow River, severely threaten people's lives and property downstream. In this paper, taking the Ningxia-Inner Mongolia reaches as the example, water and sediment are regulated by cascade reservoirs upstream, which can form artificially controlled floods, to improve the water-sediment relationship and slow down the sedimentation rate. Then, a multi-objective optimal operation model of cascade reservoirs is established with four objectives: ice and flood control, power generation, water supply, and water and sediment regulation.
Based on a feasible-search-space optimization technique, the non-dominated sorting genetic algorithm (NSGA-II) is improved and a new multi-objective algorithm, the Feasible Search Space Optimization Non-dominated Sorting Genetic Algorithm (FSSO-NSGA-II), is proposed in this paper. The best time for water and sediment regulation is discussed, and a regulation index system and scenarios are constructed for the three planning-level years of 2010, 2020 and 2030. After that, regulation effects and contributions to sediment transportation are quantified under the three scenarios. Compared with historical data from 2010, the accuracy and superiority of the multi-objective model and FSSO-NSGA-II are verified. Moreover, four-dimensional vector coordinate systems are proposed to represent each objective, and the sensitivity of the three scenarios is analyzed to clarify the impact of the regulation indexes on the regulation objectives. Finally, the relationships between the four objectives are revealed. The research findings provide optimal solutions for multi-objective optimal operation via FSSO-NSGA-II, which are of theoretical significance for enriching the methods of optimal water and sediment operation by cascade reservoirs, and of practical significance for implementing water and sediment regulation and constructing a water and sediment control system in the whole Yellow River basin. Research Highlights: We establish a multi-objective optimal operation model with four regulation objectives. We propose an improved multi-objective algorithm (FSSO-NSGA-II) based on feasible search space optimization. A regulation index system and three scenarios are constructed. Four-dimensional vector coordinate systems are proposed to represent each objective, and the sensitivity of the scenarios is analyzed. Relationships between the four objectives are revealed. 2015
* 418(<- 99): An adaptive ant colony optimization framework for scheduling environmental flow management alternatives under varied environmental water availability conditions Human water use is increasing and, as such, water for the environment is limited and needs to be managed efficiently. One method for achieving this is the scheduling of environmental flow management alternatives (EFMAs) (e.g., releases, wetland regulators), with these schedules generally developed over a number of years. However, the availability of environmental water changes annually as a result of natural variability (e.g., drought, wet years). To incorporate this variation and schedule EFMAs in an operational setting, a previously formulated multiobjective optimization approach for EFMA schedule development used for long-term planning has been modified and incorporated into an adaptive framework. As part of this approach, optimal schedules are updated at regular intervals during the planning horizon based on environmental water allocation forecasts, which are obtained using artificial neural networks. In addition, the changes between current and updated schedules can be minimized to reduce any disruptions to long-term planning. The utility of the approach is assessed by applying it to an 89 km section of the River Murray in South Australia. Results indicate that the approach is beneficial under a range of hydrological conditions and an improved ecological response is obtained in an operational setting compared with previous long-term approaches. Also, it successfully produces trade-offs between the number of disruptions to schedules and the ecological response, with results suggesting that the ecological response increases with minimal alterations required to existing schedules. Overall, the results indicate that the information obtained using the proposed approach potentially aids managers in the efficient management of environmental water. Key Points: an adaptive optimization framework is developed for use in an operational setting; the framework shows advantages compared with long-term planning approaches; ecological response increases with minimal disruption to existing schedules.
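The adaptive loop in the entry above pairs a neural forecast of the coming allocation with a re-optimization that penalizes deviating from the current schedule. A compact sketch; the data, the square-root benefit model, the penalty weight, and the random search standing in for the evolutionary optimizer are all invented:

#+BEGIN_SRC python
# Adaptive schedule updating: forecast next allocation with an MLP, then pick
# the candidate schedule maximizing benefit minus a disruption penalty.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
hist = rng.uniform(50, 150, 40)                    # past annual allocations (GL)
X = np.array([hist[i:i + 3] for i in range(37)])   # 3-year input windows
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=4000, random_state=4)
model.fit(X, hist[3:])

def reoptimize(current, budget, n=200, penalty=0.2):
    best, best_score = current, -np.inf
    for _ in range(n):                             # random search stands in for the EA
        cand = rng.dirichlet(np.ones(len(current))) * budget  # feasible budget split
        score = np.sqrt(cand).sum() - penalty * np.abs(cand - current).sum()
        if score > best_score:
            best, best_score = cand, score
    return best

current = np.full(4, 25.0)                         # current plan: 4 sites, 25 GL each
budget = model.predict([hist[-3:]])[0]             # forecast of next allocation
print(np.round(reoptimize(current, budget), 1))
#+END_SRC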
Also, it successfully produces trade-offs between the number of disruptions to schedules and the ecological response, with results suggesting that the ecological response can be increased with only minimal alterations to existing schedules. Overall, the results indicate that the information obtained using the proposed approach potentially aids managers in the efficient management of environmental water. Key Points: An adaptive optimization framework is developed for use in an operational setting. The framework shows advantages compared with long-term planning approaches. Ecological response is increased with minimal disruption to existing schedules. hybridization) in order to significantly reduce the very high computational effort required by the optimization process. The results show that by using this hybrid optimization procedure, the computation time of a single optimization run can be reduced by 46-72% while achieving Pareto-optimal solution sets with similar, or even slightly better, quality as those obtained when conducting NSGA-II runs that use FE simulations over the whole run-time of the optimization process. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 517(<-200): Decomposition-based multi-objective differential evolution particle swarm optimization for the design of a tubular permanent magnet linear synchronous motor This article proposes a decomposition-based multi-objective differential evolution particle swarm optimization (DMDEPSO) algorithm for the design of a tubular permanent magnet linear synchronous motor (TPMLSM) which takes into account multiple conflicting objectives. In the optimization process, the objectives are evaluated by an artificial neural network response surface (ANNRS), which is trained by samples of the TPMLSM whose performances are calculated by finite element analysis (FEA). DMDEPSO, which hybridizes differential evolution (DE) and particle swarm optimization (PSO), first decomposes the multi-objective optimization problem into a number of single-objective optimization subproblems, each of which is associated with a Pareto optimal solution, and then optimizes these subproblems simultaneously. PSO updates the position of each particle (solution) according to the best information about itself and its neighbourhood. If any particle stagnates continuously, DE relocates its position by using two different particles randomly selected from the whole swarm. Finally, based on the DMDEPSO, optimization is gradually carried out to maximize the thrust of the TPMLSM and minimize the ripple, permanent magnet volume, and winding volume simultaneously. The result shows that the optimized TPMLSM meets or exceeds the performance requirements. In addition, comparisons with chosen algorithms illustrate the effectiveness of DMDEPSO in finding the Pareto optimal solutions for the TPMLSM optimization problem. 2013 * 518(<-269): Multi objective calibration of large scaled water quality model using a hybrid particle swarm optimization and neural network algorithm Large-scale simulation models, especially water quality simulation models, are so complicated that calibration becomes a huge task; to attain an optimum solution, many parameters must be calibrated simultaneously. Methods based on evolutionary algorithms have opened new horizons in the calibration procedure. Hybrid algorithms are among the newest. In hybrid algorithms, one module serves as a simulator and the other acts as an optimization module.
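The simulator/estimator substitution just described (sample the expensive simulator, train a neural network estimator, then let the optimizer query the estimator) recurs throughout the entries that follow. A minimal sketch of that pattern, assuming scikit-learn's MLPRegressor as the estimator and a plain PSO loop, is given below; the sampling plan, network size and PSO constants are arbitrary illustrative choices.

#+BEGIN_SRC python
import numpy as np
from sklearn.neural_network import MLPRegressor

def surrogate_pso(simulator, bounds, n_train=60, n_particles=20, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    # 1) sample the expensive simulator once to build training data
    X = rng.uniform(lo, hi, size=(n_train, len(bounds)))
    y = np.array([simulator(x) for x in X])
    est = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                       random_state=0).fit(X, y)
    # 2) run PSO on the cheap estimator instead of the simulator
    pos = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), est.predict(pos)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = est.predict(pos)
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    # 3) verify the surrogate optimum on the true model
    return gbest, simulator(gbest)
#+END_SRC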
In this article, to overcome these challenges, a hybrid ANN-PSO algorithm is applied in the calibration process of the water quality model CE-QUAL-W2. Here, Particle Swarm Optimization (PSO) provides the simulation model (CE-QUAL-W2) with sets of parameters to run. Using these results, a neural network (the estimator) is trained. In the next step, the simulator is replaced with the estimator, and the Artificial Neural Network (ANN) estimates the simulator's behavior in far less time. The first goal is to calibrate the thermal parameter; proceeding through this process requires the water surface elevation parameter to be calibrated as well. As a result, the proposed model becomes a multi-objective one, applied to the Karkheh reservoir in Iran over a six-month simulation period. The proposed approach overcomes the high computational efforts required if a conventional calibration search technique were used, while retaining the quality of the final calibration results. Embedding the estimator (ANN) in the optimization algorithm (PSO) during the calibration process undoubtedly reduced run time while the answers retained reliable quality. 2012 * 519(<-366): Multi-objective memetic algorithm: comparing artificial neural networks and pattern search filter method approaches In this work, two methodologies to reduce the computation time of expensive multi-objective optimization problems are compared. These methodologies consist of the hybridization of a multi-objective evolutionary algorithm (MOEA) with local search procedures. First, an inverse artificial neural network proposed previously, consisting of mapping the decision variables into the multiple objectives to be optimized in order to generate improved solutions on certain generations of the MOEA, is presented. Second, a new approach based on a pattern search filter method is proposed in order to perform a local search around certain solutions selected previously from the Pareto frontier. The results obtained, by the application of both methodologies to difficult test problems, indicate a good performance of the approaches proposed. 2011 * 520(<-391): Optimum Design of Tubular Permanent-Magnet Motors for Thrust Characteristics Improvement by Combined Taguchi-Neural Network Approach Although tubular permanent-magnet motors have advantages such as remarkable force capability and high efficiency due to the lack of end windings, they suffer from high thrust force ripple. This paper presents the use of the Taguchi method and an artificial neural network (ANN) for shape optimization of axially magnetized tubular linear permanent-magnet (TLPM) motors. A multiobjective design optimization is presented to improve force ripple, developed thrust, and permanent-magnet volume simultaneously. The iron pole-piece slotting technique is used and its design parameters are optimized to minimize the motor's force pulsation. To obtain the optimal configuration using this technique, four design variables are selected and their approximate optimum values are determined by the Taguchi method using analysis of means (ANOM). In the next step, the two most influential parameters are selected by analysis of variance (ANOVA) and their accurate optimum values are obtained by a trained ANN. Finite-element analysis (FEA) is used to appraise the performance of the motor in different experiments of the Taguchi method and for training the ANN. The results show that the force pulsation of the optimized motor is greatly reduced while there is a small drop in the motor thrust.
2010 * 521(<-545): A neural networks inversion-based algorithm for multiobjective design of a high-field superconducting dipole magnet In this paper, an original algorithm to solve multiobjective design problems, which makes use of a neural network (NN) inversion method, is presented. The proposed approach allows us to explore the solutions directly in the objective space, rather than in the parameter space, with a great saving of computation time in the reconstruction of the Pareto front. A multilayer perceptron NN is first trained to solve the analysis design problem. The inversion of the neural model then allows us to obtain the design parameters, starting from the desired requirements on all the conflicting multiple objectives. The performance of the method is demonstrated by its application to the design of a high-field superconducting dipole magnet, where a tradeoff between the superconductor volumes is required in order to obtain a prescribed magnetic field value on the dipole axis. 2007 * 522(<- 1): Minimum-weight design for three dimensional woven composite stiffened panels using neural networks and genetic algorithms The paper describes a modeling strategy for multi-scale analysis and optimization of stiffened panels made of three-dimensional woven composites. Artificial neural network techniques are utilized to generate an approximate response for the optimum structural design in order to increase efficiency and applicability. The artificial neural networks are integrated with genetic algorithms to optimize mixed discrete-continuous design variables for the three dimensional woven composite structures. The proposed procedure is then applied to the multi-objective optimal design of a stiffened panel subject to buckling and post-buckling requirements. (C) 2015 Elsevier Ltd. All rights reserved. 2015 * 523(<-143): Neuro-evolutionary optimization methodology applied to the synthesis process of ash based adsorbents Ash and modified ash were investigated as alternative adsorbents for copper ions. Our aim was to establish optimal working conditions for obtaining the new adsorbents, using a neuro-evolutionary optimization methodology. The materials were characterized by SEM, FT-IR, EDAX, XRD, and by the removal percentage. Three multilayer perceptron neural networks were developed and aggregated into a stack to form the model of the process. The neural model was integrated into an optimization procedure solved with a genetic algorithm to obtain the optimum values for the percentage of adsorption. The new adsorbents provide two benefits: environmental protection and energy recovery. (C) 2013 The Korean Society of Industrial and Engineering Chemistry. Published by Elsevier B.V. All rights reserved. 2014 * 524(<-177): Multi-objective optimization of a building envelope for thermal performance using genetic algorithms and artificial neural network The objective of this paper is to present a method to optimize the equivalent thermophysical properties of the external walls (thermal conductivity k(wall) and volumetric specific heat (rho c)(wall)) of a dwelling in order to improve its thermal efficiency. Classical optimization involves several dynamic yearly thermal simulations, which are commonly quite time consuming. To reduce the computational requirements, we have adopted a methodology that couples an artificial neural network and the genetic algorithm NSGA-II. This optimization technique has been applied to a dwelling for two French climates, Nancy (continental) and Nice (Mediterranean).
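The NN-inversion idea in the dipole magnet entry above (recover design parameters from desired objective values by inverting a trained forward network) can be approximated generically by gradient descent on the network inputs. A hedged sketch follows, using finite-difference gradients so it works with any black-box trained model; the learning rate, step counts and the name `predict` are illustrative assumptions, not the paper's method.

#+BEGIN_SRC python
import numpy as np

def invert_model(predict, target, x0, lr=0.05, steps=400, eps=1e-4):
    """Search for inputs x whose predicted objectives match `target`,
    by finite-difference gradient descent on a squared-error loss.
    `predict` is any trained forward model mapping parameters -> objectives."""
    x = np.asarray(x0, dtype=float)
    target = np.asarray(target, dtype=float)

    def loss(z):
        return float(np.sum((np.asarray(predict(z)) - target) ** 2))

    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(x.size):
            step = np.zeros_like(x)
            step[i] = eps
            grad[i] = (loss(x + step) - loss(x - step)) / (2 * eps)
        x = x - lr * grad
    return x
#+END_SRC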
We have chosen to characterize the energy performance of the dwelling with two criteria, which are the optimization targets: the annual energy consumption Q(TOT) and the summer comfort degree I-sum. First, using a design of experiments, we have quantified and analyzed the impact of the variables k(wall) and (rho c)(wall) on the objectives Q(TOT) and I-sum, depending on the climate. Then, the optimal Pareto fronts obtained from the optimization are presented and analyzed. The optimal solutions are compared to those from mono-objective optimization using an aggregative method and a constraint problem in GenOpt. The comparison clearly shows the importance of performing multi-objective optimization. (C) 2013 Elsevier B.V. All rights reserved. 2013 * 525(<-178): A hybrid multi-objective approach based on the genetic algorithm and neural network to design an incremental cellular manufacturing system One important issue related to the implementation of cellular manufacturing systems (CMSs) is to decide whether to convert an existing job shop into a CMS comprehensively in a single run, or in stages incrementally by forming cells one after the other, taking advantage of the experience gained during implementation. This paper presents a new multi-objective nonlinear programming model in a dynamic environment. Furthermore, a novel hybrid multi-objective approach based on the genetic algorithm and artificial neural network is proposed to solve the presented model. From the computational analyses, the proposed algorithm is found to be much more efficient than the fast non-dominated sorting genetic algorithm (NSGA-II) in generating Pareto optimal fronts. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 526(<-186): Generalized evolutionary optimum design of fiber-reinforced tire belt structure This paper deals with the multi-objective optimization of tire reinforcement structures such as the tread belt and the carcass path. The multi-objective functions are defined in terms of the discrete-type design variables and approximated by an artificial neural network, and the sensitivity analyses of these functions are replaced with the iterative genetic evolution. The multi-objective optimization algorithm introduced in this paper is not only highly CPU-time-efficient but is also applicable to other multi-objective optimization problems in which the objective function, the design variables and the constraints are not continuous but discrete. Through the illustrative numerical experiments, the fiber-reinforced tire belt structure is optimally tailored. The proposed multi-objective optimization algorithm is not limited to the tire reinforcement structure, but is applicable to generalized multi-objective structural optimization problems in various engineering applications. 2013 * 527(<-270): Structure optimization of neural network for dynamic system modeling using multi-objective genetic algorithm The problem of constructing an adequate and parsimonious neural network topology for modeling non-linear dynamic systems is studied and investigated. Neural networks have been shown to perform function approximation and represent dynamic systems. The network structures are usually guessed or selected in accordance with the designer's prior knowledge. However, the multiplicity of the model parameters makes it troublesome to get an optimum structure. In this paper, an alternative algorithm based on a multi-objective optimization algorithm is proposed.
The developed neural network model should fulfil two criteria or objectives, namely good predictive accuracy and minimal model structure. The results show that the proposed algorithm is able to identify simulated examples correctly, and identifies the adequate model for real process data based on a set of solutions called the Pareto optimal set, from which the best network can be selected. 2012 * 528(<-272): Pareto-optimal analysis of Zn-coated Fe in the presence of dislocations using genetic algorithms To design a coating that will absorb maximum energy prior to failure with minimum deformation, the shearing process of polycrystalline Zn-coated Fe is simulated in the presence of dislocations, using molecular dynamics. The results, fed to an Evolutionary Neural Network, generated the meta-models of the objective functions required in the subsequent Pareto-optimization task using a Multi-objective Genetic Algorithm. Similar calculations conducted for single crystals, and also in the absence of dislocations, are compared and analyzed. (C) 2012 Elsevier B. V. All rights reserved. 2012 * 529(<-450): Analyzing Fe-Zn system using molecular dynamics, evolutionary neural nets and multi-objective genetic algorithms Failure behavior of Zn-coated Fe is simulated through molecular dynamics (MD), and the energy absorbed at the onset of failure along with the corresponding strain of the Zn lattice are computed for different levels of applied shear rate, temperature and thickness. Data-driven models are constructed by feeding the MD results to an evolutionary neural network. The outputs of these neural networks are utilized to carry out a multi-objective optimization through genetic algorithms, where the best possible tradeoffs between two conflicting requirements, minimum deformation and maximum energy absorption at the onset of failure, are determined by constructing a Pareto frontier. (C) 2009 Elsevier B.V. All rights reserved. 2009 * 530(<-582): Multiobjective RBFNNs designer for function approximation: An application for mineral reduction Radial Basis Function Neural Networks (RBFNNs) are well known because, among other applications, they present good performance when approximating functions. The function approximation problem arises in the construction of a control system to optimize the process of mineral reduction. In order to regulate the temperature of the ovens and other parameters, a module is needed to predict the final concentration of mineral that will be obtained from the source materials. This module can be formed by an RBFNN that predicts the output and by the algorithm that designs the RBFNN dynamically as more data are obtained. The design of RBFNNs is a very complex task where many parameters have to be determined; therefore, a genetic algorithm that determines all of them has been developed. This algorithm provides satisfactory results since the networks it generates are able to predict quite precisely the final concentration of mineral. 2006 * 531(<-149): Optimal design of floating substructures for spar-type wind turbine systems The platform and floating structure of spar-type offshore wind turbine systems should be designed so that the 6-DOF motions are minimized, considering diverse loading environments such as the ocean wave, wind, and current conditions. The objective of this study is to optimally design the platform and substructure of a 3 MW spar-type wind turbine system with the maximum postural stability in 6-DOF motions as well as the minimum material cost.
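For readers unfamiliar with the RBFNNs used in the mineral-reduction entry above, the essential structure is a layer of Gaussian basis functions followed by a linear output layer, whose weights can be solved in closed form. A minimal sketch, assuming fixed centers and a shared width (real designers, as in that entry, also evolve the centers and widths, e.g., with a genetic algorithm):

#+BEGIN_SRC python
import numpy as np

def gaussian_design(X, centers, width):
    # hidden-layer activations: one Gaussian basis function per center
    d2 = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbfnn(X, y, centers, width):
    # linear output weights solved in closed form by least squares
    Phi = gaussian_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def rbfnn_predict(X, centers, width, w):
    return gaussian_design(X, centers, width) @ w
#+END_SRC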
Therefore, the design variables of the platform and substructure were first determined and then optimized by a hydrodynamic analysis. For the hydrodynamic analysis, the body weight of the system was considered, and the ocean wave conditions were converted to wave forces using Morison's equation. Moreover, the minimal number of computational analysis models was generated by Design of Experiments (DOE), and the design variables of the platform and substructure were finally optimized by using a genetic algorithm with a neural network approximation. 2014 * 532(<-441): Optimization of Vertical Roller Mill by Using Artificial Neural Networks The vertical roller mill is important machinery for grinding and mixing various crude materials in the process of producing Portland cement. A vertical roller mill is subjected to cyclic bending stress because of the roller load. Because of the cyclic bending stress, only 4x10(6)-8x10(6) cycles are achieved instead of 4x10(7) cycles. The stress also causes fractures at the edge of the grinding path of the outer roller. The expenses incurred in repairing the grinding path amount to 30% of the total maintenance cost. Therefore, it is desirable to redesign the vertical roller mill in order to reduce the expenses incurred in repairing the roller. In this study, artificial neural networks (ANNs) were applied in order to solve the multiobjective optimization problem for vertical roller mills by using the function approximation ability of ANNs. To train and generalize the ANNs, the maximum and minimum stresses were estimated from the results of the finite-element analysis of a vertical roller mill. Thus, ANNs could be applied to solve the multiobjective optimization problem. 2010 * 533(<-452): Probabilistic Evaluation of Optimal Location of Surge Arresters on EHV and UHV Networks Due to Switching and Lightning Surges Switching surges are of primary importance in the insulation coordination of extremely high voltage and ultra-high voltage networks. However, in regions of high lightning activity or high ground resistance, insulation design should preferably be based on the risk of failure caused by lightning and switching surges and the probability of line outage, a combination of lightning and switching surge flashover rates (SSFOR). This paper describes an effective installation of transmission line arresters (TLAs) to obtain a better protection scheme (i.e., minimizing global risk to the network). As a consequence, protection costs are reduced in accordance with the costs of the elements actually protected and the number of TLAs utilized. In order to accomplish this, a probabilistic method for calculating the lightning-related failures and an artificial neural network for estimating the SSFOR are presented. A multicriteria optimization method based on a genetic algorithm is also developed to determine the optimum location of TLAs. 2009 * 534(<-497): ANN for multi-objective optimal reactive compensation of a power system with wind generators In this paper, we develop a method aimed at imposing an acceptable voltage profile and reducing the active losses of an electrical supply network including wind generators in real time. These tasks are ensured by acting on capacitor and reactor banks installed at the load nodes. To solve this problem, we minimize multi-objective functions associated with the active losses and the compensation device costs under constraints imposed on the voltages and the reactive productions of the various banks.
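For reference, the in-line wave force per unit length in Morison's equation, used in the spar-substructure entry above to quantify wave loads, takes the standard form below, where u is the water particle velocity, rho the water density, C_m and C_d the inertia and drag coefficients, V the displaced volume and A the projected area (coefficient conventions vary by source):

#+BEGIN_SRC latex
F(t) = \underbrace{\rho\, C_m V\, \dot{u}(t)}_{\text{inertia term}}
     + \underbrace{\tfrac{1}{2}\,\rho\, C_d A\, u(t)\,\lvert u(t)\rvert}_{\text{drag term}}
#+END_SRC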
The minimization procedure was realised by the use of evolutionary algorithms. After a training phase, a neural model has the capacity to provide a good estimation of the voltages, the reactive productions and the losses for actual load and wind speed curves, in real time. (C) 2008 Elsevier B.V. All rights reserved. 2008 * 535(<-290): A Compact Optical Instrument with Artificial Neural Network for pH Determination The aim of this work was the determination of pH with a sensor array-based optical portable instrument. This sensor array consists of eleven membranes with selective colour changes at different pH intervals. The method for the pH calculation is based on the implementation of artificial neural networks that use the responses of the membranes to generate a final pH value. A multi-objective algorithm was used to select the minimum number of sensing elements required to achieve an accurate pH determination from the neural network, and also to minimise the network size. This helps to minimise instrument and array development costs and save on microprocessor energy consumption. A set of artificial neural networks that fulfils these requirements is proposed using different combinations of the membranes in the sensor array, and is evaluated in terms of accuracy and reliability. In the end, the network including the responses of all eleven membranes in the sensor was selected for validation in the instrument prototype because of its high accuracy. The performance of the instrument was evaluated by measuring the pH of a large set of real samples, showing that high precision can be obtained over the full range. 2012 * 536(<-311): Influence of hollow glass microspheres on the mechanical and physical properties and cost of particle reinforced polymer composites The goal of the study was to find a cost-effective composition of a particle reinforced composite that is light in weight but has sufficient mechanical properties. The matrix of the particulate composite is unsaturated polyester resin that is reinforced with alumina trihydrate particles. Part of the alumina trihydrate proportion was replaced with hollow glass microspheres to reduce weight and save costs. In order to find out the influence of the light filler on the physical and mechanical properties of the composites, materials with different percentages of the light filler were prepared. Test specimens were cut from moulded sheets that were fabricated with a vacuum assisted extruder. Tensile strength, indentation hardness measured with a Barcol impressor, and density were determined. Based on the experimental data, a multi-criteria optimization problem was formulated and solved to find the optimal design of the material. Artificial neural networks and a hybrid genetic algorithm were used. The optimal solution is given as a Pareto curve to represent the trade-off between the density and selected mechanical properties of the composite material. The composite material filled with 6% hollow glass microspheres showed a 3% loss in tensile strength and a 26% loss in surface hardness compared to the composition without the filler. The weight decreased by 13% compared with the initial composition. The addition of hollow glass microspheres did not lower the net value of the material; it increased it by 7%. 2012 * 537(<- 77): Multi-Criteria Design Optimization of Ultra Large Diameter Permanent Magnet Generator This paper presents a novel design optimization procedure for an ultra large diameter permanent magnet generator.
As the machine features unorthodox electromagnetic and mechanical layouts, basic principles for determining structural loads together with material quantities for cost estimation are described. Finite element modelling with beam elements is used for retrieving stresses and deformations of the novel carrier structure. A mathematical system response model of the generator is created with artificial neural networks, while a genetic algorithm with a gradient method is utilized to determine the optimal solutions. The input dataset for the model build-up is constructed with the help of a full factorial experimental method. The achieved results are utilized to describe the relationship between the structural response and the efficiency values of the generator. As the design of the machine has to fulfil contradicting technical and economical requirements, the Pareto optimality concept is employed. As an example, a set of optimal solutions is determined for the particular case. 2015 * 538(<-527): A method for optimal design of automotive body assembly using multi-material construction This paper proposes a new method for designing lightweight automotive body assemblies using multi-material construction with a low cost penalty. Current constructions of automotive structures are based on single types of materials, e.g., steel or aluminium. The principle of the multi-material construction concept is that proper materials are selected for their intended functions. The design problem is formulated as a multi-objective nonlinear mathematical programming problem involving both discrete and continuous variables. The discrete variables are the material types and the continuous variables are the thicknesses of the panels. This problem is then solved using a multi-objective genetic algorithm. An artificial neural network is employed to approximate the constraint functions and reduce the number of finite element runs. The proposed method is illustrated through a case study of the lightweight design of an automotive door assembly. (c) 2007 Elsevier Ltd. All rights reserved. 2008 * 539(<- 56): Topographical optimisation of single-storey non-domestic steel framed buildings using photovoltaic panels for net-zero carbon impact A methodology is presented that combines a multi-objective evolutionary algorithm and artificial neural networks to optimise single-storey steel commercial buildings for net-zero carbon impact. Both symmetric and asymmetric geometries are considered in conjunction with regulated, unregulated and embodied carbon. Offsetting is achieved through photovoltaic (PV) panels integrated into the roof. Asymmetric geometries can increase the south facing surface area and consequently allow for improved PV energy production. An exemplar carbon and energy breakdown of a retail unit located in Belfast UK with a south facing PV roof is considered. It was found in most cases that regulated energy offsetting can be achieved with symmetric geometries. However, asymmetric geometries were necessary to account for the unregulated and embodied carbon. For buildings where the volume is large due to high eaves, carbon offsetting became increasingly more difficult, and was not possible in certain cases. The use of asymmetric geometries was found to allow for lower embodied energy structures with similar carbon performance to symmetrical structures. (C) 2014 Elsevier Ltd. All rights reserved.
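The multi-material body entry above mixes discrete variables (material types) with continuous ones (panel thicknesses) in a single GA chromosome. One possible encoding and its variation operators are sketched below; the material list, thickness bounds and mutation rates are hypothetical, purely for illustration.

#+BEGIN_SRC python
import random

MATERIALS = ["steel", "aluminium"]            # hypothetical discrete choices

def random_design(n_panels, t_min=0.5, t_max=3.0):
    # chromosome: one (material index, thickness in mm) gene per panel
    return [(random.randrange(len(MATERIALS)), random.uniform(t_min, t_max))
            for _ in range(n_panels)]

def crossover(parent_a, parent_b):
    cut = random.randrange(1, len(parent_a))   # single-point crossover
    return parent_a[:cut] + parent_b[cut:]

def mutate(design, rate=0.1, t_min=0.5, t_max=3.0):
    out = []
    for mat, t in design:
        if random.random() < rate:             # resample the discrete gene
            mat = random.randrange(len(MATERIALS))
        if random.random() < rate:             # perturb the continuous gene
            t = min(t_max, max(t_min, t + random.gauss(0.0, 0.2)))
        out.append((mat, t))
    return out
#+END_SRC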
2015 * 540(<-119): Free vibration analysis of an adhesively bonded functionally graded double containment cantilever joint In this study, Genetic Algorithms (GAs) combined with the proposed neural networks were applied to the free vibration analysis of an adhesively bonded double containment cantilever joint with a functionally graded plate. The proposed neural networks were trained and tested based on a limited number of data including the natural frequencies and modal strain energies calculated using the finite element method. The GA iteratively evaluates an objective function whose value is calculated by the finite element method; this iterative process makes it impractical to use the finite element method directly in our multi-objective optimisation problem, in which the natural frequency is maximised and the corresponding modal strain energy is minimised. The proposed neural networks were therefore used to accurately predict the natural frequencies and modal strain energies instead of calculating them directly with the finite element method. Consequently, the computation time and effort were reduced considerably. The adhesive joint was observed to tend towards vertical bending modes and torsional modes. Therefore, the multi-objective optimisation problem was limited to only the first mode, which appeared as a bending mode. The effects of the geometrical dimensions and the material composition variation through the plate thickness were investigated. As the material composition of the horizontal plate became ceramic rich, both the natural frequency and the modal strain energy of the adhesive joint increased steadily. The plate length and plate thickness were the more effective geometrical design parameters, whereas the support length and thickness were less effective. However, the adhesive thickness had a small effect on the optimal design of the adhesive joint as far as the natural frequencies and modal strain energies are concerned. The distributions of optimal solutions were also presented for the adhesive joints with fundamental joint lengths and material compositions in reference to their natural frequencies and corresponding modal strain energies. 2014 * 541(<-533): Multi-objective stacking sequence optimization of laminated cylindrical panels using a genetic algorithm and neural networks A multi-objective optimization strategy for optimal stacking sequence of laminated cylindrical panels is presented, with respect to the first natural frequency and critical buckling load, using the weighted summation method. To improve the speed of the optimization process, artificial neural networks are used to reproduce the behavior of the structure in both free vibration and buckling conditions. Based on the first order shear deformation theory of laminated shells, a finite element code capable of evaluating the first natural frequency and buckling load is prepared, the outputs of which are used for training and testing the developed neural networks. In order to find the optimal solution, a genetic algorithm is implemented. Verifications are made for both the finite element code results and the utilization of neural networks in the optimization process. With the purpose of illustrating the optimization process, numerical results are presented for a symmetric angle-ply six-layer cylindrical panel. (c) 2006 Elsevier Ltd. All rights reserved.
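The weighted summation method named in the stacking-sequence entry above collapses the two objectives into one scalar and sweeps the weight to trace candidate trade-off designs. A minimal sketch, assuming both objectives are maximized and pre-normalized to comparable scales (names and weight grid are illustrative):

#+BEGIN_SRC python
def weighted_sum_optima(candidates, f1, f2, n_weights=11):
    """Scalarize two maximized objectives (assumed pre-normalized to
    comparable scales) and sweep the weight to trace candidate optima."""
    optima = []
    for k in range(n_weights):
        w = k / (n_weights - 1)
        best = max(candidates, key=lambda x: w * f1(x) + (1.0 - w) * f2(x))
        if best not in optima:
            optima.append(best)
    return optima
#+END_SRC

A known limitation of this scalarization is that it can only reach points on convex portions of the Pareto front, which is one reason later entries favour dominance-based methods such as NSGA-II.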
2007 * 542(<-164): Dynamic Response and Optimal Design of Curved Metallic Sandwich Panels under Blast Loading It is important to understand the effect of curvature on the blast response of curved structures so as to seek the optimal configurations of such structures with improved blast resistance. In this study, the dynamic response and protective performance of a type of curved metallic sandwich panel subjected to air blast loading were examined using LS-DYNA. The numerical methods were validated using experimental data in the literature. The curved panel consisted of an aluminum alloy outer face and a rolled homogeneous armour (RHA) steel inner face in addition to a closed-cell aluminum foam core. The results showed that the configuration of a "soft" outer face and a "hard" inner face worked well for the curved sandwich panel against air blast loading in terms of maximum deflection (MaxD) and energy absorption. The panel curvature was found to have a monotonic effect on the specific energy absorption (SEA) and a nonmonotonic effect on the MaxD of the panel. Based on artificial neural network (ANN) metamodels, multiobjective optimization designs of the panel were carried out. The optimization results revealed the trade-off relationships between the blast-resistant and the lightweight objectives and showed the great usefulness of the Pareto front in such design circumstances. 2014 * 543(<-184): Blast resistance and multi-objective optimization of aluminum foam-cored sandwich panels In this work, a group of metallic aluminum foam-cored sandwich panels (AFSPs) were used as vehicle armor against blast loading. The dynamic responses of the AFSPs with various combinations of face-sheet materials were analyzed using LS-DYNA. It was found that the AFSP with an aluminum (AA2024 T3) front face and a Rolled Homogeneous Armor (RHA) steel back face (labeled T3-AF-RHA) outperformed the other panel configurations in terms of maximum back face deflection (MaxD) and areal specific energy absorption (ASEA). It was also found that boundary conditions and the standoff distance (SoD) between an explosive and a target surface both have a remarkable influence on the blast response of the AFSPs. Using artificial neural network (ANN) approximation models, multi-objective design optimization (MDO) of the T3-AF-RHA panel was performed both with and without variations in blast load intensity. The optimization results show that the two objectives of MaxD minimization and ASEA maximization conflict with each other and that the optimal designs must be identified in a Pareto sense. Moreover, the Pareto curves obtained are different for varied blast impulse levels. Consequently, it is concluded that loading variation should be considered when designing such sandwich armors to achieve more robust blast-resistant performance. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 544(<-515): Identification of Constitutive Parameters using Hybrid ANN multi-objective optimization procedure This paper deals with the identification of material parameters for an elastoplastic behaviour model with isotropic hardening using several experimental tests at the same time. However, these tests are generally inhomogeneous, and finite element simulations are necessary for their analysis. Therefore an inverse analysis is carried out, and the identification problem is converted into a multi-objective optimization in which prohibitive computing time is required.
We propose in this work a hybrid approach where Artificial Neural Networks (ANN) are trained by finite element results. Then, the multi-objective procedure calls the ANN function in place of the finite element code. The proposed approach is exemplified on the identification of the non-associative Hill'48 criterion and the Voce model parameters of the stainless steel AISI 304. 2008 * 545(<-528): IDENTIFICATION OF MATERIAL PARAMETERS BY HYBRID METHOD Accurate identification of material parameters is an important task in modeling forming processes. In this paper several experimental tests are used simultaneously to identify an elastoplastic behaviour model with isotropic hardening. Yet, when this identification of the material parameters is converted into a multi-objective optimization, it results in a prohibitive computing time. In order to overcome this issue, this work presents a hybrid optimization approach based on both finite element and artificial neural network computations. The proposed strategy is used to identify the Karafillis & Boyce criterion and the Voce model parameters of the stainless steel AISI 304. 2008 * 546(<- 17): Aerothermal shape optimization for a double row of discrete film cooling holes on the suction surface of a turbine vane A multiple-objective optimization is implemented for a double row of staggered film holes on the suction surface of a turbine vane. The optimization aims to maximize the film cooling performance, which is assessed using the cooling effectiveness, while minimizing the corresponding aerodynamic loss, which is measured with a mass-averaged total pressure coefficient. Three geometric variables defining the hole shape are optimized: the conical expansion angle, compound angle and length-to-diameter ratio of the non-diffused portion of the hole. The optimization employs a non-dominated sorting genetic algorithm coupled with an artificial neural network to generate the Pareto front. Reynolds-averaged Navier-Stokes simulations are employed to construct the neural network and investigate the aerodynamic and thermal optimum solutions. The optimum designs exhibit improved performance in comparison to the reference design. The optimization methodology allowed investigation into the impact of varying the geometric variables on the cooling effectiveness and the aerodynamic loss. 2015 * 547(<- 60): Aerothermal Optimization and Experimental Verification for Discrete Turbine Airfoil Film Cooling The optimization aims to maximize the film cooling performance while minimizing the corresponding aerodynamic penalty. The cooling performance is assessed using the adiabatic film cooling effectiveness, while the aerodynamic penalty is measured with a mass-averaged total pressure loss coefficient. Two design variables are selected: the coolant-to-mainstream temperature ratio and the coolant-to-mainstream total pressure ratio. Two staggered rows of discrete cylindrical film cooling holes on the suction surface of a turbine vane are considered. A nondominated sorting genetic algorithm (NSGA-II) is coupled with an artificial neural network (ANN) to perform a multiple-objective optimization of the coolant flow parameters on the vane suction surface. Three-dimensional Reynolds-averaged Navier-Stokes (RANS) simulations are employed to construct the ANN, which produces low-fidelity predictions of the objective functions during the optimization.
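Complementing the non-dominated sorting sketch given earlier, NSGA-II (as used in the film-cooling entries above) breaks ties within a front by crowding distance, preferring solutions in sparsely populated regions of objective space. A self-contained sketch:

#+BEGIN_SRC python
def crowding_distance(front_objs):
    """front_objs: list of objective tuples within one non-dominated front.
    Returns a per-solution density estimate; larger is less crowded."""
    n, m = len(front_objs), len(front_objs[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front_objs[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")   # always keep extremes
        span = front_objs[order[-1]][k] - front_objs[order[0]][k]
        if span == 0:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front_objs[order[j + 1]][k]
                               - front_objs[order[j - 1]][k]) / span
    return dist
#+END_SRC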
The effect of varying the coolant flow parameters on the adiabatic film cooling effectiveness and the aerodynamic loss is analyzed using the optimization method and RANS simulations. The computational fluid dynamics predictions of the adiabatic film cooling effectiveness and aerodynamic performance are assessed and validated against corresponding experimental measurements. The optimal solutions are reproduced in the experimental facility and the Pareto front is substantiated with experimental data. 2015 * 548(<-229): CFD modeling and multi-objective optimization of cyclone geometry using desirability function, artificial neural networks and genetic algorithms The low-mass loading gas cyclone separator has two performance parameters, the pressure drop and the collection efficiency (cut-off diameter). In this paper, a multi-objective optimization study of a gas cyclone separator has been performed using the response surface methodology (RSM) and CFD data. The effects of the inlet height, the inlet width, the vortex finder diameter and the cyclone total height on the cyclone performance have been investigated. The analysis of the design of experiments shows a strong interaction between the inlet dimensions and the vortex finder diameter. No interaction between the cyclone height and the other three factors was observed. The desirability function approach has been used for the multi-objective optimization. A new set of geometrical ratios (design) has been obtained to achieve the best performance. A numerical comparison between the new design and the Stairmand design confirms the superior performance of the new design. As an alternative approach to applying RSM as a meta-model, two radial basis function neural networks (RBFNNs) have been used. Furthermore, the genetic algorithm technique has been used instead of the desirability function approach. A multi-objective optimization study using the NSGA-II technique has been performed to obtain the Pareto front for the best-performance cyclone separator. (C) 2012 Elsevier Inc. All rights reserved. 2013 * 549(<-304): Modeling and Pareto optimization of gas cyclone separator performance using RBF type artificial neural networks and genetic algorithms Both the pressure drop and the cut-off diameter are important performance parameters in the design of the cyclone separator. In this paper, a multi-objective optimization study of the gas cyclone separator is performed. In order to accurately predict the complex non-linear relationships between the performance parameters (pressure drop and cut-off diameter) and the geometrical dimensions, two radial basis function neural networks (RBFNNs) are developed and employed to model the pressure drop and the cut-off diameter for cyclone separators. The artificial neural networks have been trained and tested using the experimental data available in the literature for the pressure drop and the Iozia and Leith model for the cut-off diameter. The results demonstrate that artificial neural networks can offer an alternative and powerful approach to model the cyclone performance parameters. The analysis indicates the significant effect of the vortex finder diameter D-x, the vortex finder length, the inlet width b and the total height H-t. The response surface methodology has been used to fit a second-order polynomial to the RBFNN. The second-order polynomial has been used to study the interaction between the geometrical parameters.
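The second-order response surface mentioned in the cyclone entry above is an ordinary least-squares fit of a full quadratic model (intercept, linear, square and interaction terms) to sampled predictions. A sketch under that assumption; in the paper the "data" would be RBFNN predictions sampled over the geometrical factors:

#+BEGIN_SRC python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_design_matrix(X):
    # columns: intercept, linear terms x_i, and all second-order terms x_i * x_j
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    # ordinary least-squares fit of the full second-order model
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta
#+END_SRC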
The two trained artificial neural networks have been used as two objective functions to obtain new optimal ratios for minimum pressure drop and minimum cut-off diameter using the multi-objective genetic algorithm optimization technique. Sometimes the main concern is minimizing the pressure drop, so a single-objective optimization study has also been performed to obtain the cyclone geometrical ratios for minimum pressure drop. The comparison of a numerical simulation of the new optimal design and the Stairmand design confirms the superior performance of the new design. (C) 2011 Elsevier B.V. All rights reserved. 2012 * 550(<-335): CFD modeling and multi-objective optimization of compact heat exchanger using CAN method Thermal modeling and optimal design of a compact heat exchanger are presented in this paper. Fin pitch, fin height, cold stream flow length, no-flow length and hot stream flow length were considered as five design parameters. A CFD analysis coupled with an artificial neural network was used to develop a relation between the Colburn factor and the Fanning friction factor for the triangular fin geometry with acceptable precision. Then, the fast and elitist non-dominated sorting genetic algorithm (NSGA-II) was applied to obtain the maximum effectiveness and the minimum total pressure drop as two objective functions. The results of the optimal designs were a set of multiple optimum solutions, called 'Pareto-optimal solutions'. They reveal that any geometrical change which decreases the pressure drop in the optimum situation leads to a decrease in the effectiveness, and vice versa. Finally, sensitivity analysis shows that increasing the heat transfer surface area does not necessarily increase the pressure drop; the outcome is case dependent. (C) 2011 Elsevier Ltd. All rights reserved. 2011 * 551(<-412): Thermal-economic multi-objective optimization of plate fin heat exchanger using genetic algorithm Thermal modeling and optimal design of compact heat exchangers are presented in this paper. The epsilon-NTU method was applied to estimate the heat exchanger pressure drop and effectiveness. Fin pitch, fin height, fin offset length, cold stream flow length, no-flow length and hot stream flow length were considered as six design parameters. The fast and elitist non-dominated sorting genetic algorithm (NSGA-II) was applied to obtain the maximum effectiveness and the minimum total annual cost (sum of investment and operation costs) as two objective functions. The results of the optimal designs were a set of multiple optimum solutions, called 'Pareto optimal solutions'. The sensitivity of the optimum effectiveness and total annual cost to changes in the design parameters of the plate fin heat exchanger was also analyzed and the results are reported. As a shortcut for choosing the system's optimal design parameters, correlations between the two objectives and the six decision variables were presented with acceptable precision using artificial neural network analysis. (C) 2009 Elsevier Ltd. All rights reserved. 2010 * 552(<-273): A sensitive and efficient method for simultaneous trace detection and identification of triterpene acids and its application to pharmacokinetic study A sensitive and efficient method for the simultaneous trace detection of seven triterpene acids was developed and validated for the analysis of rat plasma samples. The required micro-sampling of only 20 mu L of blood reduced the difficulty of blood collection and the injury to the animals.
The whole pretreatment procedure was conveniently finished within 26 min through the application of the semi-automated derivatization extraction method to biological samples. The seven analytes were rapidly separated within 30 min on a reversed-phase Akasil-C18 column and quantified by a fluorescence detector. Online ion trap MS with an atmospheric pressure chemical ionization (APCI) source was used for further identification. The novel application of an artificial neural network (ANN) combined with a genetic algorithm (GA) to the optimization of derivatization was performed and compared with the classical response surface methodology (RSM). The optimal derivatization condition was validated by multi-criteria and nonparametric tests and used successfully to achieve rather high sensitivity (limit of detection: 0.67-1.08 ng/mL). The limit of reactant concentration (LORC), specific to derivatization, was studied, and the low values (2.53-4.03 ng/mL) ensured trace detection. The results of validation demonstrated the advantages of the method for pharmacokinetic study, such as higher sensitivity, better accuracy, easier pretreatment and shorter run-time. A pharmacokinetic study of triterpene acids after oral administration of Salvia miltiorrhiza extract to mice was conducted for the first time. The present method provides a more sensitive and efficient alternative for the medical detection of bioactive constituents from herbal extracts in biological fluids. (C) 2012 Elsevier B.V. All rights reserved. 2012 * 553(<-276): A sensitive and efficient method to systematically detect two biophenols in medicinal herb, herbal products and rat plasma based on thorough study of derivatization and its convenient application to pharmacokinetics with semi-automated device A sensitive and efficient method using a semi-automated pretreatment device, pre-column derivatization, multivariate optimization and high performance liquid chromatography with fluorescence and mass spectrometric detection was developed and validated for the systematic determination of two biophenols in four herb-related samples (medicinal herb; herbal products in tablet, capsule and oral liquid forms) and in plasma samples after oral administration to rats. Only micro-sampling of 20 mu L of blood was needed for the analysis, and the pretreatment procedure, including blood collection, derivatization by 10-ethyl-acridine-3-sulfonyl chloride (EASC) and injection into the sampling vials, was efficiently finished in 10 min with no cumbersome and complicated operation. The novel application of an artificial neural network (ANN) coupled with a genetic algorithm (GA) to the optimization of the derivatization condition was executed and compared with the classical response surface methodology (RSM). The optimal condition for derivatization was validated by multi-criteria and nonparametric tests and used successfully to achieve higher sensitivity (limit of detection: 0.6 and 0.8 ng/mL). The limit of reactant concentration (LORC) was put forward for the derivatization method for the first time, and the low values (2.0-2.7 ng/mL) guaranteed trace detection with the required micro samples (<50 mu L). The results of validation, including selectivity, sensitivity, linearity, accuracy, precision, recovery, matrix effect and stability, demonstrated the advantages of this method. The pharmacokinetic study of the major bioactive components salidroside and p-tyrosol in the herb Rhodiola crenulata and its products was conveniently performed in 25 min.
The established method could serve as a sensitive and efficient alternative for the systematic detection of bioactive components in a series of drug carriers, from raw herb to herbal products to blood, in medical research. The approaches of this thorough study also play a guiding role in the search for novel analytical methods. (C) 2012 Elsevier B.V. All rights reserved. 2012 * 554(<-295): Modeling and genetic algorithm-based multi-objective optimization of the MED-TVC desalination system This study proposes a systematic approach to the analysis and optimization of the multi-effect distillation-thermal vapor compression (MED-TVC) desalination system. The effect of input variables, such as temperature difference, motive steam mass flow rate, and preheated feed water temperature, was investigated using response surface methodology (RSM) and the partial least squares (PLS) technique. Mathematical and economic models with exergy analysis were used for the total annual cost (TAC), gain output ratio (GOR) and fresh water flow rate (Q). Multi-objective optimization (MOO) to minimize TAC and maximize GOR and Q was performed using a genetic algorithm (GA) based on an artificial neural network (ANN) model. The best Pareto optimal solution selected from the Pareto sets showed that the MED-TVC system with 6 effects is the best among the systems with 3, 4, 5 and 6 effects, having the minimum value of unit product cost (UPC) and maximum values of GOR and Q. The system with 6 effects under the optimum operating conditions can save 14%, 12.5% and 2% in cost and reduces the amount of steam used for the production of 1 m(3) of fresh water by 50%, 34% and 18% as compared to the systems with 3, 4 and 5 effects, respectively. (C) 2012 Elsevier B.V. All rights reserved. 2012 * 555(<-532): Modeling and optimization of m-cresol isopropylation for obtaining n-thymol: Combining a hybrid artificial neural network with a genetic algorithm A hybrid framework based on the combination of an artificial neural network and a genetic algorithm (ANN-GA) has been developed for the modeling and optimization of n-thymol synthesis. The effects of the propylene/cresol molar ratio (X1), catalyst mass (X2) and temperature (X3) on the n-thymol selectivity Y1 and the m-cresol conversion Y2 were studied. A 3-8-2 ANN model was found to be very suitable for reaction modeling. The multiobjective optimization led to optimal operating conditions (0.55 <= X1 <= 0.77; 1.773 g <= X2 <= 1.86 g; 289.74 degrees C <= X3 <= 291.33 degrees C) representing good solutions for obtaining high n-thymol selectivity and high m-cresol conversion. This optimal zone corresponded to n-thymol selectivity and m-cresol conversion ranging in the intervals [79.3; 79.5]% and [13.4; 23.7]%, respectively. These results were better than those obtained with a sequential method based on experimental design, for which the optimum conditions led to n-thymol selectivity and m-cresol conversion values of 67% and 11%, respectively. The hybrid ANN-GA method showed its ability to solve complex problems with a good fit. 2007 * 556(<- 97): Synthesis of ZnO nano-sono-catalyst for degradation of reactive dye focusing on energy consumption: operational parameters influence, modeling, and optimization Simply synthesized nano-sized ZnO powder, prepared in the absence of high-temperature activation treatments, was studied as a sono-catalyst.
The effects of six operational parameters, namely initial solution pH (pH(0)), initial concentration of dye stuff (C-0), additional dose of nano-sized ZnO powder (D-SC), ultrasound (US) irradiation frequency (Fr-SC), US irradiation power (P-SC), and treatment time (t(SC)), were examined. Synthetic wastewater containing Reactive Red 198 (RR198) was used as the sample model. A combined design of experiments was constructed and experiments were conducted according to its protocols. The experimental data were collected in a laboratory-scale batch reactor equipped with an ultrasonic bath cleaner as the ultrasonic source. The measured CR% ranged from 0.8 to 100 and EnC from 0.3 to 13.6 Wh under the given conditions. The data were used to build two of the more common models in this type of study: multiple linear regression (MLR) and artificial neural network (ANN). The ANN models obviously outperformed the MLR models. Finally, multi-objective optimization of CR% and EnC was carried out using a genetic algorithm (GA) over the better-performing ANN models. The optimization procedure yields non-dominated optimal points which give an insight into the optimal operating conditions. 2014 * 557(<-134): Electrocoagulation efficiency and energy consumption probing by artificial intelligent approaches Color removal efficiency (CR%) and energy consumption (EnC) of Electrocoagulation (EC) were investigated using synthetic wastewater containing Disperse Orange 25 dye (DO25). Five operational parameters, including initial pH (pH(0)) (2, 5.5, and 9), initial dye concentration (C-0) (20, 60, and 100 mg L(-1)), applied voltage (V-EC) (10, 20, and 30 V), initial electrolyte concentration (C-S) (0, 1.5, and 3 g L(-1)), and treatment time (t(EC)) (0, 0.5, 5, 10, 15, 25, 35, and 50 min), were probed as the more influential operational parameters of EC. A combined design of experiments (DOE) was constructed and experiments were conducted in accordance with it. The experimental data were obtained in a laboratory using a handmade batch reactor. The achieved CR% (0-99.9) and EnC (0-69.4 Wh) were gained under the experimental conditions. The optimum value of C-0 was almost 20 ppm (the minimum of the range). Two optimum clusters could be discriminated for the other four parameters. The first group corresponded to conditions with pH(0)=9 (the maximum of the range), C-S=0.7-1.1 g/L, V-EC=10 V (the minimum of the range), and t(EC)=1 min. The second group corresponded to conditions with pH(0)=6.8 (except two cases), C-S=1.1-2 g/L, V-EC=10-15.2 V, and t(EC)=49.4-50 min. The data were used for model building employing two popular models in this study: a reduced quadratic multiple regression model (SMLR) and an artificial neural network (ANN). Further statistical tests were applied to assess the models' goodness of fit and to compare them. Based on the statistical comparison, the ANN models obviously outperformed the SMLR models. Finally, multi-objective optimization of CR% and EnC was carried out using a genetic algorithm (GA) over the better-performing ANN models. The optimization procedure yielded nondominated optimal points, which gave an insight into the optimal operating conditions of the EC.
2014 * 558(<-216): Optimization of OCM reactions over Na-W-Mn/SiO2 catalyst at elevated pressure using artificial neural network and response surface methodology In this study, Response Surface Methodology (RSM) and Artificial Neural Network (ANN) predictive models are developed based on experimental data for the Oxidative Coupling of Methane (OCM) over Na-W-Mn/SiO2 at 0.4 MPa, which were obtained in an isothermal fixed bed reactor. The results show that the simulation and prediction accuracy of the ANN was clearly higher than that of RSM. Thus, the Hybrid Genetic Algorithm (HGA), based on the developed ANN models, was used for the simultaneous maximization of CH4 conversion and C2+ selectivity. The Pareto optimal solutions show that at a reaction temperature of 987 K, a feed GHSV of 15790 h(-1), a diluent amount of 20 mole%, and a methane-to-oxygen molar ratio of 3.5, the maximum C2+ yield obtained from ANN-HGA was 23.91% (CH4 conversion of 34.6% and C2+ selectivity of 69%), as compared to 22.81% from the experimental measurements (CH4 conversion of 34.0% and C2+ selectivity of 67.1%). The predicted error in the optimum yield by ANN-HGA was 4.81%, suggesting that the combination of ANN models with the hybrid genetic algorithm could be used to find suitable operating conditions for the OCM process at elevated pressures. (C) 2013 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved. 2013 * 559(<-339): Optimization of OCM reaction conditions over Na-W-Mn/SiO2 catalyst at elevated pressure The performance of the oxidative coupling of methane (OCM) at elevated pressures has been simulated by a set of supervised Artificial Neural Network (ANN) models using reaction data gathered in a microreactor device. The accuracy of the developed models was evaluated by comparing the predicted results with the test data set, showing good agreement. In order to enhance the performance of the OCM process at 0.4 MPa, the desired operating pressure for commercial application of OCM, the Hybrid Genetic Algorithm (HGA) was used to obtain the optimal values of the operating conditions. Nondominated Pareto optimal solutions were obtained, and additional experiments were carried out at two different optimum conditions in order to verify the optima. The results show that the combination of ANN models with the HGA could be used to find suitable operating conditions for the OCM process at elevated pressures. It was shown that a C2+ yield of above 23% can be achieved at 0.4 MPa by using Na-W-Mn/SiO2 as the OCM catalyst. (C) 2011 Taiwan Institute of Chemical Engineers. Published by Elsevier B.V. All rights reserved. 2011 * 560(<-348): RSM and ANN modeling for electrocoagulation of copper from simulated wastewater: Multi objective optimization using genetic algorithm approach The performance of an electrocoagulation system for the removal of copper from synthetic wastewater was investigated using an aluminum electrode pair at four operational parameters: copper concentration (2.5-32.5 mg L(-1)), pH (5-9), voltage (6-18 V) and treatment time (5-25 min). Metal removal efficiency and energy consumption were monitored as responses. Experiments were conducted as per a central composite design, and the data were used for model building employing response surface methodology (RSM) and an artificial neural network (ANN) approach. Multi-objective optimization for maximizing the copper removal efficiency and minimizing the energy consumption was carried out using a genetic algorithm (GA) over the ANN model.
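The "GA over the ANN model" pattern in the electrocoagulation entry above (one surrogate to maximize, one to minimize, evolved toward a non-dominated set) can be sketched as a toy Pareto-archive GA. Selection scheme, mutation scale and the unit-cube encoding are illustrative assumptions:

#+BEGIN_SRC python
import random

def pareto_ga(f_max, f_min, n_vars, pop_size=30, gens=80, seed=1):
    """Toy multi-objective GA over two trained surrogate models:
    f_max is maximized (e.g., removal efficiency), f_min minimized (e.g., energy)."""
    random.seed(seed)

    def dominates(a, b):
        return a[0] >= b[0] and a[1] <= b[1] and a != b

    population = [[random.random() for _ in range(n_vars)] for _ in range(pop_size)]
    archive = []                                  # list of (x, objectives) pairs
    for _ in range(gens):
        for x in population:
            fx = (f_max(x), f_min(x))
            if not any(dominates(fa, fx) for _, fa in archive):
                # drop archive members the newcomer dominates, then add it
                archive = [(xa, fa) for xa, fa in archive if not dominates(fx, fa)]
                archive.append((list(x), fx))
        # next generation: Gaussian mutations of randomly chosen archive members
        population = [[min(1.0, max(0.0, v + random.gauss(0.0, 0.1)))
                       for v in random.choice(archive)[0]]
                      for _ in range(pop_size)]
    return archive
#+END_SRC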
The optimization procedure resulted in non-dominated optimal points, which give insight into the optimal operating conditions of the process. (C) 2011 Elsevier B.V. All rights reserved. 2011 * 561(<-308): Optimization studies for catalytic ozonation of cephalexin antibiotic in a batch reactor This study examines the effects of solution pH, cephalexin (CEX) concentration, and O-3 dosage on the removal of CEX by catalytic ozonation. All three parameters were found to exert a significant effect on the removal of cephalexin and on the enhancement of the biodegradability of the solution. The operating conditions were optimized using response surface methodology (RSM) and an artificial neural network (ANN). Both the RSM and ANN models were found capable of predicting the removal of CEX during catalytic ozonation. Simultaneous optimization of two responses (chemical oxygen demand (COD) removal and CEX removal) was carried out using genetic-algorithm-based multi-objective optimization. Non-dominated Pareto optimal solutions provided the range of optimum conditions for the catalytic ozonation process. The optimized values of pH (9.7), ozone supply (34.5 mg/L), and CEX concentration (33.6 mg/L) from GA multi-objective optimization corresponded to complete conversion of CEX with 72% removal of COD. 2012 * 562(<-531): Modelling and optimization of catalytic-dielectric barrier discharge plasma reactor for methane and carbon dioxide conversion using hybrid artificial neural network - genetic algorithm technique A hybrid artificial neural network-genetic algorithm (ANN-GA) was developed to model, simulate and optimize the catalytic-dielectric barrier discharge plasma reactor. The effects of CH4/CO2 feed ratio, total feed flow rate, discharge voltage and reactor wall temperature on the performance of the reactor were investigated by the ANN-based model simulation. Pareto optimal solutions and the corresponding optimal operating parameter ranges from the multi-objective optimization can be suggested for two cases, i.e., simultaneous maximization of CH4 conversion and C2+ selectivity (Case 1), and of H-2 selectivity and H-2/CO ratio (Case 2). It can be concluded that the hybrid catalytic-dielectric barrier discharge plasma reactor has potential for co-generation of synthesis gas and higher hydrocarbons from methane and carbon dioxide, and performed better than the conventional fixed-bed reactor with respect to CH4 conversion, C2+ yield and H-2 selectivity. (C) 2007 Published by Elsevier Ltd. 2007 * 563(<-566): Hybrid artificial neural network-genetic algorithm technique for modeling and optimization of plasma reactor A hybrid artificial neural network-genetic algorithm (ANN-GA) numerical technique was successfully developed to model, simulate, and optimize a dielectric barrier discharge (DBD) plasma reactor without catalyst and heating. The effects of CH4/CO2 feed ratio, total feed flow rate, and discharge voltage on the performance of the non-catalytic DBD plasma reactor were studied by an ANN-based simulation with good fit. From the multi-objective optimization, Pareto optimal solutions and the corresponding optimal process parameter ranges were obtained for the non-catalytic DBD plasma reactor for three cases, i.e., CH4 conversion and C2+ selectivity, CH4 conversion and C2+ yield, and CH4 conversion and H-2 selectivity. 2006
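Nearly every entry in this block reports its result as a set of non-dominated (Pareto) points produced by a GA over an ANN surrogate. As a compact reference for that notion, here is a Python sketch of non-dominated filtering for two maximized objectives; the candidate scores are random stand-ins for surrogate predictions such as (CH4 conversion, C2+ selectivity).

#+BEGIN_SRC python
# Minimal sketch: extract the non-dominated (Pareto) set from candidate
# operating points scored on two objectives, both to be maximized.
import numpy as np

def pareto_mask(scores):
    """Boolean mask of non-dominated rows; all objectives maximized."""
    n = len(scores)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # i is dominated if some j is >= on every objective and > on one
        better_eq = np.all(scores >= scores[i], axis=1)
        strictly = np.any(scores > scores[i], axis=1)
        keep[i] = not np.any(better_eq & strictly)
    return keep

rng = np.random.default_rng(1)
candidates = rng.uniform(size=(500, 2))   # hypothetical surrogate scores
front = candidates[pareto_mask(candidates)]
print(len(front), "non-dominated points")
#+END_SRC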
* 564(<-156): Esterification of oleic acid to biodiesel using magnetic ionic liquid: Multi-objective optimization and kinetic study The esterification of oleic acid in the presence of a magnetic ionic liquid, 1-butyl-3-methylimidazolium tetrachloroferrate ([BMIM][FeCl4]), at a reaction temperature of 65 degrees C has been investigated. An artificial neural network-genetic algorithm (ANN-GA) was used to simultaneously optimize the methyl oleate yield and oleic acid conversion for the reaction. It was found that the optimum responses for both yield and conversion were 83.4%, which can be achieved using a methanol-to-oleic acid molar ratio of 22:1, a catalyst loading of 0.003 mol, and a reaction time of 3.6 h. Esterification of oleic acid at the optimum condition using recycled [BMIM][FeCl4] registered little loss in catalytic activity after six successive runs. A kinetic study indicated that the reaction followed pseudo-first-order kinetics, with an activation energy of 17.97 kJ/mol and a pre-exponential factor of 181.62 min(-1). These values are relatively low compared to homogeneous or heterogeneous catalysts for the esterification of oleic acid. Thus, [BMIM][FeCl4] is a promising new type of catalyst for the conversion of high free fatty acid (FFA) feeds to biodiesel. (C) 2013 Elsevier Ltd. All rights reserved. 2014 * 565(<-179): Optimization of oleic acid esterification catalyzed by ionic liquid for green biodiesel synthesis In order to improve the efficiency of biodiesel production from the esterification of free fatty acids, an alternative to sulfuric acid has been explored. In this study, esterification of oleic acid was performed using 1-butyl-3-methylimidazolium hydrogen sulfate ([BMIM][HSO4]) ionic liquid for green biodiesel production. Response surface methodology (RSM) based on a central composite design (CCD) was employed to study the effect of the independent parameters on the process and for single-objective optimization, while an artificial neural network-genetic algorithm (ANN-GA) was utilized to simultaneously optimize the responses of the reaction (methyl oleate yield and oleic acid conversion). From the results, the predicted mathematical models for both methyl oleate yield and oleic acid conversion covered more than 80% of the variability in the experimental data. Furthermore, the linear temperature coefficient was identified as the coefficient most influential on both responses. Higher responses were predicted for multi-objective optimization using ANN-GA compared to single-objective optimization using RSM. The optimum responses predicted using multi-objective optimization were 81.2% and 80.6% for methyl oleate yield and oleic acid conversion, respectively. The conditions to achieve the optimum responses were a methanol-oleic acid molar ratio of 9:1, catalyst loading of 0.06 mol, reaction temperature of 87 degrees C, and reaction time of 5.2 h. Furthermore, there was only a small decrease in the catalytic activity of the IL after being recycled for five successive runs. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 566(<- 75): Application of Artificial Neural Network and Genetic Programming in Modeling and Optimization of Ultraviolet Water Disinfection Reactors Ultraviolet (UV) disinfection is an environmentally friendly technology for water treatment. However, the design and operation of UV disinfection reactors are very difficult without a good model. In this work, two modeling methods, Artificial Neural Network (ANN) and Genetic Programming (GP), were applied to model UV water disinfection reactors.
The model training data were obtained from simulation using Computational Fluid Dynamics (CFD) software. The accuracy of the two modeling methods was compared based on modeling error as well as generalization ability for new inputs. The ANN and GP models were then used to determine optimal design and operating variables of the UV disinfection process using multi-objective optimization. Selected Pareto-optimal solutions were compared using CFD simulations, and the results are presented and discussed in this paper. 2015 * 567(<-383): Optimization of a Centrifugal Compressor Impeller II-Artificial Neural Network and Genetic Algorithm The optimization of a centrifugal compressor was conducted. An ANN (Artificial Neural Network) was adopted as the optimization surrogate and was trained with a DOE (Design of Experiments). From the DOE, the main effects and interaction effects of the design variables on the objective function were predicted. The ANN was refined during the optimization process using a GA (Genetic Algorithm): whenever an output in a generation reached a threshold level, it was recalculated by CFD (Computational Fluid Dynamics) and used to develop a new ANN. After the 6th generation, the prediction difference between the ANN and CFD was less than 1%. A Pareto front of efficiency versus pressure ratio was obtained by the 21st generation. Using this method, the computational time for the optimization was equivalent to that consumed by the gradient method, and optimized results for the multi-objective function were obtained. 2011 * 568(<- 47): A hybrid evolutionary performance improvement procedure for optimisation of continuous variable discharge concentrators An iterative hybrid performance improvement approach integrating artificial neural network modelling and Pareto genetic algorithm optimisation was developed and tested. The optimisation procedure, code-named NNREGA, was tested for tuning a pilot-scale Continuous Variable Discharge Concentrator (CVD) in order to simultaneously maximise the recovery and upgrade ratio of gold-bearing sulphides from a polymetallic massive sulphide ore. For the tests, the CVD was retrofitted during normal operation on the flotation tailings stream. On the basis of mineralogical data showing strong pyrite-gold association in the flotation tailings, iron assays were used as an indicator of CVD performance on recovery of gold-bearing sulphides. Initially, 17 pilot-scale statistically designed tests were conducted to assess metallurgical performance. Matlab 2010a software was used to train and simulate back-propagation ANNs on the experimental results. Regression models developed from the simulation data were used to formulate the objective functions, which were optimised with the NSGA-II genetic algorithm. Results show that the NNREGA procedure provides an efficient way of exploring the design space to learn the relationship between interacting variables and outputs, and is capable of generating the operating line, which is a non-dominated recovery/grade line. The paper forms a basis for future work aiming to model and scale up processing equipment. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 569(<-258): Multidisciplinary optimization of collapsible cylindrical energy absorbers under axial impact load In this article, the multi-objective optimization of cylindrical aluminum tubes under axial impact load is presented. The specific absorbed energy and the maximum crushing force are considered as the objective functions.
The geometric dimensions of the tubes, including diameter, length and thickness, are chosen as design variables. The D/t and L/D ratios are constrained to the range in which tube collapse occurs in the concertina or diamond mode. The Non-dominated Sorting Genetic Algorithm-II is applied to obtain the Pareto optimal solutions. A back-propagation neural network is constructed as the surrogate model to formulate the mapping between the design variables and the objective functions. The finite element software ABAQUS/Explicit is used to generate the training and test sets for the artificial neural networks. To validate the results of the finite element model, several impact tests are carried out using a drop hammer testing machine. 2012 * 570(<- 62): Pareto genetic design of group method of data handling type neural network for prediction discharge coefficient in rectangular side orifices The Group Method of Data Handling (GMDH) was used for estimating the discharge coefficient of a rectangular side orifice. First, the existing equations for calculating the discharge coefficient were studied using experimental results; the factors affecting the discharge coefficient were determined, and five models were then constructed to analyze the sensitivity of the achievable accuracy to different parameters. The results, assessed using statistical indexes (MARE=0.021 and RMSE=0.017), showed that of the five models, the one estimating with the dimensionless parameters of the ratio of flow depth in the main channel to the width of the rectangular orifice (Y-m/L), the Froude number (Fr), the ratio of sill height to orifice width (W/L), and the ratio of main channel width to orifice width (B/L) presented the best results. (C) 2014 Elsevier Ltd. All rights reserved. 2015 * 571(<-159): APPLICATION OF ANN AND GA FOR THE PREDICTION AND OPTIMIZATION OF THERMAL AND FLOW CHARACTERISTICS IN A RECTANGULAR CHANNEL FITTED WITH TWISTED TAPE VORTEX GENERATORS This study reports an application of a hybrid model, comprising a back-propagation network and a genetic algorithm, for predicting the thermal and flow characteristics in a rectangular channel fitted with multiple twisted tape vortex generators (MT-VG). Dimensionless geometric parameters and the Reynolds number were considered as network inputs, and the Nusselt number and friction factor were the output variables. The performance of the developed neural networks was found to be superior in comparison with the empirical correlations. In addition, the proposed networks were used as two objective functions in order to obtain optimal operating conditions. Since these objectives conflict, multi-objective optimization using a genetic algorithm was employed. 2014 * 572(<-380): Multi-objective shape optimization of helico-axial multiphase pump impeller based on NSGA-II and ANN In order to improve the performance of the helico-axial multiphase pump prototype, a multi-objective optimization method for the pump impeller was developed by combining an artificial neural network (ANN) with the non-dominated sorting genetic algorithm-II (NSGA-II). The main geometric parameters influencing the impeller's performance were chosen as the optimization variables, and the sample space was structured according to the orthogonal experimental design method. The pressure rise and efficiency at specific working conditions were then obtained by numerical simulation for all elements in the sample space.
Using the simulated results as training samples, a multiphase pump performance prediction model was built with a BP neural network. With the obtained prediction model as the fitness evaluation method, the pump impeller was optimized using the NSGA-II multi-objective genetic algorithm, which finally yielded an improved impeller structure with enhanced pressure rise and efficiency. Furthermore, five stages of optimized compression cells were manufactured and tested experimentally. The results show that, compared to the original design, the pressure rise of the optimized pump increased by ~10% and the efficiency by ~3%, which is in keeping with the optimization results and confirms that the method is feasible. (C) 2010 Elsevier Ltd. All rights reserved. 2011 * 573(<-401): An optimization strategy for die design in the low-density polyethylene annular extrusion process based on FES/BPNN/NSGA-II An optimization strategy for die design in the polymer extrusion process is proposed in this study based on finite element simulation, a back-propagation neural network, and the non-dominated sorting genetic algorithm II (NSGA-II). The three-dimensional simulation of polymer melt flow in the extrusion process is conducted using the penalty finite element method. The model for predicting the flow patterns in the extrusion process is established with an artificial neural network based on the simulated results. The non-dominated sorting genetic algorithm II is performed to search for the globally optimal design variables, with its objective functions evaluated by the established neural network model. The proposed optimization strategy is successfully applied to die design in the low-density polyethylene (LDPE) annular extrusion process. A constrained multi-objective optimization model is established according to the characteristics of the annular extrusion process. The minimum velocity relative difference, delta u, and the minimum swell ratio, S(w), which respectively ensure the extrinsic features, mechanical properties, and dimensional precision of the final products, are taken as the optimization objectives, with a constraint on the maximum shear stress. Three important die structure parameters, including the die contraction angle alpha, the ratio of parallel length to inner radius L/R(i), and the ratio of outer to inner radius R(o)/R(i), are taken as design variables. The Phan-Thien-Tanner constitutive model is adopted to describe the viscoelastic rheological characteristics of LDPE, whose parameters are fitted to the distributions of material functions measured on a strain-controlled rheometer. The penalty finite element model of polymer melt flowing out of the extrusion die is derived. A decoupled method is employed to solve the viscoelastic flow problem with the discrete elastic-viscous split-stress algorithm. The simulated results are selected and extracted to constitute the learning samples according to the orthogonal experimental design method. The back-propagation algorithm is adopted for training and establishing the prediction model for the optimization objectives. A Pareto-optimal set for the constrained multi-objective optimization is obtained using the constrained NSGA-II, and the optimal solution is extracted based on fuzzy set theory. The optimization of die parameters in the annular extrusion process of low-density polyethylene is performed and the optimization objective is successfully achieved. 2010
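The pump and die-design studies above follow the same loop: a BP-network surrogate supplies fitness values to NSGA-II. The sketch below is a deliberately stripped-down multi-objective GA (non-dominated ranking with truncation selection and Gaussian mutation, without NSGA-II's crowding distance), so it is a stand-in for NSGA-II rather than the algorithm itself; the two objective functions are analytic placeholders for surrogate predictions.

#+BEGIN_SRC python
# Minimal sketch of the surrogate-plus-NSGA-II pattern: a tiny
# multi-objective GA that minimizes two conflicting objectives via
# non-dominated ranking. f1/f2 are stand-ins for a trained BP network's
# predictions (e.g., velocity difference and swell ratio).
import numpy as np

rng = np.random.default_rng(2)

def objectives(x):
    f1 = x[:, 0] ** 2 + x[:, 1] ** 2           # stand-in objective 1
    f2 = (x[:, 0] - 1) ** 2 + x[:, 1] ** 2     # stand-in objective 2
    return np.column_stack([f1, f2])

def nd_rank(F):
    """Crude non-domination rank: how many points dominate each row."""
    dominated_by = (F[:, None, :] >= F[None, :, :]).all(-1) & \
                   (F[:, None, :] > F[None, :, :]).any(-1)
    return dominated_by.sum(axis=1)   # 0 = on the current Pareto front

pop = rng.uniform(-2, 2, size=(60, 2))
for gen in range(100):
    rank = nd_rank(objectives(pop))
    parents = pop[np.argsort(rank)][:30]                     # keep best half
    children = parents + rng.normal(0, 0.1, parents.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])

front = pop[nd_rank(objectives(pop)) == 0]
print(f"{len(front)} Pareto-optimal designs found")
#+END_SRC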
* 574(<- 66): Application of multi-objective genetic algorithm to optimize energy efficiency and thermal comfort in building design Several conflicting criteria exist in building design optimization, especially energy consumption and indoor thermal performance. This paper presents a novel multi-objective optimization model that can assist designers in green building design. The Pareto approach was used to obtain a set of optimal solutions for building design optimization, with an improved multi-objective genetic algorithm (NSGA-II) as the theoretical basis of the building design multi-objective optimization model. Based on simulation data on energy consumption and indoor thermal comfort, the study also used a simulation-based improved back-propagation (BP) network, optimized by a genetic algorithm (GA), to characterize building behavior, and then established a GA-BP network model for rapidly predicting the energy consumption and indoor thermal comfort status of residential buildings. The building design multi-objective optimization model was then established by using the GA-BP network as a fitness function of the multi-objective genetic algorithm (NSGA-II). Finally, a case study is presented with the aid of the multi-objective approach in which dozens of potential designs are revealed for a typical building design in China, with a wide range of trade-offs between thermal comfort and energy consumption. (C) 2014 Elsevier B.V. All rights reserved. 2015 * 575(<- 78): Modeling and optimization of catalytic performance of SAPO-34 nanocatalysts synthesized sonochemically using a new hybrid of non-dominated sorting genetic algorithm-II based artificial neural networks (NSGA-II-ANNs) The effects of ultrasound-related variables on the catalytic properties of sonochemically prepared SAPO-34 nanocatalysts in methanol-to-olefins (MTO) reactions were investigated. Different catalytic behaviors are observed, which can be explained by differences in the catalysts' physicochemical properties as affected by ultrasonic (US) power intensity, sonication temperature, irradiation time and sonotrode size. The results confirm that the activity of SAPO-34 catalysts improves with increasing US power, irradiation time and temperature. In order to find a catalyst with the maximum conversion of methanol, maximum light olefins content and maximum lifetime, a hybrid of non-dominated sorting genetic algorithm-II based artificial neural networks (NSGA-II-ANNs) was used. Multilayer feed-forward neural networks with back-propagation structures were implemented using different training rules to relate the ultrasound-related variables to the catalytic performance of the SAPO-34 catalysts. A comparison between experimental and artificial neural network (ANN) values indicates that the ANN model with a 3-10-3 structure using the Bayesian regularization training rule has the best fit and can be used as the fitness evaluation inside the non-dominated sorting genetic algorithm-II (NSGA-II). Multiple linear regression (MLR) was also used to predict these objective functions; the results indicate a poor fit, with a low coefficient of determination, confirming that the ANN technique is more effective than traditional statistical prediction models. Finally, the ANN model was linked to the NSGA-II, and Pareto-optimal solutions were determined by the NSGA-II. 2015
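Hybrid GA-BP schemes such as the one in entry 574 apply a genetic search to the network's weight space, typically before or alongside back-propagation fine-tuning. The sketch below shows only the evolutionary half of that idea on a toy regression problem; the data, network size, and mutation settings are illustrative assumptions, not the paper's configuration.

#+BEGIN_SRC python
# Minimal sketch of evolving the weights of a small feed-forward network
# with a genetic-style search (the "GA" half of a GA-BP hybrid).
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (100, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2       # stand-in building response

H = 8                                          # hidden units
n_w = 2 * H + H + H + 1                        # W1 + b1 + W2 + b2

def mse(w):
    """Decode a flat weight vector into a 2-H-1 tanh network and score it."""
    W1 = w[:2 * H].reshape(2, H); b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H];          b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return np.mean((h @ W2 + b2 - y) ** 2)

pop = rng.normal(0, 1, (40, n_w))
for gen in range(300):
    fit = np.array([mse(w) for w in pop])
    elite = pop[np.argsort(fit)][:10]          # truncation selection
    noise = rng.normal(0, 0.05, (30, n_w))
    pop = np.vstack([elite, elite[rng.integers(0, 10, 30)] + noise])

print("best MSE:", min(mse(w) for w in pop))
#+END_SRC

In a full GA-BP hybrid, the best evolved weight vector would then seed gradient-based back-propagation for local refinement.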
* 576(<-201): Airflow and temperature distribution optimization in data centers using artificial neural networks To control energy usage in data center rooms, reduced-order models are important for performing real-time assessment of the optimum operating conditions. Here, computational fluid dynamics (CFD) simulation-based Artificial Neural Network (ANN) models were developed and applied to a basic hot aisle/cold aisle data center configuration in order to predict thermal operating conditions for a specified set of control variables. Once trained, the ANN-based model predictions were shown to agree well with the CFD results for arbitrary values of the input variables within the specified limits. In addition, the ANN model was combined with a cost-function-based multi-objective Genetic Algorithm (GA), which enabled the operating conditions to be inversely predicted for specified values of the output variable (e.g., server rack inlet temperatures). The ANN-GA optimization approach considerably reduces the total computation time compared to a fully CFD-based response surface optimization methodology. Consequently, operating conditions can be reliably predicted in seconds, even for configurations outside of the original ANN training set. These results show that an ANN-based model can yield an effective real-time thermal management design tool for data centers. (c) 2013 Elsevier Ltd. All rights reserved. 2013 * 577(<-326): Multi-Objective Optimization of aluminum hollow tubes for vehicle crash energy absorption using a genetic algorithm and neural networks A numerical study of the crushing of thin-walled circular aluminum tubes has been carried out to investigate their behavior under axial impact loading. These kinds of tubes are usually used in automobile and train structures to absorb impact energy. A multi-objective optimization of circular aluminum tubes undergoing axial compressive loading for vehicle crash energy absorption is performed for five crushing parameters using the weighted summation method. To improve the accuracy of the optimization process, artificial neural networks are used to reproduce the behavior of the crushing parameters under crush dynamics conditions. An explicit finite element method (FEM) is used to model and analyze the behavior. A series of aluminum cylindrical tubes is simulated under axial impact conditions for experimental validation of the numerical solutions. A finite element code capable of evaluating the crush parameters is prepared, whose outputs are used for training and testing the developed neural networks. In order to find the optimal solution, a genetic algorithm is implemented. For the purpose of illustrating optimum dimensional ratios, numerical results are presented for thin-walled circular aluminum AA6060-T5 and AA6060-T4 tubes. The multi-objective optimization of the circular aluminum tubes has been performed on the basis of different priorities, enabling the designer to select the optimum dimension ratio, and the crush parameters of the two aluminum alloys have been compared. (C) 2011 Elsevier Ltd. All rights reserved. 2011 * 578(<-374): Multi-point and multi-objective optimization design method for industrial axial compressor cascades Modern aerodynamic optimization design methods for industrial axial compressor cascades mainly aim at improving both design-point and off-design-point performance.
In this study, a multi-point and multi-objective optimization design method is established for the cascade, particularly aiming at widening the operating range while maintaining good performance at an acceptable expense of computational load. The design objectives are to maximize the static pressure ratio and minimize the total pressure loss coefficient at the design point, and to maximize the operating range for positive and negative incidences. To alleviate the computational load, a design of experiments (DOE)-based GA-BP-ANN model is constructed to rapidly approximate the cascade aerodynamic performance in the optimization process. The artificial neural network (ANN) is trained by the genetic algorithm (GA) technique and the back-propagation (BP) algorithm, where the training cascades are sampled by the DOE method and analysed by the computational fluid dynamics method. The multi-objective genetic algorithm is used to search for a series of Pareto-optimum solutions, from which an optimal cascade is identified whose objectives are all better than (ABT) those of the original design. The ABT cascade is characterized by lower camber and a higher turning angle, leading to better aerodynamic performance over a widened operating range. Compared with the original design, the ABT cascade decreases the total pressure loss coefficient by 1.54 per cent, 23.4 per cent, and 7.87 per cent at incidences of 5 degrees, -9 degrees, and 13 degrees, respectively. The established optimization design method can be extended to the three-dimensional aerodynamic design of axial compressor blades. 2011 * 579(<-106): Multiobjective optimization of composite cylindrical shells for strength and frequency using genetic algorithm and neural networks In this paper, the optimal fiber orientations relative to the principal axis of composite cylindrical shells composed of four and six layers were determined so that the natural frequency and strength of the shell are optimized. For this purpose, first, the free vibration analysis of the shell was carried out based on 3D elasticity. Then, for calculation of the strength objective function, the inverse form of the Tsai-Hill yield criterion was used, and the strength and frequency functions were developed in terms of fiber orientation. Once the correctness of the above solutions was ensured, the objective functions were modeled with an artificial neural network (ANN). The resulting model was then introduced into a genetic algorithm (GA), and the maximum fitness function and the optimal stacking sequence of the layers with respect to the fiber angles were obtained. Optimal solutions obtained by the combination of ANN and GA are compared to the solutions obtained by the analytical solution and GA; tables and diagrams are presented, and different fiber orientations are given as optimization solutions in the final results of the composite shell analysis. 2014 * 580(<-253): Multiobjective Optimization Design of Spinal Pedicle Screws Using Neural Networks and Genetic Algorithm: Mathematical Models and Mechanical Validation Short-segment instrumentation for spine fractures is threatened by relatively high failure rates. Failure of spinal pedicle screws, including breakage and loosening, may jeopardize fixation integrity and lead to treatment failure. Two important design objectives, bending strength and pullout strength, may conflict with each other and warrant a multiobjective optimization study.
In the present study, using three-dimensional finite element (FE) analytical results based on an L-25 orthogonal array, bending and pullout objective functions were developed with an artificial neural network (ANN) algorithm, and the trade-off solutions known as Pareto optima were explored by a genetic algorithm (GA). The results showed that the knee solutions of the Pareto fronts, with both high bending and pullout strength, ranged from 92% to 94% of the respective maxima. In mechanical validation, the results of the mathematical analyses were closely related to those of the experimental tests, with correlation coefficients of -0.91 for bending and 0.93 for pullout (P < 0.01 for both). The optimal design had significantly higher fatigue life (P < 0.01) and comparable pullout strength as compared with commercial screws. This multiobjective optimization study of spinal pedicle screws using a hybrid of ANN and GA could achieve an ideal design with high bending and pullout performance simultaneously. 2013 * 581(<-262): Soft computing based multi-objective optimization of steam cycle power plant using NSGA-II and ANN In this paper, a steam turbine power plant is thermo-economically modeled and optimized. For this purpose, data from an actual running power plant are used for modeling, verifying the results, and optimization. Turbine inlet temperature, boiler pressure, turbine extraction pressures, turbine and pump isentropic efficiencies, reheat pressure, and condenser pressure are selected as the fifteen design variables. Then, the fast and elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) is applied to simultaneously maximize the thermal efficiency and minimize the total cost rate (the sum of investment cost, fuel cost, and maintenance cost). The results of the optimal design are a set of multiple optimum solutions, called 'Pareto optimal solutions'. At some points, the optimization results show a 3.76% increase in efficiency and a 3.84% decrease in total cost rate simultaneously when compared with the actual data of the running power plant. Finally, as a shortcut for choosing the system's optimal design parameters, a correlation between the two objectives and the fifteen decision variables is presented with acceptable precision using an Artificial Neural Network (ANN). (c) 2012 Elsevier B.V. All rights reserved. 2012 * 582(<- 43): Experimental and numerical analysis of the optimized finned-tube heat exchanger for OM314 diesel exhaust exergy recovery In this research, a multi-objective optimization based on an Artificial Neural Network (ANN) and Genetic Algorithm (GA) is applied to numerical results for a finned-tube heat exchanger (HEX) in diesel exhaust heat recovery. Thirty heat exchangers with different fin lengths, thicknesses and fin numbers are modeled, and their results at three engine loads are optimized with weighting functions for pressure drop, recovered heat and HEX weight. Finally, two HEXs (an optimized and a non-optimized one) are produced and mounted on the exhaust of an OM314 diesel engine to compare their heat and exergy recovery. All experiments are done for five engine loads (0%, 20%, 40%, 60% and 80% of full load) and four water mass flow rates (50, 40, 30 and 20 g/s). Results show that maximum exergy recovery occurs at high engine loads, and the optimized HEX with 10 fins has an average second-law efficiency of 8% in exergy recovery. (C) 2015 Elsevier Ltd. All rights reserved. 2015
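Entries 577 and 582 scalarize their objectives with weighting functions instead of (or before) Pareto ranking. Below is a minimal sketch of that weighted-summation approach: sweep the weight between two normalized objectives and keep the best candidate for each weight, tracing an approximate Pareto front. The objective functions are analytic stand-ins for surrogate predictions (e.g., pressure drop versus recovered heat), not values from either paper.

#+BEGIN_SRC python
# Minimal sketch of the weighted-summation method: scan the weight w and
# keep the candidate minimizing w*g1 + (1-w)*g2 for each w.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, (2000, 3))                 # candidate designs
f1 = x[:, 0] ** 2 + 0.5 * x[:, 1]                # minimize (e.g., pressure drop)
f2 = 1.0 - x[:, 0] + 0.3 * x[:, 2] ** 2          # minimize (e.g., -recovered heat)

# normalize objectives so the weights are comparable across scales
g1 = (f1 - f1.min()) / (f1.max() - f1.min())
g2 = (f2 - f2.min()) / (f2.max() - f2.min())

front = []
for w in np.linspace(0, 1, 21):
    best = np.argmin(w * g1 + (1 - w) * g2)      # scalarized objective
    front.append((f1[best], f2[best]))
print(len(set(front)), "distinct trade-off designs traced")
#+END_SRC

A known limitation, which motivates the NSGA-II entries in this section, is that weighted sums can miss non-convex portions of the true Pareto front.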
* 583(<-105): Multi-objective optimization of nanofluid flow in flat tubes using CFD, Artificial Neural Networks and genetic algorithms In this article, multi-objective optimization of Al2O3-water nanofluid parameters in flat tubes is performed using Computational Fluid Dynamics (CFD) techniques, Artificial Neural Networks (ANN) and Non-dominated Sorting Genetic Algorithms (NSGA II). First, the nanofluid flow is solved numerically in various flat tubes using CFD techniques, and the heat transfer coefficient (h-bar) and pressure drop (Delta P) in the tubes are calculated. In this step, a two-phase mixture model is applied for the nanofluid flow analysis, and the flow regime is laminar. In the next step, the numerical data from the previous step are used to model h-bar and Delta P with a Group Method of Data Handling (GMDH)-type ANN. Finally, the GMDH model is used for Pareto-based multi-objective optimization of the nanofluid parameters in horizontal flat tubes using the NSGA II algorithm. It is shown that the obtained Pareto solution includes important design information on nanofluid parameters in flat tubes. (C) 2014 The Society of Powder Technology Japan. Published by Elsevier B.V. and The Society of Powder Technology Japan. All rights reserved. 2014 * 584(<-175): Thermal modeling of gas engine driven air to water heat pump systems in heating mode using genetic algorithm and Artificial Neural Network methods The gas-engine-driven air-to-water heat pump air-conditioning system is composed of two major thermodynamic cycles (the vapor compression refrigeration cycle and the internal combustion gas engine cycle) as well as a refrigerant-water plate heat exchanger. The thermal modeling of the gas-engine-driven air-to-water heat pump system with engine heat recovery heat exchangers was performed here for the heating mode of operation (in which the engine heat recovery heat exchanger must be modeled). The modeling was performed using typical thermodynamic characteristics of the system components, an Artificial Neural Network, and the multi-objective genetic algorithm optimization method. Comparison of the modeling results with experimental ones showed average differences of 5.08%, 5.93%, 5.21%, 2.88% and 6.2% for operating pressure, gas engine fuel consumption, outlet water temperature, engine rotational speed, and system primary energy ratio, respectively, which indicates acceptable agreement. (C) 2013 Elsevier Ltd and HR. All rights reserved. 2013 * 585(<-446): Optimization of the core configuration design using a hybrid artificial intelligence algorithm for research reactors To successfully carry out material irradiation experiments and radioisotope production, a high thermal neutron flux at the irradiation box over the desired lifetime of a core configuration is needed. On the other hand, reactor safety and operational constraints must be preserved during core configuration selection. Two main objectives and two safety and operational constraints are suggested for optimizing the reactor core configuration design. The suggested parameters and conditions are considered as two separate fitness functions composed of the two main objectives and two penalty functions. This is a constrained, combinatorial, multi-objective optimization problem. In this paper, a fast and effective hybrid artificial intelligence algorithm is introduced and developed to reach a Pareto optimal set.
The hybrid algorithm is composed of a fast and elitist multi-objective genetic algorithm (GA) and a fast fitness-evaluation system based on cascade feed-forward artificial neural networks (ANNs). A specific GA representation of the core configuration and special GA operators are introduced and used to overcome the combinatorial constraints of this optimization problem. A software package (Core Pattern Calculator 1) is developed to prepare and reform the data required for ANN training and to revise the optimization results. Some practical test parameters and conditions are suggested for adjusting the main parameters of the hybrid algorithm. Results show that the introduced ANNs can be trained to estimate the selected core parameters of a research reactor very quickly, which effectively improves the optimization process. The final optimization results show that a uniform and dense spread of Pareto fronts is obtained over a wide range of fitness function values. To allow a more careful selection of Pareto optimal solutions, a revision system is introduced; the revision of the obtained Pareto optimal set is performed using the developed software package. Some secondary operational and safety terms are also suggested to help with the final trade-off. Results show that the selected benchmark case study is dominated by the obtained Pareto fronts with respect to the main objectives, while the safety and operational constraints are preserved. (C) 2009 Elsevier B.V. All rights reserved. 2009 * 586(<-460): Estimation of research reactor core parameters using cascade feed forward artificial neural networks The pattern of the core reload program is very important for the optimal use of research reactors. Reactor safety issues and economic efficiency should be considered during pattern studies. In order to find the best core pattern for a research reactor, its reloading program should be solved as a multi-objective, constrained optimization problem. If the objective functions of the optimization problem can be estimated in a very short time, the optimal fuel reloading pattern can be used effectively. In this research, a very fast estimation system for the suggested core parameters has been developed using cascade feed-forward artificial neural networks (ANNs). Four main core parameters are suggested to adequately optimize the reactor core, and a new flexible method was selected to obtain larger thermal fluxes in the desired irradiation box. A software package has been developed to prepare and reform the data required for ANN training. The gradient descent method with the momentum weight/bias learning rule has been used to train the ANNs. To find the best conditions for training the ANNs, an extensive study was performed covering the effects of varying the numbers of hidden neurons and hidden layers, the activation functions, the learning and momentum coefficients, and the number of training data sets on the training and simulation results. Some experimental convergence criteria were used to study them, and a comparative selection rule was used to determine the desirable conditions. Final training and simulation results show that the developed ANNs can be trained to estimate the suggested core parameters of research reactors very quickly, which effectively improves the pattern optimization process of the core reload program. (C) 2009 Elsevier Ltd. All rights reserved. 2009
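Both reactor studies rest on the same observation: once an ANN is trained offline, the GA's fitness calls become nearly free compared with the physics code they replace. Here is a sketch of that surrogate-speedup pattern, with a deliberately slowed toy function standing in for a core-physics simulator; all names and settings are illustrative assumptions.

#+BEGIN_SRC python
# Minimal sketch: replace an expensive simulator with a trained ANN
# inside the candidate-screening loop.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_simulator(x):
    time.sleep(0.01)                        # pretend this call is expensive
    return np.sin(3 * x[0]) + x[1] ** 2     # stand-in core parameter

rng = np.random.default_rng(5)
X_train = rng.uniform(-1, 1, (200, 2))
y_train = np.array([slow_simulator(x) for x in X_train])   # offline sampling

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# Screening: thousands of evaluations in milliseconds instead of the
# minutes the simulator itself would need.
cands = rng.uniform(-1, 1, (10000, 2))
t0 = time.perf_counter()
scores = surrogate.predict(cands)
print(f"10000 surrogate evaluations in {time.perf_counter() - t0:.3f} s")
print("best candidate:", cands[np.argmin(scores)])
#+END_SRC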
* 587(<- 21): ANN-based interval forecasting of streamflow discharges using the LUBE method and MOFIPS The estimation of prediction intervals (PIs) is a major issue limiting the use of Artificial Neural Network (ANN) solutions for operational streamflow forecasting. Recently, a Lower Upper Bound Estimation (LUBE) method has been proposed that outperforms traditional techniques for ANN-based PI estimation. This method constructs ANNs with two output neurons that directly approximate the lower and upper bounds of the PIs. Training is performed by minimizing a coverage-width-based criterion (CWC), which is a compound, highly nonlinear and discontinuous function. In this work, we test the suitability of the LUBE approach for producing PIs at different confidence levels (CL) for the 6 h ahead streamflow discharges of the Susquehanna and Nehalem Rivers, US. Due to the success of Particle Swarm Optimization (PSO) in LUBE applications, variants of this algorithm have been employed for CWC minimization. The results obtained are found to vary substantially depending on the chosen PSO paradigm. While the returned PIs are poor when single-objective swarm optimization is employed, substantial improvements are recorded when a multi-objective framework is considered for ANN development. In particular, the Multi-Objective Fully Informed Particle Swarm (MOFIPS) optimization algorithm is found to return valid PIs for both rivers and for the three CLs considered (90%, 95% and 99%). With average PI widths ranging from a minimum of 7% to a maximum of 15% of the range of the streamflow data in the test datasets, MOFIPS-based LUBE represents a viable option for the straightforward design of more reliable interval-based streamflow forecasting models. (C) 2015 Elsevier Ltd. All rights reserved. 2015 * 588(<-100): Estimates of energy consumption in Turkey using neural networks with the teaching-learning-based optimization algorithm The main objective of the present study was to apply an ANN (artificial neural network) model with the TLBO (teaching-learning-based optimization) algorithm to estimate energy consumption in Turkey. Gross domestic product, population, import, and export data were selected as the independent variables in the model. The performances of the ANN-TLBO model and the classical back-propagation-trained ANN model (ANN-BP) were compared using various error criteria to evaluate model accuracy. Errors on the training and testing datasets showed that the ANN-TLBO model predicted energy consumption better than the ANN-BP model. After determining the best configuration for the ANN-TLBO model, the energy consumption values for Turkey were predicted under three scenarios. The forecasted results were compared between scenarios and with projections by the MENR (Ministry of Energy and Natural Resources). Compared to the MENR projections, all of the analyzed scenarios gave lower estimates of energy consumption and predicted that Turkey's energy consumption would vary between 142.7 and 158.0 Mtoe (million tons of oil equivalent) in 2020. (C) 2014 Elsevier Ltd. All rights reserved. 2014 * 589(<-224): Experimental investigation, modeling and optimization of membrane separation using artificial neural network and multi-objective optimization using genetic algorithm In this work, the treatment of oily wastewaters with commercial polyacrylonitrile (PAN) ultrafiltration (UF) membranes was investigated.
For these experiments, the outlet wastewater of the API (American Petroleum Institute) unit of the Tehran refinery was used as the feed. The purpose of this paper was to predict the permeation flux and fouling resistance by applying artificial neural networks (ANNs), and then to optimize the operating conditions in the separation of oil from industrial oily wastewaters, including trans-membrane pressure (TMP), cross-flow velocity (CFV), feed temperature and pH, so that a maximum permeation flux accompanied by a minimum fouling resistance was acquired, by applying a genetic algorithm as a powerful soft computing technique. The experimental input data, including TMP, CFV, feed temperature and pH, with permeation flux and fouling resistance as outputs, were used to create the ANN models. The modeling results showed excellent agreement between the experimental data and the predicted values. Eventually, by multi-objective optimization using a genetic algorithm (GA), an optimization tool was created to predict the optimum operating parameters for the desired permeation flux (i.e., maximum flux) and fouling resistance (i.e., minimum fouling) behavior. The accuracy of the model is confirmed by the comparison between the predicted and experimental data. (C) 2012 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved. 2013 * 590(<-279): A new weighted optimal combination of ANNs for catalyst design and reactor operation: Methane steam reforming studies Catalyst design and evaluation is a multifactorial, multiobjective optimization problem, and the absence of well-defined mechanistic relationships between wide-ranging input-output variables has stimulated interest in the application of artificial neural networks for the analysis of the large body of empirical data available. However, single ANN models generally have limited predictive capability and are insufficient to capture the broad range of features inherent in the voluminous but dispersed data sources. In this study, we have employed a Fibonacci approach to select the optimal number of neurons for the ANN architecture, followed by a new weighted optimal combination of statistically derived candidate ANN models in a multi-error sense. Data from 200 cases of catalytic methane steam reforming have been used to demonstrate the veracity and robustness of the integrated ANN modeling technique. (C) 2011 American Institute of Chemical Engineers AIChE J, 58: 2412-2427, 2012 2012 * 591(<-373): Estimating Water Retention with Pedotransfer Functions Using Multi-Objective Group Method of Data Handling and ANNs Pedotransfer functions (PTFs) have been developed to estimate soil water retention curves (SWRC) by various techniques. In this study, PTFs were developed to estimate the parameters (theta(s), theta(r), alpha and lambda) of the Brooks and Corey model from a data set of 148 samples. Particle and aggregate size distribution fractal parameters (PSDFPs and ASDFPs, respectively) were computed from three fractal models for either particle or aggregate size distribution. The most effective model in each group was determined by sensitivity analysis. Along with the other variables, the selected fractal parameters were employed to estimate the SWRC using multi-objective group method of data handling (mGMDH) and different topologies of artificial neural networks (ANNs). The architecture of the ANNs for the parametric PTFs differed in the type of ANN, the output layer transfer functions and the number of hidden neurons.
Each parameter was estimated using four PTFs by hierarchical entry of the input variables into the PTFs. The inclusion of PSDFPs in the list of inputs improved the accuracy and reliability of the parametric PTFs, with the exception of theta(s). The textural fraction variables in PTF1 for the estimation of alpha were replaced with PSDFPs in PTF3. The use of ASDFPs as inputs significantly improved the alpha estimates in the model. This result highlights the importance of ASDFPs in developing parametric PTFs. The mGMDH technique performed significantly better than the ANNs in most PTFs. 2011 * 592(<-410): Multi-objective optimization of decoloration and lactosucrose recovery through artificial neural network and genetic algorithm An artificial neural network (ANN) and genetic algorithm (GA) with uniform design (UD) were used to optimize the decoloration and lactosucrose (LS) recovery in solution with granular charcoal. Three input variables (dosage of charcoal, time, and temperature) were chosen in constructing the back-propagation neural network (BPNN) model, with decoloration rate and LS recovery rate as the output variables. GA was used to optimize the input space of the ANN model to find the Pareto-optimal set. The best parameters were a dosage of charcoal varying from 2.1894 to 2.1897%, time from 64.05 to 64.06 min, and temperature from 74.22 to 78.90 degrees C. The optimal predicted decoloration rate is 96.30% and the LS recovery rate 97.35%. Results from confirmatory studies showed a decoloration rate of 94.85% and an LS recovery rate of 97.23%; the relative errors between the network-predicted and measured values were 1.51% and 0.12%, respectively. The results suggest that UD-ANN-GA can effectively solve the separation efficiency problem in column chromatography and that the method is reliable. 2010 * 593(<-427): A Sequence Optimization Strategy for Chromatographic Separation in Reversed-Phase High-Performance Liquid Chromatography A sequence optimization strategy combining an artificial neural network (ANN) and a chromatographic response function (CRF) for chromatographic separation in reversed-phase high-performance liquid chromatography has been proposed. Experiments were appropriately designed to obtain unbiased data concerning the effects of varying the mobile phase composition, flow rate, and temperature. The ANN was then used to simultaneously predict the resolution and analysis time, which are the two most important features of chromatographic separation. Subsequently, a CRF consisting of resolution and analysis time was used to predict the optimum operating conditions for different specialized purposes. The experimental chromatograms were consistent with those predicted for the given conditions, which verified the applicability of the method. Furthermore, the proposed optimization strategy was applied to literature data and very good agreement was obtained. The results show that a strategy sequentially combining an ANN and a CRF can provide a more flexible and efficient optimization method for chromatographic separation. (C) 2009 American Institute of Chemical Engineers AIChE J, 56: 371-380, 2010 2010 * 594(<-492): Comparisons of grey and neural network prediction of industrial park wastewater effluent using influent quality and online monitoring parameters In this study, a Grey model (GM) and an artificial neural network (ANN) were employed to predict suspended solids (SSeff) and chemical oxygen demand (CODeff) in the effluent from a wastewater treatment plant in an industrial park of Taiwan. When constructing the models or predicting, the influent quality or online monitoring parameters were adopted as the input variables, and the ANN was adopted for comparison. The results indicated that minimum MAPEs of 16.13% and 9.85% for SSeff and CODeff could be achieved using GMs when online monitoring parameters were taken as the input variables. Although a good fit could be achieved using the ANN, it required a large quantity of data. By contrast, the GM only required a small amount of data (at least four data points), and its prediction results were even better than those of the ANN. Therefore, the GM can be applied successfully in predicting effluent quality when the available information is not sufficient. The results also indicated that these simple online monitoring parameters can be applied well to the prediction of effluent quality. 2008
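Entry 594's main point is that a GM(1,1) grey model can forecast from as few as four observations. For reference, here is a sketch of the standard GM(1,1) recursion in Python (not the authors' code); the series values are made up, whereas the paper used measured effluent SS/COD data.

#+BEGIN_SRC python
# Minimal sketch of the standard GM(1,1) grey model: fit on a short series
# (>= 4 points) and forecast ahead.
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                              # accumulated series (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])     # inverse AGO
    x0_hat[0] = x0[0]
    return x0_hat[n:]

cod = [61.0, 58.2, 55.9, 54.1, 52.6]                # hypothetical effluent COD
print(gm11_forecast(cod, steps=2))
#+END_SRC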
* 595(<-550): Optimizing the monitoring strategy of wastewater treatment plants by multiobjective neural networks approach This paper applies an artificial neural network (ANN) to model observed effluent quality data. The ANN's structure, involving the numbers of hidden layers and nodes and their connections, is determined endogenously as a compromise between data-cost minimization and prediction-accuracy maximization. To obtain the best compromise possible, the model introduces an aspiration variable (mu) that represents the level of aspiration achieved in one objective, while its conjugate (1-mu) represents the level of aspiration achieved in the other objective. Because a massive amount of calculation is required, the model applies a genetic algorithm (GA) for its computational flexibility and its capability to ensure a global solution. The feasibility and practicality of the model are tested by a case study with a set of 150 daily observations on 17 operational variables and quality parameters at an industrial wastewater treatment plant (WTP) located in southern Taiwan. Of these 17 variables open to selection, only 6 variables (wastewater flow rate (Q), CN-, SS, MLSS, pH and COD) are selected by the model to achieve the maximum prediction accuracy, 0.94, with a total cost of 5,950 NT$. By constraining budget availability, the number of variables included in the model is reduced, causing a concomitant reduction in prediction accuracy; that is, by varying mu (the aspiration level of accuracy), a trajectory of cost and accuracy is generated. The calculation yields a cost of 3,650 NT$ and an accuracy of 0.54 for the case in which flow rate, SCN- and SS in the equalization basin, aeration tank hydraulic retention time (HRT), and percentage of returned sludge (R%) are selected for building the prediction model, when the required budget is weighted equally with the accuracy of the prediction model. In addition, when the required cost for building the ANN model is between 3,650 NT$ and 3,900 NT$, the marginal return on budget input is the highest in the entire range of calculation. 2007 * 596(<-242): Optimum culture medium composition for rhamnolipid production by pseudomonas aeruginosa AT10 using a novel multi-objective optimization method BACKGROUND: Rhamnolipid is a biosurfactant that finds wide application in pharmaceuticals and beauty products. Pseudomonas aeruginosa is a producer of rhamnolipids, and the process can be implemented under laboratory-scale conditions. Rhamnolipid concentration depends on the medium composition, namely carbon source concentration, nitrogen source concentration, phosphate content and iron content.
In this work, existing data [7] were used to develop an artificial neural network-based response surface model (ANN-RSM) for rhamnolipid production by Pseudomonas aeruginosa AT10. This ANN-RSM model is integrated with non-dominated sorting differential evolution (DE) to identify the optimum medium composition for the process. RESULTS: Different strategies for optimizing the culture medium composition for this process were evaluated, and the best was determined to be an ANN model combined with DE involving a combination of the Naive and Slow and the epsilon-constrained techniques. The optimal culture medium is determined to have a carbon source concentration of 49.86 g dm-3, nitrogen source concentration of 4.99 g dm-3, phosphate content of 1.42 g dm-3, and iron content of 17.12 g dm-3. The maximum rhamnolipid activity was found to be 18.07 g dm-3, which compares favorably with that previously reported (18.66 g dm-3) and is in fact closer to the experimentally determined value of 16.50 g dm-3. CONCLUSION: This method has distinct advantages over methods using statistical regression models, and can be used for the optimization of other multi-objective biosurfactant production processes. (c) 2012 Society of Chemical Industry 2013 * 597(<- 28): Optimization of controlled release nanoparticle formulation of verapamil hydrochloride using artificial neural networks with genetic algorithm and response surface methodology This study was performed to optimize the formulation of polymer-lipid hybrid nanoparticles (PLN) for the delivery of an ionic water-soluble drug, verapamil hydrochloride (VRP), and to investigate the roles of the formulation factors. Modeling and optimization were conducted based on a spherical central composite design. Three formulation factors, i.e., the weight ratio of drug to lipid (X-1) and the concentrations of Tween 80 (X-2) and Pluronic F68 (X-3), were chosen as independent variables. The drug loading efficiency (Y-1) and mean particle size (Y-2) of the PLN were selected as dependent variables. The predictive performance of artificial neural networks (ANN) and response surface methodology (RSM) was compared. As the ANN was found to exhibit better recognition and generalization capability than RSM, multi-objective optimization of the PLN was then conducted based on the validated ANN models and continuous genetic algorithms (GA). The optimal PLN possess a high drug loading efficiency (92.4%, w/w) and a small mean particle size (~100 nm). The predicted response variables matched well with the observed results. The three formulation factors exhibited different effects on the properties of the PLN. ANN in coordination with continuous GA represents an effective and efficient approach for optimizing the PLN formulation of VRP with the desired properties. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 598(<- 33): An optimal design of wind turbine and ship structure based on neuro-response surface method The geometry of an engineering system affects its performance. For this reason, the shape of an engineering system needs to be optimized in the initial design stage. However, engineering system design problems involve multi-objective optimization, and performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in the engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of the geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), which is referred to as the Neuro-Response Surface Method (NRSM). The optimization is done for the generated response surface using the non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine, considering hydrodynamic performance, and bulk carrier bottom stiffened panels, considering structural performance), we have confirmed the applicability of the proposed method for multi-objective side-constraint optimization problems. 2015
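Entry 596 couples its ANN response surface with differential evolution rather than a GA. The sketch below implements the classic DE/rand/1/bin scheme on a single toy objective; the non-dominated-sorting extension used in that paper is omitted, and the objective function is a placeholder for an ANN response surface rather than anything fitted to the study's data.

#+BEGIN_SRC python
# Minimal sketch of classic differential evolution (DE/rand/1/bin).
import numpy as np

rng = np.random.default_rng(6)

def objective(x):
    # stand-in for a surrogate prediction (e.g., negative rhamnolipid yield)
    return np.sum((x - 0.3) ** 2)

dim, n_pop, F, CR = 4, 20, 0.8, 0.9
pop = rng.uniform(0, 1, (n_pop, dim))
cost = np.array([objective(p) for p in pop])

for gen in range(200):
    for i in range(n_pop):
        r1, r2, r3 = rng.choice([j for j in range(n_pop) if j != i],
                                3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])    # differential mutation
        cross = rng.random(dim) < CR                  # binomial crossover
        cross[rng.integers(dim)] = True               # force at least one gene
        trial = np.clip(np.where(cross, mutant, pop[i]), 0, 1)
        if (c := objective(trial)) < cost[i]:         # greedy selection
            pop[i], cost[i] = trial, c

print("best:", pop[np.argmin(cost)], "cost:", cost.min())
#+END_SRC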
* 599(<-390): Neural model for the leaching of celestite in sodium carbonate solution A neural model, based on multilayer perceptrons, for computing the conversion kinetics of SrSO4 to SrCO3 in sodium carbonate solution was presented, using the artificial neural network (ANN) method. The effects of stirring speed, temperature, the Na2CO3:SrSO4 mole ratio and the particle size of the celestite on the leaching kinetics were studied. The surface transformation of celestite to strontium carbonate in aqueous carbonate solutions was also supported by FT-IR spectroscopy. The conversion rate of celestite increases systematically with increasing temperature (up to 70 degrees C). Furthermore, the feasibility of replacing the SO4(2-) ions with CO3(2-) ions in the structure of the leached solid was also investigated by FT-IR. The FT-IR results showed that the replacement of SO4(2-) ions in celestite by CO3(2-) ions under leaching conditions was nearly complete at 60 degrees C with a mole ratio Na2CO3:SrSO4 = 4:1, solid to liquid = 5:500, -212+106 mu m particle size, and a 400 rpm stirring rate over an interval of 240 min. The conversion results for the first interval (up to 90 min) were used to train the network with an extended delta-bar-delta (EDBD) algorithm, a multilayer perceptron neural model structure; the conversions at the remaining times (90-240 min) were then predicted. The results predicted by the neural model were in very good agreement with the experimental results. (C) 2010 Elsevier B.V. All rights reserved. 2010 * 600(<-198): Multi-Objective Prediction Model and Parameter Optimization Model for the Sputtering of Aluminum Zinc Oxide Semiconducting Transparent Thin Films Industry demand for both high conductivity and high transmittance in thin films has made it essential to develop a multi-objective prediction model for resistivity and transmittance. This study combined Taguchi methods and artificial neural networks (ANN) to construct a multi-objective prediction model for the sputtering of AZO (ZnO:Al = 97:3 wt%) to produce semiconducting transparent thin films. The Levenberg-Marquardt method was incorporated into the multi-objective prediction model to construct a multi-objective parameter optimization model for AZO semiconducting transparent thin films.
The squared difference between the target value and the predicted value of each objective served as the error function; these terms were then multiplied by individual weight values and summed to derive the objective function of the system. In conjunction with the Levenberg-Marquardt method and reasonable convergence criteria, the optimal combination of parameters for the sputtering objectives was obtained. These parameters included a radio frequency (R.F.) power of 120 W, process pressure of 15 mTorr, film thickness of 300 nm, and substrate temperature of 74 degrees C. The objective resistivity was 11.4 x 10(-3) Omega . cm, and the objective transmittance was 88.9%. In this experiment, resistivity came to 10.6 x 10(-3) Omega . cm, an error of 7.5% between the predicted value and the experimental result. Transmittance reached 89.1% in the experiment, an error of -0.2%. 2013 * 601(<-376): Maximizing the native concentration and shelf life of protein: a multiobjective optimization to reduce aggregation A multiobjective optimization was performed to maximize native protein concentration and shelf life of ASD, using an artificial neural network (ANN) and genetic algorithm (GA). Optimum pH, storage temperature, concentration of protein, and protein stabilizers (glycerol, NaCl) were determined satisfying the twin objectives: maximum relative area of the dimer peak (native state) after 48 h of storage, and maximum shelf life. The relative area of the dimer peak, obtained from size exclusion chromatography performed as per the central composite design (CCD), and shelf life (obtained as turbidity change) served as training targets for the ANN. The ANN was used to establish a mathematical relationship between the inputs and targets (from CCD). GA was then used to optimize the above determinants of aggregation, maximizing the twin objectives of the network. An almost fourfold increase in shelf life (~196 h) was observed at the GA-predicted optimum (protein concentration: 6.49 mg/ml, storage temperature: 20.8 degrees C, glycerol: 10.02%, NaCl: 51.65 mM and pH: 8.2). Since no aggregation was observed at the optimum up to 48 h, all the protein was found at the dimer position with maximum relative area (64.49). Predictions of the finally adapted network also reveal that storage temperature and solvent glycerol concentration play a key role in deciding the degree of ASD aggregation. This multiobjective optimization strategy was also successfully applied in minimizing the batch culture period and determining the optimum combination of medium components required for the most economical production of actinomycin D. 2011 * 602(<-495): Optimization of composition and technology for phosphate graphite mold In the present work, three methods, orthogonal design, the fuzzy optimum method and artificial neural network modeling, were compared on the basis of the optimization and evaluation of the performance of the phosphate graphite mold. The variance analysis indicates that phosphoric acid has the greatest influence on both the compressive and tensile strength of the phosphate graphite mold, followed by drying temperature and drying time, with Al2O3 having a minor effect. The fuzzy multi-objective comprehensive evaluation shows that the optimum technology for the phosphate graphite mold designed by us is phosphoric acid 30%, Al2O3 30%, drying temperature 400 degrees C and drying time 60 min.
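Entry 600 builds its scalar objective from weighted squared differences between predicted and target objective values and minimizes it with Levenberg-Marquardt. A sketch of that formulation, with hypothetical quadratic surrogates standing in for the trained ANN predictors:

#+begin_src python
# Weighted least-squares objective minimized with Levenberg-Marquardt,
# after entry 600. The surrogates f1/f2, weights, and targets are assumed.
import numpy as np
from scipy.optimize import least_squares

w = np.array([0.6, 0.4])           # assumed objective weights
targets = np.array([11.4, 88.9])   # resistivity (x1e-3 Ohm.cm), transmittance (%)

def f1(x):  # hypothetical surrogate for predicted resistivity
    return 10.0 + 0.01 * (x[0] - 120) ** 2 + 0.05 * (x[1] - 15) ** 2

def f2(x):  # hypothetical surrogate for predicted transmittance
    return 90.0 - 0.002 * (x[0] - 120) ** 2 - 0.01 * (x[1] - 15) ** 2

def residuals(x):  # sqrt-weighted so the summed squares carry the weights
    return np.sqrt(w) * (np.array([f1(x), f2(x)]) - targets)

sol = least_squares(residuals, x0=[100.0, 20.0], method="lm")
print("optimal R.F. power / pressure (hypothetical):", sol.x)
#+end_src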
In addition, the ANN can be used to establish mono- and multi-objective models that predict conditions outside the orthogonal test with rather high accuracy, whereas the linear regression equations from the orthogonal analysis predict such conditions poorly. (C) 2008 Elsevier Ltd. All rights reserved. 2008 * 603(<-191): Modeling and Optimization of Electrodialytic Desalination of Fish Sauce Using Artificial Neural Networks and Genetic Algorithm Electrodialysis (ED) has been proposed as a means to reduce sodium ion concentration in fish sauce. However, no information is so far available on the optimum conditions for operating the ED process. Artificial neural network (ANN)-based models were therefore developed to predict the ED performance and changes in selected quality attributes of ED-treated fish sauce; the optimum operating condition of the process was then determined via multi-objective optimization using a genetic algorithm (MOGA). The optimal ANN models were able to predict the ED performance with R(2) = 0.995, fish sauce basic characteristics with R(2) = 0.992, and the concentrations of total aroma compounds and total amino acids, flavor difference, and saltiness of the treated fish sauce with R(2) = 0.999. Through the use of MOGA, the optimum condition of the ED process was found to be an applied voltage of 6.3 V with the residual salt concentration of the treated fish sauce maintained at 14.3% (w/w). 2013 * 604(<-346): Application of artificial bee colony-based neural network in bottom hole pressure prediction in underbalanced drilling Two-phase flow through an annulus is a complex area of study in evaluating the bottom hole circulating pressure (BHCP). Empirical correlations tend to over-predict, and the hydraulic diameter concept rests on an erroneous assumption, so both methods suffer from a great deal of error. As a result, this work investigates how evolving an artificial neural network (ANN) with an artificial bee colony (ABC) improves the efficiency and prediction capability of the network. The proposed methodology adopts a hybrid ABC-back propagation (BP) strategy (ABC-BP). The proposed algorithm combines the local searching ability of the gradient-based back-propagation (BP) strategy with the global searching ability of the artificial bee colony. For evaluation purposes, the performance and generalization capabilities of ABC-BP are compared with those of models developed with the common technique of BP. The results demonstrate that a carefully designed hybrid artificial bee colony-back propagation neural network outperforms the gradient descent-based neural network. (C) 2011 Elsevier B.V. All rights reserved. 2011 * 605(<- 32): Evaluation and prediction of membrane fouling in a submerged membrane bioreactor with simultaneous upward and downward aeration using artificial neural network-genetic algorithm This paper describes the effect of simultaneous upward and downward aeration on the membrane fouling and process performance of a submerged membrane bioreactor. Trans-membrane pressure (TMP) and membrane permeability (Perm) were simulated using multi-layer perceptron and radial basis function artificial neural networks (MLPANN and RBFANN). A genetic algorithm (GA) was utilized in order to optimize the weights and thresholds of the models. The results indicated that the simultaneous aeration does not significantly improve the removal efficiency of contaminants.
The removal efficiencies of BOD, COD, total nitrogen, NH4+ - N and TSS were 97.5%, 97%, 94.6%, 96% and 98%, respectively. It was observed that the TMP increases and the Perm decreases as operational time increases. The TMP increasing rate (dTMP/dt) and the Perm decreasing rate (dPerm/dt) for the upward aeration were 2.13 and 2.66 times higher than those of simultaneous aeration, respectively. The training procedures of the TMP and Perm models were successful for both RBFANN and MLPANN. The trained and tested MLPANN and RBFANN models showed an almost perfect match between the experimental and the simulated values of TMP and Perm. It was illustrated that the GA-optimized ANN predicts TMP and permeability more accurately than a network calibrated with a trial-and-error approach. (C) 2015 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved. 2015 * 606(<-338): Separation of toluene/n-heptane mixtures experimental, modeling and optimization In this paper a composite membrane is used to separate toluene from an n-heptane mixture. The aim is to optimize the separation process conditions through modeling. Therefore this model should be able to predict membrane performance, expressed as total permeation flux and toluene selectivity, as a function of operating conditions. In order to create a black box model of the process, a multilayer feed-forward artificial neural network is used. An algorithm based on evaluating all possible structures is employed to create an optimum ANN model. The number of hidden layers, transfer function, training method and hidden neurons are determined with the aid of this algorithm. Performance testing confirms that there is good agreement between the experimental data and the model-predicted values, with correlation coefficients of more than 0.99 and mean square errors of less than 1%. Both model and experimental data show that increasing temperature and toluene concentration increases total flux and decreases toluene selectivity, while increasing permeate pressure decreases both. Having created and trained an optimized ANN model, a multi-objective genetic algorithm is employed to find optimum operating conditions with respect to permeation flux and toluene selectivity as the two targets of this separation. Considering the obtained Pareto set and corresponding decision variables, it is found that the permeate pressure in this set is almost constant and only variations in temperature and feed concentration give rise to the Pareto front. (C) 2011 Elsevier B.V. All rights reserved. 2011 * 607(<- 92): Multi Objective Optimization of Friction Stir Welding Parameters Using FEM and Neural Network In this study the influence of rotational and traverse speed on the friction stir welding of AA5083 aluminum alloy has been investigated. For this purpose a thermo-mechanically coupled, 3D FEM analysis was used to study the effect of rotational and traverse speed on welding force, peak temperature and HAZ width. Then, an Artificial Neural Network (ANN) model was employed to understand the correlation between the welding parameters (rotational and traverse speed) and peak temperature, HAZ width and welding force values in the weld zone. Performance of the ANN model was found to be excellent and the model can be used to predict peak temperature, HAZ width and welding force. Furthermore, in order to find optimum values of traverse and rotational speed, multi-objective optimization was used to obtain the Pareto front.
Finally, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) was employed to obtain the best compromise solution. 2014 * 608(<- 61): Modeling and optimization of antidepressant drug Fluoxetine removal in aqueous media by ozone/H2O2 process: Comparison of central composite design and artificial neural network approaches Modeling and optimization of Fluoxetine degradation in aqueous solution by the ozone/H2O2 process was investigated using central composite design (CCD) and the results were compared with the artificial neural network (ANN) predicted values. We studied the influence of basic operational parameters such as ozone concentration, initial concentrations of H2O2 and Fluoxetine, and reaction time. The ANN model was developed by a feed-forward back propagation network with the trainscg algorithm and topology (4: 8: 1). A good agreement between the predicted values of Fluoxetine removal using CCD and ANN and the experimental results was observed (R-2 values were 0.989 and 0.975 for the ANN and CCD models, respectively). The results showed that ANNs were superior in capturing the nonlinear behavior of the system and could estimate the values of Fluoxetine removal efficiency accurately. Pareto analysis indicated that all selected factors and some interactions were effective on removal efficiency. It was found that the reaction time, with a percentage effect of 45.04%, was the most effective parameter in the ozone/H2O2 process. The maximum removal efficiency (86.14%) was achieved at an ozone concentration of 30 mg L-1, initial H2O2 concentration of 0.02 mM, reaction time of 20 min and initial Fluoxetine concentration of 50 mg L-1 as the optimal conditions. (C) 2014 Taiwan Institute of Chemical Engineers. Published by Elsevier B.V. All rights reserved. 2015 * 609(<-217): Modelling and optimization of Mn/activated carbon nanocatalysts for NO reduction: comparison of RSM and ANN techniques A response surface methodology (RSM) involving a central composite design was applied to the modelling and optimization of the preparation of Mn/active carbon nanocatalysts in NH3-SCR of NO at 250 degrees C, and the results were compared with the artificial neural network (ANN) predicted values. The catalyst preparation parameters, including metal loading (wt%), calcination temperature and pre-oxidization degree (v/v% HNO3), were selected as influence factors on catalyst efficiency. In the RSM model, the predicted values of NO conversion were found to be in good agreement with the experimental values. Pareto graphic analysis showed that all the chosen parameters and some of the interactions were effective on the response. The optimization results showed that maximum NO conversion was achieved at the optimum conditions: 10.2 v/v% HNO3, 6.1 wt% Mn loading and calcination at 480 degrees C. The ANN model was developed by a feed-forward back propagation network with topology (3: 8: 1) and a Levenberg-Marquardt training algorithm. The mean square errors for the ANN and RSM models were 0.339 and 1.176, respectively, and the R-2 values were 0.991 and 0.972, respectively, indicating the superiority of ANN in capturing the nonlinear behaviour of the system and being accurate in estimating the values of the NO conversion. 2013 * 610(<-519): Combining support vector regression and ant colony optimization to reduce NOx emissions in coal-fired utility boilers Combustion optimization has recently demonstrated its potential to reduce NOx emissions in high capacity coal-fired utility boilers.
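Entries 607, 616, 617 and 627 all close the loop by ranking Pareto points with TOPSIS. A minimal sketch of the standard TOPSIS ranking, assuming a small decision matrix of candidates (rows) by criteria (columns):

#+begin_src python
# Standard TOPSIS ranking of candidate solutions; the decision matrix,
# weights, and benefit/cost flags below are hypothetical.
import numpy as np

def topsis(F, weights, benefit):
    R = F / np.linalg.norm(F, axis=0)          # vector-normalize each column
    V = R * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal point
    d_neg = np.linalg.norm(V - nadir, axis=1)  # distance to negative-ideal
    return d_neg / (d_pos + d_neg)             # closeness; larger is better

F = np.array([[290.0, 12.1],                   # e.g. strength (max), force (min)
              [310.0, 10.8],
              [330.0, 11.5]])
score = topsis(F, weights=np.array([0.5, 0.5]),
               benefit=np.array([True, False]))
print("best compromise index:", int(score.argmax()))
#+end_src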
In the present study, support vector regression (SVR), as well as artificial neural networks (ANN), was proposed to model the relationship between NOx emissions and the operating parameters of a 300 MW coal-fired utility boiler. Compared with those of the ANN-based model, the NOx emissions predicted by the SVR model showed better agreement with the values obtained in the experimental tests on this boiler operated at different loads and various other operating parameters. The mean modeling error and the correlation factor were 1.58% and 0.94, respectively. Then, the combination of the SVR model with ant colony optimization (ACO) to reduce NOx emissions was presented in detail. The experimental results showed that the proposed approach can effectively reduce NOx emissions from the coal-fired utility boiler by about 18.69% (65 ppm). A time period of less than 6 min was required for NOx emissions modeling, and 2 min was required for a run of optimization on a PC system. The computing times are suitable for the online application of the proposed method to actual power plants. 2008 * 611(<-601): Multi-objective optimization of the coal combustion performance with artificial neural networks and genetic algorithms The present work introduces an approach to predict the nitrogen oxides (NOx) emissions and carbon burnout characteristics of a large capacity pulverized coal-fired boiler with an artificial neural network (ANN). The NOx emissions and carbon burnout characteristics are investigated by parametric field experiments. The effects of over-fire-air (OFA) flow rates, coal properties, boiler load, air distribution scheme and nozzle tilt are studied. An ANN is used to model the NOx emissions characteristics and the carbon burnout characteristics. A genetic algorithm (GA) is employed to perform a multi-objective search to determine the optimum solution of the ANN model, finding the optimal setpoints, which can suggest correct operator actions to decrease NOx emissions and the carbon content in the flyash simultaneously, namely, to obtain good boiler combustion performance with high boiler efficiency while keeping the NOx emission concentration within the required limit. Copyright © 2005 John Wiley & Sons, Ltd. 2005 * 612(<-623): Application of multivariate calibration and artificial neural networks to simultaneous kinetic-spectrophotometric determination of carbamate pesticides A method for the simultaneous determination of the pesticides carbofuran, isoprocarb and propoxur in fruit and vegetable samples has been investigated and developed. It is based on reaction kinetics and spectrophotometry, and results are interpreted with the aid of chemometrics. The analytical method relies on the differential rates of the coupling reactions between the hydrolysis product of each carbamate and 4-aminophenol in the presence of potassium periodate in an alkaline solution. The optimized method was successfully tested by analyzing each of the carbamates independently, and linear calibration models are described. For the simultaneous determination of the carbamates found in ternary mixtures, kinetic and spectral data were processed either by a three-way data unfolding method or decomposed by trilinear modeling. Subsequently, 10 different RBF-ANN, PARAFAC and NPLS calibration models were constructed with the use of synthetic ternary mixtures of the three carbamates, and were validated with a separate set of mixtures.
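Entry 610 replaces the ANN with support vector regression for the NOx emission model before handing it to ACO. A minimal scikit-learn sketch of such an SVR emission model, trained on hypothetical operating data:

#+begin_src python
# SVR model of NOx vs. operating parameters, after entry 610; the data
# below are random stand-ins, not boiler measurements.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(60, 5))   # stand-in operating parameters
y = 300 + 80 * X[:, 0] - 50 * X[:, 1] + rng.normal(0, 5, 60)  # stand-in NOx

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X, y)
print("NOx prediction at a new operating point:", model.predict(X[:1]))
#+end_src

An optimizer (ACO in entry 610, a GA in entry 611) then searches this fitted model for low-NOx setpoints, exactly as in the surrogate sketches above.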
The performance of the calibration models was then ranked on the basis of several different figures of merit with the aid of the multi-criteria decision making approaches PROMETHEE and GAIA. RBF-ANN and PC-RBF-ANN were the best performing methods, with %Relative Prediction Errors (R-PE) in the 3-4% range and recoveries of about 97%. When compared with other recent studies, it was also noted that RBF-ANN has consistently outperformed the more common prediction methods such as PLS and PCR as well as BP-ANN. The successful RBF-ANN method was then applied for the determination of the three carbamate pesticides in purchased vegetable and fruit samples. (C) 2004 Elsevier B.V. All rights reserved. 2004 * 613(<-654): Spectrophotometric determination of metal ions in electroplating solutions in the presence of EDTA with the aid of multivariate calibration and artificial neural networks Metal ions such as Co(II), Ni(II), Cu(II), Fe(III) and Cr(III), which are commonly present in electroplating baths at high concentrations, were analysed simultaneously by a spectrophotometric method modified by the inclusion of the ethylenediaminetetraacetate (EDTA) solution as a chromogenic reagent. The prediction of the metal ion concentrations was facilitated by the use of an orthogonal array design to build a calibration data set consisting of absorption spectra collected in the 370-760 nm range from solution mixtures containing the five metal ions listed earlier. With the aid of this data set, calibration models were built based on 10 different chemometrics methods such as classical least squares (CLS), principal component regression (PCR), partial least squares (PLS), artificial neural networks (ANN) and others. These were tested with the use of a validation data set constructed from synthetic solutions of the five metal ions. The analytical performance of these chemometrics methods was characterized by relative prediction errors and recoveries (%). On the basis of these results, the computational methods were ranked according to their performance using the multi-criteria decision making procedures, preference ranking organization method for enrichment evaluation (PROMETHEE) and geometrical analysis for interactive aid (GAIA). PLS and PCR models applied to the spectral data matrix with first-derivative pre-treatment were the preferred methods. They, together with ANN-radial basis function (RBF) and PLS, were applied for the analysis of results from some typical industrial samples analysed by the EDTA-spectrophotometric method described. The DPLS, DPCR and ANN-RBF chemometrics methods performed particularly well, especially when compared with some target values provided by industry. (C) 2002 Elsevier Science B.V. All rights reserved. 2002 * 614(<- 4): MODELING ENGINE FUEL CONSUMPTION AND NOx WITH RBF NEURAL NETWORK AND MOPSO ALGORITHM In this study, artificial neural network (ANN) modeling is used to predict the fuel consumption and NOx emission of a four stroke spark ignition (SI) engine. Calibration engineers frequently want to know the responses of an engine for the entire range of operating conditions in order to change engine control parameters in the electronic control unit (ECU), to improve performance and reduce emissions. However, testing the engine for the complete range of operating conditions is a very time- and labor-consuming task. As an alternative, ANN is used to predict fuel consumption and NOx emission.
In the proposed approach, multi-objective particle swarm optimization (MOPSO) is used to determine the weights of radial basis function (RBF) neural networks. The goal is to minimize performance criteria such as the root mean square error (RMSE) and model complexity. A sensitivity analysis is performed on the MOPSO parameters in order to provide better solutions along the optimal Pareto front. In order to select a compromise solution among the obtained Pareto solutions, a fuzzy decision maker is employed. The correlation coefficient R-2 is used to compare the engine responses with those obtained by the proposed approach. 2015 * 615(<- 12): Combining artificial neural network and multi-objective optimization to reduce a heavy-duty diesel engine emissions and fuel consumption The nondominated sorting genetic algorithm II (NSGA-II) is well known for engine optimization problems. Artificial neural networks (ANNs) followed by multi-objective optimization including NSGA-II and the strength Pareto evolutionary algorithm (SPEA2) were used to optimize the operating parameters of a compression ignition (CI) heavy-duty diesel engine. First, a multi-layer perceptron (MLP) network was used for the ANN modeling and the back propagation algorithm was utilized as the training algorithm. Then, two different multi-objective evolutionary algorithms were implemented to determine the optimal engine parameters. The objective of the present study is to decide which algorithm is preferable in terms of performance in the engine emission and fuel consumption optimization problem. 2015 * 616(<-121): Artificial neural network based on genetic algorithm for emissions prediction of a SI gasoline engine This paper proposes hybrid learning of an artificial neural network (ANN) with the nondominated sorting genetic algorithm-II (NSGA-II) to improve accuracy in predicting the exhaust emissions of a four stroke spark ignition (SI) engine. In the proposed approach, the genetic algorithm (GA) determines the initial weights of local linear model tree (LOLIMOT) neural networks. A multi-objective optimization problem is formulated. A sensitivity analysis is performed on the NSGA-II parameters in order to provide better solutions along the optimal Pareto front. Then, a fuzzy decision maker and the technique for order preference by similarity to ideal solution (TOPSIS) are employed to select compromise solutions among the obtained Pareto solutions. The LOLIMOT-GA responses are compared with those provided by radial basis function (RBF) and multilayer perceptron (MLP) neural networks in terms of the correlation coefficient R-2. 2014 * 617(<-192): A hybrid method of modified NSGA-II and TOPSIS to optimize performance and emissions of a diesel engine using biodiesel This paper addresses artificial neural network (ANN) modeling followed by a multi-objective optimization process to determine the optimum biodiesel blends and speed ranges of a diesel engine fueled with castor oil biodiesel (COB) blends. First, an ANN model was developed based on the standard back-propagation algorithm to model and predict the brake power, brake specific fuel consumption (BSFC) and emissions of the engine. In this way, a multi-layer perceptron (MLP) network was used for non-linear mapping between the input and output parameters. Second, a modified NSGA-II, incorporating a diversity-preserving mechanism called the epsilon-elimination algorithm, was used for the multi-objective optimization process.
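Entries 614, 616 and 618 pick one point from the Pareto front with a fuzzy decision maker. A common formulation (assumed here; the cited papers may differ in detail) gives each solution a linear membership per objective and selects the highest aggregate membership:

#+begin_src python
# Fuzzy selection of a compromise Pareto point; the Pareto set below is
# a hypothetical example with all objectives to be minimized.
import numpy as np

def fuzzy_select(F):
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    mu = (f_max - F) / (f_max - f_min)   # membership: 1 = best per objective
    return (mu.sum(axis=1) / mu.sum()).argmax()

pareto = np.array([[210.0, 6.1],         # e.g. (RMSE, model complexity)
                   [225.0, 5.4],
                   [250.0, 5.0]])
print("compromise solution:", pareto[fuzzy_select(pareto)])
#+end_src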
Six objectives, the maximization of brake power and the minimization of BSFC, PM, NOx, CO and CO2, were simultaneously considered in this step. The optimization procedure resulted in non-dominated optimal points which gave an insight into the best operating conditions of the engine. Third, an approach based on the TOPSIS method was used for finding the best compromise solution from the obtained set of Pareto solutions. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 618(<-197): Modeling and multi-objective optimization of a gasoline engine using neural networks and evolutionary algorithms In this paper, a multi-objective particle swarm optimization (MOPSO) algorithm and a nondominated sorting genetic algorithm II (NSGA-II) are used to optimize the operating parameters of a 1.6 L, spark ignition (SI) gasoline engine. The aim of this optimization is to reduce engine emissions in terms of carbon monoxide (CO), hydrocarbons (HC), and nitrogen oxides (NOx), which are the causes of diverse environmental problems such as air pollution and global warming. Stationary engine tests were performed for data generation, covering 60 operating conditions. Artificial neural networks (ANNs) were used to predict exhaust emissions, whose inputs were six engine operating parameters and whose outputs were the three resulting exhaust emissions. The outputs of the ANNs were used to evaluate objective functions within the optimization algorithms: NSGA-II and MOPSO. Then a decision-making process was conducted, using a fuzzy method to select a Pareto solution with which the best emission reductions can be achieved. The NSGA-II algorithm achieved reductions of at least 9.84%, 82.44%, and 13.78% for CO, HC, and NOx, respectively. With the MOPSO algorithm the reductions reached were at least 13.68%, 83.80%, and 7.67% for CO, HC, and NOx, respectively. 2013 * 619(<-613): Multi-objective optimization of heavy-duty diesel engines under stationary conditions New technological developments are helping to control contaminants in diesel engines but, as new degrees of freedom become available, the assessment of optimal values that combine to reduce different emissions has become a difficult task. This paper studies the feasibility of using artificial neural networks (ANNs) as models to be integrated in the optimization of diesel engine settings, with the objective of complying with the increasingly stringent emission regulations while also keeping, or even reducing, the fuel consumption. A large database of stationary engine tests covering a wide range of experimental conditions was used for the development of the ANN models. The optimization was developed within the frame of the European legislation for heavy duty diesel engines. Experimental validation of the optimized results was carried out and compared with the ANN predictions, showing a high level of accuracy, especially for fuel consumption and nitrogen oxides (NOx). 2005 * 620(<-642): Neural network based optimization of drug formulations A pharmaceutical formulation is composed of several formulation factors and process variables. Several responses relating to the effectiveness, usefulness, stability, as well as safety must be optimized simultaneously. Consequently, expertise and experience are required to design acceptable pharmaceutical formulations. A response surface method (RSM) has widely been used for selecting acceptable pharmaceutical formulations.
However, prediction of pharmaceutical responses based on the second-order polynomial equation commonly used in an RSM is often limited to low levels, resulting in poor estimations of optimal formulations. The purpose of this review is to describe the basic concept of the multi-objective simultaneous optimization technique in which an artificial neural network (ANN) is incorporated. ANNs are being increasingly used in pharmaceutical research to predict the nonlinear relationship between causal factors and response variables. The superior function of the ANN approach was demonstrated by optimization for typical numerical examples. (C) 2003 Elsevier B.V. All rights reserved. 2003 * 621(<-668): Formula optimization based on artificial neural networks in transdermal drug delivery The promoting effect of O-ethylmenthol (MET) on the percutaneous absorption of ketoprofen from alcoholic hydrogels was evaluated in rats in vitro and in vivo. Furthermore, a novel simultaneous optimization technique incorporating an artificial neural network (ANN) was applied to the design of a ketoprofen hydrogel containing MET. When a small quantity of MET (0.25-0.5%) was added to the hydrogels, the permeation of ketoprofen increased remarkably compared with the control. On the other hand, little change in permeation was observed when small amounts of menthol were used (<1%), and at least 2% menthol was required to obtain a promoting efficiency comparable with 0.25% MET. The partitioning of ketoprofen from the hydrogel to the skin was improved by the addition of a small amount of MET, whereas the diffusivity of the drug was enhanced at higher concentrations of MET (0.5-1%). For the optimization study, the amounts of ethanol and MET were selected as causal factors. The rate of penetration (R-p), lag time (t(L)) and total irritation score (TIS) were selected as response variables. A set of causal factors and response variables was used as tutorial data for the ANN and fed into a computer. Nonlinear relationships between the causal factors and the response variables were represented well by the response surface predicted by the ANN. The optimization of the ketoprofen hydrogel was performed according to the generalized distance function method. The observed results for R-p and TIS, which had a great influence on effectiveness and safety, coincided well with the predictions. (C) 1999 Elsevier Science B.V. All rights reserved. 1999 * 622(<-672): Artificial neural network as a novel method to optimize pharmaceutical formulations One of the difficulties in the quantitative approach to designing pharmaceutical formulations is understanding the relationship between causal factors and individual pharmaceutical responses. Another difficulty is that a formulation desirable for one property is not always desirable for the other characteristics. This is called a multi-objective simultaneous optimization problem. A response surface method (RSM) has proven to be a useful approach for selecting pharmaceutical formulations. However, prediction of pharmaceutical responses based on the second-order polynomial equation commonly used in RSM is often limited to low levels, resulting in poor estimations of optimal formulations. The aim of this review is to describe the basic concept of the multi-objective simultaneous optimization technique in which an artificial neural network (ANN) is incorporated.
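Entries 620-624 optimize formulations by minimizing a generalized distance between the ANN-predicted responses and the ideal values obtained by optimizing each response individually. A sketch with hypothetical quadratic surrogates in place of the trained ANN:

#+begin_src python
# Generalized distance function method, after entries 620-624; the two
# response surrogates, scaling factors, and bounds are assumed.
import numpy as np
from scipy.optimize import minimize

def r_p(x):   # hypothetical surrogate: rate of penetration (to maximize)
    return 5.0 - 0.1 * (x[0] - 30) ** 2 - 0.4 * (x[1] - 0.4) ** 2

def tis(x):   # hypothetical surrogate: total irritation score (to minimize)
    return 1.0 + 0.05 * x[0] + 2.0 * x[1]

ideal = np.array([r_p([30.0, 0.4]), tis([0.0, 0.0])])  # per-response optima
sd = np.array([1.0, 0.5])                              # assumed scalings

def distance(x):  # squared scaled distance to the ideal response vector
    f = np.array([r_p(x), tis(x)])
    return np.sum(((f - ideal) / sd) ** 2)

sol = minimize(distance, x0=[20.0, 0.3], bounds=[(0, 50), (0, 1)])
print("optimized formulation (hypothetical):", sol.x)
#+end_src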
ANNs are being increasingly used in pharmaceutical research to predict the non-linear relationship between causal factors and response variables. The usefulness and reliability of this ANN approach are demonstrated by the optimization of a ketoprofen hydrogel ointment as a typical numerical example, in comparison with the results obtained with a classical RSM approach. 1999 * 623(<-676): Multi-objective simultaneous optimization based on artificial neural network in a ketoprofen hydrogel formula containing O-ethylmenthol as a percutaneous absorption enhancer The aim of this study was to apply a novel simultaneous optimization technique incorporating an artificial neural network (ANN) to the design of a ketoprofen hydrogel containing O-ethylmenthol (MET). For the model formulae, 12 kinds of ketoprofen hydrogels were prepared. The amounts of ethanol and MET were selected as causal factors. A percutaneous absorption study in vivo in rats was performed and the irritation evoked on rat skin was microscopically judged after the end of the experiments. The rate of penetration (R-p), lag time (t(L)) and total irritation score (TIS) were selected as response variables. A set of causal factors and response variables was used as tutorial data for the ANN and fed into a computer. Nonlinear relationships between the causal factors and the release parameters were represented well by the response surface predicted by the ANN. The optimization of the ketoprofen hydrogel was performed according to the generalized distance function method. The observed results for R-p and TIS, which had a great influence on effectiveness and safety, coincided well with the predictions. It was suggested that the multi-objective simultaneous optimization technique incorporating an ANN was quite useful for optimizing pharmaceutical formulae when pharmaceutical responses are nonlinearly related to the formulae and process variables. (C) 1997 Elsevier Science B.V. 1997 * 624(<-677): Multi-objective simultaneous optimization technique based on an artificial neural network in sustained release formulations The aim of this study was to develop a multi-objective simultaneous optimization technique in which an artificial neural network (ANN) was incorporated. As model formulations, 18 kinds of Trapidil tablets were prepared. The amounts of microcrystalline cellulose and hydroxypropyl methylcellulose and the compression pressure were selected as causal factors. In order to characterize the release profiles of Trapidil, the release order and the rate constant were estimated. A set of release parameters and causal factors was used as tutorial data for the ANN and fed into a computer. Non-linear relationships between causal factors and the release parameters were represented well by the response surface of the ANN. The simultaneous optimization of the sustained-release tablet containing Trapidil was performed by minimizing the generalized distance between the predicted values of each response and the optimized values obtained individually. The optimal formulations gave satisfactory release profiles, since the observed results coincided well with the predicted results. These findings demonstrate that a multi-objective optimization technique incorporating an ANN is quite useful in the optimization of pharmaceutical formulations. (C) 1997 Elsevier Science B.V.
1997 * 625(<- 37): Computational intelligence based designing of microalloyed pipeline steel Computational intelligence based modeling and optimization techniques are employed primarily to investigate the effect of the composition and processing parameters on the mechanical properties of API grade microalloyed pipeline steel and then to design steel having improved performance with respect to its strength, impact toughness and ductility. Artificial Neural Network (ANN) models, capable of prediction and diagnosis in non-linear and complex systems, are used to obtain the relationship of composition and processing parameters with the said mechanical properties. Then the models are used as objective functions for multi-objective genetic algorithms for evolving the tradeoffs between the conflicting objectives of achieving improved strength, ductility and impact toughness. The Pareto optimal solutions are analyzed successfully to study the role of various parameters in designing pipeline steel with such improved performance. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 626(<-153): Informatics based design of prosthetic Ti alloys Informatics based approaches are employed to find a suitable composition of Ti alloy, with high strength, low elastic modulus, adequate biocompatibility and low cost. An artificial neural network, capable of prediction and diagnosis in non-linear and complex systems, is used to obtain the relationship of composition and processing parameters with elastic modulus and yield strength. As the objectives are conflicting, multiobjective optimisation using a genetic algorithm is employed to optimally design titanium alloys suitable for prosthetic applications, using the above models as objective functions for the mechanical properties. The Pareto solutions provide the desired alloy compositions where such properties may be achieved. 2014 * 627(<-244): Modelling and Pareto optimization of mechanical properties of friction stir welded AA7075/AA5083 butt joints using neural network and particle swarm algorithm Friction Stir Welding (FSW) has been successfully used to weld similar and dissimilar cast and wrought aluminium alloys, especially aircraft aluminium alloys that generally present low weldability in the traditional fusion welding process. This paper focuses on the microstructural and mechanical properties of the Friction Stir Welding (FSW) of AA7075-O to AA5083-O aluminium alloys. Weld microstructures, hardness and tensile properties were evaluated in the as-welded condition. Tensile tests indicated that the mechanical properties of the joint were better than those of the base metals. An Artificial Neural Network (ANN) model was developed to simulate the correlation between the Friction Stir Welding parameters and mechanical properties. Performance of the ANN model was excellent and the model was employed to predict the ultimate tensile strength and hardness of the butt joint of AA7075-AA5083 as functions of weld and rotational speeds. Multi-objective particle swarm optimization was used to obtain the Pareto-optimal set. Finally, the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) was applied to determine the best compromise solution. (C) 2012 Elsevier Ltd. All rights reserved.
2013 * 628(<-352): Designing cold rolled IF steel sheets with optimized tensile properties using ANN and GA Artificial neural network (ANN) models, correlating the mechanical properties (yield strength, tensile strength, %elongation and f) of cold rolled interstitial free (IF) steel sheets with compositional and processing parameters, are used to find the importance of different variables. Further, the above models are used as objective functions for evolutionary multi-objective optimization algorithms, evolving tradeoffs that give a range of combinations of strength and ductility. The optimal solutions are also analysed to extract further knowledge on the role of various parameters, which could be used for designing the chemistry of the alloy as well as the process parameters for producing cold rolled strips of IF steel with tailor-made properties. (C) 2011 Elsevier B.V. All rights reserved. 2011 * 629(<-458): An approach for the aging process optimization of Al-Zn-Mg-Cu series alloys A new model based on least square support vector machines (LSSVM) and capable of forecasting the mechanical and electrical properties of Al-Zn-Mg-Cu series alloys has been proposed for the first time. Data mining and artificial intelligence techniques are used to examine the forecasting capability of the model on aluminum alloy data. In order to improve the predictive accuracy and generalization ability of the LSSVM model, a grid algorithm and cross-validation technique have been adopted to determine the optimal hyper-parameters of the LSSVM automatically. The forecasting performance of the LSSVM model and the artificial neural network (ANN) has been compared with the experimental values. The result shows that the LSSVM model provides slightly better capability of generalized prediction compared to a back propagation network (BPN) in combination with the gradient descent training algorithm. Considering its advantages in computation speed, unique optimal solution, and generalization performance, the LSSVM model can therefore be used as an alternative, powerful modeling tool for the aging process optimization of aluminum alloys. Furthermore, a novel methodology hybridizing a nondominated sorting-based multi-objective genetic algorithm (MOGA) and LSSVM has been proposed to make tradeoffs between the mechanical and electrical properties. A desirable nondominated solution set has been obtained and reported. (c) 2008 Elsevier Ltd. All rights reserved. 2009 * 630(<-549): Aging process optimization for a copper alloy considering hardness and electrical conductivity A multi-objective optimization methodology for the aging process parameters is proposed which simultaneously considers the mechanical performance and the electrical conductivity. An optimal model of the aging processes for Cu-Cr-Zr-Mg is constructed using artificial neural networks and genetic algorithms. A supervised artificial neural network (ANN) is considered to model the non-linear relationship between the parameters of the aging treatment and the hardness and conductivity properties of a Cu-Cr-Zr-Mg lead frame alloy. Based on the successfully trained ANN model, a genetic algorithm is adopted as the optimization scheme to optimize the input parameters. The result indicates that an artificial neural network combined with a genetic algorithm is effective for the multi-objective optimization of the aging process parameters. (c) 2006 Elsevier B.V. All rights reserved.
2007 * 631(<-168): DEVELOPMENT OF A CLOSED-LOOP DIAGNOSIS SYSTEM FOR REFLOW SOLDERING USING NEURAL NETWORKS AND SUPPORT VECTOR REGRESSION This study presents an industrial application of artificial neural network (ANN) and support vector regression (SVR) to diagnose and control the reflow soldering process in a closed-loop framework. Reflow soldering is the principal process for the fabrication of a variety of modern computer, communication, and consumer (3C) electronics products. It is important to achieve robust electrical connections without changing the mechanical and electronic characteristics of the components during the reflow soldering process. In this study, a 3(8-4) experimental design was conducted to collect the structured process information. The experimental data were then used for data-training via the ANN and SVR techniques to investigate both the forward and backward relationships between the heating factors and the resultant reflow thermal profile (RTP), so as to develop a closed-loop reflow soldering diagnosis system. The proposed system includes two modules: (1) a forward-flow module used to predict the output elements of the RTP and evaluate its performance based on ANN and a multi-criteria decision-making (MCDM) criterion; (2) a backward-flow module employed to ascertain the set of heating parameter combinations which best fulfill the production requirements of the expected throughput rate, product configuration, and the desired solderability. The efficiency and cost-effectiveness of this methodology were empirically evaluated and the results show promise for improving soldering quality and productivity. Significance: The proposed closed-loop reflow soldering process diagnosis system can predict the output elements of a reflow temperature profile according to process inputs. This system is also able to ascertain the set of heating parameter combinations which best fulfill the production requirements and the desired solderability. The empirical evaluation demonstrates the efficiency and cost-effectiveness for the improvement of soldering quality and productivity. 2014 * 632(<-421): Multi-target spectral moment QSAR versus ANN for antiparasitic drugs against different parasite species There are many pathogenic parasite species with different susceptibility profiles to antiparasitic drugs. Unfortunately, almost all QSAR models predict the biological activity of drugs against only one parasite species. Consequently, predicting the probability with which a drug is active against different species with a single unified model is a goal of major importance. To this end, we use Markov chain theory to calculate new multi-target spectral moments and fit, for the first time, an mt-QSAR model for 500 drugs tested in the literature against 16 parasite species and a further 207 drugs not tested in the literature. The data were processed by linear discriminant analysis (LDA), classifying drugs as active or non-active against the different tested parasite species. The model correctly classifies 311 out of 358 active compounds (86.9%) and 2328 out of 2577 non-active compounds (90.3%) in training series. Overall training performance was 89.9%. Validation of the model was carried out by means of external predicting series. In these series the model correctly classified 157 out of 190 (82.6%) antiparasitic compounds and 1151 out of 1277 non-active compounds (90.1%). Overall predictability performance was 89.2%.
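Entry 632 classifies drugs as active or non-active with linear discriminant analysis over spectral-moment descriptors. A minimal sketch on hypothetical descriptor data:

#+begin_src python
# LDA active/non-active classification, after entry 632; the descriptor
# matrix and labels are random stand-ins for spectral moments.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))          # stand-in spectral-moment descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 200) > 0).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))  # cf. ~90% reported in the entry
#+end_src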
In addition, we developed four types of non-linear artificial neural networks (ANN) and compared them with the mt-QSAR model; the best ANN model had an overall training performance of 87%. The present work reports the first attempt to calculate, within a unified framework, probabilities of antiparasitic action of drugs against different parasite species based on spectral moment analysis. (C) 2010 Elsevier Ltd. All rights reserved. 2010 * 633(<-203): Multi-objective prediction model for the establishment of sputtered GZO semiconducting transparent thin films In recent years, semiconducting transparent thin films have undergone rapid development. Today, excellent conductivity and transmittance are the qualities sought in the manufacturing of these films. Since most manufacturers have the objective of enhancing both conductivity and transmittance, developing a multi-objective model for the prediction of resistivity and transmittance in semiconducting transparent thin films is essential. Taguchi analysis results indicate that among the factors influencing resistivity, radio frequency power (R.F. power) is the most significant, followed by process pressure. Among the factors influencing transmittance, target-to-substrate distance is the most significant, followed by R.F. power. This study proposed a progressive Taguchi-neural network model, combining the Taguchi method with an artificial neural network, for the development of a multi-objective prediction model for use with sputtered gallium zinc oxide (GZO, ZnO:Ga=97:3 wt%) semiconducting transparent thin films. Analysis results show that in Stage 1 the initial network's predictions were ineffective due to insufficient training examples; the refined network in Stage 3, however, provided improved global prediction results. 2013 * 634(<-239): Integration of finite element simulation and intelligent methods for evaluation of thermo-mechanical loads during hard turning process The machined surfaces are mainly affected by thermo-mechanical loads during machining processes. In this regard, thermal loads increase the tensile residual stress and the heat-affected zone, whereas mechanical loads increase the fatigue strength and compressive residual stress on the machined workpiece during the process. Experimental investigation is difficult, and the problem becomes harder still if the aim is to minimize thermal loads while maximizing mechanical loads during the hard turning process. This article presents a hybrid method based on artificial neural networks, multiobjective optimization, and finite element analysis for the evaluation of thermo-mechanical loads during the orthogonal turning of AISI H13-hardened die steel (52HRC). First, using an iterative procedure, the controllable parameters of the simulation (including contact conditions and flow stress) are determined by comparison between finite element and experimental results from the literature. Then, the results of finite element simulation at different cutting conditions and tool geometries were employed for training neural networks by a genetic algorithm. Finally, the functions implemented by the neural networks were considered as objective functions of a nondominated genetic algorithm, and optimal nondominated solution sets were determined at different states of thermal loads (workpiece temperature) and mechanical loads (workpiece effective strain).
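Almost every optimization entry in this block reports a nondominated solution set. The dominance filter at the core of NSGA-II-style methods is short enough to state exactly; a numpy sketch (all objectives minimized):

#+begin_src python
# Extract the Pareto (nondominated) front from a pool of evaluations.
import numpy as np

def pareto_front(F):
    """Boolean mask of nondominated rows of objective matrix F."""
    mask = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        # row j dominates row i if j <= i everywhere and j < i somewhere
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]])
print(F[pareto_front(F)])   # [3, 3] is dominated by [2, 2] and drops out
#+end_src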
Comparison between the obtained results of the nondominated genetic algorithm and the predicted results of the finite element simulation showed that the hybrid technique of finite element method-artificial neural networks-multiobjective optimization provides a robust framework for the machining simulation of AISI H13. 2013 * 635(<-301): Particle swarm optimization in multi-stage operations for operation sequence and DT allocation An improved operation sequence and economic tolerance allocation directly influence product quality and manufacturing costs. The purpose of this study is to generate the optimal operation sequence and allocate economic tolerances to cutting surfaces to achieve the specified quality and minimize the manufacturing costs. Because this type of problem is a multi-objective optimization problem subject to various constraints, it is defined as an NP-hard problem. A three-step procedure is used to solve the problem. First, a mathematical model is developed to define the relationships between manufacturing costs and tolerances. Second, an artificial neural network (ANN) is applied to obtain the best fitting cost-tolerance function. Finally, the formulated mathematical models are solved by using particle swarm optimization (PSO) in order to determine the optimal operation sequence. In addition, both the effectiveness and efficiency of the proposed methodologies are tested and verified for a given workpiece that needs multi-stage operations. The key contributions of this study are the generation of the optimal operation sequence and the effective allocation of the optimal dimensional tolerance (DT) using an advanced computational intelligence algorithm with consideration for multi-stage operations. (C) 2011 Elsevier Ltd. All rights reserved. 2012 * 636(<-319): Constrained optimum surface roughness prediction in turning of X20Cr13 by coupling novel modified harmony search-based neural network and modified harmony search algorithm Nowadays, manufacturers rely on trustworthy methods to predict the optimal cutting conditions that yield the best surface roughness while keeping certain constraint functions below their critical values, under competitive pressure to deliver economical, high-quality products to stringent customers in the shortest time. The present research couples a modified harmony search (MHS) optimization algorithm with modified harmony search-based neural networks (MHSNN) to predict the cutting conditions in longitudinal turning of X20Cr13 leading to optimum surface roughness. To this end, several experiments were carried out on X20Cr13 stainless steel to obtain the data required for training the MHSNN. A feed-forward artificial neural network was utilized to create predictive models of surface roughness and cutting forces from the experimental data, and the MHS algorithm was used to find the constrained optimum of surface roughness. Furthermore, the simple HS algorithm was used to solve the same optimization problem to illustrate the capabilities of the MHS algorithm. The obtained results demonstrate that the MHS algorithm is more effective and authoritative in approaching the global solution compared with the HS algorithm. 2012 * 637(<-375): Modelling Power Consumption in Ball-End Milling Operations Power consumption is a factor of increasing interest in manufacturing due to its obvious impact on production costs and the environment.
The aim of this work is to analyze the influence of process parameters on power consumption in high-speed ball-end milling operations carried out on AISI H13 steel. A total of 300 experiments were carried out in a 3-axis vertical milling center, the Deckel-Maho 105V linear. The power consumed by the spindle and by the X, Y, and Z machine tool axes was measured using four ammeters located in the respective power cables. The data collected were used to develop an artificial neural network (ANN) for predicting power consumption during operations. The results obtained from the ANN are very accurate. Power consumption predictions can help operators to determine the most effective cutting parameters for saving energy and money while bringing the milling process closer to the goal of environmentally sensitive manufacturing, which has become a topic of general importance. 2011 * 638(<- 29): MULTI-OBJECTIVE OPTIMIZATION OF CUT QUALITY CHARACTERISTICS IN CO2 LASER CUTTING OF STAINLESS STEEL In this paper, multi-objective optimization of the cut quality characteristics in CO2 laser cutting of AISI 304 stainless steel is discussed. Three mathematical models for the prediction of cut quality characteristics such as surface roughness, kerf width and heat affected zone were developed using artificial neural networks (ANNs). The laser cutting experiment was planned and conducted according to the Taguchi L-27 orthogonal array and the experimental data were used to train single hidden layer ANNs using the Levenberg-Marquardt algorithm. The ANN mathematical models were developed considering laser power, cutting speed, assist gas pressure, and focus position as the input parameters. The multi-objective optimization problem was formulated using the weighting sum method, in which the weighting factors used to combine cut quality characteristics into the single objective function were determined using the analytic hierarchy process method. 2015 * 639(<-255): MULTI-OBJECTIVE OPTIMIZATION OF SURFACE ROUGHNESS AND MATERIAL REMOVAL RATE IN CO2 LASER CUTTING USING ANN AND NSGA-II The prediction of optimal laser cutting conditions for satisfying different requirements is of great importance in process planning. In this paper, multi-objective optimization of CO2 laser cutting of AISI 304 stainless steel using the non-dominated sorting genetic algorithm (NSGA-II), with surface roughness and material removal rate (MRR) as the objective functions, is presented. The laser cutting experiments were conducted based on Taguchi's experimental design using an L-27 orthogonal array by varying the laser power, cutting speed, assist gas pressure and focus position at three levels. Using these experimental data, mathematical models of surface roughness and kerf width were developed using an artificial neural network (ANN). The latter ANN model was then used to calculate the MRR, considering that the MRR is a function of cutting speed, workpiece thickness and kerf width. On the basis of a computer code written for the ANN function models, the optimization problem was formulated and solved using NSGA-II. The obtained optimal solution set was plotted as the Pareto optimal front. It was observed that the functional dependence between surface roughness and material removal rate is nonlinear and can be expressed with a second degree polynomial.
2013 * 640(<-165): Artificial intelligence based modeling and optimization of heat affected zone in Nd:YAG laser cutting of duralumin sheet Duralumin is an aluminium alloy with unique properties such as a high strength-to-weight ratio, high resistance to corrosion and light weight, making it an in-demand alloy in various sectors such as spacecraft, marine, chemical industries, construction and automobiles. These applications require very precise and complex shapes which may not be obtained with conventional machining. Pulsed Nd:YAG laser cutting may be used to fulfill these objectives by using optimum settings of process parameters. The present research paper experimentally investigates the modeling and optimization of the heat affected zone in the pulsed Nd:YAG laser cutting of duralumin sheet, with the aim of minimizing the heat affected zone. The quality is improved by the proper control of different process parameters such as gas pressure, pulse width, pulse frequency and scanning speed. Artificial intelligence (AI) algorithms have been used to solve many engineering problems successfully through the development of Genetic Algorithm (GA), Fuzzy Logic (FL) and Artificial Neural Network (ANN) systems. The optimization of the heat affected zone has been carried out using a hybrid approach of Multiple Regression Analysis (MRA) and GA. In this methodology, a second-order regression model was developed using MRA from the experimental data obtained with an L-27 orthogonal array (OA). This equation was then used as the objective function in GA-based optimization. The significant factors have been identified, with further discussion of their effect on the heat affected zone. 2014 * 641(<-456): Modeling and optimization on Nd:YAG laser turned micro-grooving of cylindrical ceramic material Nd:YAG laser turning is a new technique for manufacturing micro-grooves on the cylindrical surface of ceramic materials needed for present day precision industries. The importance of laser turning has directed researchers to investigate how accurately micro-grooves can be produced in cylindrical parts. In this paper, laser turning process parameters have been determined for producing square micro-grooves on a cylindrical surface. The experiments have been performed based on statistical five level central composite design techniques. The effects of the laser turning process parameters, i.e. lamp current, pulse frequency, pulse width, cutting speed (revolutions per minute, rpm) and assist gas pressure, on the quality of the laser turned micro-grooves have been studied. A predictive model for laser turning process parameters is created using a feed-forward artificial neural network (ANN) technique, utilizing experimental observation data based on response surface methodology (RSM). The optimization problem has been constructed based on RSM and solved using a multi-objective genetic algorithm (GA). The neural network coupled with the genetic algorithm can be effectively utilized to find the optimum parameter values for a specific laser micro-turning condition in ceramic materials. The optimal process parameter settings were found to be a lamp current of 19 A, pulse frequency of 3.2 kHz, pulse width of 6% duty cycle, cutting speed of 22 rpm and assist air pressure of 0.13 N/mm(2) for achieving the predicted minimum deviations of upper width of -0.0101 mm, lower width 0.0098 mm and depth -0.0069 mm of the laser turned micro-grooves. (C) 2009 Elsevier Ltd. All rights reserved.
* 642(<-488): Neural Network Modeling and Particle Swarm Optimization (PSO) of Process Parameters in Pulsed Laser Micromachining of Hardened AISI H13 Steel This article focuses on modeling and optimizing process parameters in pulsed laser micromachining. The use of continuous wave or pulsed lasers to perform micromachining of 3-D geometrical features on difficult-to-cut metals is a feasible option due to the advantages offered, such as tool-free and high-precision material removal, over conventional machining processes. Despite these advantages, pulsed laser micromachining is complex and highly dependent upon material absorption, reflectivity, and ablation characteristics. Selection of process operational parameters is highly critical for successful laser micromachining. A set of designed experiments is carried out in a pulsed Nd:YAG laser system using AISI H13 hardened tool steel as work material. Several T-shaped deep features with straight and tapered walls have been machined as representative mold cavities on the hardened tool steel. The relation between process parameters and quality characteristics has been modeled with artificial neural networks (ANN). Predictions with ANNs have been compared with experimental work. Multiobjective particle swarm optimization (PSO) of process parameters for minimum surface roughness and minimum volume error is carried out. These results show that the proposed models and the swarm optimization approach are suitable to identify optimum process settings. 2009 * 643(<-588): Multi criteria optimization of laser percussion drilling process using artificial neural network model combined with genetic algorithm This is a study of laser percussion drilling optimization by combining the neural network method with the genetic algorithm. First, optimum input parameters of the process were obtained in order to optimize every single output parameter (response) of the process regardless of their effect on each other (single criterion optimization). Then, optimum input parameters were obtained in order to optimize the effect of all output parameters in a multicriteria manner. The artificial neural network (ANN) method was employed to develop an experimental model of the process according to the experimental results. Then optimum input parameters (peak power, pulse width, pulse frequency, number of pulses, assist gas pressure, and focal plane position) were specified by using the genetic algorithm (GA). The output parameters include the hole entrance diameter, circularity of hole entrance and hole exit, and hole taper. The tests were carried out on mild steel EN3 sheets, with 2.5 mm thickness. The sheets were drilled by a 400 W pulsed Nd:YAG laser emitting at 1.06 mu m wavelength. Oxygen was employed as the assist gas. Considering the accuracy of the optimum numerical results and the high capability of the neural network in modeling, this method is reliable and precise and confirms the qualitative results of previous studies. As a result, one can use this method to optimally adjust input parameters of the process in multicriteria optimization mode, which indicates that the method can serve as a substitute approach for optimizing the laser percussion drilling process. 2006 * 644(<-436): Determination of Optimal Pulse Metal Inert Gas Welding Parameters with a Neuro-GA Technique Optimization of a manufacturing process is a rigorous task because it has to take into account all the factors that influence the product quality and productivity.
Welding is a multi-variable process influenced by many process uncertainties. Therefore, the optimization of welding process parameters is considerably complex. Advancements in computational methods, evolutionary algorithms, and multiobjective optimization methods create ever more effective solutions to this problem. This work concerns the selection of the optimal parameter setting of the pulsed metal inert gas welding (PMIGW) process for any desired output parameter setting. Six process parameters, namely pulse voltage, background voltage, pulse frequency, pulse duty factor, wire feed rate and table feed rate, were used as input variables, and the strength of the welded plate, weld bead geometry, transverse shrinkage, angular distortion and deposition efficiency were considered as the output variables. Artificial neural network (ANN) models were used for mapping input and output parameters. A neuro genetic algorithm (Neuro-GA) technique was used to determine the optimal PMIGW process parameters. Experimental results show that the designed parameter setting of the PMIGW process, which was obtained from Neuro-GA optimization, indeed produced the desired weld quality. 2010 * 645(<-524): Multi-criteria optimization in nonlinear predictive control The multi-criteria predictive control of nonlinear dynamical systems based on Artificial Neural Networks (ANNs) and genetic algorithms (GAs) is considered. The ANNs are used to determine process models at each operating level; the control action is obtained by minimizing a set of control objectives which are functions of the predicted future output and the future control actions, taking into account constraints on the input signal. An aggregative method based on the Non-dominated Sorting Genetic Algorithm (NSGA) is applied to solve the multi-criteria optimization problem. The results obtained with the proposed control scheme are compared in simulation to those obtained with the multi-model control approach. (c) 2007 IMACS. Published by Elsevier B.V. All rights reserved. 2008 * 646(<-540): Optimization of the characteristic parameters in milling using the PSO evolution technique The selection of machining parameters is an important step in process planning; therefore, a new evolutionary computation technique has been developed to optimize the machining process. In this paper Particle Swarm Optimization (PSO) is used to efficiently optimize the machining parameters simultaneously in milling processes where multiple conflicting objectives are present. First, an artificial neural network (ANN) predictive model is used to predict the cutting forces during machining and then the PSO algorithm is used to obtain the optimum cutting speeds and feed rates. The goal of the optimization is to determine the objective function maximum (the predicted cutting-force surface) by considering the cutting constraints. During optimization the particles fly intelligently in the solution space and search for optimal cutting conditions according to the strategies of the PSO algorithm. The results showed that an integrated system of neural networks and swarm intelligence is an effective method for solving multi-objective optimization problems. The high accuracy of the results within a wide range of machining parameters indicates that the system can be practically applied in industry.
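Entries 642 and 646 both drive an ANN surrogate with particle swarm optimization. Before the abstract of entry 646 continues, here is a minimal sketch of the canonical PSO velocity and position update; the inertia/acceleration constants and the `predicted_force` objective are illustrative assumptions, not the settings from either study.

#+BEGIN_SRC python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in objective over normalized (speed, feed) in [0,1]^2.
def predicted_force(x):
    return (x[..., 0] - 0.7) ** 2 + (x[..., 1] - 0.3) ** 2

n, dim = 30, 2
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration constants
x = rng.random((n, dim))             # particle positions
v = np.zeros((n, dim))               # particle velocities
pbest = x.copy()                     # personal best positions
pbest_f = predicted_force(x)
gbest = pbest[np.argmin(pbest_f)]    # global best position

for it in range(100):
    r1, r2 = rng.random((2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)     # keep particles inside the box
    f = predicted_force(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best setting found:", np.round(gbest, 3))
#+END_SRC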
The simulation results show that, compared with genetic algorithms (GAs) and simulated annealing (SA), the proposed algorithm can improve the quality of the solution while speeding up the convergence process. The new computational technique has several advantages and benefits and is suitable for use when combined with ANN-based models where no explicit relation between the inputs and the outputs is available. This research opens the door for a new class of optimization techniques based on evolutionary computation in the area of machining. (C) 2007 Journal of Mechanical Engineering. All rights reserved. 2007 * 647(<- 55): Applying the ensemble artificial neural network-based hybrid data-driven model to daily total load forecasting Accurate electricity forecasting has become a very important research field for high-efficiency electricity production. However, hybrid data-driven models for load forecasting have rarely been studied. This paper presents a novel hybrid data-driven "PEK" model for predicting the daily total load. The proposed hybrid model is mainly constructed from various function approximators, comprising partial mutual information (PMI)-based input variable selection (IVS), ensemble artificial neural network-based output estimation and K-nearest neighbor regression-based output error estimation. The PMI-based IVS algorithm is used to select the input variables, resulting in a good compromise between the parsimony and adequacy of the input information. After that, the topology and parameter calibration of the PEK model are implemented by the NSGA-II multi-objective optimization algorithm. The electricity load demands from years 2010 to 2012 of the Shuyang hydrothermal station are chosen as a case study to verify the performance of the PEK model. Simulation results show that this model obtains significantly better accuracy in the prediction of daily total load. 2015 * 648(<-110): A Methodological Study for Optimizing Material Selection in Sustainable Product Design A computational intelligence-based identification of the properties of maximally sustainable materials for a given application, as derived from key properties of existing candidate materials, is put forward. The correlation surface between material properties (input) and environmental impact (EI) values (output) of the candidate materials is initially created using general regression (GR) artificial neural networks (ANNs). Genetic algorithms (GAs) are subsequently employed for swiftly identifying the minimum point of the correlation surface, thus exposing the properties of the maximally sustainable material. The ANN is compared to and found to be more accurate than classic polynomial regression (PR) interpolation/prediction, with sensitivity and multicriteria analyses further confirming the stability of the proposed methodology under variations in the properties of the materials as well as the relative importance values assigned to the input properties. A nominal demonstration concerning material selection for manufacturing maximally sustainable liquid containers is presented, showing that by appropriately picking the pertinent input properties and the desired material selection criteria, the proposed methodology can be applied to a wide range of material selection tasks. 2014 * 649(<-163): WELDING PROCESS OPTIMIZATION WITH ARTIFICIAL NEURAL NETWORK APPLICATIONS Correct detection of input and output parameters of a welding process is significant for successful development of an automated welding operation.
In the welding process literature, we observe that output parameters are predicted according to given input parameters. Extending previous efforts, this paper presents a new modeling approach for the prediction and classification of welding parameters. Three different models are developed for a critical welding process based on Artificial Neural Networks (ANNs): (i) output parameter prediction, (ii) input parameter prediction (reverse application of the output prediction model) and (iii) classification of products. In this study, we first use Pareto Analysis for determining uncontrollable input parameters of the welding process based on expert views. With the help of these analyses, 9 uncontrollable parameters are determined among 22 potential parameters. Then, the welding process of ammunition is modeled as a multi-input multi-output process with 9 input and 3 output parameters. The 1st model predicts the values of output parameters according to given input values. The 2nd model predicts the correct input parameter combination for a defect-free weld operation, and the 3rd model is used to classify the products as defected or defect-free. The 3rd model is also used for validation of the results obtained by the 1st and 2nd models. A high level of performance is attained by all the methods tested in this study. In addition, the product is a strategic ammunition in the armed forces inventory which is manufactured in a limited number of countries in the world. Before application of this study, the welding process of the product could not be carried out in a systematic way. The process was conducted by a trial-and-error approach, changing input parameter values at each operation, which incurred considerable cost. With the help of this study, the best parameter combination is found, tested and validated with ANNs, and operation costs are reduced by 30%. 2014 * 650(<-615): A hybrid analytical-neural network approach to the determination of optimal cutting conditions In this contribution, a new hybrid technique for complex optimization of cutting parameters is proposed. The developed approach is based on the maximum production rate criterion and incorporates 10 technological constraints. It describes the multi-objective optimization of cutting conditions by means of the artificial neural network (ANN) and the OPTIS routine, taking into consideration the technological, economic and organizational limitations. The analytical module OPTIS selects the optimum cutting conditions from commercial databases with respect to minimum machining costs. By selection of optimum cutting conditions, it is possible to reach a favourable ratio between low machining costs and high productivity, taking into account the given limitations of the cutting process. To reach higher precision of the predicted results, a hybrid optimization algorithm is developed and presented to ensure simple, fast and efficient optimization of all important turning parameters. Experimental results show that the proposed optimization algorithm for solving the nonlinear-constrained programming problems (NCP) is both effective and efficient, and can be integrated into an intelligent manufacturing system for solving complex machining optimization problems. To demonstrate the procedure and performance of the proposed approach, an illustrative example is discussed in detail. (C) 2004 Elsevier B.V. All rights reserved. 2004
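Almost every entry in this stretch fits a small multilayer perceptron to experimental data before optimizing over it. Below is a minimal one-hidden-layer regression MLP trained by plain gradient descent on synthetic data; the architecture, data and plain-gradient training are illustrative assumptions (the papers mostly use Levenberg-Marquardt or Bayesian-regularized training, and their own experimental data sets).

#+BEGIN_SRC python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for experimental data: 2 inputs -> 1 response.
X = rng.random((60, 2))
y = np.sin(3 * X[:, :1]) + X[:, 1:] ** 2 + rng.normal(0, 0.02, (60, 1))

h = 8                                    # hidden neurons
W1, b1 = rng.normal(0, 0.5, (2, h)), np.zeros(h)
W2, b2 = rng.normal(0, 0.5, (h, 1)), np.zeros(1)
lr = 0.05

for epoch in range(5000):
    A = np.tanh(X @ W1 + b1)             # hidden layer activations
    yhat = A @ W2 + b2                   # linear output layer
    err = yhat - y
    # Backpropagation of the mean squared error
    gW2 = A.T @ err / len(X); gb2 = err.mean(0)
    dA = (err @ W2.T) * (1 - A ** 2)     # tanh' = 1 - tanh^2
    gW1 = X.T @ dA / len(X); gb1 = dA.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final training MSE:", float((err ** 2).mean()))
#+END_SRC

Once trained, such a network is exactly the kind of cheap surrogate that the GA/PSO sketches above search in place of running new experiments.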
* 651(<- 74): ARTIFICIAL NEURAL NETWORK MODELING AND OPTIMIZATION OF HALL-HEROULT PROCESS FOR ALUMINUM PRODUCTION Experience in applying a hybrid artificial neural network (ANN)-genetic algorithm for modeling and optimizing the Hall-Heroult process for aluminum extraction is described in this study. During the modeling stage, the most important and effective process variables, including temperature and cell voltage, metal and bath heights, purity of CaF2 and Al2O3, and bath ratio, are chosen as input variables, whilst the outputs of the model are product purity, ampere efficiency, and product rate. During three years of operation, 19 points were selected for building and training, 7 points for testing, and 7 data points for validating the model. Results show that a feed-forward Artificial Neural Network (ANN) model with 3 neurons in the hidden layer can acceptably simulate the mentioned output variables with Mean Squared Errors (MSE) of 0.002%, 0.108% and 0.407%, respectively. Utilizing the validated model and multi-objective genetic algorithms, aluminum purity and the rate of production are maximized by manipulating decision variables. Results show that setting these decision variables at the optimal values can increase the metal purity, ampere efficiency, and product rate by approximately 0.007%, 0.185%, and 20 kg/h, respectively. 2015 * 652(<-551): Multi-step-ahead neural networks for flood forecasting A reliable flood warning system depends on efficient and accurate forecasting technology. A systematic investigation of three common types of artificial neural networks (ANNs) for multi-step-ahead (MSA) flood forecasting is presented. The operating mechanisms and principles of the three types of MSA neural networks are explored: multi-input multi-output (MIMO), multi-input single-output (MISO) and serial-propagated structure. The most commonly used multi-layer feed-forward networks with the conjugate gradient algorithm are adopted for application. Rainfall-runoff data sets from two watersheds in Taiwan are used separately to investigate the effectiveness and stability of the neural networks for MSA flood forecasting. The results indicate consistently that, even though MIMO is the most common architecture presented in ANNs, it is less accurate because its multiple objectives (predictions over many time steps) must be optimized simultaneously. Both MISO and serial-propagated neural networks are capable of performing accurate short-term (one- or two-step-ahead) forecasting. For long-term (more than two steps) forecasts, only the serial-propagated neural network could provide satisfactory results in both watersheds. The results suggest that the serial-propagated structure can help in improving the accuracy of MSA flood forecasts. 2007 * 653(<-664): An artificial neural network approach to multicriteria model selection This paper presents an intelligent decision support system based on neural network technology for multicriteria model selection. The paper categorizes problems into simple, utility/value, interactive and outranking types according to six basic features. The classification of the problem is realized based on a two-step neural network analysis applying the back-propagation algorithm. The first Artificial Neural Network (ANN) model, used for the selection of an appropriate solving method cluster, consists of one hidden layer. The six input neurons of the model represent the MCDM problem features while the two output neurons represent the four MCDM categories.
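Returning to the serial-propagated scheme of entry 652: its essence is to apply a one-step-ahead model recursively, feeding each prediction back in as an input. The sketch below illustrates this with a least-squares linear autoregression as a deliberately simple stand-in for the feed-forward networks used in the flood study; the lag length and the synthetic series are assumptions.

#+BEGIN_SRC python
import numpy as np

rng = np.random.default_rng(4)
series = np.sin(np.arange(300) * 0.15) + rng.normal(0, 0.05, 300)

# Fit a one-step model: predict s[t+lag] from the previous `lag` values.
lag = 6
n = len(series) - lag
X = np.column_stack([series[i:i + n] for i in range(lag)])
y = series[lag:lag + n]
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(n)]), y, rcond=None)

def one_step(window):
    return float(np.append(window, 1.0) @ coef)

# Serial propagation: forecast 10 steps ahead by sliding the window
# forward and reusing each prediction as the newest "observation".
window = list(series[-lag:])
forecast = []
for _ in range(10):
    nxt = one_step(np.array(window))
    forecast.append(nxt)
    window = window[1:] + [nxt]

print(np.round(forecast, 3))
#+END_SRC

The MIMO alternative discussed in the abstract would instead train one model emitting all 10 steps at once, which is exactly the simultaneous multi-objective fit the study found less accurate.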
The second ANN model is used for the selection of a specific method within the selected cluster. 2001 * 654(<- 38): Multiobjective optimization of friction welding of UNS S32205 duplex stainless steel The present study optimizes the process parameters for friction welding of duplex stainless steel (DSS UNS S32205). Experiments were conducted according to a central composite design. Process variables, as inputs of the neural network, included friction pressure, upsetting pressure, speed and burn-off length. Tensile strength and microhardness were selected as the outputs of the neural networks. The weld metals had higher hardness and tensile strength than the base material due to grain refinement, which caused failures away from the joint interface during tensile testing. Due to the shorter heating time, no secondary phase intermetallic precipitation was observed in the weld joint. A multi-layer perceptron neural network was established for modeling purposes. Five different training algorithms, belonging to three classes, namely gradient descent, genetic algorithm and Levenberg-Marquardt, were used to train the artificial neural network. The optimization was carried out using the particle swarm optimization method. A confirmation test was carried out by setting the optimized parameters. In the confirmation test, the maximum tensile strength and maximum hardness obtained were 822 MPa and 322 Hv, respectively. The metallurgical investigations revealed that the base metal, partially deformed zone and weld zone maintain an austenite/ferrite proportion of 50:50. Copyright (C) 2015, China Ordnance Society. Production and hosting by Elsevier B.V. All rights reserved. 2015 * 655(<-234): Influence of Ultrasonic and Microwave Irradiation on Cation Exchange Properties of Clay Material This study deals with optimization of the clay activation process using artificial neural network models and a multi-objective optimization function. Different artificial neural network models were used to describe the relation between clay sorption capacity and the activation treatment process (power and time of clay exposure to ultrasonic and/or microwave irradiation). Two methodologies (feed-forward and cascade-forward) in combination with five different training algorithms (random order incremental training with learning functions, resilient backpropagation, one-step secant backpropagation, Levenberg-Marquardt backpropagation, Bayesian regularization backpropagation) were applied in order to obtain an optimal artificial neural network model. The optimal artificial neural network model showed good predictive ability (relative error 6.02% based on an external validation data set). An in-house developed multi-objective criteria function was used in combination with the developed artificial neural network model, and the optimal activation treatment was determined (5 minutes of 120 W ultrasonic and 60 W microwave treatment), increasing the sorption capacity by 15%. 2013 * 656(<-236): Artificial Neural Network Modeling of ECAP Process Equal channel angular pressing (ECAP) is a type of severe plastic deformation procedure for achieving ultra-fine grain structures. This article investigates artificial neural network (ANN) modeling of the ECAP process based on experimental and three-dimensional (3D) finite element methods (FEM). In order to do so, an ECAP die was designed and manufactured with a channel angle of 90 degrees and an outer corner angle of 15 degrees. Commercially pure aluminum was ECAPed and the obtained data was used for validating the FEM model.
After confirming the validity of the model with experimental data, a number of parameters are considered. These include the die channel angles (the angle between the channels, phi, and the outer corner angle, psi) and the number of passes, which were subsequently used for training the ANN. Finally, experimental and numerical data were used to train neural networks. As a result, it is shown that a feed forward back propagation ANN can be used for efficient die design and process determination in ECAP. Comparisons show satisfactory agreement between the results. 2013 * 657(<-286): Bayesian regularization-based Levenberg-Marquardt neural model combined with BFOA for improving surface finish of FDM processed part Fused deposition modeling has a complex part building mechanism, making it difficult to obtain a reasonably good functional relationship between responses and process parameters. To solve this problem, the present study proposes the use of an artificial neural network (ANN) model to determine the relationship between five input parameters, such as layer thickness, orientation, raster angle, raster width, and air gap, and three output responses, viz. roughness of the top, bottom, and side surfaces of the built part. Bayesian regularization is adopted for selection of the optimum network architecture because of its ability to fix the number of network parameters irrespective of network size. The ANN model is trained using the Levenberg-Marquardt algorithm, and the resulting network has good generalization capability that eliminates the chance of overfitting. Finally, the bacterial foraging optimization algorithm, which attempts to model the individual and group behavior of Escherichia coli bacteria as a distributed optimization process, is used to suggest a theoretical combination of parameter settings to improve the overall roughness of the part. This paper also investigates the use of a chaotic time series sequence known as the logistic function and demonstrates its superiority in terms of convergence and solution quality. 2012 * 658(<-317): Artificial Neural Network-Based Multiobjective Optimization of Mechanical Alloying Process for Synthesizing of Metal Matrix Nanocomposite Powder The aim of this article was to optimize the mechanical alloying process for synthesizing Al-8vol%SiC nanocomposite powders through an artificial neural network-based multiobjective optimization procedure. First, a suitably trained multi-layer perceptron (MLP) neural network was established for modeling purposes. Process variables as inputs of the network included milling time, milling speed, and ball-to-powder weight ratio. Parameters of the nanocomposite as outputs of the network were the crystallite size and the lattice strain of the aluminum matrix. The optimization was carried out using two methods: gradient descent and pattern search. The aim of the optimization was to determine the minimum crystallite size and the maximum lattice strain of the aluminum matrix that could be obtained by regulating the mechanical alloying process variables. The response surfaces and the contour plots showed that the combination of the artificial neural network (ANN) and the optimization procedure was able to optimize the mechanical alloying process to synthesize the Al-8vol%SiC nanocomposite. 2012 * 659(<-385): Artificial Neural Network Modeling of Forming Limit Diagram The forming limit diagram (FLD) provides the limiting strains a sheet metal can sustain whilst being formed.
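Entry 658 optimizes over its ANN response surface with gradient descent and pattern search. Pattern (compass) search is worth a sketch because it needs no derivatives of the surrogate at all: try steps along each coordinate, keep any improvement, and shrink the step when none helps. The objective below is an illustrative stand-in, not the fitted model from the paper.

#+BEGIN_SRC python
import numpy as np

# Illustrative stand-in for an ANN response surface over two
# normalized milling parameters in [0, 1]^2.
def response(x):
    return (x[0] - 0.3) ** 2 + 2 * (x[1] - 0.8) ** 2

x = np.array([0.5, 0.5])     # initial parameter guess
step = 0.25
fx = response(x)
while step > 1e-4:
    improved = False
    for d in np.vstack([np.eye(2), -np.eye(2)]):   # +/- coordinate moves
        trial = np.clip(x + step * d, 0.0, 1.0)
        ft = response(trial)
        if ft < fx:
            x, fx, improved = trial, ft, True
            break
    if not improved:
        step *= 0.5          # shrink the pattern when no move helps

print("minimum near:", np.round(x, 3), "value:", round(fx, 5))
#+END_SRC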
In this article, the formability of Ti6Al4V titanium alloy and Al6061-T6 aluminum alloy sheets is investigated experimentally using hydroforming deep drawing. Hecker's simplified technique [1] was used to obtain experimental FLDs for these sheet materials. Artificial neural network (ANN) modeling of the process based on experimental results is introduced to predict FLDs. It is shown that a feed forward back propagation (BP) ANN can predict the FLDs, thereby indicating the potential of ANNs as a strong tool for simulating the process. Comparisons show good agreement between the experimental and neural network results. 2011 * 660(<-438): FEM and ANN Analysis in Fine-Blanking Process Fine-blanking (FB) is an effective and economical shearing process that offers a precise and clean cutting-edge finish, eliminates unnecessary secondary operations, and increases quality. In the traditional blanking product development paradigm, the design of the formed product and tooling is usually based on know-how and experience, which are generally obtained through long years of apprenticeship and skilled craftsmanship. In this study, the possibility of using the finite element method (FEM) together with artificial neural networks (ANN) was investigated to analyze the fine-blanking process. Finite element analysis was used to simulate the process with an isotropic elastic-plastic material model. The results compare well with experimental results available in the literature. After confirming the validity of the model with experimental data, a number of parameters were considered, such as the V-ring height effect, punch and holder force on die-roll, hydrostatic pressure status as an important factor in increasing the burnish zone, and part accuracy and radial stress status as factors in die erosion; these were also used for training the ANN. Finally, numerical data were used to train neural networks. The Levenberg-Marquardt (LM) algorithm with three neurons in the hidden layer (LM-3) appeared to be the most optimal topology and gives the best results. It was found that the coefficient of multiple determination (R2 value) between the FEM and ANN predicted data is equal to about 0.999 for the size of die-roll, indicating the potential of FEM and ANN as a powerful design tool for the fine-blanking process. 2010 * 661(<- 15): Analyzing the effect of various soil properties on the estimation of soil specific surface area by different methods Depending on the method used, measuring the specific surface area (SSA) can be expensive and time consuming, and only a limited number of studies have been conducted to predict SSA from soil properties. In this study, 127 soil sample data were gathered from the available literature. The data set included SSA values and some of the soil physical and chemical index properties. In the first step, linear regression, non-linear regression, regression trees, artificial neural networks, and a multi-objective group method of data handling were used to develop seven pedotransfer functions (PTFs) for the purpose of finding the best method for predicting SSA. Results showed that the artificial neural networks performed better than the other methods used in the development and validation of PTFs. In the second step, to find the best set of input variables for predicting SSA and to investigate the importance of the input parameters, the artificial neural networks were further used and 25 models were developed.
The results showed that the PTF containing the input variables of sand%, clay%, plastic limit, liquid limit, and free swelling index performed better than the other PTFs. This can be attributed to the close relation of the free swelling index and Atterberg limits with the soil clay mineralogy, which is one of the most important factors controlling SSA. The sensitivity analysis showed that the greatest sensitivity coefficients were found for the cation exchange capacity, clay content, liquid limit, and plasticity index in different models. Overall, the artificial neural networks method was suitable for predicting SSA from soil variables. (C) 2015 Elsevier B.V. All rights reserved. 2015 * 662(<-117): Optimization of response surface and neural network models in conjugation with desirability function for estimation of nutritional needs of methionine, lysine, and threonine in broiler chickens The optimization algorithm of a model may have significant effects on the final optimal values of nutrient requirements in poultry enterprises. In poultry nutrition, the optimal values of dietary essential nutrients are very important for feed formulation to optimize profit through minimizing feed cost and maximizing bird performance. This study was conducted to introduce a novel multi-objective algorithm, the desirability function, for optimizing bird response models based on response surface methodology (RSM) and artificial neural networks (ANN). The growth databases on the central composite design (CCD) were used to construct the RSM and ANN models, and optimal values for 3 essential amino acids, namely lysine, methionine, and threonine, in broiler chicks have been reevaluated using the desirability function in both analytical approaches from 3 to 16 d of age. Multi-objective optimization results showed that the most desirable function was obtained for the ANN-based model (D = 0.99), where the optimal levels of digestible lysine (dLys), digestible methionine (dMet), and digestible threonine (dThr) for maximum desirability were 13.2, 5.0, and 8.3 g/kg of diet, respectively. However, the optimal levels of dLys, dMet, and dThr in the RSM-based model were estimated at 11.2, 5.4, and 7.6 g/kg of diet, respectively. This research documented that the application of ANN in the broiler chicken model along with a multi-objective optimization algorithm such as the desirability function could be a useful tool for optimization of dietary amino acids in fractional factorial experiments, in which the use of the global desirability function may be able to overcome the underestimations of dietary amino acids resulting from the RSM model. 2014 * 663(<-196): Crashworthiness design of multi-component tailor-welded blank (TWB) structures Crashworthiness of tailor-welded blank (TWB) structures is of increasing concern in lightweight vehicle design. Although multiobjective optimization (MOO) has to a considerable extent been successfully applied to enhance crashworthiness of vehicular structures, the majority of existing designs were restricted to single or uniform thin-walled components. Limited attention has been paid to such non-uniform components as TWB structures.
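Entry 662 rests on the desirability-function idea: map each predicted response onto a 0-1 scale and combine the scales into one global score D. A minimal Derringer-style sketch follows; the responses, targets and bounds are illustrative assumptions, not the fitted values from the poultry study.

#+BEGIN_SRC python
import numpy as np

def d_max(y, lo, hi, s=1.0):
    """Desirability for a larger-the-better response."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

def d_min(y, lo, hi, s=1.0):
    """Desirability for a smaller-the-better response."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s

# Suppose a fitted model predicts weight gain (maximize) and feed cost
# (minimize) for four candidate diets:
gain = np.array([510., 560., 585., 540.])
cost = np.array([2.1, 2.6, 3.0, 2.2])

# Global desirability D = geometric mean of the individual scales.
D = (d_max(gain, 500, 600) * d_min(cost, 2.0, 3.2)) ** 0.5
print("desirabilities:", np.round(D, 3), "-> pick diet", int(np.argmax(D)))
#+END_SRC

The geometric mean is deliberate: if any single response is fully undesirable (d = 0), the whole candidate scores D = 0, which a plain weighted average would not enforce.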
In this paper, MOO of a multi-component TWB structure that involves both the B-pillar and inner door system subjected to a side impact is proposed, considering the structural weight, intrusive displacements and velocity of the B-pillar component as objectives, and the thicknesses at different positions and the height of the welding line of the B-pillar as the design variables. The MOO problem is formulated using a range of different metamodeling techniques, including response surface methodology (RSM), artificial neural network (ANN), radial basis functions (RBF), and Kriging (KRG), to approximate the sophisticated nonlinear responses. By comparison, it is found that the constructed metamodels based upon the radial basis function (RBF, especially the multi-quadric model, namely RBF-MQ) fit the design of experiment (DoE) checking points well and are employed to carry out the design optimization. The performance of the TWB B-pillar and inner door panel system can be improved by optimizing the thickness of the different parts and the height of the welding line. This study demonstrated that the multi-component TWB structure can be optimized to further enhance crashworthiness and reduce weight, offering a new class of structural/material configuration for lightweight design. 2013 * 664(<-266): Holistic Approach to Decision Making in the Formulation and Selection of Anti-Icing Products To effectively fight snow storms in the challenging funding environment, many maintenance agencies in North America have started to produce their own anti-icing liquids, instead of procuring commercial anti-icers. This work demonstrates a systematic approach to collaborative, data-driven, and multicriteria decision making by conducting a set of laboratory tests to assess twenty blended chloride-based anti-icing formulations. The laboratory data were then used to establish predictive models correlating the multiple design parameters with the anti-icer performance and effects or with an anti-icer composite index. The authors used artificial neural networks for modeling and examined anti-icer performance (characteristic temperature and ice-melting capacity at 30 and 15 degrees F (-1.1 and -9.4 degrees C), respectively) and effects (splitting tensile strength of concrete after ten freeze-thaw cycles and corrosivity to mild steel) as a function of the formulation design. The anti-icer composite index was calculated for four different user priority scenarios (cost-first, performance-first, impacts-first, or a balanced approach), each of which placed a different set of decision weights on various target attributes. Three-dimensional response surfaces were then constructed to illustrate such predicted correlations and to guide the direction for formulation improvements. DOI: 10.1061/(ASCE)CR.1943-5495.0000039. (C) 2012 American Society of Civil Engineers. 2012 * 665(<-296): Prediction of surface roughness and delamination in end milling of GFRP using mathematical model and ANN Glass fiber reinforced plastics (GFRP) composite is considered to be an alternative to heavy exotic materials. Accordingly, the need for accurate machining of composites has increased enormously. During machining, reducing delamination and obtaining good surface roughness are important aspects. The present investigation deals with the study and development of a surface roughness and delamination prediction model for the machining of GFRP plate using a mathematical model and an artificial neural network (ANN) multi-objective technique.
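The RBF-MQ metamodel favored in entry 663 is compact enough to sketch in full: place one multiquadric basis function on each design-of-experiment sample and solve a linear system so the surface interpolates the sampled responses. The training data and shape constant below are synthetic assumptions standing in for the crash responses of the paper.

#+BEGIN_SRC python
import numpy as np

rng = np.random.default_rng(5)

X = rng.random((25, 2))                       # DoE sample points
y = np.sin(4 * X[:, 0]) + X[:, 1] ** 2        # sampled response values

c = 0.5                                       # multiquadric shape constant
def mq(A, B):
    """Multiquadric kernel matrix: sqrt(||a - b||^2 + c^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2 + c ** 2)

weights = np.linalg.solve(mq(X, X), y)        # fit: Phi @ w = y

def predict(Xnew):
    return mq(Xnew, X) @ weights

test = rng.random((5, 2))
true = np.sin(4 * test[:, 0]) + test[:, 1] ** 2
print("prediction errors:", np.round(predict(test) - true, 4))
#+END_SRC

Checking such a metamodel against held-out "DoE checking points", as the study does, is the natural guard against a poorly chosen shape constant.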
The mathematical model is developed using RSM in order to study the main and interaction effects of the machining parameters. The adequacy of the developed model is verified using the coefficient of determination and residual analysis. ANN models have been developed to predict the surface roughness and delamination in machining GFRP components within the range of variables studied. Predicted values of surface roughness and delamination from both models are compared with the experimental values. The results of the prediction models are quite close to the experimental values. The influences of different parameters in machining GFRP composite have been analyzed. 2012 * 666(<-475): MODELING AND OPTIMIZATION OF A PHARMACEUTICAL FORMULATION SYSTEM USING RADIAL BASIS FUNCTION NETWORK A pharmaceutical formulation is composed of several formulation factors and process variables. Quantitative model based pharmaceutical formulation involves establishing mathematical relations between the formulation variables and the resulting responses, and optimizing the formulation conditions. In a formulation system involving several objectives, the desirable formulation conditions for one property may not always be desirable for other characteristics, thus leading to the problem of conflicting objectives. Therefore, efficient modeling and optimization techniques are needed to devise an optimal formulation system. In this work, a novel method based on a radial basis function network (RBFN) is proposed for modeling and optimization of pharmaceutical formulations involving several objectives. This method has the advantage that it automatically configures the RBFN using a hierarchically self-organizing learning algorithm while establishing the network parameters. The method is evaluated using a trapidil formulation system as a test bed and compared with a response surface method (RSM) based on multiple regression. The simulation results demonstrate the better performance of the proposed RBFN method for modeling and optimization of pharmaceutical formulations over the regression based RSM technique. 2009 * 667(<-485): Baking of Flat Bread in an Impingement Oven: Modeling and Optimization An artificial neural network (ANN) was developed to model the effect of baking parameters on the quality attributes of flat bread, i.e., crumb temperature, moisture content, surface color change and bread volume increase during the baking process. As hot air impinging jets were employed for baking, the baking control parameters were the jet temperature, the jet velocity, and the time elapsed from the beginning of the baking. The data used in the training of the network were acquired experimentally. In addition, using the data provided by the ANN, a multi-objective optimization algorithm was employed to achieve the baking condition that provides the quality of the bread in all aspects simultaneously. 2009 * 668(<-219): Modeling and optimization of laser beam percussion drilling of thin aluminum sheet Modeling and optimization of machining processes using coupled methodologies has been an area of interest for manufacturing engineers in recent times. The present paper deals with the development of a prediction model for Laser Beam Percussion Drilling (LBPD) using the coupled methodology of the Finite Element Method (FEM) and Artificial Neural Network (ANN).
First, 2D axisymmetric FEM based thermal models for LBPD have been developed, incorporating the temperature-dependent thermal properties, optical properties, and phase change phenomena of aluminum. The model is validated by comparing the results obtained using the FEM model with self-conducted experimental results in terms of hole taper. Secondly, sufficient input and output data generated using the FEM model are used for the training and testing of the ANN model. Further, Grey Relational Analysis (GRA) coupled with Principal Component Analysis (PCA) has been effectively used for the multi-objective optimization of the LBPD process using data predicted by the trained ANN model. The developed ANN model predicts that hole taper and material removal rate are highly affected by pulse width, whereas the pulse frequency plays the most significant role in determining the extent of the HAZ. The optimal process parameter setting shows a reduction of hole taper by 67.5%, an increase of material removal rate by 605%, and a reduction of the extent of the HAZ by 3.24%. (C) 2012 Elsevier Ltd. All rights reserved. 2013 * 669(<-220): Modeling and optimization of laser beam percussion drilling of nickel-based superalloy sheet using Nd: YAG laser The creation of small diameter holes in thin sheets (< 3 mm) of superalloys using a laser beam is a challenging task. Knowledge of the effect of laser related process variables on hole related responses with respect to variation of sheet thickness is essential to obtain a hole of requisite quality. Therefore, in this paper a coupled methodology comprising the Finite Element Method (FEM) and Artificial Neural Network (ANN) has been used to develop a prediction model for the Laser Beam Percussion Drilling (LBPD) process. First, a 2D axisymmetric FEM-based thermal model for LBPD has been developed incorporating temperature-dependent thermal properties, optical properties and phase change phenomena of the sheet material. The developed FEM-based thermal model is validated with self-conducted experimental results in terms of hole taper and is further used to generate adequate input and output data for training and testing of the ANN model. Grey Relational Analysis (GRA) coupled with Principal Component Analysis (PCA) has been effectively used for the multi-objective optimization of the LBPD process utilizing the data predicted by the trained ANN model. The developed ANN model has been used to predict the performance characteristics of the LBPD process. The results predicted by the ANN model show that with the increase in pulse width and peak power the hole taper, material removal rate (MRR) and heat-affected zone (HAZ) increase. The acquired combination of optimal process variables produces a hole with good integral quality, i.e., a reduction of hole taper by 32.1%, an increase of material removal rate by 28.9% and a reduction of the extent of the HAZ by 4.5%. (C) 2013 Elsevier Ltd. All rights reserved. 2013 * 670(<-282): Selection of wire electrical discharge machining process parameters using non-traditional optimization algorithms Selection of the optimal values of different process parameters, such as pulse duration, pulse frequency, duty factor, peak current, dielectric flow rate, wire speed, wire tension, and effective wire offset, of the wire electrical discharge machining (WEDM) process is of utmost importance for enhanced process performance. The major performance measures of the WEDM process generally include material removal rate, cutting width (kerf), surface roughness and dimensional shift.
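The grey relational analysis used in entries 668 and 669 reduces several conflicting responses to a single grade per trial. A minimal sketch of the standard GRA recipe follows; the response matrix and the equal attribute weights are illustrative assumptions (the two papers additionally derive weights via PCA).

#+BEGIN_SRC python
import numpy as np

# Responses over four trials: hole taper (smaller-the-better) and
# material removal rate (larger-the-better); values are stand-ins.
taper = np.array([4.2, 3.1, 5.0, 2.8])
mrr   = np.array([0.8, 1.4, 1.1, 1.6])

def normalize(col, larger_better):
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if larger_better else (hi - col) / (hi - lo)

Z = np.column_stack([normalize(taper, False), normalize(mrr, True)])
delta = 1.0 - Z                            # deviation from the ideal (1.0)
zeta = 0.5                                 # distinguishing coefficient

# Grey relational coefficient: (d_min + zeta*d_max) / (d_ij + zeta*d_max)
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = xi.mean(axis=1)                    # equal weights (an assumption)
print("grey relational grades:", np.round(grade, 3))
print("best trial:", int(np.argmax(grade)))
#+END_SRC

With the multi-response problem collapsed to one grade, any single-objective search (or a simple ranking, as here) can pick the preferred parameter setting.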
Although different mathematical techniques, like artificial neural networks, grey relational analysis, simulated annealing, the desirability function, the Pareto optimality approach, etc., have already been applied for searching out the optimal parametric combinations of WEDM processes, in most of the cases sub-optimal or near-optimal solutions have been arrived at. In this paper, an attempt is made to apply the six most popular population-based non-traditional optimization algorithms, i.e. genetic algorithm, particle swarm optimization, sheep flock algorithm, ant colony optimization, artificial bee colony and biogeography-based optimization, for single and multi-objective optimization of two WEDM processes. The performance of these algorithms is also compared, and it is observed that the biogeography-based optimization algorithm outperforms the others. (C) 2012 Elsevier B.V. All rights reserved. 2012 * 671(<-387): Optimization of WEDM Process Parameters of gamma-TiAl Alloy Using SVM Method The wire electrical discharge machining (Wire-EDM) process is a widely used method to produce precision tools. There are many parameters that influence the process, such as pulse on time, pulse off time, voltage, wire tension, wire diameter, material, etc. In certain cases the response values (e.g. cutting speed and surface roughness) conflict with each other. Usually surface quality decreases while cutting speed increases, and vice versa. It is difficult to improve both properties simultaneously. Due to the complexity of the process, it is convenient to use stochastic methods to find optimal process parameters. In this study, the Wire-EDM process of gamma titanium aluminide alloy was optimized by the support vector machines (SVM) method. To achieve this goal, Wire-EDM experimental results of the alloy were used as the training set, and then predictions were made using this set. The obtained results were presented as graphs, and Pareto optimal points were determined among the predicted points. Lastly, an optimum point was selected according to the desired surface roughness value using a multi-objective optimization methodology. Results showed that SVM is as effective as traditional prediction methods like Artificial Neural Networks (ANN). 2011 * 672(<-437): An Integrated Approach to Optimization of WEDM Combining Single-Pass and Multipass Cutting Operation This research article presents an integrated approach to optimization of wire electrical discharge machining (WEDM) of gamma titanium aluminide (gamma-TiAl) with the assistance of artificial neural network (ANN) modeling. Four process parameters, pulse on time, peak current, dielectric flow rate, and effective wire offset, were investigated to study their influence on the process outputs, that is, cutting speed, surface roughness, and dimensional shift in the multipass cutting operation. Two ANN models, based on Bayesian (automated) regularization and the early stopping method, have been developed and compared. The model based on the Bayesian regularization method was selected because its prediction accuracy was superior compared to the early stopping method. Pareto optimization was applied to determine the maximum cutting speed corresponding to the required surface roughness for the trim cutting process.
Finally, by combining the results of the single- and multipass cutting and introducing the new concept of effective cutting speed, a machining strategy based on the novel concept of critical surface roughness has been developed for selecting the machining process, either single-pass or multipass cutting, so that maximum productivity can be ensured according to the surface finish requirements. 2010 * 673(<-261): Modeling of wire electrical discharge machining of alloy steel (HCHCr) This study provides predictive models for the functional relationship between input and output variables of the wire cut electrical discharge machining (WEDM) environment using alloy steel (HCHCr). Multi-objective optimization of the process parametric combinations is attempted by modeling the WEDM process using artificial neural networks (ANN). This work provides an optimized input data set for the WEDM system, and the results show improvement, with better productivity and reduced cutting time and product cost for a given cutting speed and surface finish. Experimentally, surface quality decreases as cutting speed increases, and 1.371 mm/min is the maximum cutting speed obtained with a good surface finish of 0.387 micron. The results show the potential to improve production efficiency and part quality. 2012 * 674(<-667): An interactive multi-objective artificial neural network approach for machine setup optimization In this paper, we develop an artificial neural network method for machine setup problems. We show that our new approach solves a very challenging problem in the area of machining, i.e., machine setup. A review of machine setup concepts and methods, along with feedforward artificial neural networks, is presented. We define the problem of machine setup as assessing the values of machine speed, feed and depth of cut (process inputs) for a particular objective such as minimizing cost, maximizing productivity or maximizing surface finish. We use cutting temperature, cutting force, tool life, and surface roughness (process outputs) rather than objective functions to communicate with the decision maker. We show the relationship between process inputs and process outputs. This relationship is used in determining machine setup parameters (speed, feed, and depth of cut). A back propagation neural network is used as a decision support tool. The network maps the forward and backward relationships between process inputs and process outputs. This mapping facilitates an interactive session with the decision maker. The process input is appropriately selected. Our method has the advantage of forecasting machine setup parameters with very little resource requirement in terms of time, machine tool, and people. Forecast time is almost instantaneous. Accuracy of the forecast depends on training, and a well determined training sample provides very high accuracy. A trained network replaces the knowledge of an experienced worker, hence labor cost can potentially be reduced. 2000 * 675(<-190): Parametric study along with selection of optimal solutions in dry wire cut machining of cemented tungsten carbide (WC-Co) This work deals with a parametric study of the dry wire EDM (WEDM) process on cemented tungsten carbide. Experiments have been conducted using air as the dielectric medium to investigate the effects of pulse on time, pulse off time, gap set voltage, discharge current and wire tension on cutting velocity (CV), surface roughness (SR) and oversize (OS).
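Entry 674's backward mapping, from desired outputs to candidate inputs, can be realized without inverting the network at all: search the input space for settings whose forward predictions match the target. The sketch below uses crude random search and a hypothetical stand-in forward model; both are assumptions for illustration, not the paper's back propagation formulation.

#+BEGIN_SRC python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative stand-in for a trained forward ANN:
# x = (speed, feed, depth of cut), normalized to [0, 1].
def forward(x):
    tool_life = 1.0 - 0.6 * x[0] - 0.2 * x[2]
    roughness = 0.2 + 0.7 * x[1] ** 2
    return np.array([tool_life, roughness])

target = np.array([0.55, 0.30])      # desired (tool life, roughness)

best_x, best_err = None, np.inf
for _ in range(20000):               # random-search inversion of the model
    x = rng.random(3)
    err = np.abs(forward(x) - target).max()
    if err < best_err:
        best_x, best_err = x, err

print("setup:", np.round(best_x, 3), "max deviation:", round(best_err, 4))
#+END_SRC

Because the forward model is cheap to evaluate, even this brute-force inversion is nearly instantaneous, which matches the interactive decision-support use the abstract describes.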
Firstly, a series of exploratory experiments was carried out to identify an appropriate gas and its pressure. Afterward, preliminary experiments were conducted to investigate the effects of process parameters on dry WEDM characteristics and find appropriate ranges for each factor. Then a central composite rotatable method was employed to design experiments based on response surface methodology (RSM). Empirical models were developed to create relationships between process factors and responses, supported by analysis of variance (ANOVA). To increase the predictability of the process, intelligent models have been developed based on a back-propagation neural network (BPNN), and the accuracy of these models was compared with the mathematical models based on root mean square error (RMSE) and prediction error percent (PEP). In order to select optimal solutions for single-objective and multi-objective optimization problems, two main approaches were used. The first approach was based on the mathematical model and the desirability function; the second approach was based on the neural network and particle swarm optimization. These approaches were applied to both single-objective and multi-objective optimization problems and their results were compared. Results indicated that air at an inlet pressure of 1.5 bar is appropriate for conducting the experiments of the next stages. Also, the BPNN gives more accurate predictions than the mathematical model. Moreover, the BPNN-PSO approach was more efficient for process optimization than the mathematical model-desirability function approach with respect to validation tests. (C) 2013 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved. 2013 * 676(<-473): An adaptive neuro-fuzzy inference system (ANFIS) model for wire-EDM A wire electrical discharge machined (WEDM) surface is characterized by its roughness and metallographic properties. Surface roughness and white layer thickness (WLT) are the main indicators of the quality of a component for WEDM. In this paper an adaptive neuro-fuzzy inference system (ANFIS) model has been developed for the prediction of the white layer thickness (WLT) and the average surface roughness achieved as a function of the process parameters. Pulse duration, open circuit voltage, dielectric flushing pressure and wire feed rate were taken as the model's input features. The model combines the modeling function of fuzzy inference with the learning ability of an artificial neural network, and a set of rules has been generated directly from the experimental data. The model's predictions were compared with experimental results for verifying the approach. (C) 2008 Elsevier Ltd. All rights reserved. 2009 * 677(<-202): Multi-objective optimization of material removal rate and surface roughness in wire electrical discharge turning Wire electrical discharge turning (WEDT) is an emerging area, and it can be used to generate cylindrical forms on difficult to machine materials by adding a rotary axis to WEDM. The selection of optimum cutting parameters in WEDT is an important step to achieve high productivity while making sure that there is no wire breakage. In the present work, the WEDT process is modelled using an artificial neural network with a feed-forward back-propagation algorithm and using an adaptive neuro-fuzzy inference system. The experiments were designed based on the Taguchi design of experiments to train the neural network and to test its performance.
The process is optimized considering the two output process parameters, material removal rate and surface roughness, which are important for increasing the productivity and quality of the products. Since the output parameters are conflicting in nature, a multi-objective optimization method based on the non-dominated sorting genetic algorithm-II is used to optimize the process. A Pareto-optimal front leading to the set of optimal solutions for material removal rate and surface roughness is obtained using the proposed algorithms. The results are verified with experiments, and the approach is found to improve the performance of the WEDT process. Using this set of solutions, the required input parameters can be selected to achieve a higher material removal rate and good surface finish. 2013 * 678(<-305): Intelligent optimization and selection of machining parameters in finish turning and facing of Inconel 718 Machining of heat-resistant superalloys like Inconel 718 is an inevitable and challenging task even in modern manufacturing processes. This paper describes the genetic algorithm coupled with an artificial neural network (ANN) as an intelligent optimization technique for machining parameter optimization of Inconel 718. The machining experiments were conducted based on a full-factorial design of experiments by varying the cutting speed, feed, and depth of cut as machining parameters against the responses of flank wear and surface roughness. The combined effects of cutting speed, feed, and depth of cut on the performance measures of surface roughness and flank wear were investigated by analysis of variance. Using these experimental data, a mathematical model and an ANN model were developed for constraint and fitness function evaluation in the intelligent optimization process. The optimization results were plotted as a Pareto optimal front. Optimal machining parameters were obtained from the Pareto front graph. Confirmation experiments were conducted for the optimal machining parameters, and the improvement has been demonstrated. 2012 * 679(<-494): Development of multi-objective optimization models for electrochemical machining process Owing to the complexity of electrochemical machining (ECM), it is very difficult to determine optimal cutting parameters for improving cutting performance. Hence, optimization of operating parameters is an important step in machining, particularly for unconventional machining procedures like ECM. A suitable selection of machining parameters for the ECM process relies heavily on the operator's technique and experience because of the numerous and diverse parameter ranges. Machining parameters provided by the machine tool builder cannot meet the operator's requirements, since for an arbitrary desired machining time for a particular job they do not provide the optimal conditions. To solve this task, a multiple regression model and an ANN model are developed as efficient approaches to determine the optimal machining parameters in ECM. In this paper, current, voltage, flow rate and gap are considered as machining parameters, and metal removal rate and surface roughness are the objectives. Then, by applying grey relational analysis, we calculate the grey grade for representing the multi-objective model. The multiple regression model and the ANN model have been developed to map the relationship between process parameters and objectives in terms of grade. The experimental data are divided into training and testing data.
The predicted grade is found and then the percentage deviation between the experimental grade and the predicted grade is calculated for each model. The average percentage deviations for the training data of the linear regression model, the logarithmic transformation model excluding interaction terms, and the ANN model are 12.7, 25.6 and 3.03, respectively. The average percentage deviations for the testing data of the three models are 9.83, 26.8 and 2.67. Among the three models, the ANN has the lowest percentage deviation, so it is considered the best prediction model. Based on the testing results of the artificial neural network, the operating parameters are optimized. Finally, ANOVA is used to identify the significance of the multiple regression model and the ANN model. 2008 * 680(<-285): Multi-objective optimization of electric-discharge machining process using controlled elitist NSGA-II Parametric optimization of the electric discharge machining (EDM) process is a multi-objective optimization task. In general, no single combination of input parameters can provide the best cutting speed and the best surface finish simultaneously. The genetic algorithm has been proven to be one of the most popular multi-objective optimization techniques for the parametric optimization of the EDM process. In this work, a controlled elitist non-dominated sorting genetic algorithm has been used to optimize the process. Experiments have been carried out on die-sinking EDM by taking Inconel 718 as the work piece and copper as the tool electrode. An artificial neural network (ANN) with the back propagation algorithm has been used to model the EDM process. The ANN has been trained with the experimental data set. The controlled elitist non-dominated sorting genetic algorithm has been employed on the trained network and a set of Pareto-optimal solutions is obtained. 2012 * 681(<-542): Modeling of electrical discharge machining process using back propagation neural network and multi-objective optimization using non-dominating sorting genetic algorithm-II The present study attempts to model and optimize the complex electrical discharge machining (EDM) process using soft computing techniques. An artificial neural network (ANN) with the back propagation algorithm is used to model the process. As the output parameters are conflicting in nature, there is no single combination of cutting parameters that provides the best machining performance. A multi-objective optimization method, the non-dominated sorting genetic algorithm-II, is used to optimize the process. Experiments have been carried out over a wide range of machining conditions for training and verification of the model. Testing results demonstrate that the model is suitable for predicting the response parameters. A Pareto-optimal set has been predicted in this work. (c) 2006 Elsevier B.V. All rights reserved. 2007 * 682(<-316): Intelligent Modeling and Multiobjective Optimization of Die Sinking Electrochemical Spark Machining Process Die sinking-electrochemical spark machining (DS-ECSM) is one of the hybrid machining processes, combining the features of electrochemical machining (ECM) and electro-discharge machining (EDM), used for machining of nonconducting materials. This article reports an intelligent approach for the modelling of the DS-ECSM process using the finite element method (FEM) and artificial neural network (ANN) in an integrated manner. It primarily comprises the development of two models.
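The controlled elitist NSGA-II of entries 680 and 681 pairs the non-dominated sorting shown earlier with a crowding distance that rewards solutions in sparsely populated parts of a front. A minimal sketch of that distance follows; the front values are illustrative (roughness vs. negated MRR), not data from either study.

#+BEGIN_SRC python
import numpy as np

def crowding_distance(F):
    """Crowding distance for one non-dominated front.

    F is an (n_points, n_objectives) array; boundary points get
    infinite distance so they are always retained.
    """
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        fmin, fmax = F[order[0], j], F[order[-1], j]
        dist[order[0]] = dist[order[-1]] = np.inf
        if fmax > fmin:
            # normalized gap between each point's two neighbors
            gaps = (F[order[2:], j] - F[order[:-2], j]) / (fmax - fmin)
            dist[order[1:-1]] += gaps
    return dist

front = np.array([[0.1, -9.0],
                  [0.4, -5.0],
                  [0.5, -4.5],
                  [1.2, -1.0]])
print(np.round(crowding_distance(front), 3))
#+END_SRC

During selection, NSGA-II prefers lower front rank first and larger crowding distance second, which is what keeps the Pareto set spread out rather than clustered.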
* 680(<-285): Multi-objective optimization of electric-discharge machining process using controlled elitist NSGA-II Parametric optimization of the electric discharge machining (EDM) process is a multi-objective optimization task. In general, no single combination of input parameters can provide the best cutting speed and the best surface finish simultaneously. Genetic algorithms have proven to be among the most popular multi-objective techniques for parametric optimization of the EDM process. In this work, a controlled elitist non-dominated sorting genetic algorithm has been used to optimize the process. Experiments have been carried out on die-sinking EDM with Inconel 718 as the workpiece and copper as the tool electrode. An artificial neural network (ANN) with the back-propagation algorithm has been used to model the EDM process and has been trained with the experimental data set. The controlled elitist non-dominated sorting genetic algorithm has then been applied to the trained network, yielding a set of Pareto-optimal solutions. 2012
* 681(<-542): Modeling of electrical discharge machining process using back propagation neural network and multi-objective optimization using non-dominating sorting genetic algorithm-II The present study attempts to model and optimize the complex electrical discharge machining (EDM) process using soft-computing techniques. An artificial neural network (ANN) with the back-propagation algorithm is used to model the process. As the output parameters are conflicting in nature, no single combination of cutting parameters provides the best machining performance, so a multi-objective optimization method, the non-dominated sorting genetic algorithm-II, is used to optimize the process. Experiments have been carried out over a wide range of machining conditions for training and verification of the model. Testing results demonstrate that the model is suitable for predicting the response parameters, and a Pareto-optimal set has been obtained. (c) 2006 Elsevier B.V. All rights reserved. 2007
* 682(<-316): Intelligent Modeling and Multiobjective Optimization of Die Sinking Electrochemical Spark Machining Process Die-sinking electrochemical spark machining (DS-ECSM) is a hybrid machining process, combining features of electrochemical machining (ECM) and electro-discharge machining (EDM), used for machining nonconducting materials. This article reports an intelligent approach to modelling the DS-ECSM process using the finite element method (FEM) and an artificial neural network (ANN) in an integrated manner. It primarily comprises the development of two models. The first is a thermal finite element model that estimates the temperature distribution within the heat-affected zone (HAZ) of a single spark on the workpiece during DS-ECSM; the estimated temperature field is then post-processed to determine the material removal rate (MRR) and average surface roughness (ASR). The second is a back-propagation neural network (BPNN)-based process model used in a simulation study to find optimal machining parameters. The BPNN model has been trained and tested using data generated from the FEM simulations, and the trained network has been used to predict MRR and ASR for different input conditions. The ANN model is found to predict the DS-ECSM process responses accurately for the chosen process conditions. The article also presents an effective approach to multiobjective optimization of the DS-ECSM process using grey relational analysis. 2012
* 683(<-435): Modelling and Optimization of Multiple Process Attributes of Electrodischarge Machining Process by Using a New Hybrid Approach of Neuro-Grey Modeling In the present article, a new hybrid neuro-grey modeling (NGM) technique is proposed for modeling and optimizing multiple process attributes of the electro-discharge machining (EDM) process. An artificial neural network (ANN) characterizes the multiple process attributes, which are then optimized using the grey relational analysis (GRA) technique. A multi-neuron ANN with logistic sigmoid activation functions has been designed, and the Levenberg-Marquardt algorithm, which performs second-order error optimization, has been chosen for its training because of its inherent merits. Using GRA, a grey relational grade is then determined that effectively aggregates the different process attributes, so the multi-attribute optimization reduces to optimizing a single grey relational grade. The ANN is first used to characterize surface roughness (Ra), depth of the heat-affected zone, microhardness of the machined surface, and material removal rate (MRR) with respect to current and pulse duration; optimal values of current and pulse duration are then obtained. The NGM technique is found to perform well and to be easy to implement. 2010
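Entries 681 and 683 both fit small feed-forward networks to a handful of machining inputs. Below is a self-contained toy version of such a network with logistic sigmoid units; the (current, pulse duration) -> grade pairs are invented, and plain batch gradient descent stands in for the Levenberg-Marquardt training the papers actually use.

#+BEGIN_SRC python
# Toy one-hidden-layer network with logistic sigmoid activations,
# trained by batch gradient descent on hypothetical EDM data.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[4, 50], [4, 100], [8, 50], [8, 100], [12, 75]], float)
y = np.array([[0.42], [0.55], [0.61], [0.70], [0.66]])  # grey grades in (0, 1)

X = (X - X.min(0)) / (X.max(0) - X.min(0))  # scale inputs to [0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(0, 0.5, (2, 6)), np.zeros(6)  # 2 inputs -> 6 hidden units
W2, b2 = rng.normal(0, 0.5, (6, 1)), np.zeros(1)  # 6 hidden -> 1 output

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)               # hidden layer
    out = sigmoid(h @ W2 + b2)             # predicted grade
    d_out = (out - y) * out * (1 - out)    # backprop through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)     # backprop through hidden sigmoid
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print("final MSE:", float(np.mean((out - y) ** 2)))
#+END_SRC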
* 684(<-188): Multiobjective optimization of slotted electrical discharge abrasive grinding of metal matrix composite using artificial neural network and nondominated sorting genetic algorithm Alternating electrical discharge grinding and abrasive grinding with a slotted wheel, a process known as slotted electro-discharge abrasive grinding, is well suited to machining metal matrix composites, but the selection of process parameters is difficult owing to the complexity of the process. The aim of this study is to optimize the process parameters of the slotted electro-discharge abrasive grinding process using a combined artificial neural network and non-dominated sorting genetic algorithm II approach. The artificial neural network has been trained and tested with experimental data and then coupled with the non-dominated sorting genetic algorithm II, and the resulting hybrid is used for optimization of the process parameters. During experimentation, the effects of current, pulse on-time, pulse off-time, wheel speed, and grit number on material removal rate and average surface roughness (Ra) were studied. The results show that the prediction capability of the artificial neural network model is within acceptable limits, and the hybrid approach yields optimal solutions with correlation coefficients of 0.9979 and 0.9982 for material removal rate and Ra, respectively. 2013
* 685(<-195): Intelligent Modeling and Multiobjective Optimization of Electric Discharge Diamond Grinding The grinding of metal matrix composites (MMCs) by conventional techniques is very difficult owing to their improved mechanical properties. It often results in poor surface quality (surface damage) in the form of surface cracks and residual stresses, and it requires frequent truing and dressing because the grinding wheel clogs. Electric discharge diamond grinding (EDDG), a hybrid of electric discharge machining and grinding, may overcome these problems to some extent, but low material removal rate (MRR) and high wheel wear rate (WWR) remain the main obstacles to economic performance. The present paper investigates EDDG performance during grinding of a copper-iron-graphite composite by modeling and simultaneously optimizing two important performance characteristics, MRR and WWR. A hybrid approach combining an artificial neural network, a genetic algorithm, and grey relational analysis has been proposed for the multi-objective optimization. The verification results show considerable improvement in both quality characteristics. 2013
* 686(<-526): Parameter optimization model in electrical discharge machining process The electrical discharge machining (EDM) process is still experience-driven: the selected parameters are often far from optimal, and determining better parameters experimentally is costly and time-consuming. In this paper, an artificial neural network (ANN) and a genetic algorithm (GA) are used together to establish a parameter optimization model. An ANN model trained with the Levenberg-Marquardt algorithm represents the relationship between material removal rate (MRR) and the input parameters, and a GA is then used to optimize the parameters. The model is shown to be effective, and MRR is improved using the optimized machining parameters. 2008
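The ANN-plus-GA coupling in entry 686 can be sketched in a few lines once the surrogate exists. Everything below is hypothetical: the response function is a toy stand-in for a trained ANN, and the bounds, population size, and truncation selection are illustrative choices, not the paper's settings.

#+BEGIN_SRC python
# Minimal real-coded GA maximizing a surrogate-predicted MRR over
# two EDM parameters. The surrogate is a hypothetical placeholder.
import random

random.seed(1)
BOUNDS = [(2.0, 16.0), (10.0, 200.0)]  # current (A), pulse on-time (us)

def surrogate(ind):
    """Toy MRR response with an interior optimum in pulse on-time."""
    i, t = ind
    return i * t * (1 - t / 250.0) - 0.4 * i * i

def crossover(a, b):
    w = random.random()                     # blend crossover
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mutate(ind):
    return [min(hi, max(lo, g + random.gauss(0, 0.1 * (hi - lo))))
            for g, (lo, hi) in zip(ind, BOUNDS)]

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(30)]
for _ in range(60):
    pop.sort(key=surrogate, reverse=True)
    elite = pop[:10]                        # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = max(pop, key=surrogate)
print(f"current={best[0]:.2f} A, pulse on-time={best[1]:.1f} us, "
      f"predicted MRR={surrogate(best):.2f}")
#+END_SRC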
* 687(<-477): Genetic algorithms based multi-objective optimization of an iron making rotary kiln Industrial rotary kilns used in iron making are complex reactors with several functions. Raw materials such as iron ore and non-coking coal are continuously fed in, product sponge iron is continuously discharged from the downstream end, and the waste gases exit in counter-current flow through the uphill end. The outputs exhibit conflicting trends at the production level: an increase in daily production results in a decrease in the product's metallic iron content and vice versa. Optimizing the operation is thus a typical case of constrained multi-objective optimization. The relationship between the various inputs and these outputs, being very complex, is established by artificial neural networks (ANN). As the search spaces for the inputs are not well defined for the acceptable ranges of each output, the optimization task was carried out using multi-objective genetic algorithms, and the resulting Pareto fronts were further analyzed. The results conform to existing trends and also suggest some possible improvements. (C) 2008 Elsevier B.V. All rights reserved. 2009
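A recurring ingredient of the NSGA-II variants cited throughout this section is the crowding distance, which keeps retained solutions spread along the Pareto front. A minimal sketch of that measure follows; the (daily production, metallic iron fraction) trade-off points are invented for illustration.

#+BEGIN_SRC python
# Crowding distance as used in NSGA-II: for each objective, sort the
# front, pin the extreme points at infinity, and accumulate normalized
# gaps between each point's neighbours.
def crowding_distance(front):
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep the extremes
        if hi == lo:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][k] - front[order[pos - 1]][k]) / (hi - lo)
    return dist

# Hypothetical kiln trade-off: production rises as metallic iron content falls.
front = [(480, 0.92), (520, 0.89), (560, 0.85), (600, 0.80), (630, 0.72)]
for point, d in zip(front, crowding_distance(front)):
    print(point, "crowding:", d)
#+END_SRC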