-*- mode: org -*- Supportive uses in general method classes for other areas

* 72(<- 3): Closed-Loop Restoration Approach to Blurry Images Based on Machine Learning and Feedback Optimization
Blind image deconvolution (BID) aims to remove or reduce degradations that occur during image acquisition or processing. It is a challenging ill-posed problem, since the degraded image lacks sufficient information for unambiguous recovery of both the point spread function (PSF) and the clear image. Although many powerful algorithms have appeared recently, BID remains an active research area due to the diversity of degraded images and of the degradations themselves. Closed-loop control systems are characterized by their ability to stabilize the system response and reject external disturbances through effective feedback optimization. In this paper, we employ feedback control to enhance the stability of BID, driving the current PSF estimation quality to a desired level without manual selection of restoration parameters, using an effective combination of machine learning with feedback optimization. The foremost challenge in designing a feedback structure is to construct or choose a suitable performance metric to serve as the controlled index and feedback signal. Our proposed quality metric is based on blur assessment of deconvolved patches, identifying the best PSF and computing its relative quality. A Kalman filter-based extremum seeking approach is employed to find the optimum value of the controlled variable. To find better restoration parameters, learning algorithms such as the multilayer perceptron and bagged decision trees are used to estimate the generic PSF support size instead of trial-and-error methods. The problem is modeled as a combination of pattern classification and regression using multiple training features, including noise metrics, blur metrics, and low-level statistics. A multi-objective genetic algorithm is used to find key patches from multiple saliency maps, which enhances performance and saves computation by avoiding ineffectual regions of the image. The proposed scheme is shown to outperform corresponding open-loop schemes, which often fail or require many assumptions about the images and thus yield sub-optimal results. 2015

* 78(<-307): A note on the estimation of the Pareto efficient set for multiobjective matrix permutation problems
There are a number of important problems in quantitative psychology that require the identification of a permutation of the n rows and columns of an n x n proximity matrix. These problems encompass applications such as unidimensional scaling, paired-comparison ranking, and anti-Robinson forms. The importance of simultaneously incorporating multiple objective criteria in matrix permutation applications is well recognized in the literature; however, to date, there has been a reliance on weighted-sum approaches that transform the multiobjective problem into a single-objective optimization problem. Although exact solutions to these single-objective problems produce supported Pareto efficient solutions to the multiobjective problem, many interesting unsupported Pareto efficient solutions may be missed (see the sketch below). We illustrate the limitation of the weighted-sum approach with an example from the psychological literature and devise an effective heuristic algorithm for estimating both the supported and unsupported solutions of the Pareto efficient set. 2012
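The supported/unsupported distinction above is easy to demonstrate numerically. Below is a minimal Python sketch (not the paper's heuristic; the five bi-objective values are invented) showing that a weighted-sum sweep only ever recovers Pareto points on the convex hull of the objective space, while an unsupported Pareto point is never returned.

#+begin_src python
# Weighted-sum scalarization recovers only "supported" Pareto points.
# Hypothetical bi-objective values (both objectives minimized) for five
# candidate permutations; p2 is Pareto-efficient but lies above the hull
# segment joining p1 and p3, so no weight vector ever selects it.
points = {"p1": (1.0, 9.0), "p2": (4.0, 6.5), "p3": (9.0, 1.0),
          "p4": (5.0, 8.0), "p5": (8.0, 7.0)}

def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = {k for k, v in points.items()
          if not any(dominates(w, v) for w in points.values() if w != v)}

supported = set()
for i in range(101):                      # sweep weight w over [0, 1]
    w = i / 100.0
    best = min(points, key=lambda k: w * points[k][0] + (1 - w) * points[k][1])
    supported.add(best)

print("Pareto set:              ", sorted(pareto))               # p1, p2, p3
print("Found by weighted sums:  ", sorted(supported))            # p1, p3
print("Missed (unsupported):    ", sorted(pareto - supported))   # p2
#+end_src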
* 212(<-557): A neural stochastic multiscale optimization framework for sensor-based parameter estimation
This work presents a novel neural stochastic optimization framework for reservoir parameter estimation that combines two independent sources of spatial and temporal data: oil production data and dynamic sensor data of flow pressures and concentrations. A parameter estimation procedure is realized by minimizing a multi-objective mismatch function between observed and predicted data. In order to perform large-scale parameter estimations efficiently, the parameter space is decomposed into different resolution levels by means of the singular value decomposition (SVD) and a wavelet upscaling process. The estimation is carried out incrementally from lower to higher resolution levels by means of a neural stochastic multilevel optimization approach. At a given resolution level, the parameter space is globally explored and sampled by the simultaneous perturbation stochastic approximation (SPSA) algorithm. The samples yielded by SPSA serve as training points for an artificial neural network that allows for evaluating the sensitivity of different multi-objective function components with respect to the model parameters. The proposed approach may be suitable for different engineering and scientific applications wherever the parameter space results from discretizing a set of partial differential equations on a given spatial domain. 2007

* 220(<-537): GIS and neural network method for potential road identification
Global positioning system (GPS)-based vehicle tracking systems were used to track 20 vehicles involved in an 8-day field training exercise at Yakima Training Center, Washington. A 3-layer feed-forward artificial neural network (NN) with a backpropagation learning algorithm was developed to identify potential roads (a minimal sketch of such a network follows this entry). The NN was trained using a subset of the GPS data that was supplemented with field observations documenting newly formed road segments resulting from concentrated vehicle traffic during the military training exercise. The NN was subsequently applied to the full vehicle movement data set to predict potential roads for the entire training exercise. Model predictions were validated using additional installation and site visit data. The first validation used the NN to identify the existing road network as represented in the Yakima Training Center GIS roads data layer. Next, the NN was used to predict emerging road networks that had not previously existed. The NN method accurately classified approximately 94% of the training data, 85% of the on-road movement data, and 78% of potential roads. The proposed NN method classified potential roads more accurately than the previously used multicriteria method, which identified only 10 out of 17 potential road segments across the entire training center. 2007
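As a concrete reference for this entry, here is a minimal sketch of a 3-layer feed-forward network trained with backpropagation for binary classification. The features, labels, hidden-layer size, and learning rate are synthetic stand-ins; the abstract does not specify the actual GPS-derived inputs or architecture details.

#+begin_src python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in features (e.g., traffic density, slope, ...; hypothetical).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)  # synthetic labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Input -> hidden -> output: the "3 layers" of the abstract.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.5
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    p = sigmoid(h @ W2 + b2)
    dz2 = (p - y) / len(X)                # backprop: cross-entropy gradient
    dW2, db2 = h.T @ dz2, dz2.sum(0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)      # hidden-layer error
    dW1, db1 = X.T @ dz1, dz1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2        # gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"training accuracy: {acc:.2%}")    # cf. the ~94% the paper reports
#+end_src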
* 237(<-445): Photochemistry and chemometrics-An overview
Photochemistry has made significant contributions to our understanding of many important natural processes as well as the scientific discoveries of the man-made world. The measurements from such studies are often complex and may require advanced data interpretation with the use of multivariate or chemometrics methods. In general, such methods have been applied successfully for data display, classification, multivariate curve resolution and prediction in analytical chemistry, environmental chemistry, engineering, medical research and industry. In photochemistry, by comparison, applications of such multivariate approaches have been less frequent, although a variety of methods have been used, especially in spectroscopic photochemical applications. The methods include Principal Component Analysis (PCA; data display), Partial Least Squares (PLS; prediction), Artificial Neural Networks (ANN; prediction) and several models for multivariate curve resolution related to Parallel Factor Analysis (PARAFAC; decomposition of complex responses). Applications of such methods are discussed in this overview, and typical examples include photodegradation of herbicides, prediction of antibiotics in human fluids (fluorescence spectroscopy), non-destructive in- and on-line monitoring (near infrared spectroscopy) and fast-time resolution of spectroscopic signals from photochemical reactions. It is also quite clear from the literature that the scope of spectroscopic photochemistry was enhanced by the application of chemometrics. To highlight and encourage further applications of chemometrics in photochemistry, several additional chemometrics approaches are discussed using data collected by the authors. The use of a PCA biplot is illustrated with an analysis of a matrix containing data on the performance of photocatalysts developed for water splitting and hydrogen production. In addition, the applications of Multi-Criteria Decision Making (MCDM) ranking methods and Fuzzy Clustering are demonstrated with an analysis of a water quality data matrix. Other topics include the application of simultaneous kinetic spectroscopic methods for prediction of pesticides, and the use of a response fingerprinting approach for classification of medicinal preparations. In general, the overview endeavours to emphasise the advantages of chemometrics' interpretation of multivariate photochemical data, and an Appendix of references and summaries of common and less usual chemometrics methods noted in this work is provided. Crown Copyright (C) 2010 Published by Elsevier B.V. All rights reserved. 2009

* 240(<-146): Genetic Algorithm Based Multi-Objective Least Square Support Vector Machine for Simultaneous Determination of Multiple Components by Near Infrared Spectroscopy
The near infrared (NIR) spectrum contains a global signature of composition and enables prediction of different properties of the material. In the present paper, a genetic algorithm and an adaptive modeling technique were applied to build a multi-objective least square support vector machine (MLS-SVM), intended to simultaneously determine the concentrations of multiple components by NIR spectroscopy. Both the benchmark corn dataset and a self-made Forsythia suspensa dataset were used to test the proposed approach. Results show that a genetic algorithm combined with adaptive modeling allows efficient search of the LS-SVM hyperparameter space (a sketch of this idea follows below). For the corn data, the performance of the multi-objective LS-SVM was significantly better than models built with the PLS1 and PLS2 algorithms. For the Forsythia suspensa data, the performance of the multi-objective LS-SVM was equivalent to the PLS1 and PLS2 models. In both datasets, over-fitting was observed with RBFNN models. The single-objective LS-SVM and MLS-SVM did not show much difference, but the convenience of one-time modeling makes MLS-SVM potentially applicable to multicomponent NIR analysis. 2014
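For reference, a minimal sketch of GA-driven LS-SVM hyperparameter search in the spirit of this entry. Assumptions not stated in the abstract: an RBF kernel, (gamma, sigma) as the searched hyperparameters, a synthetic single-output regression task, and a deliberately simple GA (truncation selection, blend crossover, Gaussian mutation) rather than the adaptive scheme the authors use.

#+begin_src python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
Xtr, ytr, Xva, yva = X[:60], y[:60], X[60:], y[60:]

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def val_rmse(gamma, sigma):
    """Solve the LS-SVM linear system for (b, alpha); return validation RMSE."""
    n = len(Xtr)
    K = rbf(Xtr, Xtr, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)),  K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate([[0.0], ytr]))
    b, alpha = sol[0], sol[1:]
    pred = rbf(Xva, Xtr, sigma) @ alpha + b
    return np.sqrt(((pred - yva) ** 2).mean())

# GA over log10-scaled (gamma, sigma); the search ranges are assumptions.
pop = rng.uniform([-2, -2], [4, 2], size=(20, 2))
for gen in range(15):
    fit = np.array([val_rmse(10 ** g, 10 ** s) for g, s in pop])
    elite = pop[np.argsort(fit)[:10]]                  # truncation selection
    parents = elite[rng.integers(0, 10, size=(20, 2))]
    w = rng.uniform(size=(20, 1))
    pop = w * parents[:, 0] + (1 - w) * parents[:, 1]  # blend crossover
    pop += rng.normal(scale=0.1, size=pop.shape)       # Gaussian mutation

g, s = min(pop, key=lambda p: val_rmse(10 ** p[0], 10 ** p[1]))
print(f"best log10(gamma) = {g:.2f}, log10(sigma) = {s:.2f}")
#+end_src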
* 313(<-468): Signal denoising in engineering problems through the minimum gradient method
This paper applies the minimum gradient method (MGM) to denoise signals in engineering problems. The MGM is a novel technique based on complexity control, which formulates learning as a bi-objective problem seeking the best trade-off between empirical risk and machine complexity. A neural network trained with this method can be used to pre-process data with the aim of increasing the signal-to-noise ratio (SNR). After training, the neural network behaves as an adaptive filter that minimizes the cross-validation error. By applying the generalized singular value decomposition (GSVD), we show the relation between the proposed approach and the Wiener filter. Some results are presented, including a toy example and two complex engineering problems, which demonstrate the effectiveness of the proposed approach. (C) 2009 Elsevier B.V. All rights reserved. 2009

* 337(<-208): Evolutionary Surrogate Optimization of an Industrial Sintering Process
Despite their immense potential for solving complex industrial optimization problems, evolutionary algorithms, especially genetic algorithms (GAs), are restricted to offline applications in many industrial cases due to their computationally expensive nature. This problem becomes even more severe when both the objective function and the constraint evaluations are computationally expensive. To reduce the overall application time in this kind of scenario, this work proposes the combined use of the original expensive model and a relatively inexpensive surrogate model built from the data provided by the original model in the course of optimization. The surrogate provides speed, saving execution time, while the original model keeps the optimization on the right search path. Switching to the surrogate model happens when its predictions reach an accuracy acceptable to the decision maker, so optimization time is saved without compromising solution quality (see the sketch below). This concept of successive use of a surrogate (artificial neural network [ANN]) and the original expensive model is applied to an industrial two-layer sintering process, where the optimization decides the individual thickness and coke content of each layer to simultaneously maximize sinter quality and minimize coke consumption. The use of the surrogate reduced execution time by 60%, improving utilization of the decision support system without compromising solution quality. 2013
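The switching criterion is the heart of this scheme. Here is a minimal sketch of the idea under stand-in assumptions: a cheap noisy quadratic plays the expensive sintering model, a 1-nearest-neighbour predictor plays the ANN surrogate, and the surrogate's accuracy is re-checked against the true model every tenth step.

#+begin_src python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(x):
    """Stand-in for the costly simulation (noisy quadratic)."""
    return (x ** 2).sum() + 0.01 * rng.normal()

class Surrogate:
    """Stand-in for the paper's ANN: 1-nearest-neighbour on the archive."""
    def __init__(self):
        self.X, self.y = [], []
    def add(self, x, y):
        self.X.append(x); self.y.append(y)
    def predict(self, x):
        d = np.linalg.norm(np.array(self.X) - x, axis=1)
        return self.y[int(np.argmin(d))]

surrogate, threshold = Surrogate(), 0.05  # threshold: the decision maker's call
use_surrogate, true_calls = False, 0

for step in range(200):
    x = rng.uniform(-1, 1, size=3)        # candidate from the optimizer (stub)
    if use_surrogate and step % 10 != 0:  # re-check the true model every 10th step
        f = surrogate.predict(x)
    else:
        f = expensive_model(x); true_calls += 1
        if len(surrogate.X) > 20:         # enough archive: test surrogate accuracy
            use_surrogate = abs(surrogate.predict(x) - f) < threshold
        surrogate.add(x, f)

print(f"expensive evaluations: {true_calls} / 200")  # cf. the ~60% savings reported
#+end_src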
* 338(<-344): Successive approximate model based multi-objective optimization for an industrial straight grate iron ore induration process using evolutionary algorithm
Multi-objective optimization of a complex industrial process using first-principle, computationally expensive models often demands substantial computation time for evolutionary algorithms, making them less amenable to real-time implementation. Combining such a first-principle model with approximate models based on artificial neural networks (ANN), successively learnt in the course of optimization from data obtained from the first-principle model, can be used intelligently for function evaluation and thereby reduce the computational burden to a large extent. In this work, a multi-objective optimization task (simultaneous maximization of throughput and Tumble index) of an industrial iron ore induration process has been studied to improve the operation of the process using this metamodeling approach. Pressure and temperature values at different points of the furnace bed, grate speed and bed height have been used as decision variables, whereas bounds on cold compression strength, abrasion index, maximum pellet temperature and burn-through point temperature have been treated as constraints. A popular evolutionary multi-objective algorithm, NSGA-II, amalgamated with the first-principle model of the induration process and its successively improving ANN-based approximation model, has been adopted to carry out the task. The optimization results show that, compared with the Pareto-optimal (PO) solutions obtained using only the first-principle model, (i) PO solutions of similar or better quality can be achieved by this metamodeling procedure with close to 50% savings in function evaluations, and thereby computation time, and (ii) keeping the total number of function evaluations the same, better quality PO solutions can be obtained. (C) 2011 Elsevier Ltd. All rights reserved. 2011

* 448(<- 96): High-level design space exploration of locally linear neuro-fuzzy models for embedded systems
Recently, artificial neural networks and neuro-fuzzy systems have been introduced into embedded systems, as they are often-used solutions for classification and nonlinear system identification. In this paper, we present parametric neuro-fuzzy hardware and a framework for exploring the design space for an efficient hardware realization of neuro-fuzzy models in embedded systems. The proposed hardware can be used as a stand-alone core or be coupled with a central processing unit for the purpose of online training. We also present a framework to explore the design space over the hardware core parameters so that a neuro-fuzzy hardware implementation that is efficient in terms of area, power consumption, and performance (delay) can be selected. With the aid of high-level design space exploration, it examines the whole design space to find Pareto-efficient hardware without increasing time-to-market or non-recurring engineering cost. The performance of the proposed hardware is also compared against a soft-core embedded processor named NIOS II/s. The results show that the selected core performs actions faster than NIOS II while dissipating less power. Moreover, the proposed framework is exercised in three different scenarios to demonstrate its capability to generate Pareto-optimal points under different application demands (a minimal sketch of the Pareto-filtering step follows below). (C) 2013 Elsevier B.V. All rights reserved. 2014
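A minimal sketch of the Pareto-filtering step such a design space exploration performs: enumerate candidate configurations, score each with a cost model, and keep the non-dominated ones. The configuration parameters and the cost model are made-up stand-ins, not the framework's actual estimators.

#+begin_src python
from itertools import product

def cost_model(n_neurons, bitwidth, parallel_units):
    """Hypothetical (area, power, delay) estimates for a neuro-fuzzy core."""
    area  = n_neurons * bitwidth + parallel_units * 50.0
    power = n_neurons * bitwidth * 0.2 + parallel_units * 10.0
    delay = 100.0 * n_neurons / (parallel_units * bitwidth)
    return area, power, delay

configs = list(product([4, 8, 16], [8, 16, 32], [1, 2, 4]))  # assumed design knobs
costs = {c: cost_model(*c) for c in configs}

def dominates(a, b):
    """a dominates b: no worse on all three axes, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [c for c in configs
          if not any(dominates(costs[o], costs[c]) for o in configs if o != c)]

for c in pareto:
    print(c, "-> area/power/delay:", tuple(round(v, 1) for v in costs[c]))
#+end_src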