-*- mode: org -*-
Seemingly unrelated noise from the query or database (duplicate entries)

* 104(<-336): Resource-Aware Compiler Prefetching for Fine-Grained Many-Cores
Super-scalar, out-of-order processors that can have tens of read and write requests in the execution window place significant demands on Memory Level Parallelism (MLP). Multi- and many-cores with shared parallel caches further increase MLP demand. Current cache hierarchies, however, have been unable to keep up with this trend, with modern designs allowing only 4-16 concurrent cache misses. This disconnect is exacerbated by recent highly parallel architectures (e.g. GPUs) where power and area per-core budgets favor numerous lighter cores with fewer resources, further reducing support for MLP on a per-core basis. Support for hardware and software prefetch increases MLP pressure since these techniques overlap multiple memory requests with existing computation. In this paper, we propose and evaluate a novel Resource-Aware Prefetching (RAP) compiler algorithm that is aware of the number of simultaneous prefetches supported, and optimized for the same. We implemented our algorithm in a GCC-derived compiler and evaluated its performance using an emerging fine-grained many-core architecture. Our results show that the RAP algorithm outperforms a well-known loop prefetching algorithm by up to 40.15% in run-time on average across benchmarks, and the state-of-the-art GCC implementation by up to 34.79%, depending upon hardware configuration. Moreover, we compare the RAP algorithm with a simple hardware prefetching mechanism, and show run-time improvements of up to 24.61%. To demonstrate the robustness of our approach, we conduct a design-space exploration (DSE) for the considered target architecture by varying (i) the amount of chip resources designated for per-core prefetch storage and (ii) off-chip bandwidth. We show that the RAP algorithm is robust in that it improves performance across all design points considered. We also identify the Pareto-optimal hardware-software configuration, which delivers 53.66% run-time improvement on average while using only 5.47% more chip area than the bare-bones design. 2011
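The abstract does not spell out the RAP algorithm itself, but the core idea (pick a prefetch distance, then cap it by the number of in-flight prefetches the hardware can track) can be sketched. A minimal Python sketch of that resource-aware capping, using the classic latency/work prefetch-distance rule; all function and parameter names here are invented for illustration, not taken from the paper:

#+begin_src python
import math

def prefetch_distance(miss_latency_cycles, cycles_per_iteration,
                      stride_elems, elems_per_cache_line, max_inflight):
    """Loop-prefetch distance (iterations ahead), capped by the number
    of simultaneous prefetches the hardware supports."""
    # Iterations needed to hide one cache-miss latency (classic rule).
    distance = math.ceil(miss_latency_cycles / cycles_per_iteration)
    # With this stride, a new cache line is touched every
    # `lines_interval` iterations; each line costs one prefetch slot.
    lines_interval = max(1, elems_per_cache_line // stride_elems)
    # Prefetches outstanding at steady state for that distance.
    inflight = math.ceil(distance / lines_interval)
    if inflight > max_inflight:
        # Resource-aware cap: shrink the distance so we never exceed
        # the assumed per-core prefetch storage.
        distance = max_inflight * lines_interval
    return distance

# Example: 200-cycle miss latency, 8 cycles of work per iteration,
# unit stride, 16 elements per line, 4 hardware prefetch slots.
print(prefetch_distance(200, 8, 1, 16, 4))
#+end_src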
* 109(<-471): Functional organization of the major late transcriptional unit of canine adenovirus type 2
Vectors derived from canine adenovirus type 2 (CAV-2) are attractive candidates for gene therapy and live recombinant vaccines. CAV-2 vectors described thus far have been generated by modifying the virus genome, most notably early regions 1 and 3 or the fiber gene. Modification of these genes was underpinned by previous descriptions of their mRNA and protein-coding sequences. Similarly, the construction of new CAV-2 vectors bearing changes in other genomic regions, in particular many of those expressed late in the viral cycle, will require prior characterization of the corresponding transcriptional units. In this study, we provide a detailed description of the late transcriptional organization of the CAV-2 genome. We examined the major late transcription unit (MLTU) and determined its six families of mRNAs controlled by the putative major late promoter (MLP). All mRNAs expressed from the MLTU had a common non-coding tripartite leader (TPL; 224 nt) at their 5' end. In transient transfection assays, the predicted MLP sequence was able to direct luciferase gene expression, and the TPL sequence yielded a higher amount of transgene product. Identification of viral transcriptional products following in vitro infection confirmed most of the predicted protein-coding regions that were deduced from computer analysis of the CAV-2 genome. These findings contribute to a better understanding of gene expression in CAV-2 and lay the foundation required for genetic modifications aimed at vector optimization. 2009

* 112(<- 9): A new methodology for investigating the cost-optimality of energy retrofitting a building category
According to the Energy Performance of Buildings Directive (EPBD) Recast, building energy retrofitting should aim "to achieving cost-optimal levels". However, the detection of cost-optimal levels for an entire building stock is a complex task. This paper tackles this issue by introducing a novel methodology aimed at supporting robust cost-optimal energy retrofit solutions for building categories. Since the members of one building category exhibit highly different energy performance, they cannot be correctly represented by only one reference design as stipulated by the EPBD Recast. Therefore, a representative building sample (RBS) is used here to consider potential variations in all parameters affecting energy performance. Simulation-based uncertainty analysis is employed to identify the optimal RBS size, followed by simulation-based sensitivity analysis to identify proper retrofit actions. Then post-processing is performed to investigate the cost-effectiveness of all possible retrofit packages, including energy-efficient HVAC systems, renewables, and energy saving measures. The methodology is denoted as SLABE, 'Simulation-based Large-scale uncertainty/sensitivity Analysis of Building Energy performance'. It employs EnergyPlus and MATLAB(R). For demonstration, SLABE is applied to office buildings built in South Italy during 1920-1970. The results show that the cost-optimal retrofit package includes the installation of a condensing gas boiler, a water-cooled chiller and a full-roof photovoltaic system. (C) 2015 Elsevier B.V. All rights reserved. 2015
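SLABE itself runs EnergyPlus models driven from MATLAB; as a loose stand-in for its uncertainty/sensitivity loop, the following sketch samples uncertain building parameters, pushes them through a toy heating-demand surrogate (an assumption replacing the real simulator), and ranks inputs by correlation with the output. Parameter names and ranges are invented:

#+begin_src python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # size of the representative building sample

# Toy uncertain parameters for a building category (assumed ranges).
u_wall = rng.uniform(0.5, 2.5, n)   # wall U-value, W/(m2 K)
u_win  = rng.uniform(1.5, 5.5, n)   # window U-value, W/(m2 K)
ach    = rng.uniform(0.3, 1.2, n)   # air changes per hour
eff    = rng.uniform(0.7, 0.95, n)  # heating system efficiency

# Toy annual heating-demand surrogate (stand-in for EnergyPlus).
demand = (30 * u_wall + 12 * u_win + 40 * ach) / eff

# Uncertainty analysis: spread of performance across the category.
print(f"mean = {demand.mean():.1f}, 5th-95th pct = "
      f"{np.percentile(demand, 5):.1f}..{np.percentile(demand, 95):.1f}")

# Sensitivity analysis: rank inputs by |correlation| with the output.
for name, x in [("u_wall", u_wall), ("u_win", u_win),
                ("ach", ach), ("eff", eff)]:
    print(f"{name:7s} r = {np.corrcoef(x, demand)[0, 1]:+.2f}")
#+end_src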
* 113(<- 84): Recent development and application of several high-efficiency surface heat exchangers for energy conversion and utilization
In the present study, recent research on three kinds of surface heat exchangers, i.e., shell-and-tube heat exchangers with helical baffles, air-cooled heat exchangers used in large air-cooled systems, and primary surface heat exchangers, is reviewed. They are used in energy conversion and utilization for liquid-to-liquid, gas-to-gas and liquid-to-gas heat exchange, respectively. It can be concluded that helical-baffled shell-and-tube heat exchangers (STHXs) should be used to replace the conventional segmental-baffled STHXs in industry, although a great deal of research work remains to be done, especially on the novel combined helical baffles. The primary surface gas-to-gas heat exchangers are developing toward more complex 3D CC primary surfaces, such as the double-wave CC primary surface, offset-bubble primary surface and 3D anti-phase secondary corrugation. More attention should be paid to the overall performance of air-cooled heat exchangers in the air cooling system and to their multi-objective optimization, considering heat transfer, pumping power, space usage and other economic factors. (C) 2014 Elsevier Ltd. All rights reserved. 2014

* 114(<-432): Multi-criteria Axiomatic Design Approach to Evaluate Sites for Grid-connected Photovoltaic Power Plants: A Case Study in Turkey
Global warming and climate change are the most serious problems for developing countries as well as the rest of the world. Therefore, the usage of renewable energy sources is gaining importance for sustainable energy development and environmental protection. Turkey is one of the developing countries whose demand for electricity is sharply increasing. Among renewable sources, solar power is well suited to meeting this demand, owing to Turkey's high solar energy potential. In this article, a multi-criteria axiomatic design (AD) approach is proposed for the evaluation of sites for a grid-connected photovoltaic power plant (GCPP) in Turkey. For this aim, four evaluation criteria that strongly influence the determination of a GCPP site are taken into account. 2010

* 115(<- 45): Cyclone optimization by COMPLEX method and CFD simulation
The most important performance parameters in cyclone design are the pressure drop and collection efficiency. In general, the best designs provide relatively high efficiency with a low pressure drop. In this study, the multiobjective optimization of cyclones operating with a low particle loading (15 g/m(3)) and small particle diameter (5 mu m to 15 mu m) is performed using the COMPLEX algorithm, a constrained derivative-free optimization method. The objective function is formulated to maximize the collection efficiency subject to a maximum pressure drop restriction. All objective function evaluations are carried out by CFD simulation with the code CYCLO-EE5, based on the Eulerian multi-fluid concept. An optimized cyclone design is obtained by applying the proposed methodology in a feasible time (15 days of computational effort). In comparison with the Stairmand and Lapple cyclones, the collection efficiency was 3.5% and 9.2% higher and the pressure drop was 63% and 11.4% lower, respectively. The increase in collection efficiency with lower pressure drop was due to the displacement of the tangential velocity peak toward the wall and an increase in the tangential velocity near the wall. (C) 2015 Elsevier B.V. All rights reserved. 2015
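The paper pairs the COMPLEX (Box) method with CFD evaluations; the same problem shape (maximize efficiency subject to a pressure-drop cap, no gradients) can be sketched with SciPy's COBYLA, another constrained derivative-free method. The analytic efficiency and pressure-drop functions below are toy stand-ins for the CFD code, and the two design variables are an assumed parameterization:

#+begin_src python
import numpy as np
from scipy.optimize import minimize

# Toy design variables: x = (vortex finder diameter, inlet height),
# normalized to the cyclone body diameter (assumed parameterization).
def efficiency(x):        # stand-in for the CFD-computed efficiency
    return 1.0 - 0.5 * x[0] - 0.2 * x[1]

def pressure_drop(x):     # stand-in for the CFD-computed pressure drop
    return 1.0 / (x[0] ** 2 * x[1])

MAX_DP = 8.0  # assumed pressure drop cap

res = minimize(
    lambda x: -efficiency(x),          # maximize efficiency
    x0=np.array([0.5, 0.4]),
    method="COBYLA",
    constraints=[
        {"type": "ineq", "fun": lambda x: MAX_DP - pressure_drop(x)},
        # COBYLA takes bounds as inequality constraints:
        {"type": "ineq", "fun": lambda x: x[0] - 0.2},
        {"type": "ineq", "fun": lambda x: 0.8 - x[0]},
        {"type": "ineq", "fun": lambda x: x[1] - 0.2},
        {"type": "ineq", "fun": lambda x: 0.8 - x[1]},
    ],
)
print(res.x, -res.fun, pressure_drop(res.x))
#+end_src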
* 116(<- 59): Design of a novel gas cyclone vortex finder using the adjoint method
Gas cyclones have many industrial applications in solid-gas separation. The vortex finder is an essential part of a gas cyclone, whose shape and diameter highly affect the cyclone performance. Many optimization studies have been conducted to optimize the cylindrical vortex finder diameter. This study introduces a new vortex finder shape optimized for minimum pressure drop using the discrete adjoint method. The new optimum cyclone saves 66% of the driving power needed for the Stairmand cyclone. To efficiently perform the grid independence study for the new cyclone, a new framework using the adjoint solver and the grid convergence index is proposed and tested. The proposed framework relies on local mesh adaptation instead of the global mesh refinement approach. A comparison of numerical simulations of the new cyclone and the Stairmand cyclone confirms the superior performance of the new vortex finder shape in terms of pressure drop and cut-off diameter. The results of this study open a new era in gas cyclone geometry optimization using the adjoint method instead of the traditional surrogate-based optimization technique. Moreover, the computational cost of grid independence studies will be reduced via the application of adjoint methods. (C) 2015 Elsevier B.V. All rights reserved. 2015

* 119(<-212): Investigation of PWR core optimization using harmony search algorithms
This work addresses applications of the classical harmony search (HS), improved harmony search (IHS) and harmony search with differential-mutation-based pitch adjustment (HSDM) to PWR core reloading pattern optimization problems. The proper loading pattern of fuel assemblies (FAs) depends on both neutronic and thermal-hydraulic aspects; obtaining an optimal arrangement of FAs in a core that meets specific objective functions is a complex problem. In this paper, in the first step, the HS, IHS and HSDM methods are implemented and compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, to evaluate the proposed techniques in PWR cores, maximization of the multiplication factor k(eff), minimization of the power peaking factor (PPF) and power density flattening are chosen as neutronic objective functions for two PWR test cases, although other variables can be taken into account. In the third step, maximizing the core average critical heat flux (CHF) and avoiding void generation throughout the core are the two thermal-hydraulic objective functions added to the desired neutronic objective functions. For neutronic and thermal-hydraulic computation, the PARCS (Purdue Advanced Reactor Core Simulator) and COBRA-EN codes are used, respectively. Coupling harmony search with the PARCS and COBRA-EN codes, we developed a core reloading pattern optimization code. The results, convergence rate and reliability of the techniques are quite promising and show that the harmony algorithms perform very well. Furthermore, it is found that harmony searches have potential for optimization applications in other nuclear engineering fields. (C) 2013 Elsevier Ltd. All rights reserved. 2013
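For reference, the classical harmony search step (memory consideration, pitch adjustment, random selection, replace-worst) is compact enough to sketch in full. This is the textbook HS on a placeholder objective, not the coupled PARCS/COBRA-EN reloading problem; all parameter values are conventional defaults, not the paper's:

#+begin_src python
import numpy as np

rng = np.random.default_rng(1)

def f(x):  # placeholder objective (the paper benchmarks on Shekel's Foxholes)
    return np.sum(x ** 2)

dim, lo, hi = 5, -5.0, 5.0
HMS, HMCR, PAR, BW, iters = 20, 0.9, 0.3, 0.1, 5000

# Harmony memory: HMS random solutions plus their objective values.
hm = rng.uniform(lo, hi, (HMS, dim))
fit = np.array([f(x) for x in hm])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < HMCR:              # memory consideration
            new[j] = hm[rng.integers(HMS), j]
            if rng.random() < PAR:           # pitch adjustment
                new[j] += BW * rng.uniform(-1, 1)
        else:                                # random selection
            new[j] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    fn = f(new)
    worst = np.argmax(fit)
    if fn < fit[worst]:                      # replace worst harmony
        hm[worst], fit[worst] = new, fn

print(f"best value: {fit.min():.6f}")
#+end_src

IHS and HSDM differ mainly in how PAR/BW are scheduled and how the pitch adjustment is performed (differential mutation in HSDM), so they drop into the same loop.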
* 122(<-591): Optimization of a reverse osmosis system using genetic algorithm
Reverse Osmosis (RO) has found extensive application in industry as a highly efficient separation process. In most cases, it is required to select the optimum set of operating variables such that the performance of the system is maximized. In this work, an attempt has been made to optimize the performance of an RO system with a cellulose acetate membrane to separate a NaCl-water system using a Genetic Algorithm (GA). GAs are faster and more efficient than conventional gradient-based optimization techniques. The optimization problem was to maximize the observed rejection of the solute by varying the feed flowrate and overall permeate flux across the membrane for a constant feed concentration. To model the system, a well-established transport model for RO systems, the Spiegler-Kedem model, was used. It was found that the GA converged rapidly to the optimal solution, reaching the optimum solution set at the 8th generation. The effect of varying GA parameters such as population size, crossover probability, and mutation probability was also studied; varying these computational parameters significantly affected the results. 2006

* 123(<-530): Multi-objective optimization in combinatorial chemistry applied to the selective catalytic reduction of NO with C3H6
A high-throughput approach, aided by a multi-objective design of experiments based on a genetic algorithm, was used to optimize the combinations and concentrations of a noble-metal-free solid catalyst system active in the selective catalytic reduction of NO with C3H6. The optimization framework is based on PISA [S. Bleuler, M. Laumanns, L. Thiele, E. Zitzler, Proc. of EMO'03 (2003) 494], and two state-of-the-art evolutionary multi-objective algorithms, SPEA2 [E. Zitzler, M. Laumanns, L. Thiele, in: K.C. Giannakoglou, et al. (Eds.), Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems (EUROGEN 2001), International Center for Numerical Methods in Engineering (CIMNE), 2002, p. 95] and IBEA [E. Zitzler, S. Kunzli, Conference on Parallel Problem Solving from Nature (PPSN VIII), 2004, p. 832], were used for optimization. Constraints were satisfied by using so-called "repair algorithms". The results show that evolutionary algorithms are valuable tools for screening and optimization of huge search spaces and can be easily adapted to direct the search towards multiple objectives. The best noble-metal-free catalysts found by this method are combinations of Cu, Ni, and Al. Other catalysts active at low temperature include Co and Fe. (C) 2007 Elsevier Inc. All rights reserved. 2007

* 124(<-554): Implementation of the multi-channel monolith reactor in an optimisation procedure for heterogeneous oxidation catalysts based on genetic algorithms
A multi-criteria optimisation procedure based on genetic algorithms is carried out in search of advanced heterogeneous catalysts for total oxidation. Simple but flexible software routines have been created to be applied within a search space of more than 150,000 individuals. The general catalyst design includes mono-, bi- and trimetallic compositions assembled out of 49 different metals and deposited on an Al2O3 support at up to nine amount levels. As an efficient tool for high-throughput screening, perfectly matched to the requirements of heterogeneous gas phase catalysis (especially for applications technically run in honeycomb structures), the multi-channel monolith reactor is implemented to evaluate the catalyst performances. Out of a multi-component feed gas, the conversion rates of carbon monoxide (CO) and a model hydrocarbon (HC) are monitored in parallel. In combination with further restrictions on preparation and pretreatment, a primary screening can be conducted, promising to provide results close to technically applied catalysts. Presented are the resulting performances of the optimisation process for the first catalyst generations and the prospect of its auto-adaptation to specified optimisation goals. 2007

* 138(<-320): A Multi-Objective Approach to Force Field Optimization: Structures and Spin State Energetics of d(6) Fe(II) Complexes
The next generation of force fields (FFs), regardless of the accuracy of the potential energy representation, will always have parameters that must be fitted in order to reproduce experimental and/or ab initio data accurately. Single-objective methods have been used for many years to automate the fitting of parameters, but this leads to ambiguity: the solution depends on the chosen weights and is therefore not unique. There have been few advances in solving this problem, which thus remains a major hurdle for the development of empirical FF methods. We propose a solution based on multi-objective evolutionary algorithms (MOEAs). MOEAs allow the FF to be tuned against the desired objectives and offer a powerful, efficient, and automated means to reparameterize FFs, or even discover the parameters for a new potential. Here, we illustrate the application of MOEAs by reparameterizing the ligand field molecular mechanics (LFMM) FF recently reported for modeling spin crossover in iron(II)-amine complexes (Deeth et al., J. Am. Chem. Soc. 2010, 132, 6876). We quickly recover the performance of the original parameter set and then significantly improve it to reproduce the geometries and spin state energy differences of an extended series of complexes, with RMSD errors in Fe-N and N-N distances reduced from 0.06 angstrom to 0.03 angstrom and spin state energy difference RMSDs reduced from 1.5 kcal mol(-1) to 0.2 kcal mol(-1). The new parameter sets highlight, and help resolve, shortcomings both in the non-LFMM FF parameters and in the interpretation of experimental data for several other Fe(II)N6 amine complexes not used in the FF optimization. 2012
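Common to the MOEA work above (SPEA2, IBEA, and the FF reparameterization) is maintaining a set of non-dominated solutions rather than a single weighted optimum. A minimal Pareto-dominance filter, with toy objective vectors standing in for pairs like (geometry RMSD, spin-state energy RMSD); the data are invented:

#+begin_src python
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F (all objectives minimized)."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(F, i, axis=0)
        # i is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        keep[i] = not np.any(np.all(others <= F[i], axis=1) &
                             np.any(others < F[i], axis=1))
    return np.flatnonzero(keep)

rng = np.random.default_rng(2)
# Toy objective vectors for 50 candidate parameter sets.
F = rng.uniform(0, 1, (50, 2))
front = pareto_front(F)
print(f"{len(front)} non-dominated parameter sets:", front)
#+end_src

Selection pressure in SPEA2/IBEA is then built on top of this dominance relation (strength/density in SPEA2, a quality indicator in IBEA).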
* 154(<-673): Multiobjective Linear Programming model on injection oilfield recovery system
This paper proposes a Multiobjective Linear Programming (MLP) model of an injection oilfield recovery system. A modified interior-point algorithm for MLP problems has been constructed using concepts from Karmarkar's interior-point algorithm and the Analytic Hierarchy Process (AHP). This algorithm is shown to be likely more efficient than other MLP algorithms for decision making in the petroleum industry, as demonstrated through a numerical example. The MLP model's optimal solution allows decision makers to optimally design the development plan of the injection oilfield recovery system. (C) 1998 Elsevier Science Ltd. All rights reserved. 1998

* 155(<-680): PERCEPTRONS PLAY THE REPEATED PRISONER'S DILEMMA
We examine the implications of bounded rationality in repeated games by modeling the repeated game strategies as perceptrons (F. Rosenblatt, ''Principles of Neurodynamics,'' Spartan Books, and M. Minsky and S. A. Papert, ''Perceptrons: An Introduction to Computational Geometry,'' MIT Press, Cambridge, MA, 1988). In the prisoner's dilemma game, if the cooperation outcome is Pareto efficient, then we can establish the folk theorem by perceptrons with single associative units (Minsky and Papert), whose computational capability barely exceeds what we would expect from players capable of fictitious play (e.g., L. Shapley, Some topics in two-person games, Adv. Game Theory 5 (1964), 1-28). (C) 1995 Academic Press, Inc. 1995
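To make the "strategies as single-unit perceptrons" idea concrete: tit-for-tat can be written as a threshold unit over the opponent's last action. A self-contained sketch under assumed payoffs and an assumed 1 = cooperate / 0 = defect encoding (the paper's formal construction is not reproduced here):

#+begin_src python
def perceptron_strategy(weights, theta, history):
    """Single threshold unit: cooperate iff w . x >= theta, where x is
    the opponent's recent actions (most recent first)."""
    x = history[::-1]
    return 1 if sum(w * a for w, a in zip(weights, x)) >= theta else 0

# Tit-for-tat as a perceptron: weight 1 on the opponent's last move,
# threshold 1 (cooperate exactly when the opponent just cooperated).
TFT = ([1.0], 1.0)

PAYOFF = {(1, 1): (3, 3), (1, 0): (0, 5),   # (me, them) -> payoffs
          (0, 1): (5, 0), (0, 0): (1, 1)}   # assumed PD payoff matrix

def play(rounds=10):
    h1, h2 = [1], [1]          # assume both players open by cooperating
    total = [0, 0]
    for _ in range(rounds):
        a1 = perceptron_strategy(*TFT, h2)
        a2 = perceptron_strategy(*TFT, h1)
        p1, p2 = PAYOFF[(a1, a2)]
        total[0] += p1; total[1] += p2
        h1.append(a1); h2.append(a2)
    return total

print(play())  # mutual cooperation sustains the (3, 3) outcome
#+end_src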
* 157(<-300): Dynamic equivalence by an optimal strategy
Due to the curse of dimensionality, dynamic equivalence remains a computational tool that helps to analyze large amounts of power system information. In this paper, a robust dynamic equivalence is proposed to reduce the computational burden and time consumption that transient stability studies of large power systems represent. The technique is based on a multi-objective optimal formulation solved by a genetic algorithm. A simplification of the Mexican interconnected power system is tested. An index is used to assess the proximity between simulations carried out using the full and the reduced model. Likewise, the use of information stemming from phasor measurement units (PMUs) is assumed, which lends certainty to such information and gives rise to better estimates. (C) 2011 Elsevier B.V. All rights reserved. 2012

* 160(<-558): Optimization strategies in ion chromatography
The ion chromatographer is often concerned with the separation of complex mixtures whose components behave variably, which can make achieving good resolution in a reasonable analysis time extremely difficult. Several optimization strategies have been proposed to solve this problem. The most reliable and least time-consuming strategies apply resolution criteria based on theoretical or empirical retention models to describe the retention of particular components. This review focuses on optimization strategies in ion chromatography, with a detailed description of the ion chromatographic retention model, objective functions, multi-criteria decision making, and peak modeling. 2007

* 166(<-628): Digital filter design using multiple Pareto fronts
Evolutionary approaches have been used in a large variety of design domains, from aircraft engineering to the design of analog filters. Many of these approaches use measures to improve the variety of solutions in the population. One such measure is clustering. In this paper, clustering and Pareto optimisation are combined into a single evolutionary design algorithm. The population is split into a number of clusters, and parent and offspring selection, as well as fitness calculation, are performed on a per-cluster basis. The objective of this is to prevent the system from converging prematurely to a local minimum and to encourage a number of different designs that fulfil the design criteria. Our approach is demonstrated in the domain of digital filter design. Using a polar coordinate based pole-zero representation, two different lowpass filter design problems are explored. The results are compared to designs created by a human expert. They demonstrate that the evolutionary process is able to create designs that are competitive with those created using a conventional design process by a human expert. They also demonstrate that each evolutionary run can produce a number of different designs with similar fitness values but very different characteristics. 2004
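Fitness evaluation for such a pole-zero genome amounts to computing the magnitude response of the candidate filter. A sketch using the polar (radius, angle) representation the abstract mentions, with each pole/zero mirrored in conjugate pairs so coefficients stay real; the specific placements below are invented, not one of the paper's designs:

#+begin_src python
import numpy as np

def magnitude_response(zeros, poles, n_points=512):
    """|H(e^{jw})| for a filter given as polar (radius, angle)
    pole/zero pairs; each pair is mirrored by its conjugate."""
    w = np.linspace(0, np.pi, n_points)
    z = np.exp(1j * w)
    H = np.ones_like(z)
    for r, th in zeros:
        H *= (z - r * np.exp(1j * th)) * (z - r * np.exp(-1j * th))
    for r, th in poles:
        H /= (z - r * np.exp(1j * th)) * (z - r * np.exp(-1j * th))
    return w, np.abs(H)

# Toy lowpass candidate: a double zero at z = -1 kills the Nyquist end,
# poles near z = +1 (inside the unit circle) boost low frequencies.
w, mag = magnitude_response(zeros=[(1.0, np.pi)], poles=[(0.9, 0.1)])
mag /= mag[0]  # normalize DC gain to 1
print(f"gain at DC: {mag[0]:.2f}, at Nyquist: {mag[-1]:.4f}")
#+end_src

A per-cluster fitness would then score this response against the passband/stopband template of the design problem.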
* 179(<-397): Tolerance design optimization on cost-quality trade-off using the Shapley value method
Part tolerance design is important in the manufacturing process of many complex products because it directly affects manufacturing cost and product quality. It is essential to develop a reasonable tolerance scheme considering the demands of cost and quality, to reduce production risk and provide a guide for supplier management. Traditionally, some kind of cost objective function or variation propagation model is often applied in part tolerance design. Moreover, designers usually solve the tolerance design problem by constructing a single-objective model, dealing with several single-objective problems, or establishing a comprehensive evaluation function combining several optimization objectives with different weights. These approaches may not adequately consider the interdependent and interactional relations of the various demands and balance them. This paper presents a tolerance design approach for the early design stage of automotive parts based on the Shapley value method (SVM) of coalitional game theory, considering the demands of manufacturing cost and product quality. First, the part tolerance design problem is defined. Measurement data from regular production are collected instead of working with specific objective functions or design models. Then, how the SVM is adopted to solve the tolerance design problem is discussed. Lastly, a tolerance design example of a vehicle front lamp demonstrates the application and the performance of the proposed method. (C) 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved. 2010

* 211(<-324): Mobility Timing for Agent Communities, a Cue for Advanced Connectionist Systems
We introduce a wait-and-chase scheme that models the contact times between moving agents within a connectionist construct. The idea that elementary processors move within a network to get a proper position is borne out both by biological neurons in brain morphogenesis and by agents within social networks. From the former, we take inspiration to devise a medium-term project for new artificial neural network training procedures where mobile neurons exchange data only when they are close to one another in a proper space (are in contact). From the latter, we draw accumulated experience with mobility tracks. We focus on the preliminary step of characterizing the elapsed time between neuron contacts, which results from a spatial process fitting in the family of random processes with memory, where chasing neurons are stochastically driven by the goal of hitting target neurons. Thus, we add an unprecedented mobility model to the literature in the field, introducing a distribution law of the intercontact times that merges features of both the negative exponential and the Pareto distribution laws. We give a constructive description and implementation of our model, as well as a short analytical form whose parameters are suitably estimated in terms of confidence intervals from experimental data. Numerical experiments show the model and related inference tools to be sufficiently robust to cope with two main requisites for its exploitation in a neural network: the non-independence of the observed intercontact times and the feasibility of the model inversion problem to infer suitable mobility parameters. 2011
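The abstract does not give the merged exponential/Pareto law itself; as a deliberately loose stand-in, one can sample intercontact times from a plain exponential-Pareto mixture and observe the Pareto-dominated tail. The mixture form and all parameter values are assumptions for illustration only:

#+begin_src python
import numpy as np

rng = np.random.default_rng(3)

def sample_intercontact(n, p_exp=0.6, lam=1.0, alpha=1.5, xm=1.0):
    """Mixture stand-in for an exponential/Pareto hybrid intercontact
    law: with prob. p_exp draw Exponential(lam), else Pareto(alpha, xm)."""
    is_exp = rng.random(n) < p_exp
    exp_draws = rng.exponential(1.0 / lam, n)
    # Pareto via inverse CDF: x = xm * (1 - u)^(-1/alpha).
    par_draws = xm * (1.0 - rng.random(n)) ** (-1.0 / alpha)
    return np.where(is_exp, exp_draws, par_draws)

t = sample_intercontact(100_000)
# The Pareto component dominates the tail: high quantiles grow
# polynomially rather than logarithmically in 1/(1-q).
for q in (0.5, 0.9, 0.99, 0.999):
    print(f"{q:.3f}-quantile: {np.quantile(t, q):8.2f}")
#+end_src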