Workshops and Tutorials Schedule (PDF)

Introductory Tutorials
Genetic Algorithms | Erik Goodman
Genetic Programming | John Koza
Evolution Strategies | Thomas Baeck
A Unified Approach to EC | Ken DeJong
Evolvable Hardware I | Tetsuya Higuchi
Linear GP | Wolfgang Banzhaf
Ant Colony Optimization | Christian Blum
Particle Swarm Intelligence | Russell Eberhart
Learning Classifier Systems (more info below) | Tim Kovacs
Probabilistic Model-Building GAs (more info below) | Martin Pelikan

Advanced Tutorials
No Free Lunch (NFL) | Darrell Whitley
Genetic Algorithm Theory | Jonathan Rowe
Bioinformatics | James A. Foster
Taxonomy and Coarse Graining in EC (more info below) | Chris Stephens
Multiobjective Optimization with EC (more info below) | Eckart Zitzler
Computational Complexity and EC | Ingo Wegener
Evolvable Hardware II | Adrian Stoica
Representations | Franz Rothlauf
Building on Biological Evolution | Ingo Rechenberg
Principled Efficiency Enhancement (more info below) | Kumara Sastry
Generalized Hill Climbing Algorithms (more info below) | Sheldon H. Jacobson
Statistics for EC | Steffan Christensen, Mark Wineberg

Tutorials on Specialized Techniques and Applications
Symbolic Regression in Genetic Programming (more info below) | Maarten Keijzer
Grammatical Evolution | Conor Ryan
Quantum Computing | Lee Spector
Evolutionary Robotics | Dario Floreano
Evolutionary Music | Al Biles
Evolution and Resiliency | Terry Soule
Evolutionary Design | Ian Parmee
Interactive Evolution | Hideyuki Takagi
Optimization of Dynamic Environments | Juergen Branke
Spatially Structured EAs | Marco Tomassini
Industrial Evolutionary Computation | A. Kordon, G. Smits, M. Kotanchek
In Vitro Molecular Evolution (more info below) | Byoung-Tak Zhang
Evolving Neural Networks (more info below) | Risto Miikkulainen
Experimental Research in EC | Mike Preuss, Thomas Bartz-Beielstein
Fitness Approximation in EC (more info below) | Yaochu Jin, Khaled Rasheed
Constraint-handling Techniques used with EAs (more info below) | Carlos Coello-Coello
The XCS Learning Classifier System: From Theory to Application (more info below) | Martin Butz
Experiences Implementing a GA-Based Optimizer in an Aerospace Engineering Application (more info below) | Thomas Dickens
Fitness Landscapes and Problem Difficulty (more info below) | Jean-Paul Watson
--------------------------------------------------------------------------------------
Introductory
Tutorials
Learning Classifier Systems
Description: Since the first Learning Classifier System
(LCS) was introduced by Holland and Reitman in 1978,
the LCS paradigm has broadened greatly into a framework
encompassing many representations, rule discovery mechanisms,
and credit assignment schemes. Current LCS applications
range from data mining to automated innovation to on-line
control. Classifier systems are currently enjoying
a renaissance, with newer approaches, in particular
Wilson's accuracy-based XCS, receiving a great deal
of attention. LCS are also benefiting from advances
in the field of Reinforcement Learning, and there is
a trend toward developing connections between the two
areas. The tutorial begins with an introduction to
the basics of classifier systems, reviews more advanced
techniques such as niche genetic algorithms and macroclassifiers,
and introduces XCS and contrasts it with earlier systems.
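As a point of reference for the "basics of classifier systems" covered first, the
sketch below shows the ternary rule representation and the match-set computation
common to Michigan-style LCSs. It is a minimal illustrative sketch in Python; the
class and function names and the fixed initial strength are assumptions of this
example, not material from the tutorial.

    # Minimal sketch of a ternary-condition classifier and the matching step used
    # in many Michigan-style LCS variants. Names and the simple default strength
    # are illustrative assumptions, not the tutorial's actual material.
    from dataclasses import dataclass

    @dataclass
    class Classifier:
        condition: str   # string over {'0', '1', '#'}; '#' matches either bit
        action: int
        strength: float = 10.0

    def matches(cl, state):
        """A classifier matches if every non-'#' position equals the input bit."""
        return all(c == '#' or c == s for c, s in zip(cl.condition, state))

    def match_set(population, state):
        """The match set [M]: all classifiers whose condition matches the state."""
        return [cl for cl in population if matches(cl, state)]

    # Example: one general ('#'-containing) and one fully specific classifier.
    pop = [Classifier("1#0", action=1), Classifier("110", action=0)]
    print([cl.condition for cl in match_set(pop, "110")])   # both match "110"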
Speaker Bio:
Tim Kovacs, Lecturer
in Machine Learning, Department of Computer Science,
University of Bristol, www.cs.bris.ac.uk/~kovacs/,
holds a BA in Psychology, an MSc in Computer
Science and a PhD in Machine Learning. His main
interests are in intelligence, adaptive behaviour
and machine learning, with an emphasis on evolutionary
and other stochastic optimisation methods (e.g.
genetic algorithms and ant colony optimisation)
and reinforcement learning. Dr. Kovacs has published
20 papers in the last 4 years in areas including
learning classifier
systems theory, methodological issues in machine learning,
the complexity of learning, game playing and the behaviour
of social insects.
His PhD thesis won a British Computer
Society / Conference of Professors and Heads of Computing
Distinguished Dissertation Award and was published
by Springer-Verlag. He compiled and maintains the on-line
Learning Classifier Systems bibliography. He is an
associate editor of the IEEE Transactions on Evolutionary
Computation, has presented several tutorials at major
international conferences and was an invited speaker
at Benelearn 2004. He won a GECCO-2004 best paper award.
He is a co-organiser of the Foundations of Learning
Classifier Systems workshop at PPSN 2004 and of the
inaugural 2004 Bristol/Bath regional workshop on Mathematics,
Computation and Biology.
Probabilistic
Model-Building GAs
Description:
Probabilistic model-building genetic algorithms (PMBGAs)
replace the traditional variation operators of genetic and evolutionary algorithms
with (1) building a probabilistic model of promising
solutions and (2) sampling the built model to generate
new candidate solutions. Replacing traditional crossover
and mutation operators with building and sampling a probabilistic
model of promising solutions enables the use of machine
learning techniques for automatic discovery of problem
regularities and exploitation of these regularities
for effective exploration of the search space. Using
machine learning in optimization enables the design
of optimization techniques that can automatically adapt
to the given problem.
There are many successful applications
of PMBGAs, for example, Ising spin glasses in 2D and
3D, graph partitioning, MAXSAT, feature subset selection,
forest management, groundwater remediation design,
telecommunication network design, antenna design, and
scheduling. PMBGAs are also known as estimation of
distribution algorithms (EDAs) and iterated density-estimation
algorithms (IDEAs). The tutorial Probabilistic Model-Building
GAs will provide a gentle introduction to PMBGAs with
an overview of major research directions in this area.
Strengths and weaknesses of different PMBGAs will be
discussed and suggestions will be provided to help
practitioners to choose the best PMBGA for their problem.
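To make the two-step loop above concrete, here is a minimal univariate PMBGA
(UMDA-style) sketch in Python. The OneMax test problem and all parameter values
are illustrative assumptions; real PMBGAs such as BOA build far richer models
than the per-bit marginals used here.

    # Minimal univariate PMBGA/EDA sketch (UMDA-style) showing the two steps
    # named above: (1) build a probabilistic model of selected solutions, then
    # (2) sample it to create the next population. Problem and parameters are
    # illustrative assumptions.
    import random

    def onemax(x):                      # toy fitness: count of 1-bits
        return sum(x)

    def umda(n_bits=20, pop_size=50, n_select=25, generations=30):
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the better half of the population.
            selected = sorted(pop, key=onemax, reverse=True)[:n_select]
            # (1) Model building: per-bit marginal probability of a 1.
            p = [sum(ind[i] for ind in selected) / n_select for i in range(n_bits)]
            # (2) Sampling: draw a new population from the model.
            pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
                   for _ in range(pop_size)]
        return max(pop, key=onemax)

    print(onemax(umda()))               # typically close to n_bits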
Speaker Bio:
Martin Pelikan received his Ph.D. in Computer
Science from the University of Illinois at Urbana-Champaign
in 2002.
Since 2003, he has been an assistant professor at the
Dept. of Mathematics and Computer Science at the University
of Missouri at St. Louis. Prior to joining the University
of Missouri, he worked at the Illinois Genetic Algorithms
Laboratory (IlliGAL), the German National Center for
Information Technology at Sankt Augustin, the Slovak
University of Technology at Bratislava, and the Swiss
Federal Institute of Technology (ETH) at Zurich.
Pelikan has worked as a researcher in genetic and evolutionary
computation since 1995. His most important contributions
to genetic and evolutionary computation are the Bayesian
optimization algorithm (BOA), the hierarchical BOA
(hBOA), the scalability theory for BOA and hBOA,
and the efficiency enhancement techniques for BOA
and hBOA. His current research focuses on extending
BOA and hBOA to other problem domains, applying genetic
and evolutionary algorithms to real-world problems
with the focus on physics and machine learning, and
designing new efficiency enhancement techniques for
BOA and other evolutionary algorithms.
-------------------------------------------------------------------------------------
Advanced
Tutorials
Taxonomy
and Coarse Graining in EC
Description: The theory of genetic dynamics, in both population biology
and evolutionary computation, is difficult. In the latter it is also
relatively undeveloped: there exist many different types of evolutionary
algorithm, such as genetic algorithms, genetic programming, evolution
strategies, etc., whose theoretical relationships are not very clear.
Even for a particular type of algorithm, such as genetic algorithms,
the theoretical underpinnings have traditionally consisted of apparently
antagonistic elements: on one side, Holland's Schema theorem and the
associated Building Block Hypothesis, and on the other, exact, microscopic
models such as the Vose model. Besides offering a conceptual, qualitative
understanding, theory should also make quantitative predictions. In this
respect, "engineering-type" approximate models have had more success than
their more mathematically rigorous, exact counterparts. However, there are
still no known systematic, general approximation schemes for describing
the dynamics of an evolutionary algorithm.
Theoretical understanding is greatly facilitated by an adequate taxonomy -
a practical example being the periodic table in chemistry. In this tutorial
I show how recent developments in exact, coarse-grained (schema-based)
formulations of genetic dynamics point the way to a more adequate taxonomy
of evolutionary algorithms. In particular, I will show how genetic algorithms
and genetic programming can be "unified" in this way, leading to exact schema
theorems for both, a deeper, more rigorous understanding of Holland's Schema
theorem and the Building Block Hypothesis, an extension of these to genetic
programming, and a reconciliation of them with microscopic formulations such
as the Vose model. We will see how the different formulations are simply
coordinate transformations of one another in the space of representations.
I will also show how tools and concepts from theoretical physics, such as the
renormalization group, can further the theoretical understanding of genetic
dynamics and also offer potential systematic approximation schemes for
describing them.
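For reference, one common textbook form of Holland's Schema theorem, one of the
elements the tutorial reconciles with exact coarse-grained and microscopic models,
is the following (standard notation; not necessarily the form used in the tutorial):

    % One common form of Holland's Schema theorem, stated here for reference.
    E\!\left[m(H,t+1)\right] \;\ge\;
    m(H,t)\,\frac{f(H,t)}{\bar{f}(t)}
    \left[1 - p_c\,\frac{\delta(H)}{\ell-1} - o(H)\,p_m\right]

Here m(H,t) is the number of instances of schema H at generation t, f(H,t) its
observed mean fitness, \bar{f}(t) the population mean fitness, \delta(H) the
defining length, o(H) the order, \ell the string length, and p_c and p_m the
crossover and mutation probabilities.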
Speaker
Bio:
Chris Stephens is Professor at the Institute
for Nuclear Sciences of the Universidad Nacional
Autonoma de Mexico. After receiving his undergraduate
degree at The Queen's College, Oxford he
completed his graduate work at the University
of Maryland in the area of theoretical physics.
He then had several postdoctoral positions,
including the University of Utrecht, where
he worked with Gerard 't Hooft, the 1999
Nobel Laureate in Physics, and a Marie Curie
Fellowship at the Dublin Institute for Advanced
Studies. He has had visiting positions at
various leading academic institutions, including
the Weizmann Institute, the Joint Institute for Nuclear Research, Dubna,
the University of Birmingham, and others. He is a founding partner
of Adaptive Technologies Inc., a research company dedicated
to the production of agent-based technologies for dynamical
optimization in finance and industry. He is author
or co-author of over 80 publications and his work has
been cited over 800 times. He has given over 120 invited
lectures in more than 20 countries. Among the academic
honours he has received is the Jorge Lomnitz Prize
of the Mexican Academy of Sciences. He is also a member
of the editorial board of Genetic Programming and Evolvable
Hardware.
His research interests are very broad; he has published in a wide array of international
journals, ranging from Classical and Quantum Gravity to the Journal of Molecular
Evolution. An overriding theme, however, has been the Renormalization Group -
a general methodology for solving complex, non-linear problems with many degrees
of freedom via coarse graining - and, more recently, its application to the area
of genetic dynamics. His principal contribution in Evolutionary Computation has
been to show how exact coarse-grained formulations lead to a unification and
reconciliation of many previously antagonistic theoretical elements, such as
Holland's Schema theorem and the Vose model.
Multi-objective
Evolutionary Optimization
Description: Multiple, often conflicting
objectives arise naturally in most real-world optimization
scenarios. As evolutionary algorithms possess several
characteristics that are desirable for this type of
problem, this class of search strategies has been used
for multiobjective optimization for more than a decade.
Meanwhile evolutionary multiobjective optimization
has become established as a separate subdiscipline
combining the fields of evolutionary computation and
classical multiple criteria decision making. This tutorial
gives an overview of evolutionary multiobjective optimization
with the focus on methods and theory. On the one hand,
basic principles of multiobjective optimization are
presented, and various algorithmic aspects such as
fitness assignment and environmental selection are
discussed in the light of state-of-the-art techniques.
On the other hand, the tutorial covers several theoretical
issues such as performance assessment and running-time
analysis.
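As a concrete anchor for the fitness-assignment discussion, the sketch below shows
the Pareto-dominance test and non-dominated filtering that most evolutionary
multiobjective optimizers build on. The minimization convention, function names,
and sample points are illustrative assumptions.

    # Minimal sketch of the Pareto-dominance check and non-dominated filtering
    # underlying fitness assignment in most evolutionary multiobjective optimizers.
    # Function names and the minimization convention are illustrative assumptions.
    def dominates(a, b):
        """a dominates b (minimization): no worse in every objective and
        strictly better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated(points):
        """Points not dominated by any other point: the set's Pareto front approximation."""
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    objectives = [(1.0, 5.0), (2.0, 2.0), (3.0, 4.0), (4.0, 1.0)]
    print(non_dominated(objectives))   # [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0)]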
Speaker Bio:
Eckart Zitzler
received degrees from University of Dortmund
in Germany (diploma in computer science) and
ETH Zurich in Switzerland (doctor of technical
sciences). Since 2003, he has been Assistant
Professor for Systems Optimization at the Computer
Engineering and Networks Laboratory at the Department
of Information Technology and Electrical Engineering
of ETH Zurich, Switzerland. His research focuses
on bio-inspired computation, multiobjective optimization,
computational biology, and computer engineering
applications.
Dr. Zitzler
was General Co-Chairman of the first two international
conferences on evolutionary multi-criterion
optimization (EMO 2001 and EMO 2003), held in Zurich,
Switzerland, and Faro, Portugal,
respectively.
Principled
Efficiency Enhancement
Description: A key
challenge in genetic and evolutionary computation (GEC)
research
is the design of competent genetic
algorithms (GAs). By
competent we mean GAs that can solve hard
problems quickly, reliably, and accurately, and much progress
has been made
along these lines (see for example, Goldberg, D. E.
(2001). Design
of Innovation). In essence, competent GA
design takes problems
that were intractable with first generation GAs and
renders them
tractable, oftentimes requiring only a subquadratic
number of fitness
evaluations. However, for large-scale problems, the
task of computing
even a subquadratic number of function evaluations
can be daunting.
This is especially the case if the fitness evaluation
is a complex
simulation, model, or computation. This places a premium
on a variety
of efficiency enhancement techniques.
While competence leads us from intractability to tractability,
efficiency enhancement takes us from tractability to practicality.
In this tutorial, we will consider a four-part decomposition of
efficiency-enhancement techniques: (1) Parallelization, (2) Hybridization,
(3) Time Continuation, and (4) Evaluation Relaxation.
We will develop a principled design methodology for the different methods
in each of the efficiency-enhancement technique categories
using facetwise modeling and dimensional arguments.
The principled design methodology not only enables us to predict the
maximum speed-up and scalability of each of the efficiency-enhancement
methods, but also yields practical guidelines for using them in real-world
problems.
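As one concrete illustration of category (4), evaluation relaxation, the sketch
below swaps a cheap surrogate for the expensive fitness function on a fraction of
evaluations. The toy problem, the surrogate, and the probability value are
assumptions made for illustration, not the tutorial's methodology.

    # Illustrative evaluation-relaxation sketch: evaluate most individuals with a
    # cheap surrogate and only a fraction with the expensive "true" fitness.
    # The problem, surrogate, and probability are assumptions for illustration.
    import random

    def expensive_fitness(x):
        # Stand-in for a costly simulation; here simply OneMax.
        return sum(x)

    def cheap_surrogate(x):
        # Stand-in for an approximate model: fitness estimated from a sample of bits.
        sample = x[::2]
        return len(x) * sum(sample) / len(sample)

    def relaxed_fitness(x, p_true=0.2):
        """With probability p_true pay for the expensive evaluation, else use the surrogate."""
        return expensive_fitness(x) if random.random() < p_true else cheap_surrogate(x)

    ind = [random.randint(0, 1) for _ in range(20)]
    print(expensive_fitness(ind), relaxed_fitness(ind))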
Speaker Bio:
Kumara Sastry is a graduate student of Systems and
Entrepreneurial
Engineering at the University of Illinois and a Member
of the Illinois
Genetic Algorithms Laboratory. He has been actively
consulting on
genetic and evolutionary algorithms to industry, including
a leading
Israeli wireless company and a Fortune 100 company.
His masters thesis
on efficiency enhancement techniques was awarded the
William A.
Chittenden award for best graduate thesis in the Department
of General
Engineering. His research interests include efficiency
enhancement of genetic algorithms, estimation of distribution algorithms,
scalability
of genetic and evolutionary computation, facetwise
analysis of
evolutionary algorithms, and multi-scale modeling in
science and
engineering.
Generalized Hill Climbing Algorithms: Theory and
Practice
Description: Generalized hill climbing (GHC)
algorithms have been introduced as a unifying framework
for addressing intractable discrete optimization problems.
GHC algorithms provide a well-defined structure for
classifying and studying a large body of stochastic
and deterministic search strategies. Simulated annealing,
threshold accepting, and tabu search, among others,
can all be formulated as particular GHC algorithms.
This tutorial reviews the GHC algorithm structure,
and shows how many common search strategies can be
described using the GHC algorithm framework. The advantages
and disadvantages of the GHC framework are presented.
Convergence and performance results for GHC algorithms
are also discussed. Opportunities for future research
with GHC algorithms are presented.
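One illustrative reading of this unifying structure is a single local-search loop
parameterized by an acceptance rule, with simulated annealing and threshold
accepting recovered as particular rules. The sketch below follows that reading;
the function names, schedules, and toy problem are assumptions of this example,
not Jacobson's formal GHC definition.

    # Minimal generalized-hill-climbing-style loop: one local-search skeleton
    # parameterized by an acceptance rule, so that simulated annealing and
    # threshold accepting appear as particular choices of that rule.
    import math, random

    def ghc(initial, neighbor, cost, accept, iterations=1000):
        """Generic local search (minimization): move to a neighbor whenever the
        supplied acceptance rule allows the change in cost at iteration k."""
        x = initial
        best = x
        for k in range(iterations):
            y = neighbor(x)
            delta = cost(y) - cost(x)
            if accept(delta, k):
                x = y
                if cost(x) < cost(best):
                    best = x
        return best

    # Two acceptance rules recovered as special cases (illustrative schedules):
    def simulated_annealing(delta, k, t0=10.0, alpha=0.99):
        t = t0 * alpha ** k
        return delta <= 0 or random.random() < math.exp(-delta / t)

    def threshold_accepting(delta, k, q0=5.0, alpha=0.99):
        return delta <= q0 * alpha ** k

    # Example on a 1-D quadratic; neighbor = small random step.
    result = ghc(5.0, lambda x: x + random.uniform(-0.5, 0.5),
                 lambda x: x * x, simulated_annealing)
    print(round(result, 2))   # close to 0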
Speaker Bio:
Sheldon H. Jacobson
is a Professor, Willett Faculty Scholar, and
Director of the Simulation and Optimization Laboratory
in the Department of Mechanical and Industrial
Engineering at the University of Illinois. He
has a B.Sc. and M.Sc. (both in Mathematics) from
McGill University, and an M.S. and Ph.D. (both
in Operations Research and Industrial Engineering)
from Cornell University. His theoretical research
interests include the analysis and design of
heuristics for intractable discrete optimization
problems. His applied research interests address
problems in the manufacturing, aviation security,
and health-care industries.
In 1998, he received the Application Award
from the Institute of Industrial Engineers Operations
Research Division. In 2002, he was named an Associate
in the Center for Advanced Study at the University
of Illinois, and was awarded the Aviation Security
Research Award by Aviation Security International,
the International Air Transport Association, and the
Airports Council International. In 2003, he received
the Best Paper Award in IIE Transactions Focused Issue
on Operations Engineering and was named a Guggenheim
Fellow by the John Simon Guggenheim Memorial Foundation.
His research has been published in a wide spectrum
of journals, and he has received research funding from
several government agencies and industrial partners,
including the National Science Foundation and the Air
Force Office of Scientific Research.
-------------------------------------------------------------------------------------
Tutorials
on Specialized Techniques and Applications
Symbolic Regression
in GP
Description: The automated induction of mathematical
descriptions of data using genetic programming is commonly
referred to as symbolic regression. The main benefit
of this approach is the perceived usefulness of the
explicit mathematical expressions that are induced.
This tutorial will concentrate on issues and techniques
that arise when trying to evolve such explicit symbolic
descriptions of data: appropriateness of fitness measures; size control;
interpretability; optimization of constants; robustness
and generalization error; and issues arising through finite
floating-point precision. Finally, the Bayesian framework
of induction will be presented in the context of symbolic
regression.
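As a small concrete example of the fitness-measure and robustness issues listed
above, here is a sketch of a mean-squared-error fitness for symbolic regression
with a guard against numerical blow-ups. Representing candidates as plain Python
functions (rather than GP trees) and the chosen data set are simplifying
assumptions for illustration.

    # Minimal symbolic-regression fitness sketch: score a candidate expression by
    # mean squared error on a data set; invalid individuals get the worst score.
    def mse_fitness(candidate, xs, ys):
        """Lower is better; guard against blow-ups from evolved expressions."""
        try:
            errors = [(candidate(x) - y) ** 2 for x, y in zip(xs, ys)]
            return sum(errors) / len(errors)
        except (OverflowError, ZeroDivisionError):
            return float("inf")

    xs = [x / 10 for x in range(-20, 21)]
    ys = [x ** 2 + x for x in xs]                     # hidden target: x^2 + x
    candidates = [lambda x: x ** 2, lambda x: x ** 2 + x, lambda x: 2 * x]
    print([round(mse_fitness(c, xs, ys), 3) for c in candidates])   # middle one wins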
Speaker Bio:
Graduating
in 1995 in Cognitive Artificial Intelligence,
Maarten
Keijzer proceeded to work as a programmer/data
mining consultant at Cap Gemini Adaptive
Systems on the Predictive Data Mining software
called
OMEGA. Here much development was directed
at the induction of explicit symbolic expressions
on data ('symbolic regression'). In 1998
Maarten
Keijzer started to pursue a Ph.D. at the
Danish Technical University in conjunction
with the
Danish Hydraulic Institute under the supervision
of Lars Kai Hansen and Vladan Babovic. The
Ph.D. was concluded in 2001 with the thesis "Genetic
Programming for Scientific
Discovery".
From
2001 onwards he worked at the Free University of Amsterdam,
as a senior researcher at KiQ Ltd (again on the toolset called OMEGA), and as
a senior researcher at WL | Delft Hydraulics. Maarten Keijzer's research interests
have centred around genetic programming, with an emphasis on symbolic regression,
since as early as 1994.
In Vitro Molecular Evolution
Description: Unbeknownst to most evolutionary computation researchers,
biologists and biochemists have long been using the principles of
genetic and evolutionary algorithms to design biomolecules with novel
enzymatic activities. This approach, generally known as in vitro
selection or directed evolution, starts with a library of candidate
molecules and uses biochemical variation and selection techniques to
derive fitter target molecules. On the other hand, biomolecules such as
DNA and RNA provide interesting alternative materials for building
evolutionary computers and other evolvable machines. In vitro molecular
experiments allow us to play with up to the Avogadro number (6 x 10^23)
of individuals in a single population. The existing biochemical
techniques, such as polymerase chain reaction, gel electrophoresis, and
fluorochromatography, provide massively parallel operators for
assembly, replication, variation, and selection of "molecular genetic
programs." In addition to solving computational problems in a more efficient
way, the use of biomolecules in genetic and evolutionary computation opens up
new applications in biomedical research, pharmaceutical industries, and nanotechnology.
This tutorial aims (1) to review recent results on directed evolution from life
sciences and biotechnology and (2) to provide the EC community with new research
issues as to the theory, methodology, technology, and applications inspired by
the in vitro molecular evolution approach. We will discuss the challenges and
opportunities we face as EC researchers. The tutorial assumes an introductory
level of knowledge in genetic and evolutionary computation, but does not require
backgrounds in molecular biology or chemistry. EC researchers interested in bio-
and
nano-technologies would find this tutorial most exciting.
Speaker Bio:
Byoung-Tak
Zhang is currently Associate Professor of the School of Computer Science
and Engineering and the Graduate
Programs in Bioinformatics and
Cognitive Science at Seoul National University (SNU), Korea, and directs
the Biointelligence Laboratory and the Center for Bioinformation Technology
(CBIT). He received his Ph.D. in Computer Science from University of Bonn,
Germany in 1992 and his BS and MS degrees in Computer Science and Engineering
from SNU in 1986 and 1988, respectively. Prior to joining SNU, he had been
a research associate at the German National Research Center
for Information Technology (GMD) from 1992 to 1995.
He was a visiting professor at the MIT Computer Science and Artificial
Intelligence Laboratory (CSAIL)
from August 2003 to August 2004. Byoung-Tak Zhang serves as an associate editor
of IEEE Transactions on Evolutionary Computation, Advances in Natural Computation,
and Genomics & Informatics, and on the editorial board of Genetic Programming
and Evolvable Machines and Applied Soft Computing. His research interests include
probabilistic models of learning and evolution, biomolecular/DNA computing,
and molecular learning/evolvable machines.
Evolutionary Neural
Networks
Description: Neuroevolution, i.e. evolution of artificial
neural networks, has recently emerged as a powerful
technique for solving challenging reinforcement learning
problems. Compared to traditional (e.g. value-function
based) methods, neuroevolution is especially strong
in domains where the state of the world is not fully
known: the state can be disambiguated through recurrency,
and novel situations handled through pattern matching.
In this tutorial, we will review (1) neuroevolution
methods that evolve fixed-topology networks, network
topologies, and network construction processes, (2)
ways of combining traditional neural network learning
algorithms with evolutionary methods, and (3) applications
of neuroevolution to game playing, robot control, resource
optimization, and cognitive science.
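A minimal sketch of the simplest case mentioned in (1), evolving the weights of a
fixed-topology network, is shown below. The network shape, the XOR task, and the
mutation-based variation are illustrative assumptions rather than the specific
methods covered in the tutorial.

    # Minimal fixed-topology neuroevolution sketch: individuals are weight vectors
    # for a tiny feed-forward network, varied by Gaussian mutation and selected by
    # a task-specific fitness. Network shape, task, and parameters are assumptions.
    import math, random

    def forward(weights, x):
        """2-input, 2-hidden, 1-output network; weights laid out as a flat list of 9."""
        w = iter(weights)
        hidden = [math.tanh(next(w) * x[0] + next(w) * x[1] + next(w)) for _ in range(2)]
        return math.tanh(next(w) * hidden[0] + next(w) * hidden[1] + next(w))

    def fitness(weights):
        """Negative squared error on XOR, so higher is better."""
        cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
        return -sum((forward(weights, x) - y) ** 2 for x, y in cases)

    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(30)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]
        pop = parents + [[w + random.gauss(0, 0.3) for w in random.choice(parents)]
                         for _ in range(20)]
    print(round(fitness(max(pop, key=fitness)), 3))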
Speaker Bio:
Risto Miikkulainen
is a Professor of Computer Sciences at the University
of Texas at Austin. He received an M.S. in Engineering
from the Helsinki University of Technology, Finland,
in 1986, and a Ph.D. in Computer Science from
UCLA in 1990. His recent research focuses on
methods for evolving neural networks and applying
these methods
to game playing, robotics, and intelligent control.
He is an author of over 150 articles on neuroevolution,
connectionist natural language processing, and the
computational neuroscience of the visual cortex. He
is an editor of the Machine Learning Journal and Journal
of Cognitive Systems Research.
Fitness Approximation
in Evolutionary Computation
Description: Evolutionary algorithms need a large number of fitness
evaluations to reach an acceptable solution. This poses huge difficulties in employing
EAs to solve a wider range of real-world problems, because fitness evaluation
is often very expensive. This tutorial provides an overview of various methods
for reducing expensive fitness evaluations in evolutionary computation. The tutorial
covers the following major issues:
* Motivations for fitness approximation in evolutionary computation
* General methods for fitness approximation, such as problem approximation,
functional approximation, and fitness inheritance and imitation (see the sketch after this list)
* Various frameworks for using fitness approximations, including multi-population
approaches, informed crossover and mutation, and generation-based and individual-based
evolution control
* A short introduction to different approximate models and learning methods
* Real-world applications of fitness approximation
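The sketch below illustrates one of the general methods named above, fitness
inheritance: a fraction of offspring reuse their parents' average fitness instead
of triggering an expensive evaluation. The toy problem, operators, and inheritance
probability are illustrative assumptions.

    # Minimal fitness-inheritance sketch: some offspring inherit the mean of their
    # parents' fitnesses rather than being evaluated with the expensive function.
    import random

    def expensive_fitness(x):          # stand-in for a costly evaluation
        return sum(x)

    def evaluate_offspring(child, parent_fits, p_inherit=0.7):
        """With probability p_inherit, reuse the parents' average fitness;
        otherwise pay for a true evaluation."""
        if random.random() < p_inherit:
            return sum(parent_fits) / len(parent_fits), False   # inherited, no cost
        return expensive_fitness(child), True                   # true evaluation

    parents = [[random.randint(0, 1) for _ in range(10)] for _ in range(2)]
    parent_fits = [expensive_fitness(p) for p in parents]
    child = [random.choice(bits) for bits in zip(*parents)]      # uniform crossover
    print(evaluate_offspring(child, parent_fits))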
Speaker Bio:
Yaochu Jin received the Ph.D. degree from Zhejiang University, Hangzhou,
China, and the Dr.-Ing. degree from Ruhr-Universitaet Bochum, Bochum,
Germany. He was an Associate Professor with the Electrical Engineering
Department, Zhejiang University, a Visiting Scholar and a Researcher
with the Institut fuer Neuroinformatik, Ruhr-Universitaet Bochum, and
a Postdoctoral Associate with the Industrial Engineering Department,
the State University of New Jersey, Piscataway. He joined Honda R&D
Europe, Offenbach, Germany, in 1999. Currently, he is a Principal Scientist
with the Honda Research Institute Europe.
His
main research interests
are fuzzy systems, neural networks and evolutionary
computation with application to systems control and
design optimization. He is the author of the book "Advanced
Fuzzy Systems Design" (Heidelberg, Germany:Springer,
2003), and the editor of the book "Knowledge Incorporation
in Evolutionary Computation" (Berlin, Germany:Springer,
2004). Dr. Jin is an Associate Editor of the IEEE Transactions
on Control Systems Technology and the IEEE Transactions
on Systems, Man, and Cybernetics, Part C. He is a Guest
Editor of four journal special issues. He serves as
the Chair of the Working Group on "Evolutionary Computation
in Dynamic and Uncertain Environments" within the Evolutionary
Computation Technical Committee of the IEEE Computational
Intelligence Society. He is the Program Chair of the
Second International Conference on Fuzzy Systems and
Knowledge Discovery (FSKD'05), and the Co-chair of
the First and Second European Workshop on Evolutionary
Algorithms in Stochastic and Dynamic Environments.
He is a Senior Member of IEEE, and a member of Council
of Authors of ISGEC.
Speaker Bio:
Dr. Khaled Rasheed is an Assistant Professor at the Computer Science department,
University of Georgia (USA). He received his Ph.D. from Rutgers University
in January 1998 on his work on adapting genetic algorithms for problems
in engineering design. His research interests include artificial intelligence,
genetic algorithms, design optimization, computational biology, and machine
learning. He has served on the program committees of several conferences
including the Genetic and Evolutionary Computation Conference
and the International Conference on Machine Learning,
and as a reviewer for several journals including
the IEEE Transactions on Evolutionary Computation, the
Journal of Artificial Intelligence Research, the Journal
of Machine Learning Research, Machine Learning Journal
and Artificial Intelligence in Engineering Design and
Manufacturing.
Constraint-Handling
Techniques used with Evolutionary Algorithms
Description: When
used for global optimization, Evolutionary Algorithms
(EAs) can be seen as unconstrained optimization techniques.
Therefore, they require an additional mechanism to
incorporate constraints of any kind (i.e., inequality,
equality, linear, nonlinear) into their fitness function.
Although the use of penalty functions (very popular
with mathematical programming techniques) may seem
an obvious choice, this sort of approach requires a
careful fine tuning of the penalty factors to be used.
Otherwise, an EA may be unable to reach the feasible
region (if the penalty is too low) or may quickly reach
the feasible region but be unable to locate solutions
that lie on the boundary with the infeasible region
(if the penalty is too severe). This has motivated
the development of a number of approaches to incorporate
constraints into the fitness function of an EA. This
tutorial will cover the main proposals in current use,
including novel approaches such as the use of tournament
rules based on feasibility, multiobjective optimization
concepts, hybrids with mathematical programming techniques
(e.g., Lagrange multipliers), cultural algorithms,
and artificial immune systems, among others. Other
topics such as the importance of maintaining diversity,
current benchmarks and the use of alternative search
engines (e.g., particle swarm optimization, differential
evolution, evolution strategies, etc.) will be briefly
discussed (as time allows).
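As a concrete example of the "tournament rules based on feasibility" mentioned
above (in the spirit of Deb's comparison rules), the sketch below compares two
individuals using feasibility first, and constraint violation or fitness second.
The encoding of individuals and the sample values are illustrative assumptions.

    # Minimal feasibility-based tournament sketch: feasible beats infeasible,
    # feasible individuals compare by fitness, infeasible ones by total violation.
    def total_violation(constraints):
        """Sum of violations for constraints expressed as g(x) <= 0."""
        return sum(max(0.0, g) for g in constraints)

    def better(a, b):
        """Each individual is (fitness, constraint_values); fitness is minimized."""
        va, vb = total_violation(a[1]), total_violation(b[1])
        if va == 0 and vb == 0:
            return a if a[0] <= b[0] else b      # both feasible: compare fitness
        if va == 0 or vb == 0:
            return a if va == 0 else b           # feasible beats infeasible
        return a if va <= vb else b              # both infeasible: less violation wins

    x = (3.0, [-0.5])        # feasible, fitness 3.0
    y = (1.0, [0.2])         # infeasible (violates g <= 0), better raw fitness
    print(better(x, y))      # the feasible one wins despite worse fitness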
Speaker
Bio:
Carlos Artemio Coello Coello received a BSc in Civil Engineering from the
Universidad Autónoma de Chiapas in Mexico in 1991. Then, he was
awarded a scholarship from the Mexican government to pursue graduate studies
in Computer Science at Tulane University. He received a MSc and a PhD in
Computer Science in 1993 and 1996, respectively. His PhD thesis was one
of the first in the field now called evolutionary multiobjective optimization.
Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering
Design Centre (in England) and a Visiting Professor at DePauw University
(in the USA).
He
is currently associate professor at CINVESTAV-IPN
in Mexico City, Mexico. He has published
over 130
papers
in international peer-reviewed journals and conferences
and one book on evolutionary multiobjective optimization,
which is part of the Genetic Algorithms and Evolutionary
Computation Series edited by David E. Goldberg.
He has also served as a technical reviewer for a number
of journals and international conferences and currently
serves as associate editor of the "IEEE Transactions
on Evolutionary Computation" and as a member of the
editorial board of "Engineering Optimization".
He is a member of the Mexican Academy of Sciences, a Senior
Member of the IEEE, and a member of Sigma Xi, The Scientific
Research Society. His current research interests
are: evolutionary multiobjective optimization,
constraint-handling techniques for evolutionary
algorithms and evolvable
hardware.
The
XCS Learning Classifier System: From Theory to
Application
Description: Rule-based evolutionary online learning systems, often referred
to as Michigan-style learning classifier systems (LCSs), were proposed nearly
thirty years ago, originally under the name cognitive systems. The XCS
classifier system may be their most successful and most promising current
representative. Like all LCSs, XCS combines the strength of reinforcement
learning with the generalization and search capabilities of genetic algorithms,
resulting in a flexible, online-learning and generalizing predictive learning
system. This tutorial focuses on the questions of how and when XCS works and,
derived from these questions, how XCS can be designed and enhanced to solve
diverse online reinforcement, control, or general predictive problems.
In particular, a facetwise approach is proposed that partitions the learning
biases of the system and analyzes the components separately while respecting
their possible interactions.
The insights gained directly lead to a comprehensive
application manual for XCS that outlines its (representation-
and task-dependent) successful design and application
to the problem at hand. Due to the simple, facetwise
approach, the successful creation of more competent,
truly cognitive systems appears to be within our grasp.
Speaker Bio:
Martin
Volker Butz was born in Würzburg, Germany on August
4, 1975. Butz commenced his undergraduate studies
in computer science with a minor in psychology at
the Bayerische Julius-Maximilians Universität Würzburg
in fall 1995. During his studies he spent one year
at the Illinois Genetic Algorithms Laboratory (IlliGAL)
as a visiting scholar. He graduated from the Bayerische
Julius-Maximilians Universität Würzburg with honors
in August, 2001. Since then, he has been a research
assistant at the department of cognitive psychology
at the Bayerische Julius-Maximilians Universität
Würzburg.
In January, 2002, Butz joined the University
of Illinois at Urbana-Champaign for his graduate studies.
He completed his Ph.D. degree in Illinois in October 2004. Since then, he has
been working on a European project on cognitive science at
the department of cognitive psychology at the Bayerische
Julius-Maximilians Universität Würzburg. His PhD thesis is called "Rule-based
Evolutionary Online Learning Systems: Learning Bounds, Classification, and Prediction".
The thesis proposes a general approach to classifier system analysis and analyzes
the XCS classifier system in detail by means of the approach. Results confirm
the theoretical learning bounds as well as the wide applicability of the enhanced
system.
Experiences Implementing
a GA-Based Optimizer in an Aerospace Engineering Application
Description: The Aero Grid and Paneling System (AGPS) is a Boeing-developed
3D geometry system: a general-purpose tool for creating, interrogating, and
visualizing 3D geometry. Boeing uses AGPS in its
engineering analysis work throughout the company, for
commercial, military, and space systems, and also offers
AGPS to external customers. One key feature of AGPS
is that it was designed as a geometry programming language,
with a customizable GUI, allowing for custom applications
to be easily written and tailored for domain-specific
tasks. This also allows engineering processes to be
highly automated. I recently added a genetic-algorithm-based
parametric optimizer to AGPS, with the goal of providing
our users with a simple, easy-to-use capability to
find geometric solutions. This tutorial presents my
experiences, lessons learned, and design trade-offs
from this effort, with the goal of giving others
a roadmap of the issues involved, some hopefully
helpful guidance on implementing an optimizer
in their own applications, and a view from
the user's perspective.
Speaker Bio:
Tom Dickens has been a Boeing engineer for the
past 20 years, specializing in software architecture
for aerodynamics research at Boeing Commercial.
He became a Boeing Associate Technical Fellow
in 2001, and has more than 30 papers published
in various fields, including a paper at GECCO
2004. With a BS-EET and an MS-CE, Tom has designed
and built both software systems and embedded
hardware systems for Boeing, and is currently
working on projects in direct support of designing
Boeing's new 7E7.
In the evenings,
Tom has taught college classes for the past 15 years
at Henry Cogswell College, the University
of Phoenix, and the University of
Washington.
Fitness Landscapes
and Problem Difficulty
Description: The effectiveness of any metaheuristic,
e.g., genetic algorithms or simulated annealing, is
dictated by the degree to which the algorithm is able
to exploit the structure of the underlying search space
or fitness landscape. This tutorial will begin with
a formal definition of the concept of a fitness landscape,
followed by an overview of the various structural features
present in the fitness landscapes of many well-known
NP-hard combinatorial optimization problems. Following
the introductory material, I will discuss the relationship
between fitness landscape structure and algorithm run-time
behavior, explain why certain fitness landscape features
often cause problems for various metaheuristics, and
illustrate how metaheuristics can be designed to exploit
the presence of specific fitness landscape features.
The tutorial concludes with a brief survey of open,
fundamental research questions in the area of metaheuristic
analysis and design.
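As one concrete example of relating landscape structure to problem difficulty, the
sketch below computes fitness-distance correlation (Jones & Forrest, 1995) for a toy
bit-string problem; a strong negative correlation between fitness and distance to the
optimum (for maximization) indicates an "easy" landscape by this measure. The problem
and sample size are illustrative assumptions, not material from the tutorial.

    # Illustrative fitness-distance correlation (FDC) sketch on OneMax, where
    # fitness is perfectly (negatively) correlated with distance to the optimum.
    import random

    def onemax(x):
        return sum(x)

    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))

    def fdc(samples, fitnesses, optimum):
        """Correlation between fitness and distance to the known optimum."""
        d = [hamming(s, optimum) for s in samples]
        n = len(samples)
        mf, md = sum(fitnesses) / n, sum(d) / n
        cov = sum((f - mf) * (di - md) for f, di in zip(fitnesses, d)) / n
        sf = (sum((f - mf) ** 2 for f in fitnesses) / n) ** 0.5
        sd = (sum((di - md) ** 2 for di in d) / n) ** 0.5
        return cov / (sf * sd)

    n_bits = 20
    optimum = [1] * n_bits
    samples = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(500)]
    print(round(fdc(samples, [onemax(s) for s in samples], optimum), 2))  # near -1: easy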
Speaker Bio:
Dr. Jean-Paul
Watson is currently a researcher at Sandia National
Laboratories in Albuquerque, New Mexico. His
primary activities include the design and analysis
of metaheuristics and the development of metaheuristics
for difficult optimization problems originating
in both military and homeland security applications.
He holds a Ph.D. in Computer Science from Colorado
State University and has published numerous articles
and papers on metaheuristics, with an emphasis
on fitness landscape structure, genetic algorithms,
and tabu search.