Evolutionary Algorithms (EAs) are powerful tools for solving difficult real-world problems. They were developed to tackle real-world problems that classical (mathematical) methods had failed to solve. Many of these unsolved problems are (or can be turned into) optimization problems. Solving an optimization problem means finding solutions that maximize or minimize an objective function.
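For concreteness (the notation below is generic and not taken from a particular source), a minimization problem can be stated as finding $x^{*} \in S$ such that $f(x^{*}) \leq f(x)$ for all $x \in S$, i.e. $x^{*} = \arg\min_{x \in S} f(x)$, where $S$ is the search space and $f : S \rightarrow \mathbb{R}$ is the objective function; maximization is obtained by reversing the inequality.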
Many Evolutionary Algorithms have been proposed for dealing with optimization problems, and many solution representations and search operators have been proposed and tested within a wide range of evolutionary models. Several natural questions arise in all these evolutionary models:
What is the optimal population size?
What is the optimal individual representation?
What are the optimal probabilities for applying specific
genetic operators?
What is the optimal number of generations before halting the
evolution?
A breakthrough came in 1995, when Wolpert and Macready unveiled their work on the No Free Lunch (NFL) theorems for Search and Optimization. The No Free Lunch theorems state that all black-box algorithms have the same average performance over the entire set of optimization problems. (A black-box algorithm does not take into account any information about the problem or the particular instance being solved.) The NFL results undermined all the efforts to develop a universal black-box optimization algorithm capable of solving every optimization problem in the best manner.
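Roughly, in the notation used by Wolpert and Macready (where $d_{m}^{y}$ denotes the sequence of $m$ cost values sampled by an algorithm $a$ on an objective function $f$), the theorems state that for any pair of algorithms $a_{1}$ and $a_{2}$,
$$\sum_{f} P(d_{m}^{y} \mid f, m, a_{1}) = \sum_{f} P(d_{m}^{y} \mid f, m, a_{2}),$$
i.e. performance averaged over all possible objective functions is identical for every black-box algorithm.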
In their attempt to solve problems, humans have delegated to computers the task of developing algorithms capable of performing certain tasks. The most prominent effort in this direction is Genetic Programming (GP), an evolutionary technique used for breeding a population of computer programs. Instead of evolving solutions for a particular problem instance, GP is mainly intended for discovering computer programs capable of solving particular classes of optimization problems. (This statement is only partially true, since the discovery of computer programs may also be viewed as a technique for solving a particular problem instance. For instance, the problem may be: "Find a computer program that computes the sum of the elements of an array of integers.")
There are many such approaches in the GP literature. Considerable effort has been dedicated to evolving deterministic computer programs capable of solving specific problems such as symbolic regression, classification, etc.
Instead of evolving such deterministic computer programs, we will evolve a full-featured evolutionary algorithm (i.e. the output of our main program will be an EA capable of performing a given task). Thus, we will work with EAs at two levels:
The first (macro) level is a steady-state EA which uses a fixed population size, a fixed mutation probability, a fixed crossover probability, etc.
The second (micro) level consists of the solutions encoded in a chromosome of the first-level (macro) EA.
The rules employed by the evolved EAs are not preprogrammed; they are discovered automatically by evolution. The evolved EA is a generational one.
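As an informal sketch only (the function and parameter names below are ours, and the encoding is deliberately simplified), the two-level scheme can be pictured as follows: each chromosome of the macro-level steady-state EA encodes a sequence of genetic operations; decoding such a chromosome yields a generational EA that is run on the problem at hand, and the quality of that run is fed back as the fitness of the chromosome.

import random

OPS = ["selection", "crossover", "mutation"]   # illustrative operation codes

# Micro level: decode a macro-level chromosome (a sequence of operation codes)
# into a generational EA and run it on one problem instance (minimization).
def run_evolved_ea(ops, fitness, dim=10, pop_size=50, generations=30):
    rand_sol = lambda: [random.uniform(-5, 5) for _ in range(dim)]
    mutate = lambda x: [g + random.gauss(0, 0.1) for g in x]
    cross = lambda a, b: [random.choice(p) for p in zip(a, b)]
    select = lambda pop: min(random.sample(pop, 2), key=fitness)   # binary tournament
    pop = [rand_sol() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            for op in ops:                                 # the evolved order of operations
                if op == "mutation":
                    offspring.append(mutate(select(pop)))
                elif op == "crossover":
                    offspring.append(cross(select(pop), select(pop)))
                else:                                      # plain selection: copy a parent
                    offspring.append(select(pop))
        pop = offspring[:pop_size]                         # generational replacement
    return min(fitness(x) for x in pop)                    # best value found by this EA

# Macro level: a steady-state EA with fixed parameters evolves the sequences;
# the fitness of a sequence is the quality of the EA it encodes.
def evolve_ea(fitness, macro_pop_size=20, macro_iters=50, seq_len=8, p_m=0.2):
    quality = lambda ops: run_evolved_ea(ops, fitness)     # lower is better
    pop = [[random.choice(OPS) for _ in range(seq_len)] for _ in range(macro_pop_size)]
    scores = [quality(ind) for ind in pop]
    for _ in range(macro_iters):
        a, b = random.sample(pop, 2)
        child = [random.choice(genes) for genes in zip(a, b)]   # uniform crossover
        child = [random.choice(OPS) if random.random() < p_m else g for g in child]
        child_score = quality(child)
        worst = max(range(macro_pop_size), key=scores.__getitem__)
        if child_score < scores[worst]:                    # steady-state replacement of the worst
            pop[worst], scores[worst] = child, child_score
    return pop[min(range(macro_pop_size), key=scores.__getitem__)]

# Example: evolve an EA (an operation sequence) for minimizing the sphere function.
best_sequence = evolve_ea(lambda x: sum(g * g for g in x))

Note that in this sketch, as in the approach described here, the macro-level parameters (population size, crossover and mutation probabilities) stay fixed; what varies and is subject to evolution is only the encoded sequence of operations.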
This research was motivated by the need to answer several important questions concerning Evolutionary Algorithms. The most important question is:
Can Evolutionary Algorithms be automatically synthesized by using only the information about the problem being solved?
And, if yes, which genetic operators should be used in conjunction with an EA (for a given problem)? Moreover, we are also interested in finding the optimal (or near-optimal) sequence of genetic operations (selections, crossovers and mutations) to be performed during a generation of an Evolutionary Algorithm for a particular problem. For instance, in a standard GA the sequence is the following: selection, recombination and mutation. But how do we know that this scheme is the best for a particular problem (or problem instance)? We had better let evolution find the answer for us.
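For comparison, and again as a sketch only (the helper names are ours, not a fixed implementation), one generation of a standard GA with the conventional fixed sequence looks like this:

import random

# One generation of a standard GA with the fixed sequence
# selection -> recombination -> mutation (illustrative helper names).
def standard_ga_generation(pop, fitness, crossover, mutate, tournament_size=2):
    select = lambda: min(random.sample(pop, tournament_size), key=fitness)
    parents = [select() for _ in range(len(pop))]            # 1. selection
    offspring = []
    for a, b in zip(parents[0::2], parents[1::2]):
        offspring.extend(crossover(a, b))                    # 2. recombination (two children per pair)
    return [mutate(c) for c in offspring]                    # 3. mutation

The evolved EA, by contrast, is free to repeat, reorder or omit any of these steps within a generation.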