Genetic Algorithms (GAs) are probably the most widely known type of metaheuristic algorithm, and today they receive remarkable attention worldwide. Genetic Algorithms are computer procedures that employ the mechanics of natural selection and natural genetics to evolve solutions to problems. The basic concepts were developed by [Holland 1975], while the practicality of using GAs to solve complex problems was demonstrated in [De Jong 1975] and [Goldberg 1989]. A GA evolves a population of individuals encoded as chromosomes by creating new generations of offspring through an iterative process until some convergence criterion is met. Such criteria might, for instance, refer to a maximum number of generations, convergence to a homogeneous population composed of similar individuals, or the attainment of an optimal solution. The best chromosome generated is then decoded, providing the corresponding solution.

Genetic algorithms work with a population of candidate solutions instead of a single solution, so they search multiple regions of the solution space simultaneously. Each individual represents a potential solution to the problem. In Holland's original GAs each solution is represented as a string of bits, where the interpretation of the string is problem specific.
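As a sketch of such a problem-specific bit-string encoding, consider a small 0/1 knapsack instance (the items and values here are illustrative, not from the text): each bit of the chromosome says whether the corresponding item is packed.

```python
# Hypothetical bit-string encoding for a 0/1 knapsack with four items.
# Each gene (bit) indicates whether the item at that position is packed.
items = [("a", 3), ("b", 5), ("c", 2), ("d", 7)]  # (name, value) pairs, illustrative
chromosome = [1, 0, 1, 1]                          # pack items a, c and d

def decode_value(chrom):
    # Total value of the items selected by the chromosome.
    return sum(value for (_, value), bit in zip(items, chrom) if bit)

decode_value(chromosome)  # 3 + 2 + 7 = 12
```

The same string of bits would mean something entirely different for another problem; only the decoding function ties the chromosome to the problem being solved.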

The creation of a new generation of individuals involves three major steps or phases:

- The **selection** phase consists of randomly choosing two parent individuals from the population for mating purposes. The probability of selecting a population member is generally proportional to its fitness, so as to emphasize genetic quality while maintaining genetic diversity. Here, fitness refers to a measure of profit, utility, or goodness to be maximized while exploring the solution space.
- The **recombination** or reproduction process uses the genes of the selected parents to produce offspring that will form the next generation.
- The **mutation** step consists of randomly modifying some gene(s) of a single individual at a time, in order to further explore the solution space and ensure, or preserve, genetic diversity. Mutation generally occurs with a low probability.

A new generation is created by repeating the selection, reproduction, and mutation processes until all chromosomes in the new population have replaced those of the old one. A proper balance between genetic quality and diversity is therefore required within the population in order to support efficient search. In the figure below we can see the pseudocode of a simple GA.
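The generational loop described above can be sketched in a few lines. This is a minimal illustration, not the pseudocode from the figure: the population size, chromosome length, rates, and the "OneMax" fitness function are all illustrative choices.

```python
import random

# Minimal generational GA sketch (illustrative parameters).
POP_SIZE, CHROM_LEN, GENERATIONS = 20, 16, 50
MUTATION_RATE = 0.01

def fitness(chrom):
    # Toy fitness ("OneMax"): count of 1-bits; problem specific in practice.
    return sum(chrom)

def select(population):
    # Fitness-proportional (roulette-wheel) selection of one parent;
    # the +1 keeps every weight positive.
    return random.choices(population, weights=[fitness(c) + 1 for c in population])[0]

def crossover(p1, p2):
    # One-point recombination producing a single offspring.
    point = random.randint(1, CHROM_LEN - 1)
    return p1[:point] + p2[point:]

def mutate(chrom):
    # Flip each gene with a small probability.
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in chrom]

population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Build a full replacement generation from selected, recombined,
    # and mutated individuals.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
best = max(population, key=fitness)
```

Note that the entire old population is replaced each generation, matching the generational scheme described in the text; elitist variants instead carry the best individuals over unchanged.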

For solving the VRP with GAs, it is usual to represent each individual by a single chromosome: a chain of integers, each of them representing a customer or a vehicle. Each vehicle identifier thus acts within the chromosome as a separator between two different routes, and a string of customer identifiers represents the sequence of deliveries that a vehicle must cover during its route. In the figure below we can see a representation of a possible solution for a VRP with 10 customers and 4 vehicles. Each route begins and ends at the depot (which is assigned the number 0). If a solution contains two vehicle identifiers not separated by any customer identifier, the route between them is empty and, therefore, it is not necessary to use all the available vehicles.
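Decoding such a chromosome amounts to splitting the integer chain at the vehicle identifiers. The helper below is a hypothetical sketch, assuming customers are numbered 1..10 so that any gene above 10 is a vehicle separator.

```python
# Hypothetical decoder: split a VRP chromosome into routes, treating any
# identifier above the number of customers as a vehicle separator.
NUM_CUSTOMERS = 10

def decode(chromosome):
    routes, current = [], []
    for gene in chromosome:
        if gene > NUM_CUSTOMERS:   # vehicle identifier: close the current route
            routes.append(current)
            current = []
        else:                      # customer identifier: extend the current route
            current.append(gene)
    routes.append(current)         # last route has no trailing separator
    return routes

# The example solution from the text: 10 customers, separators 11, 13, 12.
solution = [4, 5, 2, 11, 10, 3, 1, 13, 7, 8, 9, 12, 6]
decode(solution)  # [[4, 5, 2], [10, 3, 1], [7, 8, 9], [6]]
```

Two adjacent separators produce an empty list, which is exactly the "empty route" case mentioned above: that vehicle simply stays at the depot.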

$${\underbrace{4-5-2}_{\text{route 1}}-11-\underbrace{10-3-1}_{\text{route 2}}-13-\underbrace{7-8-9}_{\text{route 3}}-12-\underbrace{6}_{\text{route 4}}}$$

A typical fitness function used for solving the VRP with a GA is ${f_{eval}(x) = f_{max} - f(x)}$, where ${f(x) = \text{totaldistance}(x) + \lambda \cdot \text{overcapacity}(x) + \mu \cdot \text{overtime}(x)}$.

Both the **overcapacity** and **overtime** functions return the amount of capacity and time, respectively, in excess of the maximum allowed values. If no restriction is violated, ${f}$ returns the total distance traveled; otherwise, the capacity and time excesses are weighted by the values ${\lambda}$ and ${\mu}$. The best solutions have fitness values close to ${f_{max}}$, while solutions that break any restriction have their fitness value penalized.
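A small numerical sketch of this penalized fitness, assuming illustrative values for ${f_{max}}$, ${\lambda}$, and ${\mu}$ (the text does not fix them), and taking the distance and excess amounts as precomputed inputs:

```python
# Illustrative constants; the text leaves f_max, lambda and mu unspecified.
F_MAX = 1000.0
LAM, MU = 10.0, 10.0

def f(total_distance, overcapacity, overtime):
    # f(x) = totaldistance(x) + lambda*overcapacity(x) + mu*overtime(x)
    return total_distance + LAM * overcapacity + MU * overtime

def f_eval(total_distance, overcapacity=0.0, overtime=0.0):
    # f_eval(x) = f_max - f(x): shorter, feasible solutions score higher.
    return F_MAX - f(total_distance, overcapacity, overtime)

f_eval(120.0)                     # feasible route plan: 1000 - 120 = 880.0
f_eval(120.0, overcapacity=5.0)   # infeasible: 1000 - (120 + 10*5) = 830.0
```

Subtracting from ${f_{max}}$ turns the minimization of distance plus penalties into a maximization problem, which fits the fitness-proportional selection described earlier.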