# Ant Algorithms

The first ant system for the VRP was designed by [Bullnheimer et al. 1997], who considered the most elementary version of the problem, the CVRP.

For more complex versions of the VRP, [Gambardella, Taillard and Agazzi 1999] developed a multiple ant colony system for the VRPTW (MACS-VRPTW), which is organized as a hierarchy of artificial ant colonies designed to successively optimize a multiple-objective function: the first colony minimizes the number of vehicles, while the second colony minimizes the traveled distance. Cooperation between the colonies is achieved by exchanging information through pheromone updating.

In the design of [Bullnheimer et al. 1997], there are two basic ant system phases: the construction of vehicle routes and the trail update. The AS algorithm is explained below.

## Ant System Algorithm

After initializing the AS, the two basic steps, construction of vehicle routes and trail update, are repeated for a number of iterations. Concerning the initial placement of the artificial ants, it was found that the number of ants should equal the number of customers, with one ant placed at each customer at the beginning of an iteration. The 2-opt heuristic (an exhaustive exploration of the tours obtainable by exchanging two edges of a route) is used to shorten the vehicle routes generated by the artificial ants, which considerably improves the solution quality. In addition to this straightforward local search, we also introduce candidate lists for the selection of customers, which are determined in the initialization phase of the algorithm. For each location ${v_{i}}$ we sort ${V \setminus \{v_{i}\}}$ according to increasing distances ${d_{ij}}$ to obtain the candidate list. The proposed AS for the CVRP can be described by the following schematic algorithm:

1. Initialize
2. For ${I^{max}}$ iterations do:
    1. For all ants generate a new solution using Formula ${\ref{eq:ant1}}$ and the candidate lists
    2. Improve all vehicle routes using the 2-opt-heuristic
    3. Update the pheromone trails using Formula ${\ref{eq:ant2}}$
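The candidate lists built during initialization and the 2-opt improvement of step 2.2 can be sketched as follows. This is a minimal Python illustration; the coordinates and the candidate-list size are assumptions, not values from the source:

```python
import math

# Illustrative coordinates: depot (index 0) plus five customers.
coords = [(0, 0), (4, 1), (1, 5), (6, 3), (2, 2), (5, 5)]

def dist(i, j):
    """Euclidean distance d_ij between locations i and j."""
    return math.hypot(coords[i][0] - coords[j][0], coords[i][1] - coords[j][1])

def candidate_list(i, size):
    """Sort all other locations by increasing distance d_ij to v_i and
    keep the `size` nearest ones (built once, in the initialization phase)."""
    others = [j for j in range(len(coords)) if j != i]
    return sorted(others, key=lambda j: dist(i, j))[:size]

def two_opt(route):
    """Shorten a route by reversing segments while an improvement exists.
    `route` is a list of location indices with the depot (0) first and last."""
    improved = True
    while improved:
        improved = False
        for a in range(1, len(route) - 2):
            for b in range(a + 1, len(route) - 1):
                # Gain from replacing arcs (a-1, a) and (b, b+1)
                # with (a-1, b) and (a, b+1).
                delta = (dist(route[a - 1], route[b]) + dist(route[a], route[b + 1])
                         - dist(route[a - 1], route[a]) - dist(route[b], route[b + 1]))
                if delta < -1e-12:
                    route[a:b + 1] = reversed(route[a:b + 1])
                    improved = True
    return route
```

Since each ant's routes are improved independently, this local search fits directly into step 2.2 of the loop above.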

## Construction of Vehicle Routes

To solve the VRP, the artificial ants construct solutions by successively choosing cities to visit, until each city has been visited. Whenever the choice of another city would lead to an infeasible solution for reasons of vehicle capacity or total route length, the depot is chosen and a new tour is started. For the selection of a (not yet visited) city, two aspects are taken into account: how good the choice of that city was, information that is stored in the pheromone trails ${\tau_{ij}}$ associated with each arc ${(v_{i}, v_{j})}$, and how promising the choice of that city is. This latter measure of desirability, called visibility and denoted by ${\eta_{ij}}$, is the local heuristic function mentioned above.
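This construction loop can be sketched as follows. It is a simplified Python sketch that treats vehicle capacity as the only constraint (the total route length restriction is omitted); `demand`, `capacity`, and `choose_next` are placeholders, with `choose_next` standing in for the probabilistic selection rule of Formula ${\ref{eq:ant1}}$:

```python
def construct_solution(n_customers, demand, capacity, choose_next):
    """Build vehicle routes: the ant visits customers until all are served,
    returning to the depot (index 0) whenever capacity would be exceeded.
    `choose_next(current, feasible)` implements the probabilistic
    selection rule; `demand` maps customer index to demand."""
    unvisited = set(range(1, n_customers + 1))
    routes, route, load, current = [], [0], 0.0, 0
    while unvisited:
        feasible = [j for j in unvisited if load + demand[j] <= capacity]
        if not feasible:            # no remaining customer fits: close the tour
            route.append(0)
            routes.append(route)
            route, load, current = [0], 0.0, 0
            continue
        j = choose_next(current, feasible)
        route.append(j)
        load += demand[j]
        unvisited.remove(j)
        current = j
    route.append(0)
    routes.append(route)
    return routes
```

For example, with three customers of demand 4 and a capacity of 8, a greedy `choose_next` yields two tours, since only two customers fit on one vehicle.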

With ${\Omega = \left \lbrace v_{j} \in V : v_{j} \; \text{is feasible to be visited} \right \rbrace \cup \left \lbrace v_{0} \right \rbrace}$, city ${v_{j}}$ is selected to be visited as follows:

$${ p_{ij} = \begin{cases} \frac{[\tau_{ij}]^\alpha [\eta_{ij}]^\beta}{\sum_{k \in \Omega} [\tau_{ik}]^\alpha [\eta_{ik}]^\beta} & \text{if} \; v_{j} \in \Omega \\ 0 & \text{otherwise} \end{cases} \label{eq:ant1} }$$

This probability distribution is biased by the parameters ${\alpha}$ and ${\beta}$, which determine the relative influence of the trails and of the visibility, respectively. The visibility is defined as the reciprocal of the distance, and the selection probability is then further extended by problem-specific information: the inclusion of savings and of capacity utilization both lead to better results. On the other hand, the latter is relatively costly in terms of computation time (as it has to be recalculated in each step of an iteration) and will therefore not be used in this paper. Thus, we introduce the parameters ${f}$ and ${g}$ and use the following parametrical saving function for the visibility: ${\eta_{ij} = d_{i0} + d_{0j} - g d_{ij} + f | d_{i0} - d_{0j} |}$.

## Trail Update

After an artificial ant has constructed a feasible solution, the pheromone trails are laid depending on the objective value of the solution. This update rule is as follows:

$${ \tau_{ij}^{new} = \rho \tau_{ij}^{old} + \sum_{\mu = 1}^{\sigma - 1} \Delta \tau_{ij}^{\mu} + \sigma \Delta \tau_{ij}^{*} \label{eq:ant2} }$$

where ${\rho}$ is the trail persistence (with ${0 \leq \rho \leq 1}$); the trail evaporation is thus given by ${(1-\rho)}$. Only if arc ${(v_{i}, v_{j})}$ was used by the ${\mu\text{-th}}$ best ant is the pheromone trail increased by a quantity ${\Delta \tau_{ij}^{\mu}}$, which is then equal to ${(\sigma - \mu) / L_{\mu}}$, and zero otherwise (cf. the second term in ${\ref{eq:ant2}}$). In addition, all arcs belonging to the best solution found so far (with objective value ${L^{*}}$) are emphasized as if ${\sigma}$ elitist ants had used them. Thus, each elitist ant increases the trail intensity by an amount ${\Delta \tau_{ij}^{*}}$ that is equal to ${1/L^{*}}$ if arc ${(v_{i}, v_{j})}$ belongs to the best solution found so far, and zero otherwise (cf. the third term in ${\ref{eq:ant2}}$).
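The rank-based update rule can be sketched as follows. This is a Python sketch; the matrix representation of the trails, the arc lists, and the parameter values are illustrative assumptions:

```python
def update_trails(tau, ranked_solutions, best_solution, best_length,
                  rho=0.95, sigma=5):
    """Rank-based pheromone update.
    tau: pheromone matrix; ranked_solutions: list of (arcs, length) pairs
    for the sigma-1 best ants of the iteration, best first;
    best_solution: arcs of the best solution found so far, of length best_length."""
    n = len(tau)
    # Evaporation: tau_ij <- rho * tau_ij on every arc (first term).
    for i in range(n):
        for j in range(n):
            tau[i][j] *= rho
    # The mu-th best ant deposits (sigma - mu) / L_mu on its arcs (second term).
    for mu, (arcs, length) in enumerate(ranked_solutions[:sigma - 1], start=1):
        for (i, j) in arcs:
            tau[i][j] += (sigma - mu) / length
    # sigma elitist ants reinforce the best-so-far solution by 1/L* each (third term).
    for (i, j) in best_solution:
        tau[i][j] += sigma / best_length
    return tau
```

Arcs used by highly ranked ants and by the best solution so far thus accumulate trail faster than evaporation removes it, biasing subsequent constructions toward them.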