The basic concept of Tabu Search (TS), as described by [Glover 1986], is a meta-heuristic superimposed on another heuristic. TS explores the solution space by moving at each iteration from a solution s to the best solution in a subset of its neighborhood N(s). Contrary to classical descent methods, the current solution may deteriorate from one iteration to the next. To avoid cycling, solutions possessing some attributes of recently explored solutions are therefore temporarily declared tabu, or forbidden. The duration for which an attribute remains tabu is called its tabu tenure, and it can vary during the search. The tabu status can be overridden if certain conditions are met; this is called the aspiration criterion, and it applies, for example, when a tabu solution is better than any previously seen solution.
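The mechanics described above can be sketched as a generic loop. This is a minimal illustration, not any specific published implementation: `neighbors` is assumed to yield `(move_attribute, candidate)` pairs, and the tabu list maps a move attribute to the iteration until which it remains forbidden.

```python
def tabu_search(initial, neighbors, cost, tenure=7, max_iters=200):
    """Generic Tabu Search skeleton (illustrative names throughout)."""
    current = initial
    best = initial
    tabu = {}  # move attribute -> iteration until which it stays tabu
    for it in range(max_iters):
        chosen, chosen_attr = None, None
        for attr, cand in neighbors(current):
            is_tabu = tabu.get(attr, -1) >= it
            # Aspiration criterion: a tabu move is allowed only if it
            # improves on the incumbent; otherwise skip it.
            if is_tabu and cost(cand) >= cost(best):
                continue
            if chosen is None or cost(cand) < cost(chosen):
                chosen, chosen_attr = cand, attr
        if chosen is None:
            break  # every neighbor is tabu and none aspirates
        current = chosen                 # may be worse than before
        tabu[chosen_attr] = it + tenure  # forbid this attribute for a while
        if cost(current) < cost(best):
            best = current
    return best
```

Note that `current` is replaced by the best admissible neighbor even when that neighbor is worse, which is exactly what distinguishes TS from a pure descent method.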

The resulting tendency to deviate from a charted course might be regretted as a source of error, but it can also prove to be a source of gain. The Tabu method operates in this way, except that new courses are not chosen randomly. Instead, Tabu Search proceeds on the supposition that there is no point in accepting a new solution unless it is to avoid a path already investigated. This ensures that new regions of a problem's solution space will be explored, with the goal of avoiding local minima and ultimately finding the desired solution.

The initial solution is typically created with a cheapest-insertion heuristic. After creating an initial solution, an attempt is made to improve it using local search with one or more neighborhood structures and a best-accept strategy. Most of the neighborhoods used are well known and were previously introduced in the context of various construction and improvement heuristics.
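For concreteness, the construction step might look like the following cheapest-insertion sketch for a single depot-anchored route (all names are illustrative; `dist` is assumed to be a symmetric distance matrix indexed by node):

```python
def cheapest_insertion(dist, depot, customers):
    """Build a route by repeatedly inserting the unrouted customer
    whose best insertion position increases the route length least."""
    route = [depot, depot]  # empty tour anchored at the depot
    unrouted = set(customers)
    while unrouted:
        best = None  # (extra_cost, customer, position)
        for c in unrouted:
            for pos in range(1, len(route)):
                i, j = route[pos - 1], route[pos]
                # Detour cost of inserting c between i and j.
                extra = dist[i][c] + dist[c][j] - dist[i][j]
                if best is None or extra < best[0]:
                    best = (extra, c, pos)
        _, c, pos = best
        route.insert(pos, c)
        unrouted.remove(c)
    return route
```

A full VRP construction would additionally respect vehicle capacities and open new routes as needed; this sketch shows only the insertion criterion itself.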

Here, we will describe three different algorithms for TS:

- Granular Tabu
- The adaptive memory procedure
- Kelly and Xu

# Granular Tabu

Granular Tabu Search (GTS) is a very promising concept. It was introduced by [Toth and Vigo 1998] and has yielded excellent results on the VRP. The main idea behind GTS stems from the observation that the longer edges of a graph have only a small likelihood of belonging to an optimal solution. Therefore, by eliminating all edges whose length exceeds a **granularity threshold**, many unpromising solutions are never considered by the search process. Toth and Vigo suggest using ${\nu = \beta \bar{c}}$, where ${\beta}$ is a sparsification parameter typically chosen in the interval ${\left[1.0, 2.0\right]}$, and ${\bar{c}}$ is the average edge length of a solution obtained by a fast heuristic. With ${\beta \in \left[1.0, 2.0\right]}$, the percentage of remaining edges in the graph tends to be in the 10% – 20% range. In practice, the value of ${\beta}$ is dynamically increased whenever the incumbent has not improved for a set number of iterations, and periodically decreased to its initial value. Neighbor solutions are obtained by performing a limited number of edge exchanges within the same route or between two routes. The authors propose a procedure capable of examining all potential exchanges in ${O(\left|E(\nu)\right|)}$ time, where ${E(\nu) = \{(i,j) \in E : c_{ij} \le \nu\} \cup I}$, and ${I}$ is a set of important edges such as those incident to the depot or belonging to high-quality solutions.
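The sparsification step can be illustrated as follows. This is a sketch with illustrative names, not the authors' code: it computes ${\bar{c}}$ from the edges of a heuristic solution, sets ${\nu = \beta \bar{c}}$, and returns the restricted edge set ${E(\nu)}$ together with the important edges ${I}$.

```python
def granular_edges(dist, beta, heuristic_edges, important):
    """Build the sparsified edge set E(nu) of Granular Tabu Search:
    keep edges no longer than nu = beta * c_bar, plus the 'important'
    edges I (e.g. edges incident to the depot)."""
    # c_bar: average edge length of a solution found by a fast heuristic
    c_bar = sum(dist[i][j] for i, j in heuristic_edges) / len(heuristic_edges)
    nu = beta * c_bar
    n = len(dist)
    kept = {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] <= nu}
    return kept | set(important)
```

Only moves whose inserted edges all lie in this restricted set are evaluated, which is what makes the ${O(\left|E(\nu)\right|)}$ neighborhood scan possible.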

# The Adaptive Memory Procedure

One of the most interesting developments to have occurred in the area of TS in recent years is the concept of Adaptive Memory developed by [Rochat and Taillard 1995]. It is mostly used in TS, but its applicability is not limited to this type of metaheuristic. An adaptive memory is a pool of good solutions that is dynamically updated throughout the search process. Periodically, some elements of these solutions are extracted from the pool and combined differently to produce new good solutions. When selecting these routes, care must be taken to avoid including the same customer twice in a solution. This restriction means that the selection process will often terminate with a partial solution that will have to be completed using a construction heuristic. In the example depicted in the figure below, extracting routes A, D and H from a memory of two solutions results in a partial solution. Rochat and Taillard have shown that the application of an adaptive memory procedure can enhance a search strategy. This has enabled them to obtain two new best solutions on the 14 standard VRP benchmark instances.
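The extraction step can be sketched as below. This is an illustration under simplified assumptions (each solution in the pool is a list of routes, each route a list of customer ids; route selection is randomized rather than weighted by solution quality, which the actual procedure would do):

```python
def extract_partial_solution(pool, rng):
    """Draw routes from the adaptive memory, keeping a route only if
    none of its customers is already covered. Terminates with a
    partial solution to be completed by a construction heuristic."""
    routes = [r for sol in pool for r in sol]
    rng.shuffle(routes)
    partial, covered = [], set()
    for route in routes:
        if covered.isdisjoint(route):  # no customer may appear twice
            partial.append(route)
            covered.update(route)
    return partial, covered
```

Customers left outside `covered` are then reinserted by a construction heuristic, and the improved solution is fed back into the pool, displacing its worst member.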

# Kelly and Xu

In this case, [Kelly and Xu 1996] considered swaps of vertices between two routes, a global repositioning of some vertices into other routes, and local route improvements. The global repositioning strategy solves a network flow model to optimally relocate given numbers of vertices into different routes. Approximations are developed to compute the ejection and insertion costs, taking vehicle capacity into account. Route optimizations are performed by means of 3-opt exchanges and a TS improvement routine. The algorithm is governed by several parameters which are dynamically adjusted throughout the search. A pool of best solutions is memorized and periodically used to reinitiate the search with new parameter values. Overall, this algorithm produced several best known solutions on benchmark instances, but it is fair to say that it is not as effective as some other TS implementations. It tends to require a substantial computational effort, and properly tuning its many parameters can be problematic.
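The first move type, a vertex swap between two routes, can be illustrated by the following evaluation sketch. This is not the authors' network-flow model but a simple in-place evaluation under illustrative assumptions: routes are depot-anchored lists, ejection and insertion costs are approximated by the local detour at each vertex's current position, and capacity feasibility is checked first.

```python
def swap_gain(dist, demand, cap, r1, r2, i, j):
    """Approximate gain of swapping customer r1[i] with r2[j] between
    two depot-anchored routes; returns None if the exchange would
    violate vehicle capacity, else the (positive = improving) gain."""
    a, b = r1[i], r2[j]
    # Capacity feasibility after the exchange.
    if sum(demand[v] for v in r1) - demand[a] + demand[b] > cap:
        return None
    if sum(demand[v] for v in r2) - demand[b] + demand[a] > cap:
        return None

    def delta(route, k, new):
        # Cost change of replacing route[k] by `new` in place.
        p, q = route[k - 1], route[(k + 1) % len(route)]
        old = route[k]
        return (dist[p][new] + dist[new][q]) - (dist[p][old] + dist[old][q])

    return -(delta(r1, i, b) + delta(r2, j, a))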