Tabu Search

The basic concept of Tabu Search (TS) as described by [Glover 1986] is a meta-heuristic superimposed on another heuristic. TS explores the solution space by moving at each iteration from a solution s to the best solution in a subset of its neighborhood N(s). Contrary to classical descent methods, the current solution may deteriorate from one iteration to the next. Thus, to avoid cycling, solutions possessing some attributes of recently explored solutions are temporarily declared tabu or forbidden. The duration that an attribute remains tabu is called its tabu-tenure and it can vary over different intervals of time. The tabu status can be overridden if certain conditions are met; this is called the aspiration criterion and it happens, for example, when a tabu solution is better than any previously seen solution.
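The loop described above can be sketched as follows. This is a minimal, generic illustration, not the implementation from any particular paper: the names `tabu_search`, `neighbors`, and `cost` are hypothetical, moves are identified by a hashable attribute that is kept tabu for a fixed tenure, and the aspiration criterion overrides the tabu status when a move would beat the incumbent.

```python
def tabu_search(initial, neighbors, cost, tenure=7, max_iters=200):
    """Generic Tabu Search sketch (illustrative, not from the source text).

    `neighbors(s)` yields (move_attribute, candidate_solution) pairs;
    the move attribute is the thing declared tabu after the move is made.
    """
    current = best = initial
    tabu = {}  # move attribute -> iteration index until which it stays tabu
    for it in range(max_iters):
        candidates = []
        for move, s in neighbors(current):
            is_tabu = tabu.get(move, -1) >= it
            # Aspiration criterion: a tabu move is allowed if it improves
            # on the best solution seen so far.
            if not is_tabu or cost(s) < cost(best):
                candidates.append((cost(s), move, s))
        if not candidates:
            break  # every neighbor is tabu and none aspires
        c, move, s = min(candidates, key=lambda t: t[0])
        current = s                # accept the best neighbor, even if worse
        tabu[move] = it + tenure   # forbid this attribute for `tenure` iterations
        if c < cost(best):
            best = s
    return best

# Toy usage: minimize (x - 3)^2 over the integers; moves are x -> x +/- 1
# and the move attribute is simply the destination value.
best = tabu_search(10, lambda x: [(x - 1, x - 1), (x + 1, x + 1)],
                   lambda x: (x - 3) ** 2)
```

Note that, unlike a pure descent method, the loop keeps moving after reaching the minimum; the tabu list then pushes it away into unexplored values while `best` retains the incumbent.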

The resulting tendency to deviate from a charted course might be regretted as a source of error, but it can also prove to be a source of gain. The Tabu method operates in this way, with the exception that new courses are not chosen randomly. Instead, Tabu Search proceeds on the supposition that there is no point in accepting a new solution unless it is to avoid a path already investigated. This ensures that new regions of a problem's solution space will be explored, with the goal of avoiding local minima and ultimately finding the desired solution.

The initial solution is typically created with a cheapest-insertion heuristic. After creating an initial solution, an attempt is made to improve it using local search with one or more neighborhood structures and a best-accept strategy. Most of the neighborhoods used are well known and were previously introduced in the context of various construction and improvement heuristics.
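A cheapest-insertion construction can be sketched as below. This is an illustrative single-tour version (as for a TSP); the starting partial tour containing the depot `0` and node `1` is an assumption, and `dist` is any symmetric distance matrix.

```python
def cheapest_insertion(dist, n):
    """Illustrative cheapest-insertion sketch: repeatedly insert the
    unrouted node at the position that increases tour length the least.

    `dist` is an n x n distance matrix; nodes are 0..n-1 with 0 as depot.
    """
    tour = [0, 1, 0]              # assumed seed tour: depot and node 1
    unrouted = set(range(2, n))
    while unrouted:
        best = None               # (cost increase, node, insert position)
        for c in unrouted:
            for i in range(len(tour) - 1):
                a, b = tour[i], tour[i + 1]
                # Extra length incurred by inserting c between a and b.
                delta = dist[a][c] + dist[c][b] - dist[a][b]
                if best is None or delta < best[0]:
                    best = (delta, c, i + 1)
        _, c, pos = best
        tour.insert(pos, c)
        unrouted.remove(c)
    return tour
```

For instance, with four points on a line at coordinates 0, 3, 1, 2, the procedure inserts the intermediate points between the depot and node 1, yielding the tour 0-2-3-1-0.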

Here, we will describe three different algorithms for TS:

• Granular Tabu
Granular Tabu Search (GTS) is a very promising concept. It was introduced by [Toth and Vigo 1998] and has yielded excellent results on the VRP. The main idea behind GTS stems from the observation that the longer edges of a graph have only a small likelihood of belonging to an optimal solution. Therefore, by eliminating all edges whose length exceeds a granularity threshold, many unpromising solutions will never be considered by the search process. Toth and Vigo suggest using ${\nu = \beta \bar{c}}$, where ${\beta}$ is a sparsification parameter typically chosen in the interval ${\left[1.0, 2.0\right]}$, and ${\bar{c}}$ is the average edge length of a solution obtained by a fast heuristic. If ${\beta \in \left[1.0, 2.0\right]}$, then the percentage of remaining edges in the graph tends to be in the 10% – 20% range. In practice the value of ${\beta}$ is dynamically adjusted whenever the incumbent has not improved for a set number of iterations, and periodically decreased to its initial value. Neighbor solutions are obtained by performing a limited number of edge exchanges within the same route or between two routes. The authors propose a procedure capable of examining all potential exchanges in ${O(\left|E(\nu)\right|)}$ time, where ${E(\nu) = \{(i,j) \in E : c_{ij} \le \nu\} \bigcup I}$, and ${I}$ is a set of important edges such as those incident to the depot or belonging to high quality solutions.
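The sparsification step that builds ${E(\nu)}$ can be sketched as follows. This is a hedged illustration of the filtering rule only, not Toth and Vigo's implementation: `granular_edges`, the tuple-based edge encoding, and the `elite` argument (edges taken from high-quality solutions) are all assumptions made for the example.

```python
def granular_edges(edges, dist, beta, c_bar, depot=0, elite=()):
    """Keep only edges no longer than the granularity threshold
    nu = beta * c_bar, plus the 'important' set I: edges incident to
    the depot and edges from elite (high-quality) solutions.

    `edges` is a list of (i, j) tuples and `dist` maps each edge to
    its length. Illustrative sketch, not the authors' implementation.
    """
    nu = beta * c_bar
    important = {e for e in edges if depot in e} | set(elite)
    # E(nu) = {(i, j) in E : c_ij <= nu} union I
    return [e for e in edges if dist[e] <= nu or e in important]
```

On a small graph, a long edge incident to the depot survives the filter via the important set ${I}$, while an equally long edge between two customers is discarded.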