# State space search: heuristics

Last time, I wrote, without any explanation:

> Using a FIFO queue searches the state space breadth-first, visiting all of the states at a given distance from the initial state before any states at a greater distance. Using a LIFO queue searches the state space depth-first, visiting all of the states following a given state before visiting any other states.

The way that works is important to the power of search. Consider using a first-in, first-out queue and expanding an initial state \(n\). Expanding \(n\) enqueues the states \((o_1, o_2, o_3)\). The next step is to remove the first state from the queue, expand it, and enqueue the results, leaving the queue as \((o_2, o_3, p_1, p_2)\) (assuming expanding \(o_1\) produced \(p_1\) and \(p_2\)). As a result, following this queuing strategy, all of the states at depth \(o\) from the initial state are expanded before any of the states at depth \(p\). Hence, "breadth-first".

On the other hand, if the algorithm uses a last-in, first-out structure, expanding the same initial state \(n\) still produces a queue of \((o_1, o_2, o_3)\). However, the next step is to remove the *last* state from the queue, expand it, and enqueue the results, producing \((o_1, o_2, p_1, p_2)\), assuming expanding \(o_3\) produced \(p_1\) and \(p_2\) this time. This time, the *next* step will be to expand \(p_2\), with the result that all of the states below \(o_3\) will be examined before any of the other states \(o_1\) or \(o_2\). Thus, "depth-first".
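To make the bookkeeping concrete, here is a small sketch (mine, not from the original post) using Java's `ArrayDeque`, which can serve as both a FIFO queue and a LIFO stack depending on which end you remove from:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class QueueOrder {
    public static void main(String[] args) {
        // FIFO: remove from the head, so o1, o2, o3 are expanded in order.
        Deque<String> fifo = new ArrayDeque<>(List.of("o1", "o2", "o3"));
        String expanded = fifo.pollFirst();      // removes "o1"
        fifo.addAll(List.of("p1", "p2"));        // children of o1 go to the tail
        System.out.println(fifo);                // [o2, o3, p1, p2]

        // LIFO: remove from the tail, so o3's children are explored next.
        Deque<String> lifo = new ArrayDeque<>(List.of("o1", "o2", "o3"));
        expanded = lifo.pollLast();              // removes "o3"
        lifo.addAll(List.of("p1", "p2"));        // children of o3 go to the tail
        System.out.println(lifo);                // [o1, o2, p1, p2]; p2 is expanded next
    }
}
```

The two queue states printed here match the \((o_2, o_3, p_1, p_2)\) and \((o_1, o_2, p_1, p_2)\) sequences described above.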

So far, I have been discussing **uninformed search**; these approaches order their exploration of the problem's state space based on the pattern of state-space expansion. There is a better way: **informed search** applies information about the problem to focus the exploration of the state space on *important* regions. Here's an example:

## Sliding number puzzles: the fifteen puzzle

Returning to Gerhard Wickler, he describes the basics of the 8-puzzle (a smaller version of the 15-puzzle) as:

> The 8-puzzle is a smaller version of the slightly better known 15-puzzle. The puzzle consists of an area divided into a grid, 3 by 3 for the 8-puzzle, 4 by 4 for the 15-puzzle. On each grid square is a tile, except for one square which remains empty. Thus, there are eight tiles in the 8-puzzle and 15 tiles in the 15-puzzle. A tile that is next to the empty grid square can be moved into the empty space, leaving its previous position empty in turn. Tiles are numbered, 1 thru 8 for the 8-puzzle, so that each tile can be uniquely identified.
>
> The aim of the puzzle is to achieve a given configuration of tiles from a given (different) configuration by sliding the individual tiles around the grid as described above.

This puzzle is not hard to put into a form suitable for state space search, which should provide a solution as long as the algorithm does not expand a state more than once. If the shortest solution is needed, the breadth-first approach is a straightforward choice.

The state space for sliding puzzles is significantly larger than that of the missionaries and cannibals puzzle: there are \(n!\) ways to permute \(n\) tiles, although only half of those are reachable from a given goal state (the proof of that fact is itself enlightening). (The 8-puzzle therefore illustrates one of the hazards of state space manipulation: some states may be unsolvable.) So, counting the empty tile, the 8-puzzle has 181,440 reachable states; the 15-puzzle has 10,461,394,944,000.
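Those counts are easy to check. A quick sketch (the class and method names are mine, not from the post) computing \(n!/2\) for both puzzle sizes:

```java
public class PuzzleStates {
    // n! for small n; fits comfortably in a long up to n = 20.
    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; ++i) { result *= i; }
        return result;
    }

    public static void main(String[] args) {
        // Only half of the permutations of the squares are reachable: n!/2.
        System.out.println(factorial(9) / 2);   // 8-puzzle, 9 squares: 181440
        System.out.println(factorial(16) / 2);  // 15-puzzle, 16 squares: 10461394944000
    }
}
```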

As a result, a complete search of the state space is likely to be infeasible and any uninformed search will also likely take more resources than strictly desirable. But there are ways to exploit knowledge about the problem to shorten the search.

Suppose, for example, that, given a state, you could estimate how many moves are needed to get from that state to a goal. Then you could select the next state to explore based on how close it appears to be to the solution. Fortunately, both pieces are available: there are a number of **heuristics** that can provide the necessary estimate, and our friend, the priority queue, can handle the selection of the next state to expand.

One possible heuristic is the number of out-of-place tiles, because that is obviously related to the number of moves necessary to get the tiles into their places. Importantly, for reasons that I don't intend to go into here, it always *underestimates* the number of moves. If the search explores the state space by focusing on states with fewer and fewer tiles out of place, it is likely to find the goal while significantly reducing the resources required.

To implement the states for the 15-puzzle, I chose to represent the positions of the tiles as locations in an `ArrayList` of integers, with the tile values being 1 through 15 and 0 as the empty tile. (For expediency, I also record the location of the empty tile in the array.) As a result, determining whether a state is the goal is simple: the puzzle is solved when each of locations 0 through 14 holds the value one greater than its index.

```java
@Override
public boolean isGoal()
{
    // The final square should be 0 but I don't care.
    for (int i = 0; i < BOARD - 1; ++i)
    {
        if (board.get(i) != i + 1) { return false; }
    }
    return true;
}
```

Expanding a state is a bit more complicated, since it requires attempting to move the empty space left, right, up, and down. (Or, alternately, moving the tile to the left into the empty space; it depends on how you look at these things.) One of the directional methods is shown below; the other three are similar.

```java
@Override
public List<FState> expand()
{
    List<FState> result = new ArrayList<FState>();
    for (FState newState : Arrays.asList(this.left(), this.right(), this.up(), this.down()))
    {
        if (newState != null) { result.add(newState); }
    }
    return result;
}
```

[...]

```java
private FState left()
{
    if (empty % ROW != 0)
    {
        // Empty location is not at left edge
        return new FState(this, empty - 1);
    }
    else
    {
        return null;
    }
}
```

(I'll leave it to you to figure out how modular arithmetic lets me fiddle with a rectangular grid in a one-dimensional array.)
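If you'd like a hint, here is a standalone sketch (the class name is mine; it assumes the same `ROW = 4` constant): integer division by `ROW` recovers a square's row and the remainder recovers its column, so a one-dimensional index walks the grid row by row.

```java
public class GridIndex {
    static final int ROW = 4; // the 15-puzzle has four columns per row

    public static void main(String[] args) {
        for (int i : new int[] {0, 5, 12}) {
            int row = i / ROW; // integer division gives the row
            int col = i % ROW; // the remainder gives the column
            System.out.printf("index %d -> row %d, col %d%n", i, row, col);
        }
        // A square starts a row exactly when i % ROW == 0, which is why
        // left() can safely look at "empty - 1" in every other case.
        System.out.println(12 % ROW == 0); // true: index 12 is on the left edge
    }
}
```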

Implementing the out-of-position heuristic is similar to checking `isGoal`, with the complication of counting the number of tiles out of place.

```java
public int outOfPosition()
{
    int out = 0;
    for (int i = 0; i < BOARD - 1; ++i)
    {
        if (board.get(i) != i + 1)
        {
            ++out;
        }
    }
    return out;
}
```

And finally, to allow the heuristic function to interface with a priority queue, I need a `Comparator` implementation.

```java
public static class OutOfPosition implements Comparator<FState>
{
    @Override
    public int compare(FState o1, FState o2)
    {
        return o1.outOfPosition() - o2.outOfPosition();
    }
}
```

The following code runs the search on a "harder" problem (harder than the one in the illustration above, anyway).

```java
System.out.println("Heuristic search: out of position");
Queue<FState> queue = new PriorityQueue<FState>(100, new FState.OutOfPosition());
Search<FState> search = new Search<FState>(queue, new HashSet<FState>(), FState.harder());
FState end = search.findGoal();
```

This search examines 4387 states to find a goal with a path length of 255.

There are certainly many possible heuristics. An alternative heuristic for the 15-puzzle is the total distance between each tile's position and its goal position. Specifically, the **Manhattan distance**, which is the sum of the number of rows and columns each tile is out of position. The total Manhattan distance is computed by the `totalDistance` method. The advantage of this heuristic is that it more closely estimates the number of moves necessary to solve a puzzle from a given state.

```java
public int totalDistance()
{
    int out = 0;
    for (int i = 0; i < BOARD; ++i)
    {
        if (i == empty) continue;
        int position = board.get(i) - 1;
        out += Math.abs((i % ROW) - (position % ROW));
        out += Math.abs((i / ROW) - (position / ROW));
    }
    return out;
}
```
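As a worked example of that arithmetic (a sketch using a standalone helper of my own, rather than the `FState` internals): tile 1 belongs at index 0, so if it sits at index 6 (row 1, column 2), it is one row and two columns out of place, contributing 3 to the total.

```java
public class ManhattanExample {
    static final int ROW = 4;

    // Manhattan distance between a tile's current index and its goal index.
    static int manhattan(int index, int goal) {
        return Math.abs(index % ROW - goal % ROW)   // columns out of place
             + Math.abs(index / ROW - goal / ROW);  // rows out of place
    }

    public static void main(String[] args) {
        // Tile 1 (goal index 0) sitting at index 6: |2 - 0| + |1 - 0| = 3.
        System.out.println(manhattan(6, 0)); // 3
    }
}
```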

In this case, rather than implementing `Comparator`, I made the `FState` class `Comparable`:

```java
@Override
public int compareTo(FState o)
{
    return this.totalDistance() - o.totalDistance();
}
```

The driver program creates a `PriorityQueue` with no arguments to use the `Comparable` ordering.

```java
System.out.println("Heuristic search: manhattan distance");
Queue<FState> queue = new PriorityQueue<FState>();
Search<FState> search = new Search<FState>(queue, new HashSet<FState>(), FState.harder());
FState end = search.findGoal();
```

This version only examines 1469 states to find a solution of 205 moves.

For comparison, according to Wikipedia, "For the 15-puzzle, lengths of optimal solutions range from 0 to 80 moves". The difference is that the raw heuristic search that I am writing about here behaves similarly to depth-first search in that it makes no claim to find an optimal path. On the other hand, the raw heuristic search does find a solution quickly, much faster than a breadth-first search.

You can get the code on GitHub. Other posts on the topic are State space search: the basics and State space search: A*.