## Sunday, January 29, 2012

### Nth Fibonacci number in Emacs Lisp

Following a post by Phil, a video game developer in Orange County, this is the Fibonacci function in Emacs Lisp:

```elisp
;;; Define some constants
(defconst s5 (sqrt 5) "square root of 5")
(defconst is5 (/ 1 s5) "inverse of the square root of 5")
(defconst ps52 (/ (+ 1 s5) 2) "1 plus sqrt(5) over 2")
(defconst ns52 (/ (- 1 s5) 2) "1 minus sqrt(5) over 2")

;;; The fibonacci function
(defun fibonacci (n)
  "Nth fibonacci number"
  (truncate (* is5
               (- (expt ps52 n)
                  (expt ns52 n)))))
```


I'll leave the proof of it to Phil (and it is pretty damn nice), but here are some quick examples:

```elisp
(fibonacci 0)  ; => 0
(fibonacci 1)  ; => 1
(fibonacci 2)  ; => 1
(fibonacci 3)  ; => 2
(fibonacci 4)  ; => 3
(fibonacci 5)  ; => 5
(fibonacci 6)  ; => 8
(fibonacci 8)  ; => 21
(fibonacci 10) ; => 55
```


And yes, I know it is using limited precision floating point numbers and will break six ways from Sunday.
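If you are curious where exactly it breaks, comparing it against an exact integer computation will tell you. Here is a sketch in Java (my translation of the Elisp above; the class and method names are mine):

```java
public class FibCheck {
    // Binet's formula with doubles, mirroring the Elisp definitions above;
    // the (long) cast truncates, like the Elisp truncate.
    static long binet(int n) {
        double s5 = Math.sqrt(5);
        double is5 = 1 / s5;
        double ps52 = (1 + s5) / 2;
        double ns52 = (1 - s5) / 2;
        return (long) (is5 * (Math.pow(ps52, n) - Math.pow(ns52, n)));
    }

    // Exact iterative Fibonacci; a long holds the result through fib(92).
    static long exact(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 92; n++) {
            if (binet(n) != exact(n)) {
                System.out.println("first mismatch at n = " + n);
                return;
            }
        }
        System.out.println("no mismatch through n = 92");
    }
}
```

A 53-bit double mantissa cannot represent Fibonacci numbers past roughly fib(78), and truncation amplifies the rounding error, so the mismatch is guaranteed to show up well before a long overflows.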

Still, this is now my favorite answer to an interview question.

## Wednesday, January 25, 2012

### Quote o' the week: a journal that meets in a hotel

I was reading through my pile of magazines and ran across Moshe Vardi's "Are You Talking to Me?" in the CACM. It's a brilliant editorial; he mentions Doug Zongker's "Chicken Chicken Chicken" talk (and paper!), the number of people attending a conference who think presentations are an opportunity to check their email (yeah, I know most of the good stuff is in the "hall track"), and the following:

> I am reminded of Lance Fortnow's pithy description of a computer-science conference as "a journal that meets at a hotel."

Unfortunately, Fortnow, who notably doesn't care for the state of computer science conferences, disclaims responsibility for the quote:

> How about this idea: We don't bother meeting and just collect the accepted papers into a single volume. We can give this idea an innovative name like a "journal".

And:

> In this month's CACM article, Moshe Vardi complains about the quality of conference talks. (I like the "journal that meets in a hotel" quote but it didn't originate with me.) You see this at STOC and FOCS too, people giving a talk only to other specialists in their field instead of aiming for a general audience.

Personally, I believe that the word "weasel" is inherently funny and deserves to be used much more often than it is.

## Sunday, January 22, 2012

### State space search: heuristics

Last time, I wrote, without any explanation:

> Using a FIFO queue searches the state space breadth-first, visiting all of the states at a given distance from the initial state before any states at a greater distance. Using a LIFO queue searches the state space depth-first, visiting all of the states following a given state before visiting any other states.

The way that works is important to the power of search. Consider using a first-in, first-out queue and expanding an initial state $$n$$. Expanding $$n$$ enqueues the states $$(o_1, o_2, o_3)$$. The next step is to remove the first state from the queue, expand it, and enqueue the results, leaving the queue as $$(o_2, o_3, p_1, p_2)$$ (assuming expanding $$o_1$$ produced $$p_1$$ and $$p_2$$). Following this queuing strategy, all of the $$o$$ states, one step from the initial state, are expanded before any of the $$p$$ states, two steps away. Hence, "breadth-first".

On the other hand, if the algorithm uses a last-in, first-out structure, expanding the same initial state $$n$$ still produces a queue of $$(o_1, o_2, o_3)$$. However, the next step is to remove the last state from the queue, expand it, and enqueue the results, producing $$(o_1, o_2, p_1, p_2)$$ (assuming expanding $$o_3$$ produced $$p_1$$ and $$p_2$$ this time). The step after that expands $$p_2$$, with the result that all of the states below $$o_3$$ are examined before the other states $$o_1$$ and $$o_2$$. Thus, "depth-first".
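To make the two visiting orders concrete, here is a small self-contained Java sketch (mine, not from the post's code) that expands a three-level tree of strings under each queue discipline:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.List;

public class QueueOrder {
    // Expand a state into two children; stop at strings of length 3,
    // giving a three-level tree rooted at "n".
    static List<String> expand(String s) {
        if (s.length() >= 3) return Collections.emptyList();
        return Arrays.asList(s + "1", s + "2");
    }

    // Visit the whole tree, removing from the front (FIFO) or back (LIFO).
    static List<String> visit(Deque<String> queue, boolean lifo) {
        List<String> order = new ArrayList<String>();
        queue.add("n");
        while (!queue.isEmpty()) {
            String current = lifo ? queue.removeLast() : queue.removeFirst();
            order.add(current);
            queue.addAll(expand(current));
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println("FIFO: " + visit(new ArrayDeque<String>(), false));
        System.out.println("LIFO: " + visit(new ArrayDeque<String>(), true));
    }
}
```

With the FIFO discipline, the order comes out level by level (n, n1, n2, n11, ...); with the LIFO discipline, the search dives all the way below n2 before it ever touches n1.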

So far, I have been discussing uninformed search; these approaches order their exploration of the problem's state space based on the pattern of state-space expansion. There is a better way: informed search applies information about the problem to focus the exploration of the state space on important regions. Here's an example:

## Sliding number puzzles: the 15-puzzle

Returning to Gerhard Wickler, he describes the basics of the 8-puzzle (a smaller version of the 15-puzzle) as,

> The 8-puzzle is a smaller version of the slightly better known 15-puzzle. The puzzle consists of an area divided into a grid, 3 by 3 for the 8-puzzle, 4 by 4 for the 15-puzzle. On each grid square is a tile, except for one square which remains empty. Thus, there are eight tiles in the 8-puzzle and 15 tiles in the 15-puzzle. A tile that is next to the empty grid square can be moved into the empty space, leaving its previous position empty in turn. Tiles are numbered, 1 thru 8 for the 8-puzzle, so that each tile can be uniquely identified.
>
> The aim of the puzzle is to achieve a given configuration of tiles from a given (different) configuration by sliding the individual tiles around the grid as described above.

This puzzle is not hard to put into a form suitable for state space search, which should provide a solution as long as the algorithm does not expand a state more than once. If the shortest solution is needed, the breadth-first approach is a straightforward choice.

The state space for sliding puzzles is significantly larger than that of the missionaries and cannibals puzzle: there are $$n!$$ ways to permute $$n$$ squares, although only half of those are reachable from a given goal state (the proof of that is enlightening in itself). The 8-puzzle therefore illustrates one of the hazards of state space manipulation: some states may be unsolvable. Counting the empty square along with the tiles, the 8-puzzle has 181,440 reachable states (9!/2); the 15-puzzle has 10,461,394,944,000 (16!/2).
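Those counts are easy to verify; here is a quick sketch (the helper name is mine):

```java
public class PuzzleStates {
    // n!/2: the number of configurations reachable from a given goal state
    // of a sliding puzzle with n squares (tiles plus the blank).
    static long reachable(int squares) {
        long f = 1;
        for (int i = 2; i <= squares; i++) f *= i;
        return f / 2;
    }

    public static void main(String[] args) {
        System.out.println("8-puzzle:  " + reachable(9));   // 181440
        System.out.println("15-puzzle: " + reachable(16));  // 10461394944000
    }
}
```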

As a result, a complete search of the state space is likely to be infeasible and any uninformed search will also likely take more resources than strictly desirable. But there are ways to exploit knowledge about the problem to shorten the search.

Suppose, for example, that given a state, you could estimate how many moves are needed to get from that state to a goal. Then you could select the next state to explore based on how close it appears to be to the solution. Fortunately, both parts are possible: there are a number of heuristics that can provide the necessary estimate, and our friend, the priority queue, can handle the selection of the next state to expand.

One possible heuristic is the number of out-of-place tiles, because that is obviously related to the number of moves necessary to get the tiles into their places. Importantly, for reasons that I don't intend to go into here, it always underestimates the number of moves. If the search explores the state space, focusing on the states with fewer and fewer tiles out of place, it is likely to find the goal while significantly reducing the resources required.

To implement the states for the 15-puzzle, I chose to represent the positions of the tiles as locations in an ArrayList of integers, with the tile values being 1 through 15 with 0 as the empty tile. (For expediency, I also record the location of the empty tile in the array.) As a result, determining whether a state is the goal is simple. For locations 0 to 14, whenever the value of the location is one greater than the location number, the puzzle is solved.

```java
@Override
public boolean isGoal()
{
    // The final square should be 0 but I don't care.
    for (int i = 0; i < BOARD - 1; ++i)
    {
        if (board.get(i) != i + 1) { return false; }
    }
    return true;
}
```


Expanding a state is a bit more complicated, since it requires attempting to move the empty space left, right, up, and down. (Or, alternately, moving the tile to the left into the empty space. It depends on how you look at these things.) One of the directional methods is shown below; the other three are similar.

```java
@Override
public List<FState> expand()
{
    List<FState> result = new ArrayList<FState>();
    for (FState newState : Arrays.asList(this.left(), this.right(), this.up(), this.down()))
    {
        if (newState != null) { result.add(newState); }
    }
    return result;
}
```

[...]

```java
private FState left()
{
    if (empty % ROW != 0)
    {
        // Empty location is not at left edge
        return new FState(this, empty - 1);
    }
    else
    {
        return null;
    }
}
```


(I'll leave it to you to figure out how modular arithmetic lets me fiddle with a rectangular grid in a one-dimensional array.)
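For the impatient, here is a minimal sketch of the index arithmetic (my illustration; it mirrors the ROW constant in the code above, with -1 standing in for "off the board"):

```java
public class GridIndex {
    static final int ROW = 4;  // 4 columns per row in the 15-puzzle

    // A one-dimensional index i maps to (row, column) = (i / ROW, i % ROW).
    static int row(int i) { return i / ROW; }
    static int col(int i) { return i % ROW; }

    // Neighbors of location i, or -1 if the move falls off the 4x4 board.
    static int left(int i)  { return col(i) == 0       ? -1 : i - 1; }
    static int right(int i) { return col(i) == ROW - 1 ? -1 : i + 1; }
    static int up(int i)    { return row(i) == 0       ? -1 : i - ROW; }
    static int down(int i)  { return i + ROW > 15      ? -1 : i + ROW; }

    public static void main(String[] args) {
        // Location 5 is row 1, column 1; all four moves are legal.
        System.out.println(left(5) + " " + right(5) + " " + up(5) + " " + down(5));
    }
}
```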

Implementing the out-of-position heuristic is similar to checking isGoal, with the complication of counting the number of tiles out of place.

```java
public int outOfPosition()
{
    int out = 0;
    for (int i = 0; i < BOARD - 1; ++i)
    {
        if (board.get(i) != i + 1)
        {
            ++out;
        }
    }
    return out;
}
```


And finally, to allow the heuristic function to interface with a priority queue, I need a Comparator implementation.

```java
public static class OutOfPosition implements Comparator<FState>
{
    @Override
    public int compare(FState o1, FState o2)
    {
        return o1.outOfPosition() - o2.outOfPosition();
    }
}
```


The following code runs the search on a "harder" problem (harder than the one in the illustration above, anyway).

```java
System.out.println("Heuristic search: out of position");
Queue<FState> priority = new PriorityQueue<FState>(100, new FState.OutOfPosition());
Search<FState> search = new Search<FState>(priority, new HashSet<FState>(), FState.harder());
FState end = search.findGoal();
```


This search examines 4387 states to find a goal with a path length of 255.

There are certainly many possible heuristics. An alternative for the 15-puzzle is the total distance between each tile's position and its goal position; specifically, the Manhattan distance, the sum of the number of rows and columns each tile is away from its goal. The total Manhattan distance is computed by the totalDistance method. The advantage of this heuristic is that it more closely estimates the number of moves necessary to solve the puzzle from a given state.

```java
public int totalDistance()
{
    int out = 0;
    for (int i = 0; i < BOARD; ++i)
    {
        if (i == empty) continue;
        int position = board.get(i) - 1;
        out += Math.abs((i % ROW) - (position % ROW));
        out += Math.abs((i / ROW) - (position / ROW));
    }
    return out;
}
```


In this case, rather than implementing a separate Comparator, I made the FState class itself Comparable:

```java
@Override
public int compareTo(FState o)
{
    return this.totalDistance() - o.totalDistance();
}
```


The driver program creates the PriorityQueue with no arguments, so it falls back on the natural ordering that Comparable defines.

```java
System.out.println("Heuristic search: manhattan distance");
Queue<FState> priority = new PriorityQueue<FState>();
Search<FState> search = new Search<FState>(priority, new HashSet<FState>(), FState.harder());
FState end = search.findGoal();
```


This version only examines 1469 states to find a solution of 205 moves.

For comparison, according to Wikipedia, "For the 15-puzzle, lengths of optimal solutions range from 0 to 80 moves". The difference is that the raw heuristic search that I am writing about here behaves similarly to depth-first search in that it makes no claim to find an optimal path. On the other hand, the raw heuristic search does find a solution quickly, much faster than a breadth-first search.

You can get the code on GitHub. Other posts on the topic are State space search: the basics and State space search: A*.

## Sunday, January 15, 2012

### State space search: the basics

A while back, briefly discussing Steve McConnell's Code Complete, I wrote, "On the other hand, McConnell previously presented as a good use of recursion a maze-solving function; maze solving is an application of state-space search, which has a simple, clear, and wildly flexible iterative implementation based on a priority queue." Between that and the Stanford AI course (which only briefly mentioned most of what I spent a semester learning back in the late '80s), I have been pondering state space search lately. It really is an elegant algorithm. So, I thought I'd spend some spare time writing an elaborate and likely unnecessary post or three about it.

I'll begin with a state. A state is a particular configuration of the system in question. For example, a board position in chess is a state. When you place the pieces on the board, you create an initial state; after some number of moves, the board is in some other state. When the game is over, the board is in a terminal state; if you won, it's in a goal state, but the difference is the topic of some other post. For the moment, I will only be talking about systems where the only terminal states are goals.

A state space is the set of all possible states for a system. For chess, the state space is all possible board positions, and there are a lot of them. Chess has a fairly large state space. The states in a state space are related by the valid actions of the system; given a state those actions produce a collection of the next states reachable by one operation of the system. For chess, the actions are the valid moves of individual chess pieces and, because the initial state is defined by the rules of chess, the state space includes all board positions reachable from that initial state by legal chess moves.

State space search is the technique of searching for a goal state, beginning with an initial state and using the operations available in each state to generate new states. Or, alternatively, beginning with a goal state and searching for a suitable initial state. Or possibly even starting with an initial state and a goal state and searching for a way to connect the two in the state space. Each of those approaches is suitable for some problems, depending on the nature of the problem. The key point is to explore the state space of the problem, searching for a solution.

State space search is one of the most powerful techniques in computer science. If a problem can be framed in a way suitable to state space search (and most can), then the algorithm is guaranteed to find a solution. Theoretically, anyway. If you have a problem and do not have any more direct algorithm, it is the go-to way to solve it. Or, at least it was; there are other techniques that have become popular, usually for very good reasons.

On the other hand, state space search has some serious weaknesses. For one thing, not all problems can be usefully framed in terms of a state space. It requires that the problem be both discrete and deterministic; discrete in such a way that all states are distinct from one another and deterministic so that an action produces exactly one subsequent state from an input state. But that is not the worst part. State space search is the poster child for exponential algorithms. I said it was guaranteed to find a solution, but I carefully didn't say you would be alive to see it. Or, anything would be alive to see it. But there are plenty of problems to which state space search is applicable.

## The algorithm

So, with all that buildup, the algorithm is a typical let-down:

```
def search(queue):
    while not empty?(queue):
        current_state := remove_element(queue)
        if is_goal?(current_state):
            return current_state
        for next_state in expand(current_state):
            if not seen?(next_state):
                mark_seen(next_state)
                add_element(queue, next_state)
    return nil
```


The intelligence of the algorithm actually lies in:

• the queue used to store states before they are examined and expanded, which determines the nature of the search;
• the rules that determine whether a state is a goal and how a state is expanded into subsequent states; and
• the set used to determine whether a state has been seen before; this and the queue determine how much memory the algorithm uses.

How does this look in Java?

```java
public T findGoal()
{
    while (!queue.isEmpty())
    {
        statesExamined++;

        T current = queue.remove();
        if (current.isGoal())
        {
            return current;
        }
        for (T newState : current.expand())
        {
            if (!seen.contains(newState))
            {
                seen.add(newState);
                queue.add(newState);
            }
        }
    }
    return null;
}
```


statesExamined is an accounting addition, used just for entertainment purposes. The rest of the class is similarly simple, initializing seen and queue based on the constructor arguments. Like I said, nothing up my sleeve....

```java
public class Search<T extends State<T>>
{
    private int statesExamined;
    private final Queue<T> queue;
    private final Set<T> seen;

    public Search(Queue<T> queue, Set<T> seen, T initialState)
    {
        this.queue = queue;
        this.seen = seen;
        this.statesExamined = 0;
        this.queue.add(initialState);
        this.seen.add(initialState);
    }

    [...findGoal...]

    public int statesExamined()
    {
        return statesExamined;
    }
}
```


The Set seen is used to prevent states from being expanded repeatedly. It records each state when it is first seen and subsequently prevents it from being re-added to the queue. Without this, the algorithm could enter an infinite loop if the state space contains cycles.

The State interface is not much more interesting. The only required methods are one to expand the state, producing the list of subsequent states reachable by valid actions, and a predicate that is true if the state is a goal.

```java
public interface State<T extends State<T>>
{
    public List<T> expand();

    public boolean isGoal();
}
```

If the state space search code is so simple, how is the algorithm capable of such power? How about an example?

## Missionaries and cannibals: river crossing

The problem is, according to Gerhard Wickler,

> On one bank of a river are three missionaries and three cannibals. There is one boat available that can hold up to two people and that they would like to use to cross the river. If the cannibals ever outnumber the missionaries on either of the river's banks, the missionaries will get eaten.
>
> How can the boat be used to safely carry all the missionaries and cannibals across the river?

(I have seen versions where the threat is the missionaries converting the cannibals, rather than the cannibals cannibalizing (ahem) the missionaries. I am a traditionalist, however.)

Gerhard Wickler also has an excellent illustration of the complete state space for this problem. The initial state is on the left; the goal state is on the right. With 16 states, the state space is tiny in comparison with the space of other problems, but it gets my point across.

Each state is a configuration of the missionaries and cannibals on each side of the river as well as which shore the boat is on. The process of crossing the river is abstracted away; each state transition involves the boat, with at least one passenger, crossing the river.

Specifically, a state in the puzzle is made up of the participants on this side of the river, the participants on that side of the river, and the location of the boat, this side or that side.

```java
private static enum BoatSide { THIS, THAT };
```


On either side of the river are some number of missionaries and cannibals. I abstracted out the state of each side of the river as a class. There are a couple of special features of RiverSide, though.

• First, the class includes methods missionaries, cannibals, and both, which return a new RiverSide instance with the number of missionaries or cannibals modified. These are used to generate new states while expanding a given state.
• Second, the class implements equals and hashCode, making instances eligible to be stored in a HashSet.
```java
private static class RiverSide
{
    public final int missionaries;
    public final int cannibals;

    public RiverSide(int missionaries, int cannibals)
    {
        this.missionaries = missionaries;
        this.cannibals = cannibals;
    }

    public RiverSide missionaries(int n)
    {
        return new RiverSide(missionaries + n, cannibals);
    }

    public RiverSide cannibals(int n)
    {
        return new RiverSide(missionaries, cannibals + n);
    }

    public RiverSide both(int n)
    {
        return new RiverSide(missionaries + n, cannibals + n);
    }

    @Override
    public boolean equals(Object o)
    {
        if (! (o instanceof RiverSide)) return false;
        final RiverSide other = (RiverSide) o;
        return this.missionaries == other.missionaries
            && this.cannibals == other.cannibals;
    }

    @Override
    public int hashCode()
    {
        return missionaries ^ cannibals;
    }

    @Override
    public String toString()
    {
        return String.format("[%d missionaries, %d cannibals]", missionaries, cannibals);
    }
}
```


There are two constructors for the state class: the first, public, one creates the initial state for the problem, with three missionaries, three cannibals, and the boat on this side of the river. The second, private, one creates a new MCState based on a previous one and the updated inhabitants of this and that side. The previous state is recorded so that the program can reconstruct the sequence of moves in a solution.

The methods required for State are next. isGoal is implemented by checking how many missionaries and cannibals are on that side of the river. expand is more complex, but breaks down into a number of branches. If the boat is on this side of the river, expand generates all possible ways for it to get to the other side, carrying one or two missionaries or cannibals; if the boat is on the other side, it does the reverse. Specifically, it collects a list of subsequent states: if at least one missionary is on this side, then a possible next state has the boat and one more missionary on that side. Likewise, if there are two missionaries on this side with the boat, another next state has two fewer missionaries on this side and two more, plus the boat, on that side. The same logic applies to the cannibals, and to a missionary and a cannibal crossing together. All of these branches go through queueNewState, which only adds the new state to the list if it is valid, that is, if it does not present the opportunity for some cannibals to devour some missionaries. As a result of the branches, each state has five possible next states, although some (or most) of those may be invalid.

MCState also implements hashCode and equals, making instances eligible for HashSet storage.

```java
public class MCState implements State<MCState>
{
    [...]

    public final BoatSide boatSide;
    public final RiverSide thisSide;
    public final RiverSide thatSide;
    public final MCState previous;

    public MCState()
    {
        boatSide = BoatSide.THIS;
        thisSide = new RiverSide(3, 3);
        thatSide = new RiverSide(0, 0);
        previous = null;
    }

    private MCState(MCState previous, RiverSide thisSide, RiverSide thatSide)
    {
        this.boatSide = previous.boatSide == BoatSide.THIS ? BoatSide.THAT : BoatSide.THIS;
        this.thisSide = thisSide;
        this.thatSide = thatSide;
        this.previous = previous;
    }

    @Override
    public boolean isGoal()
    {
        return thatSide.missionaries == 3 && thatSide.cannibals == 3;
    }

    @Override
    public List<MCState> expand()
    {
        List<MCState> newStates = new ArrayList<MCState>();
        if (boatSide == BoatSide.THIS)
        {
            if (thisSide.missionaries > 0)
            {
                final MCState newState =
                    new MCState(this, thisSide.missionaries(-1), thatSide.missionaries(1));
                queueNewState(newStates, newState);
            }
            if (thisSide.missionaries > 1)
            {
                final MCState newState =
                    new MCState(this, thisSide.missionaries(-2), thatSide.missionaries(2));
                queueNewState(newStates, newState);
            }
            if (thisSide.cannibals > 0)
            {
                final MCState newState =
                    new MCState(this, thisSide.cannibals(-1), thatSide.cannibals(1));
                queueNewState(newStates, newState);
            }
            if (thisSide.cannibals > 1)
            {
                final MCState newState =
                    new MCState(this, thisSide.cannibals(-2), thatSide.cannibals(2));
                queueNewState(newStates, newState);
            }
            if (thisSide.missionaries > 0 && thisSide.cannibals > 0)
            {
                final MCState newState =
                    new MCState(this, thisSide.both(-1), thatSide.both(1));
                queueNewState(newStates, newState);
            }
        }
        else
        {
            if (thatSide.missionaries > 0)
            {
                final MCState newState =
                    new MCState(this, thisSide.missionaries(1), thatSide.missionaries(-1));
                queueNewState(newStates, newState);
            }
            if (thatSide.missionaries > 1)
            {
                final MCState newState =
                    new MCState(this, thisSide.missionaries(2), thatSide.missionaries(-2));
                queueNewState(newStates, newState);
            }
            if (thatSide.cannibals > 0)
            {
                final MCState newState =
                    new MCState(this, thisSide.cannibals(1), thatSide.cannibals(-1));
                queueNewState(newStates, newState);
            }
            if (thatSide.cannibals > 1)
            {
                final MCState newState =
                    new MCState(this, thisSide.cannibals(2), thatSide.cannibals(-2));
                queueNewState(newStates, newState);
            }
            if (thatSide.missionaries > 0 && thatSide.cannibals > 0)
            {
                final MCState newState =
                    new MCState(this, thisSide.both(1), thatSide.both(-1));
                queueNewState(newStates, newState);
            }
        }
        return newStates;
    }

    private void queueNewState(final List<MCState> newStates, final MCState newState)
    {
        if (newState.isValid())
        {
            newStates.add(newState);
        }
    }

    private boolean isValid()
    {
        return (thisSide.missionaries == 0 || thisSide.missionaries >= thisSide.cannibals)
            && (thatSide.missionaries == 0 || thatSide.missionaries >= thatSide.cannibals);
    }

    @Override
    public String toString()
    {
        final String format = (boatSide == BoatSide.THIS) ? "%s * -> %s" : "%s -> * %s";
        return String.format(format, thisSide, thatSide);
    }

    @Override
    public boolean equals(final Object o)
    {
        if (! (o instanceof MCState)) return false;
        final MCState other = (MCState) o;
        return this.thisSide.equals(other.thisSide)
            && this.thatSide.equals(other.thatSide)
            && this.boatSide == other.boatSide;
    }

    @Override
    public int hashCode()
    {
        return thisSide.hashCode() ^ thatSide.hashCode() ^ boatSide.hashCode();
    }
}
```


That is a fair amount of code, and nicely tedious. The key points, though, are the implementations of the State interface: the state knows whether or not it is a goal, and it knows how to generate the next states that follow from immediate, single-step actions.

After that big wall o' code, you are probably waiting for a punch line. Unfortunately, there isn't one. Just a driver class invoking the search twice with a couple of different queue parameters. The first option uses ArrayDeque, which implements a First-In, First-Out queue.

```java
Queue<MCState> fifo = new ArrayDeque<MCState>();
Search<MCState> search = new Search<MCState>(fifo, new HashSet<MCState>(), new MCState());
MCState end = search.findGoal();
```


The second option uses an anonymous subclass of ArrayDeque whose remove calls removeLast rather than removeFirst. As a result, it implements a Last-In, First-Out queue, or a stack.

```java
Queue<MCState> lifo = new ArrayDeque<MCState>() {
    @Override
    public MCState remove()
    {
        return super.removeLast();
    }
};
Search<MCState> search = new Search<MCState>(lifo, new HashSet<MCState>(), new MCState());
MCState end = search.findGoal();
```


These two options are my first demonstration of the flexibility of the state space search algorithm. Using a FIFO queue searches the state space breadth-first, visiting all of the states at a given distance from the initial state before any states at a greater distance. Using a LIFO queue searches the state space depth-first, visiting all of the states following a given state before visiting any other states. The difference in this case is modest; a FIFO queue visits 16 states while a LIFO queue visits 13 to find a solution. But that is just the tip of the iceberg.

Update: completely forgot about this. If you want to try it yourself, there is a related problem involving a farmer crossing a river. The farmer is carrying a bushel of corn, a duck, and a fox. (Yeah, I don't really get it either.) If the duck is left unattended with the corn, the corn gets eaten. If the fox is left unattended with the duck, the duck gets eaten. The boat will only carry the farmer and one of the other items. How can the farmer get to the other side of the river with the corn, duck, and fox intact? You can get the state space search code, such as it is, and you will need to write a State for the problem similar to MCState and a driver program.
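If you want a nudge without the whole answer, here is a sketch of mine (hypothetical names, not from the repository) of what the state for the farmer problem might look like; expand and the driver are left as the exercise intends:

```java
public class FarmerState {
    enum Side { THIS, THAT }

    final Side farmer, corn, duck, fox;

    FarmerState(Side farmer, Side corn, Side duck, Side fox) {
        this.farmer = farmer;
        this.corn = corn;
        this.duck = duck;
        this.fox = fox;
    }

    // Invalid if the duck is left alone with the corn,
    // or the fox is left alone with the duck.
    boolean isValid() {
        if (duck == corn && duck != farmer) return false;
        if (fox == duck && fox != farmer) return false;
        return true;
    }

    // Everything, farmer included, has made it across.
    boolean isGoal() {
        return farmer == Side.THAT && corn == Side.THAT
            && duck == Side.THAT && fox == Side.THAT;
    }

    public static void main(String[] args) {
        FarmerState start = new FarmerState(Side.THIS, Side.THIS, Side.THIS, Side.THIS);
        System.out.println("start valid: " + start.isValid());
    }
}
```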

For the continuing story of state space search, check out the next post, State space search: heuristics and State space search: A*.

You can get the code on github. I owe a considerable debt to Gerhard Wickler for his problem definition and illustration. He has a Java library of documented search code that you might want to check out as well.

Update 20 Nov 2012: Relevant.

## Wednesday, January 11, 2012

### Quote o' the week: I've got my skillet right here, buddy

Perusing some cable-porn on /r/sysadmin, I found this piece of wisdom:

> It's a two-pronged issue: Developers don't usually have the skillets or interests required for system administration.

That was part of a couple of insightful posts by sylver_dragon and tuba_man about the costs of using developers instead of sysadmins for system administration.

> This is one of my biggest complaints about the blanket term "IT". I have dealt with too many managers who don't understand that there really is (or should be) a separation between Systems Administration and Software Development. Usually, you get a developer acting as admin when either role is (at least) a full time job. The end result is usually that the little things on the sysadmin side suffer. Cable routing is treated as, "fuck it, good enough." Documentation is usually not maintained, regular log checking doesn't happen, etc. It's not that the developer can't do it, it's just that he doesn't have time.

> Exactly. Not only does he not have the time, but in a lot of ways, it prevents the developer (and the project or company as a whole) from operating most effectively. It's a two-pronged issue: Developers don't usually have the skillets or interests required for system administration. Application developers can work a lot better when they have solid systems to code on and deploy to. So a developer doing sysadmin work solely because he has to is in many cases shooting himself in the foot.

> In my instance at least, our company's two developers work a hell of a lot better when they don't have to worry about systems or infrastructure. Since hiring me on, we've had more code output, higher quality code output, more uptime, better response times, and less customer complaints.

I, personally, keep my skillet with me at all times.

## Wednesday, January 4, 2012

### Update to "System programming"

In case anyone is interested, I added a brief discussion of Jonathan Shapiro's "Programming Language Challenges in Systems Codes: Why Systems Programmers Still Use C, and What to Do About It" to Systems programming. He has an interesting definition of systems programs, and significant insights into the requirements for a systems programming language.