They have also gone to a lot of trouble to make their presentation readable and inspiring, aspiring to "the accessibility of Martin Gardner, the playfulness of Douglas Hofstadter, and the lyricism of Vladimir Nabokov". That's a bit of a reach, given this is a completely different genre to anything written by Nabokov (butterflies notwithstanding) and has significantly harder mathematics than anything in Gödel, Escher, Bach, let alone Gardner, but one can nevertheless tell that more attention has been paid to these matters than is usual in textbooks.
A toolkit of methods is developed, but this is always well motivated and the emphasis is on telling a coherent story, on making a few key ideas and results as clear as possible. Concrete examples are worked through in detail, and the algorithms which are the central "protagonists" of the story are formally introduced and named when first encountered, helping to provide some structure. We are also given a feel for the connections of computation to the broader reaches of mathematics and physics. Only fragments of historical background are included, however, along with a little ornamental material: epigraphs (most subsections have one) and a few illustrative halftones of works of art, photos of computing machinery, and cartoons.
Some acquaintance with computer science would help, but it won't be a problem if the reader has had no previous exposure to the theory of computation and doesn't know what a Turing Machine is. More important is a general mathematical competence. No deep knowledge is assumed, but there are some quite involved proofs and the presentation assumes comfort with calculus, probability, linear algebra, and differential equations, while key examples use bits of graph theory, number theory, group theory, and assorted other branches of mathematics. Taken as a whole this is really a graduate text, but the first third is simpler and more accessible and could be read by undergraduates or anyone with equivalent mathematics.
The first eight chapters, taking up a little over a third of The Nature of Computation, are an introduction to the hierarchy of complexity classes, with the central target an understanding of the P/NP distinction.
Moore and Mertens begin with two simple and intuitive problems on graphs: determining whether there exists an Eulerian path, which traverses every edge exactly once, and whether there exists a Hamiltonian path, which visits every node exactly once. The latter problem, it turns out, is much harder than the former.
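The gap between the two is visible even in toy code. As a quick illustration of my own (not the book's presentation): Euler's degree criterion decides the first problem in linear time, while for the second nothing fundamentally better than trying all orderings is known.

```python
from itertools import permutations

def has_eulerian_path(adj):
    # Euler's criterion: a connected graph has an Eulerian path
    # iff it has zero or two vertices of odd degree -- linear time.
    odd = sum(1 for v in adj if len(adj[v]) % 2 == 1)
    return odd in (0, 2)

def has_hamiltonian_path(adj):
    # No comparable shortcut is known: try every vertex ordering.
    return any(all(b in adj[a] for a, b in zip(p, p[1:]))
               for p in permutations(adj))

# A "bowtie": two triangles sharing vertex 2.
bowtie = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
print(has_eulerian_path(bowtie))     # True: every degree is even
print(has_hamiltonian_path(bowtie))  # True, e.g. 0-1-2-3-4
```

The first function reads the answer off the vertex degrees; the second has factorial running time, already painful for a few dozen vertices.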
This leads to the concept of an algorithm and what it means for one problem to be harder than another, with explanations of asymptotic scaling (how the time taken by an algorithm increases as the size of the problem increases), intrinsic complexity, and what it means for a problem to be soluble in polynomial time (in the class P) rather than in exponential time (in EXP) or worse.
Some insights into complexity come from tools for handling it such as recursion, divide and conquer, and dynamic programming. (These are staple programming methods, but the perspective here is rather different.) Other key concepts include duality, where two different problems turn out to be the same problem approached from different directions, and the possibility of reducing one problem to another. Again, these are mostly illustrated with graph algorithms.
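The flavour of dynamic programming can be conveyed with the standard toy example (a generic sketch of mine, not one of the book's graph problems): the same recursion with and without remembered subproblems.

```python
from functools import lru_cache

def fib_naive(n):
    # Plain recursion recomputes subproblems: about phi^n calls.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    # Dynamic programming: cache each subproblem, so only n distinct
    # calls are ever made -- exponential collapses to linear.
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(90))  # 2880067194370816120
```

`fib_naive(35)` already takes seconds; `fib_dp(90)` is instant.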
Then comes an informal introduction to the class NP, those problems where potential solutions can be verified in polynomial time. As well as the problem of finding Hamiltonian paths, other key NP problems include graph colouring, SAT (deciding the satisfiability of Boolean formulae), integer partitioning, and primality testing. More formal definitions of NP are given, in terms of witnesses and non-deterministic computation, and we are introduced to coNP, problems where it's easy to check counter-examples instead of examples.
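The defining asymmetry of NP, that a proposed solution is quick to check even when finding one is not, takes only a few lines to demonstrate. A sketch of my own, using graph colouring:

```python
def verify_colouring(adj, colouring):
    # Checking a claimed colouring (the "witness") takes time
    # linear in the number of edges; finding one is the hard part.
    return all(colouring[u] != colouring[v] for u in adj for v in adj[u])

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}  # a triangle plus a tail
print(verify_colouring(graph, {0: "r", 1: "g", 2: "b", 3: "r"}))  # True
print(verify_colouring(graph, {0: "r", 1: "r", 2: "b", 3: "r"}))  # False
```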
A problem is NP-complete if any NP problem can be reduced to it in polynomial time. Moore and Mertens present a variety of examples, showing that a whole range of problems are NP-complete. There are some surprising results here, in particular that solving problems approximately, or as well as possible, can be harder than finding a perfect solution if one exists. Cases where one problem is in P and a slight variant is NP-complete give us a feel for "the boundary between easy and hard".
Whether P = NP or not is "the central question of computational complexity theory". The consequences of their being equal would be so dramatic that most people believe P ≠ NP, but attempts to prove that have so far failed. Moore and Mertens look at several different approaches and why they haven't succeeded, looking at why lower bounds on complexity are harder to prove than upper bounds, at the use of diagonalisation to find problems outside P, at problems which appear to be intermediate between P and NP-complete, at the parallel between non-constructive proofs and subclasses of NP, and so forth.
Starting with the historical context of Babbage and Hilbert, a look at the "grand unified theory of computation" introduces recursive functions, the lambda calculus, and Turing machines and demonstrates their equivalence; the Church-Turing Thesis asserts that there is no broader concept of computation. The ubiquity of computation is illustrated with examples from domains such as the Game of Life and Wang tiles.
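It is striking how small a Turing machine is as a mathematical object. A minimal simulator, my own sketch rather than the book's formalism, here running a machine that increments a binary number:

```python
def run_turing_machine(rules, tape, state="start"):
    # rules maps (state, symbol) to (new state, symbol to write, head move);
    # "_" is the blank symbol and "halt" stops the machine.
    tape, head = dict(enumerate(tape)), 0
    while state != "halt":
        state, write, move = rules[(state, tape.get(head, "_"))]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Binary increment: scan right to the end, then carry 1s to 0s leftward.
inc = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}
print(run_turing_machine(inc, "1011"))  # 1100
```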
A final chapter turns to space complexity, exploring classes such as PSPACE and NSPACE and the relationships between them. Logical instead of algorithmic representations of problems are possible, and many have intuitive representations as games. Looking at the complexity of some real games, the word game Geography (naming places, each starting with the letter the previous one ended with) is PSPACE-complete, as is a one-player game involving sliding blocks, while Go is in EXPTIME.
The remaining two thirds of The Nature of Computation consists of seven long chapters on more specialised topics, which are to an extent independent. Some of this is considerably harder than the earlier material, with some long proofs and quite recent research. I skipped over parts of some chapters, but found them structured to make it easy to do that without losing the thread of the broader story.
A chapter on optimization and approximation introduces linear programming and a body of associated concepts and methods: duality, interior point methods, integer linear programming, and so forth. It also offers some advice on how to solve hard problems in practice, with a long worked example applying different methods to the Travelling Salesman Problem on a 42-city graph.
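To give a taste of the "in practice" theme, the crudest heuristic for the Travelling Salesman Problem is greedy construction, which produces a starting tour that methods like branch-and-bound or local search then improve. A made-up sketch of mine, not the book's 42-city worked example:

```python
def nearest_neighbour_tour(dist):
    # Greedy construction: from city 0, always move to the closest
    # unvisited city, then close the loop back to the start.
    n = len(dist)
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))
    return tour, length

# Four cities with rounded pairwise distances (an invented instance).
dist = [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
tour, length = nearest_neighbour_tour(dist)
print(tour, length)  # a closed tour of length 4
```

Greedy happens to be optimal on this tiny instance; in general it can be badly beaten, which is exactly why the more sophisticated machinery of the chapter exists.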
Turning to randomized algorithms, Moore and Mertens consider applications to finding the minimum and maximum cuts of a graph, 3-SAT, deciding whether the minimum spanning tree is unique, polynomial identity testing, and primality testing (including the proof that the latter is in P). Among other ideas, they consider games where we choose an algorithm and an adversary chooses the instance it will run on, the visualisation of games as AND-OR or NAND trees, solving problems in a higher dimension and then projecting down, and the definition of randomized complexity classes.
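Karger's contraction algorithm for minimum cut conveys the spirit of randomized algorithms nicely: any single run is fast but only sometimes right, and independent repetition drives the failure probability down as far as desired. A rough sketch of my own:

```python
import random

def karger_cut(edges, n):
    # One round of Karger's contraction: merge the endpoints of random
    # edges until two super-vertices remain; the edges still crossing
    # between them form a (not necessarily minimum) cut.
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    remaining = n
    while remaining > 2:
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            remaining -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

# Two triangles joined by a single bridge: the true minimum cut is 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
best = min(karger_cut(edges, 6) for _ in range(100))
print(best)  # 1 with overwhelming probability
```

Each run succeeds with probability at least 2/n(n-1), so a few hundred repetitions make missing the minimum cut vanishingly unlikely.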
The idea of an oracle challenge and answer can be extended to so-called "Arthur-Merlin" games in which an all-powerful prover, capable of solving any problem that can be solved, attempts to convince "a mere mortal, bound within polynomial time" of the existence of a solution, perhaps without giving away the solution itself. ("In theological terms, a polynomial-time conversation with a god is equivalent to polynomial memory, and all the time in the world.") This chapter also looks at pseudorandom number generators, which can be used to derandomize some randomized algorithms.
Inspired by statistical physics, random walks and Markov chains give us a way of approximating random selection of solutions, and there are a variety of tools for studying how a system approaches equilibrium, including equilibrium indicators, coupling from the past, the spectral gap, flows of probability, expanders, and mixing in time and space. The example treated in most detail here is the problem of colouring a graph randomly.
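The chain used for random colouring, Glauber dynamics, is strikingly simple: pick a random vertex and recolour it with any colour its neighbours aren't using. A sketch of mine (Jerrum's rapid-mixing guarantee needs the number of colours q to be at least 2Δ + 1 for maximum degree Δ):

```python
import random

def glauber_step(adj, colouring, q):
    # Recolour one random vertex with a colour unused by its neighbours;
    # this Markov chain converges to the uniform distribution on
    # proper q-colourings.
    v = random.choice(list(adj))
    banned = {colouring[u] for u in adj[v]}
    allowed = [c for c in range(q) if c not in banned]
    if allowed:
        colouring[v] = random.choice(allowed)

# A 5-cycle with q = 5 colours (max degree 2, so q >= 2*2 + 1 mixes rapidly).
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
col = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}  # start from any proper colouring
for _ in range(1000):
    glauber_step(cycle, col, 5)
print(all(col[u] != col[v] for u in cycle for v in cycle[u]))  # True: stays proper
```

The hard part, which the chapter's machinery addresses, is proving how many steps are needed before the colouring is close to uniformly random.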
Among counting problems, finding the number of spanning trees of a graph is easy, but finding the number of perfect matchings is hard. Some technical apparatus involving determinants and permanents is developed to try to understand this, along with the relationship between counting and sampling. The chapter builds up to an exact solution of the two-dimensional Ising model (where sites have spins +1 or -1 and the interaction energy of adjacent sites is the negative of their spin product).
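The easy half of that contrast is worth seeing concretely: by Kirchhoff's matrix-tree theorem, the number of spanning trees is a cofactor of the graph Laplacian, a polynomial-time determinant rather than a #P-hard permanent. A sketch of my own in exact rational arithmetic:

```python
from fractions import Fraction

def det(m):
    # Determinant by Gaussian elimination over exact rationals.
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, prod = len(m), 1, Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if m[r][i] != 0), None)
        if pivot is None:
            return 0
        if pivot != i:
            m[i], m[pivot] = m[pivot], m[i]
            sign = -sign
        prod *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return int(sign * prod)

def count_spanning_trees(adj):
    # Matrix-tree theorem: delete one row and column of the Laplacian
    # (degree matrix minus adjacency matrix) and take the determinant.
    nodes = sorted(adj)[1:]
    lap = [[len(adj[u]) if u == v else -(v in adj[u]) for v in nodes]
           for u in nodes]
    return det(lap)

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(count_spanning_trees(square))  # 4: remove any one of the cycle's edges
```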
A chapter on phase transitions in computation attempts to pin down the critical points in the satisfiability of random SAT formulas as the clause/variable ratio is varied; it also looks at the connectivity of random graphs and the existence of a balanced partition in integer partitioning ("the easiest hard problem"). This uses differential equations as well as the method of moments, and draws on methods from physics such as message passing.
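The transition is easy to reproduce at toy scale, nothing like the chapter's analysis but enough to see the cliff: generate random 3-SAT formulas at increasing clause/variable ratios and brute-force them. A sketch of mine:

```python
import random
from itertools import product

def random_3sat(n_vars, n_clauses):
    # Each clause: 3 distinct variables, each negated with probability 1/2.
    return [[v * random.choice((1, -1))
             for v in random.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfiable(formula, n_vars):
    # Brute force over all 2^n assignments -- fine only for tiny n.
    return any(all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in formula)
               for bits in product((False, True), repeat=n_vars))

# The satisfiable fraction collapses near clause/variable ratio ~4.27.
n = 10
for ratio in (2, 4, 6):
    sat = sum(satisfiable(random_3sat(n, ratio * n), n) for _ in range(10))
    print(f"ratio {ratio}: {sat}/10 satisfiable")
```

At ratio 2 nearly every formula is satisfiable, at ratio 6 almost none; sharpening where and why the crossover happens is the business of the chapter.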
The last chapter covers quantum computation. It builds up a formalism for states and operators and qubits and quantum circuits, then explains entanglement and Bell's Inequalities, algorithmic interference, and the quantum Fourier transform. This is used to present Shor's factoring algorithm, which breaks RSA (and potentially a huge chunk of modern cryptography). The examples of graph isomorphism and Grover's algorithm show, however, that improvements from quantum computing are not always so dramatic and "our current guess is that BQP [bounded-error quantum polynomial time problems] does not contain NP".
Much interesting but slightly off-track material is left to the notes, which are usually quite readable and should not be missed. There are a few "exercises" in the body of the text which can be done relatively quickly, mostly without pen and paper. The extensive "problems" at the end of each chapter are in many cases much harder, but some hints are given and they can be usefully glanced over even if not attempted. Some of them present new and often interesting material, including additional examples of algorithms and some theory: one problem, for example, introduces context-free grammars.
The Nature of Computation is nicely laid out and presented. The diagrams are clearly drawn and chosen to make the most effective possible contribution to the comprehensibility of the text. I lost track of all the occasions when, just as I was starting to lose the details of some construction, there came a "we can visualise this as in Figure X", followed by a diagram which made everything clear.
So I will comment on a design flaw that would normally be too minor to rate a mention. Like many textbooks, The Nature of Computation has wide pages, but instead of using them to provide large margins, with notes or illustrations, it fills them with text that spans the full page width. This adds "value for money" in terms of content, but means that there are too many words to the line for optimal reading. This is a particular problem in the notes and problems, which use a slightly smaller font; the average line here has close to twenty words.
Even squeezing text in like that, The Nature of Computation is a solid volume, six centimetres thick and weighing nearly two and a half kilograms. But even textbooks might occasionally be read somewhere other than at a desk, and given that so much work has gone into making this one accessible and readable... My suggestion is to publish just the first eight chapters as a paperback. That would make a lovely introduction to computational complexity and the hierarchy of complexity classes, and in such a format might reach the broad audience it deserves.