A Case Study In “Specification Driven Programming”
Project Euler and its derivatives have become a favourite among algorithmic programmers. Quoting from the Project Euler web site: “Project Euler is a series of challenging mathematical/computer programming problems that will require more than just mathematical insights to solve.” Many blogs detail solutions to Project Euler problems, and the answers are conveniently collected at https://code.google.com/p/projecteuler-solutions/wiki/ProjectEulerSolutions. Another site you might like with related content is Rosetta Code at http://rosettacode.org/wiki/Rosetta_Code. It’s a compendium of Project Euler-like problems with textbook solutions in common and not-so-common programming languages. Its name cheekily harks back to the ancient Rosetta Stone.
Why then write this blog post?
People tend to look at Project Euler problems with two main considerations: 1) brevity and 2) efficiency. Efficiency, in broad strokes, means space and time complexity, commonly expressed in Big-O notation. Most don’t consider advanced topics such as cache coherence. Often Project Euler problems have solutions that may be simplified by an understanding of the mathematical nature of the problem. Some require an understanding of scale. For example, very large inputs may require a C++ programmer to roll parts of a big-number library, while a Python or Lisp programmer will find this comes with the territory.
Here is the issue: many solutions entangle the different aspects of a given Project Euler problem: space and time complexity, the nature of the algorithm and any simplifications, the desire for brevity, and any stated constraints. This is fine in the isolated context of the problem; it is questionable practice for the building of systems. And building systems is the essence of what we do as software engineers. I would like to present an “untangled” approach: implementations as executable specifications. Not only does this yield tractable correctness, it has the added benefit that if the specification changes, it is easy to change the implementation to match. I would like to term this “Specification Driven Programming.” It goes beyond formulating constraints for invariants of objects in the vein of Design by Contract. It attempts to represent inherently imperative constructs by modeling the problem domain and then formulating constraints upon that domain, while leaving a framework to handle abstractions related to solvers, such as forward propagation and back-tracking. Project Euler is an ideal playground for this study in that the problems posed are typically attacked imperatively.
We will be using the functional programming language Lisp.
Let’s start with Project Euler Problem 1 : “If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.” How can we transform this into an executable specification?
“Multiples of” in the context of two natural numbers means that a modulus operation yields a zero.
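The original code listing did not survive; a minimal sketch of such a predicate (the name is-integral-multiple-of is my assumption) might read:

```lisp
;; T when m is an integral multiple of n, i.e. the modulus is zero.
(defun is-integral-multiple-of (m n)
  (zerop (mod m n)))

;; (is-integral-multiple-of 9 3) => T
;; (is-integral-multiple-of 9 5) => NIL
```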
Here we define a predicate function that takes two numbers ‘m’ and ‘n’ and checks whether ‘(mod m n)’ is zero. As with any positional parameters, in C++ and Lisp alike, the question is which is which? Here, which is the divisor? Better names for ‘m’ and ‘n’ only help so much; at the call site we still have two bare numbers. We can make this a bit clearer with a small macro, or MPL in “C++ speak.” (I do realize that Lisp has named parameters, but I thought I might take the macro route.)
Here we define a macro that defines a new function, “is-integral-multiple-of-n,” which wraps our original modulus check. If this looks a lot like a closure, it is. The defun has one free variable and one bound variable, and we return a unary (single-argument) function which takes only the bound variable while binding the hitherto free variable into the name of the function. The only difference between this and a closure is that it executes within the compiler and the function context is returned “statically,” bound into the namespace. Let’s see what the macro expands to.
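The macro itself is lost; a sketch under the assumption that it was named def-is-integral-multiple-of:

```lisp
;; Defines a unary predicate whose name carries the divisor,
;; e.g. (def-is-integral-multiple-of 3) => IS-INTEGRAL-MULTIPLE-OF-3.
(defmacro def-is-integral-multiple-of (n)
  (let ((name (intern (format nil "IS-INTEGRAL-MULTIPLE-OF-~d" n))))
    `(defun ,name (m)
       (zerop (mod m ,n)))))

(macroexpand-1 '(def-is-integral-multiple-of 3))
;; => (DEFUN IS-INTEGRAL-MULTIPLE-OF-3 (M) (ZEROP (MOD M 3)))
```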
This is exactly what we wanted. Our macro creates is-integral-multiple-of-[n] functions. ‘n’ here is to Lisp what ‘T’ might be to a function “template <typename T>” in C++. Let’s roll instantiations, one for multiples of 3 and one for multiples of 5.
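Assuming the macro from the previous step was named def-is-integral-multiple-of, the two instantiations might look like:

```lisp
(def-is-integral-multiple-of 3)   ; defines IS-INTEGRAL-MULTIPLE-OF-3
(def-is-integral-multiple-of 5)   ; defines IS-INTEGRAL-MULTIPLE-OF-5

;; (is-integral-multiple-of-3 9)  => T
;; (is-integral-multiple-of-5 10) => T
```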
Now let us create our executable specification of project Euler problem 1.
Screamer – Constraint Programming
We start with the Common Lisp system Screamer. Screamer is “a nondeterministic choice-point operator and forward propagation facility” — also known as a constraint programming framework. Screamer adds support for backtracking and crucially, undoable side effects. The control of side-effects of course is a key aspect of functional programming and Screamer adds a significant dimension to this.
After loading Screamer with Quicklisp, we import it into our namespace.
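In a REPL this amounts to something like:

```lisp
(ql:quickload :screamer)          ; fetch and load Screamer via Quicklisp
(in-package :screamer-user)       ; a package Screamer provides for exactly this
```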
Let’s start thinking about our problem domain. Initially this is the range of natural numbers between 1 and 9.
We formulate thusly…
Now let us define the domain and constrain it via assertions. Note that whether zero belongs to the set of natural numbers is a matter of convention; let’s start at one instead.
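The listing is missing; a reconstruction, assuming the predicates defined by the earlier macro and Screamer’s an-integer-between, all-values and assert!:

```lisp
(all-values
  (let ((x (an-integer-between 1 9)))
    ;; assert! fails for any x that is not a multiple of 3 or 5,
    ;; pruning it from the result set
    (assert! (or (is-integral-multiple-of-3 x)
                 (is-integral-multiple-of-5 x)))
    x))
;; => (3 5 6 9)
```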
Incidentally, the results of our specification are exactly the numbers given in the problem statement.
What remains is to sum the set of numbers. Summing a set in the functional programming paradigm is via a combination of a reduce operation and the sum operator.
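In full, again assuming the names used above, this might read:

```lisp
(reduce #'+
        (all-values
          (let ((x (an-integer-between 1 9)))
            (assert! (or (is-integral-multiple-of-3 x)
                         (is-integral-multiple-of-5 x)))
            x)))
;; => 23
```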
We may paraphrase as “sum all values in the domain between 1 and 9 where the assertion holds true that members of the domain are either multiples of 3 or multiples of 5.” We observe that the implementation models the specification precisely. The answer is as given in the problem statement. To obtain our final result we replace 9 with 999. All else remains unchanged.
What then have we done here? We have defined our input domain. We have defined our constraints. The imperative logic which applies these is a separate abstraction provided by Screamer. Then we performed any final operations on the result. Each layer is orthogonal to the other. This is precisely what we were aiming for.
But will this approach work for other problems?
“Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, … By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.”
So our domain extends to four million. Our constraints are: membership of the Fibonacci sequence, and evenness. Compared to Project Euler Problem 1, the domain has increased considerably, and so has the complexity of the constraints which must be applied to it. We can overcome this by restricting the domain. Our constraint programming framework has the concept of “generators” to do just that. We have already seen one: “an-integer-between.” In the universe of inputs, this generator restricts the domain to a finite set of natural numbers.
Let’s begin our deliberations with the Rosetta Code implementation of Fibonacci in Common Lisp.
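Rosetta Code lists several variants; a representative iterative version, shaped to match the discussion that follows, might be:

```lisp
;; Iterative Fibonacci: returns the n-th term of the
;; sequence 1, 2, 3, 5, 8, ... as used in the problem statement.
(defun fibonacci (n)
  (let ((a 0) (b 1) (c 0))
    (loop repeat n
          do (setf c (+ a b) a b b c))   ; step: c = a + b, shift a and b
    c))

;; (fibonacci 10) => 89
```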
This is an iterative solution. We do not have to worry about overflow, since Common Lisp handles “bignums” out of the box. Tail recursion would discount stack overflows. So why choose an iterative solution?
The answer is that if we were to write a Screamer generator for Fibonacci, we would be looking for the Fibonacci sequence, not a scalar number. What we really want is for the loop to collect its terms along the way. The Common Lisp loop macro does that out of the box. We simply replace the keyword do with the keyword collect and return the value of the loop instead of the variable c. Rename fibonacci to fibonacci-sequence and we are nearly done.
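Applied to the sketch above, that transformation might look like this (note that setf returns the value of its final assignment, which is the new term):

```lisp
;; Same loop, but collect each term instead of discarding it.
(defun fibonacci-sequence (n)
  (let ((a 0) (b 1) (c 0))
    (loop repeat n
          collect (setf c (+ a b) a b b c))))

;; (fibonacci-sequence 5) => (1 2 3 5 8)
```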
While this is a common representation of the Fibonacci sequence (in terms of the number of its steps), we want a range defining a domain. Ranges have limiting values. So our n needs to be not the number of steps in the sequence but rather its value limit. Again, the modifications are trivial.
We simply changed our loop construct to terminate once the sum of the last two terms of the sequence reaches the limit. This is beginning to look a lot like our Euler problem.
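A sketch of the limit-bounded version (the name fibonacci-sequence-below is my assumption):

```lisp
;; Collect Fibonacci terms while the next term stays below the limit.
(defun fibonacci-sequence-below (limit)
  (let ((a 0) (b 1) (c 0))
    (loop while (< (+ a b) limit)
          collect (setf c (+ a b) a b b c))))

;; (fibonacci-sequence-below 100) => (1 2 3 5 8 13 21 34 55 89)
```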
Shown below then is the Screamer generator for a Fibonacci range below a limit:
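The listing itself is lost; assuming the helper above and Screamer’s a-member-of, it might have read:

```lisp
;; Nondeterministically yield one member of the Fibonacci sequence
;; below the limit; a-member-of creates a Screamer choice point.
(defun a-fibonacci-below (limit)
  (a-member-of (fibonacci-sequence-below limit)))
```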
Now we may begin to use our constraint language.
All that remains is to constrain the domain to even values with the built-in Lisp predicate (evenp) and sum the result set.
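Putting it together, with the generator name assumed above:

```lisp
(reduce #'+
        (all-values
          (let ((x (a-fibonacci-below 4000000)))
            (assert! (evenp x))   ; keep only even-valued terms
            x)))
;; => 4613732
```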
Our implementation may be paraphrased as: sum all values of x where x is a Fibonacci number below 4000000 and where x is even. This was precisely our problem statement. And the answer is 4613732.
Unifying Test, Specification and Implementation
There are three points worth noting here.
1) We might have easily constrained our initial loop construct with a (when evenp) clause – only collecting even terms of the sequence. In effect, this would have provided a shortcut to the problem. We specifically did not elect to do this. The Fibonacci sequence is fixed. But our constraints may change. Today even numbers, tomorrow odd and the day after that a multiple of the day of the month… We do not want to bake potentially variable constraints into a fixed specification. This is at the heart of the point of this post.
2) By creating a generator function we avoided the brute force method of asking the constraint solver to explore 4 million possibilities.
3) What might unit testing look like here? Unit testing would aim to verify that the implementation complies with the specification. Yet our implementation is the specification… test driven development, anyone? This is the essence of what I term “Specification Driven Programming.”
“The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number …”
For this we will head straight to Rosetta Code.
5, 7, 13 and 29 are the numbers from the problem statement.
Integer factorization into primes has a useful property: it proceeds over a range from 2 to the square root of the number in question, in ascending order. This is true for every major prime sieve, from Eratosthenes to Atkin. What appears as an implementation detail at first sight is in fact a function of the problem itself. We may therefore rewrite it thus:
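The rewritten listing is missing; a trial-division sketch with the range property described above (the name prime-factors is my assumption):

```lisp
;; Trial division: candidate divisors run from 2 up to (sqrt n),
;; in ascending order. Each factor is divided out as it is found,
;; so only primes are ever collected.
(defun prime-factors (n)
  (let ((factors '()))
    (loop for d from 2
          while (<= (* d d) n)
          do (loop while (zerop (mod n d))
                   do (push d factors)
                      (setf n (/ n d))))
    (when (> n 1) (push n factors))   ; remaining prime cofactor
    (nreverse factors)))

;; (prime-factors 13195) => (5 7 13 29)
;; largest prime factor: (car (last (prime-factors 13195))) => 29
```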
The answer to Project Euler problem 3 is the last number of the prime decomposition of the number in question. We set out to write a generator function for the problem domain; once we had done so, the problem was already solved.
“A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99. Find the largest palindrome made from the product of two 3-digit numbers.”
The key to this problem is realizing that reversing the digits of an integer arithmetically is more efficient than enumerating a domain of natural numbers and repeatedly serializing to and de-serializing from a string representation.
Here is my attempt at reversing the digits of an integer:
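The listing did not survive; a sketch (the name reverse-digits is my assumption):

```lisp
;; Reverse the digits of a non-negative integer arithmetically:
;; peel off the last digit with mod, push it onto the result.
(defun reverse-digits (n)
  (loop with reversed = 0
        until (zerop n)
        do (setf reversed (+ (* reversed 10) (mod n 10))
                 n (truncate n 10))
        finally (return reversed)))

;; (reverse-digits 123) => 321
```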
Using the example palindrome:
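Assuming the helper above is named reverse-digits:

```lisp
;; A palindrome reads the same reversed.
(reverse-digits 9009)
;; => 9009
```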
Let’s start to write our constraint programming specification:
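The listing is lost; a reconstruction assuming the digit-reversal helper is named reverse-digits:

```lisp
(reduce #'max
        (all-values
          (let* ((x (an-integer-between 100 999))
                 (y (an-integer-between 100 999))
                 (product (* x y)))
            ;; a palindromic product equals its own digit reversal
            (assert! (= product (reverse-digits product)))
            product)))
;; => 906609
```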
So what we are saying here is: from the domain of values where x is between 100 and 999, y is between 100 and 999 and x * y is their product, constrain the solution to values where the product is the same as the number formed by reversing the digits of this product; then reduce the resulting sequence to its maximum value. The answer, predictably, is correct.
In three out of four solutions presented above, we have relied on Screamer to provide the logic for solving the constraints in our specifications. Yet we have used very little of what Screamer has to offer. We have used assertions and we have used the all-values keyword. We have also created so-called generators. Generators may be thought of as the inverse of iterators in C++ or Java. Yet aside from the specification-oriented syntax, when looking at a sequence of numbers which must satisfy simple constraints such as being even or odd or a multiple of something, Screamer adds little value that would not be available to the average C++ or Java programmer iterating over a sequence or applying a filter condition in the spirit of the C++ algorithm header. Our search is simplistic and our dependency graph is linear.
This changes when we enter the combinatorics of NP-hard problems. We might imagine complex dependency graphs resulting from combinatorics where we might wish to undo side effects. This is where Screamer really shines. A classic example is the Zebra Puzzle, also known as Einstein’s Riddle and sometimes attributed to the Oxford mathematician Lewis Carroll.
Another puzzle was recently brought to my attention. I don’t know its name or whom to attribute it to. I call it the “Lego Walls Puzzle.”
The Lego Walls Puzzle
The Lego Walls Puzzle is an example of an NP-hard bin-packing problem. The puzzle is described as follows: we have a number of Lego tiles, some of size two, some of size three. We wish to build a wall using these tiles. A valid wall is a wall where all bricks overlap, which is to say that when laying one row of bricks, no boundary between any two bricks of that row may coincide with a boundary between bricks in the row immediately below. I’m not a bricklayer, but the intuition makes sense: the wall is rendered stable when this constraint is observed. Additionally, no bricks may protrude at the end of the wall, which is to say we disallow jagged edges. The task is to find all possible walls which may be constructed with dimensions ‘m’ wide and ‘n’ high.
Definitions first. We normalize ‘m’ and ‘n’ to mean number of bricks. ‘n’ is rows. ‘m’ is bricks wide.
Framing a solution:
The reasoning is that the rows of Legos in the wall may be represented as the series of running sums of their constituent lengths. Thus two rows where all Legos overlap and no boundaries coincide form two series without shared sums, except of course the last sum in each series, which must be the same, as otherwise the rows would be of different lengths.
We can therefore formulate a predicate that simply takes the intersection of the two series (treat them as sets) and compares the result to the last element of either series.
First up, a helper function that turns a sequence into a series.
This is a simple let-over-lambda closure that maintains a counter and adds each x passed into it.
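A sketch of such a closure (the name make-accumulator is my assumption):

```lisp
;; A counter closed over by a lambda: each call adds x to the
;; running total and returns the sum so far.
(defun make-accumulator ()
  (let ((sum 0))
    (lambda (x) (incf sum x))))
```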
Turning a sequence into a series becomes this construct:
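Assuming the accumulator helper is named make-accumulator, mapping it over a row of brick lengths yields the running sums, i.e. the positions of the brick boundaries:

```lisp
(mapcar (make-accumulator) '(2 3 2))
;; => (2 5 7)
```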
Lisp does intersections out of the box, so no work to do here…
All sweet. Let’s roll a predicate denoting two valid neighbour rows of bricks:
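The listing is missing; a reconstruction assuming the accumulator helper above. Because the element order of Common Lisp’s intersection is implementation-defined, this sketch checks that exactly one boundary (the end of the wall) is shared, which is equivalent to comparing the intersection against the last element:

```lisp
;; Two rows are valid neighbours when they are equally long and the
;; only boundary their series share is the final one (the wall's end).
(defun valid-neighbours-p (x-row y-row)
  (let ((x-series (mapcar (make-accumulator) x-row))
        (y-series (mapcar (make-accumulator) y-row)))
    (and (= (reduce #'+ x-row) (reduce #'+ y-row))      ; equal length
         (= 1 (length (intersection x-series y-series))))))

;; (valid-neighbours-p '(2 3) '(3 2))     => T
;; (valid-neighbours-p '(2 3 2) '(2 2 3)) => NIL ; boundary at 2 coincides
;; (valid-neighbours-p '(2 2) '(2 3))     => NIL ; different lengths
```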
So what this says is: construct two accumulators, one for the x-row, and one for the y-row. Then render the series for each using said accumulators. Form the sums of the sequences using a reduce. Now assert that the sums are equal and the intersection of the two series is the last element of the first series.
We then test three examples. The first is valid: no boundaries coincide and the rows are of equal length (it evaluates to T). The others either have coinciding boundaries or differing lengths.
Time to try a simple 2×2 wall. As before we formulate a Screamer constraint specification.
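A reconstruction, assuming the predicate name valid-neighbours-p from above:

```lisp
(all-values
  (let ((x-row (list (an-integer-between 2 3) (an-integer-between 2 3)))
        (y-row (list (an-integer-between 2 3) (an-integer-between 2 3))))
    (assert! (valid-neighbours-p x-row y-row))
    ;; rudimentary output for each surviving combination
    (format t "~a on ~a~%" y-row x-row)))
;; two solutions: (3 2) on (2 3), and (2 3) on (3 2)
```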
What we are saying here is that a valid 2×2 wall consists of all combinations of bricks where a row x consists of two blocks sized between 2 and 3, a row y consists of two blocks sized between 2 and 3, and rows x and y are valid neighbours according to our specification. We then apply a rudimentary format (printf for C/C++ readers). The solution consists of the two walls listed. It would be nice to draw them so that we can visually verify their correctness.
We will create some helpers for this task.
One function to render bricks:
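A sketch (the name and glyph choice are mine, not the original’s):

```lisp
;; Render one brick as a pipe followed by n underscores, e.g. 3 -> "|___"
(defun render-brick (n)
  (format nil "|~a" (make-string n :initial-element #\_)))
```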
One function to render separators:
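Again a sketch with an assumed name:

```lisp
;; Render a horizontal separator line of the given character width.
(defun render-separator (width)
  (make-string width :initial-element #\-))
```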
One function to render a wall consisting of rows of bricks and separators:
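A sketch, assuming the two helpers above:

```lisp
;; Print each row of bricks followed by a separator line.
(defun render-wall (rows)
  (let ((width (+ (reduce #'+ (first rows))   ; underscores
                  (length (first rows))       ; leading pipes
                  1)))                        ; closing pipe
    (dolist (row rows)
      (format t "~{~a~}|~%" (mapcar #'render-brick row))
      (format t "~a~%" (render-separator width)))))
```

For the row (2 3) this prints `|__|___|` above a line of dashes.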
Using these helpers, we re-write thusly:
Already we have an executable specification of our problem for a small 2×2 sample that visualizes correct output using but simple ASCII characters.
How about scaling this to a 3×3 wall?
Output is this:
We have looked at a small case, n = 2, and presented a solution. We then presented the case n + 1 = 3 and its solution. How has our specification evolved from ‘n’ to ‘n + 1’? For each ‘n + 1’ we create an extra variable (from x + y to x + y + z). Each row has one extra brick. And we have ‘rows − 1’ comparisons of neighbouring rows.
This informs how we might generalise our solution. This being Lisp, we will use MPL or macros.
A macro to build our constraints:
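The listing is lost; a sketch under the assumption that the macro was called constraint-builder and produced one row of nondeterministic bricks:

```lisp
;; Expands into one row of `width` bricks,
;; each nondeterministically sized 2 or 3 units.
(defmacro constraint-builder (width)
  `(list ,@(loop repeat width
                 collect '(an-integer-between 2 3))))
```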
This lets us formulate our 3×3 function thusly:
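A reconstruction, assuming the names constraint-builder, valid-neighbours-p and render-wall used in my earlier sketches:

```lisp
(defun build-wall-3x3 ()
  (all-values
    (let ((x (constraint-builder 3))
          (y (constraint-builder 3))
          (z (constraint-builder 3)))
      (assert! (valid-neighbours-p x y))
      (assert! (valid-neighbours-p y z))
      (render-wall (list z y x)))))
```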
This is not quite general. We still call the constraint builder three times, which is unnecessary repetition. We also still manually compare neighbouring rows and manually create variables for each row of bricks.
We now generate our variables using gensym (generate symbols) and build our constraints in a single step.
We define a macro to compare neighbouring rows. It simply loops from rows [1, 2] up to rows [n − 1, n] and writes out the expression applying each assertion. Note that it does not itself perform the assertion; it merely creates the expression that does.
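A sketch of such a macro (names assumed as before):

```lisp
;; Writes one assert! expression per pair of neighbouring rows;
;; it produces the expressions but does not evaluate them.
(defmacro assert-constraints (&rest rows)
  `(progn
     ,@(loop for (a b) on rows
             while b
             collect `(assert! (valid-neighbours-p ,a ,b)))))

;; (macroexpand-1 '(assert-constraints x y z))
;; => (PROGN (ASSERT! (VALID-NEIGHBOURS-P X Y))
;;           (ASSERT! (VALID-NEIGHBOURS-P Y Z)))
```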
This redefines our function like thus:
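With that macro (as sketched above), the function might become:

```lisp
(defun build-wall-3x3 ()
  (all-values
    (let ((x (constraint-builder 3))
          (y (constraint-builder 3))
          (z (constraint-builder 3)))
      (assert-constraints x y z)       ; unrolls to pairwise assertions
      (render-wall (list z y x)))))
```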
Essentially, (assert-constraints) simply unrolls a loop. We can observe this by expanding the macro, which clearly shows the validation of neighbouring rows of bricks.
Why do this? Why not execute the loop? Because (assert!) needs to be evaluated in the context of the Screamer expression all-values. This expression does not represent imperative code, but rather a declarative expression.
From here it is only a small step to render our expression wholly generic. We shall call this the “wall-builder-builder,” which writes out a “wall-builder.” Note that “wall-builder-builder” is a macro writing out a function. As before, the chief difference between this and a closure is that it executes within the compiler: all of the free variables in the (defun) are known at compile time.
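The listing did not survive; a reconstruction, assuming the helper names valid-neighbours-p and render-wall from earlier sketches:

```lisp
;; A macro that writes out a wall-building function. Width and height
;; are fixed at compile time; row variables are generated with gensym.
(defmacro wall-builder-builder (name width height)
  (let ((rows (loop repeat height collect (gensym "ROW"))))
    `(defun ,name ()
       (all-values
         (let ,(loop for row in rows
                     collect `(,row (list ,@(loop repeat width
                                                  collect '(an-integer-between 2 3)))))
           ;; one assertion per pair of neighbouring rows
           ,@(loop for (a b) on rows
                   while b
                   collect `(assert! (valid-neighbours-p ,a ,b)))
           (render-wall (list ,@rows)))))))

;; (wall-builder-builder build-wall-3x3 3 3)
;; (build-wall-3x3)
```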
Everything works as it did before. One thing to note is that as our expression has become more general, so has the style of language. We now have a wall builder that 1) builds constraints, 2) asserts constraints and 3) renders a wall. This is a departure from our simplistic hard-coded (2×2) function, but it is nonetheless merely an evolution of the same.
What of block size? Our puzzle started with the problem statement that Lego blocks are between 2 and 3 units in size. What if we ever wanted to vary this? If we have done our job right, the only touch points of this change ought to be the building of the constraints, not even their evaluation. The algorithm ought to be otherwise unaffected. Remember our earliest goal? Separation of concerns. Let’s see how we did…
We firstly redefine our constraint builder.
Then we redefine (build-constraints)
And lastly, we amend the wall-builder-builder.
We now accept height ‘h’, width ‘w’, min block size ‘n’, and max block size ‘m’.
5 seconds and we’re off… a 3×3 wall with block size 2-4:
To improve usability we may name our variables more meaningfully, as well as make them named parameters. Positional parameters tend to become obtuse the moment there is more than one.
Static and dynamic typing are logically equivalent, which is to say whatever can be represented in one can be represented in the other. The difference is when typing takes place. Dynamic typing is fundamentally a run-time construct while static typing is a compile-time construct. Static typing is often favoured because it allows errors to be caught early, by the programmer as opposed to the user, and because it delegates checks to the compiler, thus avoiding their repeated application at run time. Performance-minded programmers love the latter.
One is left to wonder: what made types the nirvana of program correctness? What of the program logic itself? The answer is that type theory is well understood and easily leveraged by modern compilers. Other aspects of program correctness are delegated to different parts of the development process, such as unit testing.
How do Lisp macros fit into this picture? Lisp has type predicates, but macros run in the compiler, and they give unfettered access to the whole of the language and its ecosystem.
Let’s explore this idea a little. Our “architects” have just advised us that the Lego fork lifts are limited to blocks of length 8, and anything smaller than 2 falls through the palettes. So we have some new physical limitations. The architects also advise us that the structural strength of Lego blocks imposes a height limit on walls: 100 rows. There is no theoretical limit on the width of the walls. Additionally, we only want positive natural numbers, since there are no partial blocks or negative wall heights. Let’s write this up…
Changes are highlighted.
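The highlighted listing is not reproduced here; a sketch of how such compile-time range checks might look, with assumed parameter names. check-type signals an error during macro expansion, i.e. inside the compiler, and its optional string becomes part of the error message:

```lisp
;; Parameters are validated when the macro expands, at compile time.
;; The type specifiers encode our physical limits as integer subranges.
(defmacro wall-builder-builder (name width height block-min block-max)
  (check-type width (integer 1) "a positive wall width")
  (check-type height (integer 1 100) "a wall height of at most 100 rows")
  (check-type block-min (integer 2 8) "a block size between 2 and 8")
  (check-type block-max (integer 2 8) "a block size between 2 and 8")
  `(defun ,name ()
     ;; body as before, elided in this sketch
     ))

;; (wall-builder-builder build-wall 3 3 2.5 3)
;; => compile-time error: 2.5 is not a block size between 2 and 8
```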
Now let’s do something silly, such as use a floating point numbers for block sizes.
We get an error from the compiler, with nicely specific advice too; we put it there. The ability to emit custom compiler messages with such ease is a powerful mechanism for administering the principle called Design by Contract. Of course we should have been clearer and stated “natural numbers between 2 and 8.” But the point is this: our macro executes in the compiler, statically as it were, and it affords us a static type check of a subtype of integer (a range), together with a customizable compiler error message. And it is an error, not merely a warning.
From Static Type Checks to General Concepts
Of course our model still has holes. We have constrained the type of each parameter and successfully performed a static type check. But we have domain-specific dependencies between our types. While the minimum block size may have the same type range limits, it makes no sense to build constraints that stipulate that the maximum is smaller than the minimum. At present, this will merrily pass our constraint checks despite being wholly invalid. We want and need the ability to verify arbitrary concepts as part of our macro. C++ programmers know this by just this name: concepts. The accepted vehicle in C++ at the time of writing is the SFINAE mechanism (Substitution Failure Is Not An Error) and the void_t idiom. Let that sink in for a moment: the only way to verify properties of code (concepts) in a modern programming language, in the year 2015, is by inducing unintended consequences (failures) in its compiler. With that in mind, let us see how difficult it might be to accommodate our concept… hope it’s not void.
Changes are highlighted.
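Again the highlighted listing is lost; the concept check might be sketched as an arbitrary compile-time predicate over the parameters, using the block-min and block-max names discussed below:

```lisp
(defmacro wall-builder-builder (name width height block-min block-max)
  (check-type block-min (integer 2 8) "a block size between 2 and 8")
  (check-type block-max (integer 2 8) "a block size between 2 and 8")
  ;; The "concept": an arbitrary predicate evaluated in the compiler.
  (when (> block-min block-max)
    (error "Minimum block size ~a exceeds maximum block size ~a."
           block-min block-max))
  `(defun ,name ()
     ;; body as before, elided in this sketch
     ))
```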
Let’s try it out. Now the minimum exceeds the maximum.
We get the following compiler error:
This works precisely as intended, complete with dedicated errors from the compiler, and we added only two lines of code. While the comparison (> block-min block-max) may be trivial, any predicate we see fit may be applied to the input parameters. Observe the back-quote “`” leading the keyword defun in the middle of the macro. Anything preceding it is evaluated in the compiler; anything following it is evaluated at run time. Syntax and language use on both sides are identical, and we can mix them in any way we like. Thus we may use this mechanism to verify any “concept” we wish, just so long as the information is available at compile time, which of course is a precondition for run-time evaluation of anything. By choosing to write a function as a (defun) or a (defmacro), a programmer can decide what is available when and what is evaluated when! This is a very powerful tool.
Algorithmic puzzles have become popular among computer programmers. While educational, solutions to these puzzles tend to promote approaches where competing aspects of the puzzle are entangled. Specification Driven, or Constraint Based, Programming is offered as an alternative approach. We hypothesize that this approach untangles the various aspects of the problem, making it easier to change the specifications of the problem while unifying test and implementation models.
We then present four Project Euler problems and one NP-hard logic puzzle in support of our hypothesis. We hope the examples show the brevity and declarative power of functional programming combined with meta-programming, presented here using Common Lisp.
In the epilogue we demonstrated how to selectively effect static typing using Lisp macros as well as implement constraints otherwise known as concepts in C++.
We hope this article has been interesting.
C++ programmers may be interested in the constraint programming framework Gecode, which implements many of the constraints catalogued in the Global Constraint Catalog.