
Posts in the ‘Code’ category

31 Jul 2008

by Noel

SchemeUnit 3: A New Approach to Testing

SchemeUnit 3 has been released. Although the interface remains compatible with version 2, the underlying philosophy of SchemeUnit has changed in a significant way. The following is an extract from the SchemeUnit manual, describing the new approach.

SchemeUnit is designed to allow tests to evolve in step with
the evolution of the program under testing. SchemeUnit
scales from the unstructured checks suitable for simple
programs to the complex structure necessary for large
projects.

Simple programs, such as those in How to Design Programs,
are generally purely functional with no setup required to
obtain a context in which the function may operate.
Therefore the tests for these programs are extremely simple:
the test expressions are single checks, usually for
equality, and there are no dependencies between expressions.
For example, a HtDP student may be writing simple list
functions such as length, and the properties they are
checking are of the form:

(equal? (length null) 0)
(equal? (length '(a)) 1)
(equal? (length '(a b)) 2)

SchemeUnit directly supports this style of testing. A check
on its own is a valid test. So the above examples may be
written in SchemeUnit as:

(check-equal? (length null) 0)
(check-equal? (length '(a)) 1)
(check-equal? (length '(a b)) 2)

Simple programs now get all the benefits of SchemeUnit with
very little overhead.

There are limitations to this style of testing that more
complex programs will expose. For example, there might be
dependencies between expressions, caused by state, so that
it does not make sense to evaluate some expressions if
earlier ones have failed. This type of program needs a way
to group expressions so that a failure in one group causes
evaluation of that group to stop and immediately proceed to
the next group. In SchemeUnit all that is required is to
wrap a test-begin expression around a group of
expressions:

(test-begin
  (setup-some-state!)
  (check-equal? (foo! 1) 'expected-value-1)
  (check-equal? (foo! 2) 'expected-value-2))

Now if any expression within the test-begin expression
fails, no further expressions in that group will be
evaluated.

Notice that all the previous tests written in the simple
style are still valid. Introducing grouping is a local
change only. This is a key feature of SchemeUnit’s support
for the evolution of the program.

The programmer may wish to name a group of tests. This is
done using the test-case expression, a simple
variant on test-begin:

(test-case
  "The name"
  ... test expressions ...)

Most programs will stick with this style. However,
programmers writing very complex programs may wish to
maintain separate groups of tests for different parts of the
program, or run their tests in ways different from the
normal SchemeUnit manner (for example, test results may be
logged for the purpose of improving software quality, or
they may be displayed on a website to indicate service
quality). For these programmers it is necessary to delay the
execution of tests so they can be processed in the
programmer’s chosen manner. To do this, the programmer
simply wraps a test-suite around their tests:

(test-suite
  "Suite name"
  (check ...)
  (test-begin ...)
  (test-case ...))

The tests now change from expressions that are immediately
evaluated to objects that may be programmatically
manipulated. Note again this is a local change. Tests
outside the suite continue to evaluate as before.
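
As a concrete illustration, here is a minimal sketch of
delayed execution. It assumes SchemeUnit 3’s text-ui module
and its run-tests procedure (the require paths below are
from memory, so check the manual for the exact forms):

(require (planet schematics/schemeunit:3)
         (planet schematics/schemeunit:3/text-ui))

;; Nothing inside the suite runs when it is defined...
(define my-suite
  (test-suite
    "List length"
    (check-equal? (length '(a)) 1)
    (test-case
      "two-element list"
      (check-equal? (length '(a b)) 2))))

;; ...it runs only when handed to a runner of the programmer's choosing.
(run-tests my-suite)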

2.1 Historical Context

Most testing frameworks, including earlier versions of
SchemeUnit, support only the final form of testing. This is
likely due to the influence of the SUnit testing framework,
which is the ancestor of SchemeUnit and of the most widely
used frameworks in Java, .NET, Python, Ruby, and many other
languages. That this is insufficient for all users is
apparent if one considers the proliferation of “simpler”
testing frameworks in Scheme, such as SRFI-78, or the
practice of beginner programmers. Unfortunately these
simpler methods are inadequate for testing larger
systems. To the best of my knowledge SchemeUnit is the only
testing framework that makes a conscious effort to support
the testing styles of all levels of programmer.

Posted in Code | Comments Off on SchemeUnit 3: A New Approach to Testing

13 Jun 2008

by Noel

PLT Scheme 4 is out

PLT Scheme 4.0 is out. We’ve been using the pre-releases for months so this release isn’t particularly significant to us. However, for universities and other institutions having an official release is important. I do think that too many individual developers stick with outdated versions of PLT Scheme. The number of questions about 372, which is at least a year old, amazes me. Pre-releases are so much better! Go upgrade!

Posted in Code, Racket | Comments Off on PLT Scheme 4 is out

6 Apr 2008

by Noel

Science and Self-Directed Learners

Over on LtU I was asked how to help beginning programmers become self-directed learners. I have taught a number of students, but not in a context where I’ve been able to really make a difference in their programming practice, so I don’t have an answer to the whole question (though my instinct is that the apprenticeship method is the right way to go). However, I try to teach one process that I think is an essential step towards becoming a self-directed learner. That process is the big idea called science, and I’m not talking about lab coats and chunky glasses.

When working with students I always get asked what the result of evaluating some piece of code will be. What I tell the students is to ask the computer, via a test case or the REPL. Testing ideas by experimentation is science in its simplest and most immediate form, and a crucial step in developing the student’s ability to solve their own problems.

Application of science to programming is not restricted to students; test-driven development is science. So what is science then? I simplify, but basically three things: a theory to test, an experiment to test it, and a standard of proof (note we can never truly prove a theory, just simply not be able to disprove it). This is exactly what a unit test is. For example, a Scheme programmer might pose the theory “the string->number function will convert strings padded with whitespace characters to numbers”, formulate the experiment (equal? (string->number " 200") 200), and have as the standard of proof the boolean output of this single experiment.
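
Written as a SchemeUnit check the experiment is a one-liner; whether it passes or fails is precisely what the experiment is there to find out:

;; The theory, formulated as a check. Running it is the experiment;
;; the check's pass or fail result is judged against our chosen
;; standard of proof.
(check-equal? (string->number " 200") 200)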

When most people think of the scientific method they think of the lengthy and expensive double-blind trials used in, for example, medical research. A really important point is to realise that when you do science, you choose the standard of proof. For example, as most computer programs are deterministic a few tests can be sufficient to show a property holds. When dealing with a concurrent program, or some other non-deterministic system, you may need to be more rigorous.

So there we have it. All programmers are scientists to some extent, though they might not know it. We can extend the use of experimentation to answer other questions, such as determining if productivity is affected by changes to software development process. Doing this in a lightweight way is the intention of the Simple Improvement package, though I haven’t had the time to get that library to a really useful state. Perhaps in a later post I’ll go through the ideas behind it. In the meantime, get experimenting (lab coats optional).

Posted in Code | Comments Off on Science and Self-Directed Learners

15 Feb 2008

by Noel

Requiring up and down syntax levels

If you do any macro programming in PLT Scheme you are sure to run into the dreaded “no #%app syntax transformer is bound” error message at some point. Though puzzling, the fix is actually quite simple in almost all cases. Assuming you’re using 3.99, you either need to:

  1. (require (for-syntax scheme/base))
  2. (require (for-template scheme/base))

What the error means is that some syntax has expanded into a function application, but #%app, the PLT Scheme primitive that actually handles application, is not bound in the phase in which the syntax is being evaluated. Requiring for-syntax will bind #%app in the phase before the current evaluation phase, while requiring for-template will bind #%app in the phase after. In most cases you want for-syntax. However, if you are writing functions that return syntax that is then inserted into a program (such a function would be required for-syntax elsewhere) you must use the other form, to make sure the syntax has #%app available to it.
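
As a rough illustration of the first (and most common) case, here is a minimal sketch; the compile-time-sum macro is made up for the example, but the shape of the fix is the point:

#lang scheme/base

;; The transformer body below runs in the syntax phase, so scheme/base
;; must also be required for-syntax; without it, #%app (along with +,
;; datum->syntax and friends) is not bound in that phase.
(require (for-syntax scheme/base))

(define-syntax (compile-time-sum stx)
  (syntax-case stx ()
    [(_ a b)
     ;; a function application performed at expansion time
     (datum->syntax stx (+ (syntax->datum #'a) (syntax->datum #'b)))]))

(compile-time-sum 1 2)  ; expands to the literal 3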

Posted in Code | Comments Off on Requiring up and down syntax levels

7 Dec 2007

by Noel

ICFP 2007 In Review

At last — the long awaited ICFP post! In summary: ICFP was awesome. Freiburg is lovely, the German beer is fantastic, and everyone at the conference was very friendly. It was great to put faces to people we’ve conversed with for years, to meet old friends, and to make new ones.

We gave two talks, one at ICFP on our work building web sites in Scheme (paper here), and one at CUFP (PLT Slideshow slides, PDF slides). Both were, I think, well received: a few people expressed some interest in having us come and talk to their groups, and the CUFP organisers invited us to join them at dinner.

There was a definite buzz about ICFP. It seems functional languages are beginning to take off — CUFP doubled its attendance over last year, and ICFP strained the capacity of the hotel. There was something of a reality distortion field in place though. After a few days at the conference you could begin to believe the entire software market consisted of either program verification tools in Haskell or telephony apps in Erlang. There was little representation from web developers, who I think must make up the largest group of commercial developers. I believe this is because Haskell users really dominate ICFP, and Haskell doesn’t have a particularly good web development story as far as I know.

It was interesting to see how the other communities are developing. The Haskell guys had a 3-day Hackathon right after ICFP, which is pretty impressive, and there is a practical Haskell book in development, something which is needed for Scheme. Erlang seemed to have slightly better industry representation and also has several recent practically-oriented publications. I heard that many people had arrived just for the Erlang workshop, which was held the day after CUFP.

Of course the conference revolved around the paper presentations. There were too many to review them all, so I’ll just note the ones that were particularly relevant to our work at Untyped.

Matthew Flatt’s talk on Adding Delimited and Composable Control to a Production Programming Environment was a great presentation on a new feature in PLT Scheme, delimited continuations, that will be very useful in the web server. Matthew hacked Slideshow (something you can do when you’re the core developer) to support animations by quickly fading between slides. His 1028 slides made for some slick animations that quickly and clearly got across the concept of delimited continuations. This was perhaps the best presentation I saw at the conference and it was on something we’ll definitely be using.

The iData toolkit is a Clean library that uses meta-programming to generate code for viewing and editing arbitrary data online (like Ruby on Rails’ scaffolding, but better). At ICFP this year the followup, iTasks: Executable Specifications of Interactive Work Flow Systems for the Web, was presented. Essentially it is a combinator library for specifying workflows, including higher-order workflows. As it happens we may soon be involved in a project that deals with workflows, in which case we’ll review this work.

I really liked Advanced Macrology and the Implementation of Typed Scheme by Ryan Culpepper, Sam Tobin-Hochstadt, and Matthew Flatt. Typed Scheme is pretty cool, and we’ll probably use it when it has matured a bit more, but my favourite bit of this paper is the first half, which is essentially a tutorial on intermediate to advanced macrology. There is precious little material available on this corner of Scheme, so it is a very welcome addition.

Also of particular interest to us were Applications of Fold to XML Transformation and Software Transactions Meet First-Class Continuations. We’ve already had occasion to use ideas from the former, while the latter gave us some food for thought with regard to Snooze. We had an interesting conversation with Andy Wingo, author of the paper on folds, about the advantages of distributed version control. Something we need to look into. Andy also has a great job that allows him to spend two days a week hanging out in Barcelona’s cafes. Some people get all the luck.

One point from ICFP that is particularly relevant for this blog: Dave Herman told me he’d like to see more technical posts. I’ve tried to make the content a bit more technical of late, but if there is anything in particular you’d like me to write about drop me a line.

Posted in Code | Comments Off on ICFP 2007 In Review

23 Aug 2007

by Noel

As recently seen on the Untyped Subversion commit list…

I personally watch commits go by for several projects, and it is instructive in many ways to read the commit messages and code. It is a way to learn new things about the software process as well as the implementation of solutions in code. That said, very occasionally, you actually get a giggle from the process…

Today was one of those times.

Date: 2007-08-22 12:22:06 +0100 (Wed, 22 Aug 2007)
New Revision: 1398
Log:
[DJG] IDCheck trunk:

Tests tests tests.
Date: 2007-08-22 12:41:46 +0100 (Wed, 22 Aug 2007)
New Revision: 1399
Log:
[DJG+NHW] IDCheck trunk:

Testing all the way.
Date: 2007-08-22 12:49:21 +0100 (Wed, 22 Aug 2007)
New Revision: 1400
Log:
[NHW+DJG] IDCheck trunk:

Oh what fun it is to ride on a one horse testing sleigh.

The song ends there, I’m afraid… but it does seem like Dave and Noel are a bit cracked out today. Perhaps they should be out playing frisbee instead of coding this fine Thursday. As I’m not in the same timezone, it’s difficult to say what’s going on over there…

Posted in Code, Fun | Comments Off on As recently seen on the Untyped Subversion commit list…

6 Aug 2007

by Noel

Refactoring Functional Programs

A little while ago we released an interface to Selenium, a
web testing framework
(http://www.untyped.com/untyping/archives/2007/06/selenium_code_r.html).
Since then we’ve learned that Selenium is simply too slow to
use in our work-flow. Hence we started on a faster
reimplementation of the Selenium Remote Control.

A key part of this system is a proxy server, which is necessary to get around the same origin security restriction in Javascript. I’ve just finished a large refactoring of the proxy code, and I think the experience is interesting enough to warrant a blog post. While there is a large literature on refactoring object-oriented programs, there is rather less on refactoring functional programs, and what there is tends to concentrate on program transformation tools (long a FP strength) at the expense of collecting useful FP refactorings. This post is a small contribution to redressing the balance.

The code I spent most time on was the HTTP parser. It is structured as a state machine, so the initial version used the classic FP pattern of mutually tail-recursive functions. The code for parsing an HTTP request looked something like this:

(define (parse-request)
  (define request-line #f)
  (define headers null)

  (define (parse-request-line)
    (set! request-line (read-request-line))
    (parse-headers))

  (define (parse-headers)
    (let ([line (read-line)])
      (if (end-of-input line)
          (begin (set! headers (reverse headers))
                 (do-something))
          (begin (set! headers (cons line headers))
                 (parse-headers)))))

  (parse-request-line))

The code for parsing an HTTP response was very similar:

(define (parse-response)
  (define response-line #f)
  (define headers null)

  (define (parse-response-line)
    (set! response-line (read-response-line))
    (parse-headers))

  (define (parse-headers)
    (let ([line (read-line)])
      (if (end-of-input line)
          (begin (set! headers (reverse headers))
                 (do-something-different))
          (begin (set! headers (cons line headers))
                 (parse-headers)))))

  (parse-response-line))

The real code was several screens long. I wanted to make it
simpler by changing to a functional style, and reusing
common code between the request and response parsing
functions. Converting to functional style is simple:

(define (parse-request)
  (define (parse-request-line)
    (define request-line (read-request-line))
    (parse-headers request-line))

  (define (parse-headers request-line)
    (define headers
      (let loop ([line (read-line)])
        (if (end-of-input line)
            null
            (cons line (loop (read-line))))))
    (do-something request-line headers))

  (parse-request-line))

Reusing common code is not simple. The finite state machine
pattern doesn’t abstract the next state. For example,
parse-headers in parse-request
always calls do-something, whereas the otherwise
identical version in parse-response calls
do-something-different.

I solved this by refactoring the code into
continuation-passing style
(http://library.readscheme.org/page6.html), leading to code
that looks like the following:

;; shared between parse-request and parse-response
(define (parse-headers request-line k)
  (define headers
    (let loop ([line (read-line)])
      (if (end-of-input line)
          null
          (cons line (loop (read-line))))))
  (k request-line headers))

(define (parse-request)
  (define (parse-request-line k)
    (define request-line (read-request-line))
    (k request-line))

  (parse-request-line
    (cut parse-headers <> do-something)))

Note that I’ve used cut
(http://srfi.schemers.org/srfi-26/srfi-26.html) as a
short-cut for lambda.
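
That is, the cut expression above is just shorthand for an
explicit lambda:

;; (cut parse-headers <> do-something) is equivalent to:
(lambda (request-line)
  (parse-headers request-line do-something))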

I’ve got code reuse but the code itself isn’t nice. The
argument lists were quite a bit longer in the real code and
most of the time arguments are just passed from function to
function without being used (I’ve seen this sort of
argument called “tramp data”). I also find
that CPSed code can be quite difficult to read — you
have to construct the control flow graph in your head and
then look at the application site to fill in all the
continuations. Ugh.

One way to get rid of tramp data is to use parameters, and this is something we talk about in our experience report. However that solution isn’t appropriate here. It forces me to stick with CPS so I can set the parameters in the dynamic extent of the succeeding code, and it extends the lifetime of the values beyond what is strictly necessary. This could be an issue if storing, say, a large request body in a parameter.
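
For completeness, here is a hypothetical sketch of that rejected design, reusing the helper names from the code above (and assuming parse-headers takes just a continuation); the parameter is only visible in the dynamic extent of the parameterize, which is why the code stays in CPS:

;; The request line lives in a parameter instead of being threaded
;; through every call, but it is only visible inside the parameterize,
;; so the rest of the parse must run within it.
(define current-request-line (make-parameter #f))

(define (parse-request-line k)
  (parameterize ([current-request-line (read-request-line)])
    (k)))

(define (parse-request)
  (parse-request-line
    (lambda ()
      (parse-headers
        (lambda (headers)
          (do-something (current-request-line) headers))))))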

Notice that parse-request and parse-response only differed
in which function they called with the values they computed.
If I separate the computation of those values from the
decision of which function to call, I can get code reuse
without CPS, and I don’t have long argument lists! This is
what my final solution looks like:

;; shared between parse-request and parse-response
(define (parse-headers)
  (let loop ([line (read-line)])
    (if (end-of-input line)
        null
        (cons line (loop (read-line))))))

(define (parse-request)
  (define (parse-request-line)
    (read-request-line))

  (do-something (parse-request-line)
                (parse-headers)))

(define (parse-response)
  (define (parse-response-line)
    (read-response-line))

  (do-something-different (parse-response-line)
                          (parse-headers)))

It’s short and sweet, and easy to understand.

So let’s recap what I did:

  • I started with the mutually tail-recursive FSM pattern (that’s a mouthful!)
  • I refactored into continuation-passing style.
  • I separated computation and control, and refactored back to direct style.

So three refactorings (direct style to CPS, separating
computation and control, and CPS to direct style), two of
which are particular to functional languages, and one
pattern. I could do with a better name than
“separating computation and control”. If you’re
aware of some prior work or can think of a better name do
let me know.

Although they use different terminology, the programming
language theory and software engineering communities have
explored a lot of the same ground from different
perspectives. Program transformations are pretty much the
same thing as refactorings, though the former are often
presented in the context of compiler optimisations.
Functional Pearls
(http://www.haskell.org/haskellwiki/Research_papers/Functional_pearls)
are very similar to design patterns.

If you’re a student of software engineering in functional
languages it is necessary to familiarise yourself with this
literature. This can be difficult. There are no books
summarising this literature, as you’ll find for OO
languages, and the papers are often terse and are not
focused on software engineering. This means they can be
difficult to read if you don’t have a background in
programming languages, and you have to read between the
lines a bit.

Posted in Code | Comments Off on Refactoring Functional Programs

10 May 2007

by Noel

XML Transformation in Scheme

Selenium is a tool for testing web applications, the core of which is a Javascript library that controls a web browser. With the Selenium IDE you can convert your actions in a web browser into tests, and with the Selenium Remote Control you can control a web browser from code. I’ve recently been working on adding Selenium Remote Control bindings to PLT Scheme, which has resulted in a nice and hopefully instructional demonstration of XML transformation in PLT Scheme.

The Selenium Remote Control is controlled by sending simple messages over HTTP. The format of the messages isn’t important. What is important is that there are a lot of them, and the API is specified in a file called iedoc.xml that comes with Selenium. The Java/Python/Ruby bindings are generated using XSL. If I were to use XSL I’d have a processing pipeline that uses three languages (XSL, Java, Scheme), which is two more than I’d like. Hence I turned to WebIt!, an XML transformation DSL written in Scheme, to create an all-Scheme pipeline. The rest of this post shows the steps I used to transform the Selenium API into Scheme code using WebIt!. I think this is interesting in its own right, but also serves as a nice demonstration of the power of macros, which WebIt! makes extensive use of.

My first step is to get an idea of the structure of the XML. The bits I’m interested in look like this:

<function name="click">
<param name="locator">an element locator</param>
<comment>Clicks on a link, button, checkbox or radio button.
If the click action causes a new page to load (like a link usually
does), call waitForPageToLoad.</comment>
</function>

Let’s read in the XML file and extract all the function elements. For this I’ll use SSAX and SXPath:

(require
  (planet "ssax.ss" ("lizorkin" "ssax.plt" 1))
  (only (planet "sxml.ss" ("lizorkin" "sxml.plt" 1)) sxpath))

(define api
  (with-input-from-file "iedoc.xml"
    (lambda () (ssax:xml->sxml (current-input-port) '()))))

(define functions
  ((sxpath '(// function)) api))

OK, so we have all the functions. Now let’s parse them into a more useful data structure. Here’s my first attempt:

(require (planet "xml.ss" ("jim" "webit.plt" 1 5)))

;; struct function : string (listof string)
(define-struct function (name params))

;; parse-function : sxml -> function
(define (parse-function fn)
  (xml-match fn
    [(function name: ,name
               (param name: ,param-name ,desc) ...
               (comment ,_ ...))
     (make-function name (list param-name ...))]))

(map parse-function functions)

The xml-match macro is a pattern matcher for SXML. You specify the “shape” of the SXML, and if the input matches the pattern the following expressions are evaluated:

(xml-match value
  [pattern expression ...]
  ...)

The simplified form of a pattern is:

  • (element ...) matches an element with the given name.
  • name: value matches an attribute with the given name and value.
  • ,binding matches the value at that position and binds it to binding in the scope of the following expressions.
  • ... matches zero or more of the preceding pattern.

In our example the pattern is:

(function name: ,name
          (param name: ,param-name ,desc) ...
          (comment ,_ ...))

So we’re looking for an element called function with an attribute called name, the value of which is bound to name. Then follow zero or more param elements, each with a name attribute whose value is bound to param-name. Finally we expect a comment element, which can contain any amount of data. The use of _ as the binding name is a common convention to indicate data we don’t care about but must still match to make the pattern complete.

I run the code in DrScheme and see the result:

xml-match: no matching clause found

Oops. So our pattern isn’t complete. We’ve also seen one flaw of WebIt!: it doesn’t give very good error messages. However we can easily fix this by adding a catch-all pattern that raises an error telling us what we failed to match. The code follows. Notice that I’ve also added pretty printing to make the unmatched SXML easier to read.

(require (lib "pretty.ss"))

;; parse-function : sxml -> function
(define (parse-function fn)
  (xml-match fn
    [(function name: ,name
               (param name: ,param-name ,desc) ...
               (comment ,_ ...))
     (make-function name (list param-name ...))]
    [,err (let ([op (open-output-string)])
            (pretty-print err op)
            (error (format "Didn't match ~n~a~n" (get-output-string op))))]))

Run this code and you’ll see the error occurs because we don’t allow the description to contain more than one element. This is easily fixed by extending the pattern to ,desc .... The next error is more interesting. The function element contains a return element. The WebIt! pattern language doesn’t allow us to express optional patterns, so we have to duplicate our pattern and include the case of return. This also requires we extend the definition of the function structure.

;; struct function : string string (listof string)
(define-struct function (name return params))

;; parse-function : sxml -> function
(define (parse-function fn)
  (xml-match fn
    [(function name: ,name
               (param name: ,param-name ,desc ...) ...
               (comment ,_ ...))
     (make-function name "void" (list param-name ...))]
    [(function name: ,name
               (return type: ,type ,return-desc ...)
               (param name: ,param-name ,desc ...) ...
               (comment ,_ ...))
     (make-function name type (list param-name ...))]
    [,err (let ([op (open-output-string)])
            (pretty-print err op)
            (error (format "Didn't match ~n~a~n" (get-output-string op))))]))

This works! This is as far as I want to go in this article. We’ve seen how we can use SSAX, SXPath, and WebIt! to create XML transforms in pure Scheme. There is a lot more to all of these packages but what we’ve used is sufficient for many uses. The rest of the code to create Scheme from the API is quite straightforward and specific to Selenium. If you’re curious, read the source of the Selenium PLaneT package, which will be released soon.

This post also appears on the PLT Scheme Blog.

Posted in Code | Comments Off on XML Transformation in Scheme

14 Mar 2007

by Noel

It’s Snow Time!

The Snowfort is “a repository of Scheme packages that are portable to several popular implementations of Scheme”. I think the developers of Snow have taken the correct approach by targeting the more featureful Scheme implementations, which share quite a bit of useful functionality in excess of R5RS. However, at the moment the packages look like they’re written under the assumption the host Scheme has no useful module system, as the packages I looked at all prefixed their exports with snow-. The module system in R6RS should fix this, so hopefully these annoying prefixes will go away.

There’s snow business like snow business, snow business I snow…

Posted in Code | Comments Off on It’s Snow Time!

13 Nov 2006

by Noel

New Software, For You!

We released two bits of code a little while ago, both at
PLaneT:

  • A new minor version of SchemeUnit
    (http://planet.plt-scheme.org/300/#schemeunit.plt) that adds a
    check-= form for comparing numbers within a specified tolerance,
    and fixes a few bugs.
  • A new add-on to SchemeUnit, called Benchmark
    (http://planet.plt-scheme.org/300/#benchmark.plt) that, as the
    name suggests, adds forms for benchmarking code.

A simple example of using the Benchmark library:

Suppose you were benchmarking some functions that worked
on vectors of numbers, and you wanted to see if the SRFI-43
vector-map was faster than writing a loop by
hand. You can test this assumption using the
check-faster form:

(module test mzscheme

  (require
    (lib "vector-lib.ss" "srfi" "43")
    (planet "test.ss" ("schematics" "schemeunit.plt" 2))
    (planet "text-ui.ss" ("schematics" "schemeunit.plt" 2))
    (planet "benchmark.ss" ("schematics" "benchmark.plt" 1)))

  (define big-vector
    (vector-unfold
      (lambda (i x) (values i x))
      1000
      0))

  (define check-map
    (test-case
      "Check vector-map is faster than hand-written loop"
      (check-faster
        (lambda ()
          (vector-map - big-vector))
        (lambda ()
          (let loop ([vec (make-vector 1000)]
                     [idx 1000])
            (if (zero? idx)
                vec
                (begin
                  (let ([idx (sub1 idx)])
                    (vector-set! vec idx (- (vector-ref big-vector idx)))
                    (loop vec idx)))))))))

  (test/text-ui check-map))

On my computer the hand-written loop is a fraction faster
than vector-map, so if performance is essential
then the loop is to be preferred.

By formalising assumptions as tests you automatically get
notified when implementation changes render them invalid.
So if changes in the JIT compiler made
vector-map faster this test would fail and I
would know to rewrite my performance critical code.

Often it isn’t convenient to keep two versions of a
function around, perhaps because the implementation depends
on many modules. In this case it is useful to benchmark the
implementation against its past performance. You can do
this by creating a benchmark-case where you
would otherwise create a test-case. An example
is in order: Say you have a complicated function
foo and you want to ensure your optimisations
are making it faster. Then you simply write:

(benchmark-case
  "foo benchmark 1"
  (foo some-big-input))

The Benchmark library automatically saves performance
data and fails this test if foo becomes slower.
The name of the test case is important, as this is what
the Benchmark library uses to find historical data.

That’s it. As you can see the Benchmark library is quite simple, but I have found it very useful when optimising code. I hope you do as well!

Posted in Code | Comments Off on New Software, For You!