
23 May 2011

by Noel

Internship This Summer at Loho

Our friends at Loho are looking for an intern over summer. We’ve previously blogged about the (fun and interesting) work we’ve done for Loho: using Scala, Lift, and a heavy dose of Javascript we’ve created a really simple interface for setting up complex VoIP services. (All credit goes to Alex for the great concept — our work is simply implementing his vision.) The internship could extend this work, or address other parts of the stack right down to mucking around with VoIP hardware. It’s an awesome chance to play with a lot of exciting technology. If you fit the bill (student, free for 8-10 weeks over summer, can relocate to Cambridge) get in touch with Alex at Loho. If you get the internship and decide to focus on the Scala/Lift parts we’ll certainly help you get to grips with the code base.

Posted in General

6 Mar 2011

by Dave

Javascript compilation for SBT

Over the weekend I knocked up a little SBT plugin to wrap up the Javascript resources in our Lift projects and deploy them as one big minified file. Read on to find out how it works, then grab yourself a copy and take it for a spin.

The plugin scans your webapps directory and looks for files with the extensions .jsm or .jsmanifest. These files, called Javascript Manifests, describe lists of Javascript sources that should be combined into a single file. For example:

# You can specify remote files using URLs...

http://code.jquery.com/jquery-1.5.1.js

# ...and local files using regular paths
#    (relative to the location of the manifest):

lib/foo.js
bar.js

# Blank lines and bash-style comments are also supported

Manifest compilation happens in two phases: first, the plugin downloads and caches any remote scripts specified using URLs. Second, it feeds all of the sources (remote and local) into Google’s Closure Compiler, which concatenates them and minifies everything (and provides excellent services like static type checking to boot). The output from the compiler is a .js file with the same base name and relative path as the original manifest.
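To give a feel for what the plugin has to do with each manifest, here is a rough sketch of the parsing step; this is illustrative Scala, not the plugin’s actual code, and the names are made up:

import java.io.File
import scala.io.Source

// A sketch of manifest parsing, not the plugin's implementation.
// Each line is a remote URL, a local path (relative to the manifest),
// a blank line, or a bash-style comment.
object ManifestSketch {
  sealed trait Entry
  case class Remote(url: String) extends Entry
  case class Local(file: File)   extends Entry

  def parse(manifest: File): List[Entry] =
    Source.fromFile(manifest).getLines().toList
      .map(_.trim)
      .filterNot(line => line.isEmpty || line.startsWith("#"))
      .map {
        case url if url.startsWith("http://") || url.startsWith("https://") =>
          Remote(url)
        case path =>
          Local(new File(manifest.getParentFile, path))
      }
}

The real plugin then hands the resolved sources (cached remote files plus local ones) to the Closure Compiler; the sketch stops at the parsing step.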

There’s not a lot more to it than that. The plugin hooks into SBT’s standard compile and package phases, so your Javascript gets rebuilt automatically alongside your Scala code. If this sounds useful to you, please feel free to grab a copy and take it for a spin. Full details are available in the README on Github.

I should point out that there are other useful SBT plugins that do a similar job. For example, I plagiarised extensively from Jon Hoffman’s YUI Compressor plugin and Luke Amdor’s Coffee Script plugin when writing my code. These two particular examples don’t do file combination, though, and that was an important feature for our specific use case.

Posted in Code, Front page, Javascript, Scala, Web development

2 Mar 2011

by Dave

Setting the run.mode in Lift web apps

Update: You can now set the run mode easily and conveniently using our sbt-runmode plugin for SBT.

Setting the run.mode in Lift applications is the source of a surprising number of questions. The documentation recommends passing it as a parameter when the JVM is invoked. This can be hard to achieve for various reasons. In our case our deployment is automated using Chef, and scripts to start and stop the Jetty web server are installed by the package manager. We don’t really want to monkey around with these scripts, so we had to find another way. Jetty is written in Java, which means it must have a ridiculously complex XML configuration language. The Jetty developers turned it up to 11 by making their configuration language Turing complete, so we can actually set the system properties in a configuration file. The file we want to create is WEB-INF/jetty-web.xml and we want it to contain this:

 

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
<Configure class="org.mortbay.jetty.webapp.WebAppContext">
  <Call class="java.lang.System" name="setProperty">
    <Arg>run.mode</Arg>
    <Arg>production</Arg>
  </Call>
</Configure>

 

If we leave this around then our application will always run in production mode. We don’t want this when we’re developing as we won’t, for instance, get stack traces printed to the browser. Thus we should copy this file in when we package up the project, and remove it when the packaging step completes. Assuming you’re using SBT, store the above text in project/jetty-web.xml and add the following to your SBT project file to get this functionality:

 

  val jettyWebPath = "src" / "main" / "webapp" / "WEB-INF" / "jetty-web.xml"

  lazy val installProductionRunMode = task {
    FileUtilities.copyFile("project" / "jetty-web.xml",
                           jettyWebPath,
                           log)
    log.info("Copied jetty-web.xml into place")
    None
  } describedAs("Install a jetty-web.xml that sets the run mode to production")

  lazy val superPackage = super.packageAction dependsOn(installProductionRunMode)

  lazy val removeProductionRunMode = task {
    FileUtilities.clean(jettyWebPath, log)
    None
  } describedAs("Remove jetty-web.xml and hence set run mode back to testing")

  override def packageAction = removeProductionRunMode dependsOn(superPackage) describedAs(BasicWebScalaProject.PackageWarDescription)

 

This is pretty simple code. Basically it redefines the package action to first copy in the jetty-web.xml file, then it runs the original package action, and finally it deletes the jetty-web.xml. Now any WAR files you run under Jetty will automatically be in production mode, but calling sbt jetty-run will still give you development mode.

Posted in Code, Scala, Web development

11 Feb 2011

by Noel

Stop A/B Testing and Make Out Like a Bandit

This is the blog post that led to Myna. Sign up now and help us beta test the world’s fastest A/B testing product!

Were I a betting man, I would wager this: the supermarket nearest to you is laid out with fresh fruit and vegetables near the entrance, and dairy and bread towards the back of the shop. I’m quite certain I’d win this bet enough times to make it worthwhile. This layout is, of course, no accident. By placing essentials in the corners, the store forces shoppers to traverse the entire floor to get their weekly shop. This increases the chance of an impulse purchase and hence the store’s revenue.

I don’t know who developed this layout, but at some point someone must have tested it and it obviously worked. The same idea applies online, where it is incredibly easy to change the “layout” of a store. Where the supermarket might shuffle around displays or change the lighting, the online retailer might change the navigational structure or wording of their landing page. I call this process content optimisation.

Any prospective change should be tested to ensure it has a positive effect on revenue (or some other measure, such as clickthroughs). The industry standard method for doing this is A/B testing. However, it is well known in the academic community that A/B testing is significantly suboptimal. In this post I’m going to explain why, and how you can do better.

There are several problems with A/B testing:

  • A/B testing is suboptimal. It simply doesn’t increase revenue as much as better methods.
  • A/B testing is inflexible. You can’t, for example, add a new choice to an already running test.
  • A/B testing has a tedious workflow. To do it correctly, you have to make lots of seemingly arbitrary choices (p-value, experiment size) to run an experiment.

The methods I’m going to describe, which are known as bandit algorithms, solve all these problems. But first, let’s look at the problems of A/B testing in more detail.

Suboptimal Performance

Explaining the suboptimal performance of A/B testing is tricky without getting into a bit of statistics. Instead of doing that, I’m going to describe the essence of the problem in a (hopefully) intuitive way. Let’s start by outlining the basic A/B testing scenario, so there is no confusion. In the simplest situation are two choices, A and B, under test. Normally one of them is already running on the site (let’s call that one A), and the other (B) is what we’re considering replacing A with. We run an experiment and then look for a significant difference, where I mean significance in the statistical sense. If B is significantly better we replace A with B, otherwise we keep A on the site.
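To make the setup concrete, here is a rough sketch of the usual decision rule, a two-proportion z-test; the sample sizes and the 1.96 cutoff (p < 0.05, two-sided) are illustrative choices, not recommendations:

// Sketch of a standard A/B decision rule (two-proportion z-test).
// Illustrative only: the data and thresholds are made up.
object ABTestSketch {
  def zScore(conversionsA: Int, trialsA: Int,
             conversionsB: Int, trialsB: Int): Double = {
    val pA     = conversionsA.toDouble / trialsA
    val pB     = conversionsB.toDouble / trialsB
    val pooled = (conversionsA + conversionsB).toDouble / (trialsA + trialsB)
    val se     = math.sqrt(pooled * (1 - pooled) * (1.0 / trialsA + 1.0 / trialsB))
    (pB - pA) / se
  }

  def main(args: Array[String]): Unit = {
    val z = zScore(conversionsA = 200, trialsA = 5000,
                   conversionsB = 250, trialsB = 5000)
    if (z > 1.96) println("B is significantly better: replace A")
    else          println("not significant: the data supports no conclusion either way")
  }
}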

The key problem with A/B testing is it doesn’t respect what the significance test is actually saying. When a test shows B is significantly better than A, it is right to throw out A. However, when there is no significant difference the test is not saying that B is no better than A, but rather that the data does not support any conclusion. A might be better than B, B might be better than A, or they might be the same. We just can’t tell with the data that is available*. It might seem we could just run the test until a significant result appears, but that runs into the problem of repeated significance testing errors. Oh dear! Whatever we do, if we stick exclusively with A/B testing we’re going to make mistakes, and probably more than we realise.

A/B testing is also suboptimal in another way — it doesn’t take advantage of information gained during the trial. Every time you display a choice you get information, such as a click, a purchase, or an indifferent user leaving your site. This information is really valuable, and you could make use of it in your test, but A/B testing simply discards it. There are good statistical reasons to not use information gained during a trial within the A/B testing framework, but if we step outside that framework we can.

* Technically, the reason for this is that the probability of a type II error increases as the probability of a type I error decreases. We control the probability of a type I error with the p-value, and this is typically set low. So if we drop option B when the test is not significant we have a high probability of making a type II error.

Inflexible

The A/B testing setup is very rigid. You can’t add new choices to the test, so you can’t, say, test the best news item to display on the front page of a site. You can’t dynamically adjust what you display based on information you have about the user — say, what they purchased last time they visited. You also can’t easily test more than two choices.

Workflow

To set up an A/B experiment you need to choose the significance level and the number of trials. These choices are often arbitrary, but they can have a major impact on results. You then need to monitor the experiment and, when it concludes, implement the results. There are a lot of manual steps in this workflow.

Make out like a Bandit

Algorithms for solving the so-called bandit problem address all the problems with A/B testing. To summarise, they give optimal results (to within constant factors), they are very flexible, and they have a fire-and-forget workflow.

So, what is the bandit problem? You have a set of choices you can make. On the web these could be different images to display, or different wordings for a button, and so on. Each time you make a choice you get a reward. For example, you might get a reward of 1 if a button is clicked, and reward of 0 otherwise. Your goal is to maximise your total reward over time. This clearly fits the content optimisation problem.
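To make that concrete, here is a minimal sketch of one of the simplest bandit strategies, epsilon-greedy; this is just an illustration of the idea, not a recommendation and not necessarily what any production system uses. Most of the time it shows the choice with the best observed reward, and occasionally it shows a random choice to keep learning.

import scala.util.Random

// A minimal epsilon-greedy bandit, for illustration only.
class EpsilonGreedy(numChoices: Int, epsilon: Double = 0.1) {
  private val counts  = Array.fill(numChoices)(0)
  private val rewards = Array.fill(numChoices)(0.0)

  // Which choice (e.g. which button wording) to show next.
  def choose(): Int =
    if (Random.nextDouble() < epsilon || counts.contains(0))
      Random.nextInt(numChoices)                 // explore
    else
      (0 until numChoices).maxBy(averageReward)  // exploit

  // Record the reward for a choice: e.g. 1.0 for a click, 0.0 otherwise.
  def update(choice: Int, reward: Double): Unit = {
    counts(choice)  += 1
    rewards(choice) += reward
  }

  private def averageReward(i: Int): Double =
    if (counts(i) == 0) 0.0 else rewards(i) / counts(i)
}

Each visitor triggers a choose, each click (or lack of one) an update, and the allocation of traffic drifts towards the better choice as the data comes in.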

The bandit problem has been studied for over 50 years, but only in the last ten years have practical algorithms been developed. We studied one such paper in UU. The particular details of the algorithm we studied are not important (if you are interested, read the paper – it’s very simple); here I want to focus on the general principles of bandit algorithms.

The first point is that the bandit problem explicitly includes the idea that we make use of information as it arrives. This leads to what is called the exploration-exploitation dilemma: do we try many different choices to gain a better estimate of their reward (exploration) or try the choices that have worked well in the past (exploitation)?

The performance of an algorithm is typically measured by its regret, which is the average difference between its actual performance and the best possible performance. It has been shown that the best possible regret increases logarithmically with the number of choices made, and modern bandit algorithms are optimal (see the UU paper, for instance).
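For the formally minded, regret after n plays is (in expectation) the gap between always playing the best choice and the choices actually played; in symbols (my notation, not the paper's):

R_n = n\,\mu^{*} - \sum_{t=1}^{n} \mu_{a_t}

where \mu^{*} is the mean reward of the best choice and \mu_{a_t} the mean reward of the choice played at step t, so logarithmic regret means R_n grows like \log n.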

Bandit algorithms are very flexible. They can deal with as many choices as necessary. Variants of the basic algorithms can handle addition and removal of choices, selection of the best k choices, and exploitation of information known about the visitor.

Bandits are also simple to use. Many of the algorithms have no parameters to set, and unlike A/B testing there is no need to monitor them — they will continue working indefinitely.

Finally, we know bandits work on the web, as much of the current research on them is coming out of Google, Microsoft, Yahoo!, and other big Internet companies.

So there you have it. Stop wasting time on A/B testing and make out like a bandit!

Join Our Merry Band

Finally, you probably won’t be surprised to hear we are developing a content optimisation system based on bandit algorithms. I am giving a talk on this at the Multipack Show and Tell in Birmingham this Saturday.

We are currently building a prototype, and are looking for people to help us evaluate it. If you want more information, or would like to get involved, get in touch and we’ll let you know when we’re ready to go.

Update: In case you missed it at the top, Myna is our content optimisation system based on bandit algorithms and we’re accepting beta users right now!

Posted in Business, Code, Design, Front page, General, Myna, Web development

24 Jan 2011

by Dave

Smooth Scrolling for Mobile Safari

I recently wrote a jQuery plugin to do some smooth scrolling on the iPad, and I thought I’d share the code with everyone.

The effect you get is very similar to the iOS home screen. The user touches the screen and drags to scroll. Releasing the screen causes it to spring to the most appropriate page based upon the last dragging position and speed.

Gurus of front end development tell us that pretty much the only way to get smooth transitions on the iPad is to use 3D CSS transforms. After experimenting with jQuery animations and 2D CSS transforms, I pretty much concur: jQuery animations yield one or two frames per second, and 2D CSS transforms aren’t much better. 3D CSS transforms, on the other hand, are hardware accelerated and smooth as silk.

You can get the code from this Gist on Github (contributions and enhancements welcome). Use it with the following HTML:

 <div id="viewport"> <div>First page</div> <div>Second page</div> <div>Third page</div> </div> 

and the following Javascript:

 $("#viewport").scrollpane(); 

There’s a demo of it in action here. A couple of notes:

  • Because this hooks into touch gesture events and CSS3 3D transforms, it’ll pretty much only work on iDevices and possibly other Webkit-based tablets.
  • It works horizontally and vertically, but I’d recommend only using it horizontally in a regular web page because it interferes with Safari’s natural screen bounce. I had the benefit of working on an offline brochure where the web page never scrolls naturally. In this environment the plugin really shines. If you are interested in doing something similar, take a look at the iPad app Delivery Site, which lets you customise various things like this.
  • There are a couple of options you can tweak to affect things like dead-zones before a drag will trigger a page transition. See the top of the source code for details.
  • When the first 3D transform is added to a page, Mobile Safari seems to transparently install an OpenGL panel to handle the effects. This causes a rendering glitch that’s just faintly visible if you’re paying attention. The plugin works around this by setting an identity transform on the scroll component on page load. Webkit is presumably frugal about 3D-ification for a reason, so you may find your web pages take more memory and CPU resources with this plugin active than without.
  • Really large (read “many-page, full-screen”) scroll panes can be very heavy on the browser. This is presumably due to the overhead of creating a texture buffer to 3D accelerate the transitions. I’ve managed five-page full-screen scrolling transitions without problems, but your mileage may vary.

Posted in Code, Front page, Fun, Javascript, Web development

21 Jan 2011

by Noel

All About Amazon’s Dynamo

The second paper we looked at in UU is Amazon’s 2007 paper on Dynamo. Dynamo is an example of a new type of database dubbed NoSQL, and Riak is an open-source implementation of the Dynamo architecture. Studying Dynamo is worthwhile for a number of reasons:

  • It combines a lot of recent ideas in distributed systems. These ideas are worth learning in their own right to avoid mistakes like Reddit’s when building scalable systems.
  • Since Riak is basically Dynamo, knowledge of Dynamo is directly applicable.
  • Understanding the design trade-offs in Dynamo provides a way to understand the rest of the NoSQL space.

So, What is NoSQL?

In the old days everyone used relational databases and it was good. Then along came the web, and with the web a tidal wave of data, and things were not good. The tradeoffs made by relational databases (maintaining the famous ACID properties) made them unsuitable for tasks where response time and availability were paramount. This is the case for many web applications. For example, it doesn’t really matter if my Facebook status updates aren’t immediately visible to all my friends, but it does matter if my browser hangs for a minute while the back-end tries to get a write lock on the status table.

NoSQL databases make a different set of tradeoffs, and achieve different performance characteristics as a result. Typically, NoSQL databases focus on scalability, fast response times, and availability, and give up atomicity and consistency. This tradeoff is formalised via the CAP Theorem, which states that a distributed system cannot provide consistency, availability, and partition tolerance all at the same time (although two out of three of these properties are achievable at once). Dynamo provides availability and partition tolerance at the expense of consistency. Other NoSQL databases may make different tradeoffs. SQL databases typically provide consistency and availability at the expense of partition tolerance.

Reading the Paper

The Dynamo paper can be difficult to read. The main issue we had is that the authors don’t always motivate the different components of the system. For example, consistent hashing is one of the earlier concepts introduced in the paper, but it is difficult to see why it is used and how it contributes to increased availability until later on. It is best to approach each section of the article as a self-contained idea, and wait until the end to see how they are combined. It took us two sessions to get through the paper, so don’t be surprised if you find it slow going.

Setting Out the Shop

The paper starts by laying out the properties required of Dynamo. We’ve talked about the tradeoff between consistency, availability, and partition tolerance above. Some of the other properties are:

  • Cost-effectiveness. This is important but often overlooked. You’ll sometimes see supporters of relational databases arguing that if people got some real database hardware they’d never need NoSQL. The problem with real hardware is it’s expensive. If my 20-CPU database server is at full capacity I have to drop another $20,000 just to handle another 5% increase in traffic. I probably can’t get next day delivery on this type of server, either. With a system like Dynamo I can just boot up another $500/yr virtual machine.
  • Dynamo is a key-value store. This means that there are no foreign keys and hence no joins: the application must provide all of this, or more likely use a denormalised data representation. Furthermore, Dynamo sees its data as opaque binary blobs, so search is only possible using primary keys. Other NoSQL databases make different choices: MongoDB and CouchDB are document-oriented stores, meaning that data is stored as a JSON-like tree of keys and values; HBase and Cassandra store data as tuples, like a relational database, but without foreign keys.
  • Low configuration, and fully distributed design. These two go hand-in-hand. A fully distributed design means all nodes are the same, and thus have the same configuration. It also means there is no single point of failure, another desirable feature. Again, different systems take different approaches. For example, MongoDB and most relational databases have a master/slave setup in which one machine has special “master” significance. Obviously in this setup different machines have different configurations.

Big Ideas

Dynamo is the fusion of a lot of ideas that have been developed in the field of distributed systems. Rather than duplicate the paper I want to discuss four points that I found interesting:

  • Consistent hashing
  • Dynamo’s implementation
  • Amazon’s quality metric
  • Feedback control for balancing tasks

Consistent Hashing

If you take one point from Dynamo, let it be the usefulness of consistent hashing. The basic idea of consistent hashing is to decouple the value of a key from the machine it is stored on. If you do this you can add and remove machines from your data store without breaking anything. If you don’t, you’re in a world of pain.

Consistent hashing is best explained via an example of doing it wrong. Say you have N machines serving as your data store. Given a key you want to work out which machine stores the data. A simple way to do so (which is what Reddit did) is to calculate key mod N. Now suppose due to increased load you want to add a machine to your data store. Now key mod (N+1) won’t give the same result, so you can’t find your data any more. To fix this you have to flush out the data and reinsert it, which will take a long time. Or you can use consistent hashing from the outset.

In consistent hashing you arrange the space of hash keys into a ring. Each server inserts a token into the ring, and is responsible for keys that lie in the range from its token to the nearest preceding token. This is illustrated in the image to the left: the small circles indicate the tokens, and the colours the segments of the hash ring allocated to each server.

Adding a new server only requires coordination with the server that previously occupied that part of the hash space. In the original consistent hashing paper tokens were inserted at random. For Dynamo it was found that a more structured system worked better. I’ll leave the details of this and other issues (in particular, routing and replication) to the paper.
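As a rough sketch of the lookup (my own toy code, not Dynamo’s or Riak’s), the ring can be modelled as a sorted map from token to server: a key belongs to the server owning the first token at or after the key’s hash, wrapping around to the start of the ring.

import scala.collection.immutable.TreeMap

// A toy consistent hash ring, for illustration only.
class HashRing(tokensPerServer: Int = 4) {
  private var ring = TreeMap.empty[Int, String]

  private def hash(s: String, i: Int = 0): Int = (s + ":" + i).hashCode

  // Each server drops several tokens onto the ring.
  def addServer(server: String): Unit =
    for (i <- 0 until tokensPerServer) ring += (hash(server, i) -> server)

  def removeServer(server: String): Unit =
    ring = ring.filter { case (_, s) => s != server }

  // The server responsible for a key: first token at or after the
  // key's hash, wrapping around to the lowest token on the ring.
  def serverFor(key: String): String =
    (ring.from(hash(key)).headOption orElse ring.headOption)
      .map { case (_, server) => server }
      .getOrElse(sys.error("no servers in the ring"))
}

Adding or removing a server now only moves the keys between its tokens and their neighbours; everything else stays where it was, which is exactly the property the key mod N scheme lacks.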

Non-blocking IO

The section on Dynamo’s implementation will be interesting to PL geeks. If you’ve ever rolled your eyes at the manual continuation-passing style inflicted by Javascript then you might at least crack a wry smile when you read about essentially the same technique being used in Dynamo. There is an interesting debate to be had on the virtues of non-blocking IO vs thread-per-connection. At the moment my opinion is non-blocking IO is a necessary evil given kernels written in unsafe languages (and hence expensive context switches). Erlang does a good job of presenting a simple programming model with its light-weight threads, but achieving decent SMP performance can be hard due to the mismatch between application and OS threads. It’s my hope that languages like Rust will give a pragmatic solution to this dilemma.

Amazon’s Quality Metric

Although it isn’t part of the main thrust of the paper, I found it interesting that Amazon measure response time and other variables at the 99.9th percentile. Amazon have a very good reputation, and for other companies looking to achieve the same stature it is good to know the goal to aim for.

Feedback Control for Balancing Tasks

I’ve recently implemented feedback control (in particular, proportional error control) for a database connection pool. (I’ll blog about this in a bit.) It’s interesting that Dynamo uses a similar method to balance tasks within each node (Section 6.5). I think we’re going to see more self-regulating systems in the future. The work at RADLab is a good example of what might make it into production in a few years.
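For those who haven’t met it, the idea is very small. Here is a hedged sketch of proportional control applied to sizing a connection pool; this is not our pool’s code and not Dynamo’s, and the names, gain, and target are invented for illustration:

// A sketch of proportional error control for a connection pool.
// Illustrative only: the gain and target are made-up numbers.
class PoolSizeController(targetWaitMs: Double = 5.0, gain: Double = 0.5) {
  // Called periodically with the measured average wait for a connection.
  def nextSize(currentSize: Int, measuredWaitMs: Double): Int = {
    val error    = measuredWaitMs - targetWaitMs  // positive when clients wait too long
    val adjusted = currentSize + gain * error     // adjust in proportion to the error
    math.max(1, adjusted.round.toInt)
  }
}

The pool grows when measured waits exceed the target and shrinks when the target is comfortably met, with no hand-tuned schedule; Dynamo applies the same feedback idea to trading off background maintenance tasks against foreground requests.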

By scheduling tasks itself Dynamo is performing a task typically handled by the operating system. I think in the future this will be more commonplace, with the distinction between operating system and application program becoming increasingly blurred. The Managed Runtime Initiative is one project that aims to do this.

Posted in Front page, Web development

10 Jan 2011

by Noel

The University of Untyped

We’ve recently started a reading group at Untyped. As consultants we need to maintain our expertise, so every Friday we tackle something new for a few hours. Given our love of Universities (we average three degrees per Untypist) and our even greater love of grandiose corporate training (hello, Hamburger University!) we have named this program Untyped University.

Broadly, we’re covering the business of the web and the business of building the web. The online business is, from certain angles, quite simple. The vast majority of businesses can be viewed as a big pipeline, sucking in visitors from the Internet-at-large, presenting some message to the user, and then hoping they click “Buy”. At each stage of the pipeline people drop out. They drop out right at the beginning if the site isn’t ranked high enough on search terms or has poorly targeted ads. They abandon the website if the design is wrong, or the site is slow, or the offer isn’t targeted correctly. Each step of this pipeline has tools and techniques that can be used to retain users, which we’ll be covering. The flipside of this is the pipeline that delivers the site, starting with data stores, going through application servers, and finishing at the browser or other client interface. Here we’ll be looking at the technologies and patterns for building great sites.

So far we’ve run a couple of sessions. The first covered bandit algorithms, and the second Amazon’s Dynamo. We’ll blog about these soon. We’ve started a Mendeley group to store our reading (though not everything we cover in future will be in published form). Do join in if it takes your fancy!

Posted in Business, Front page, General, Web development

1 Dec 2010

by Dave

File upload using Comet Actors

We’ve been using the Lift web framework for a lot of web development work recently, and we’re very impressed by some of its features. Lift’s Comet support, in particular, is a blessing for the kind of data-crunching back-end web sites we typically get involved in.

Importing data from uploaded files, for example, frequently causes trouble. An import can take from a few seconds to a few minutes depending on the size of the file and the complexity of the data processing and validation involved. If the import takes more than a few seconds there is an increasing risk that the web browser will time out. If this happens we fail, because the user won’t know whether the import succeeded or not. Lift’s Comet actors provide a simple way around this problem. But before describing how they work, let’s quickly go over Comet and actors.

Comet is a way of doing push notifications over HTTP, which on the face of it appears to only support pull. Without the jargon, this means a way of allowing the server to send information to the web browser when that information is ready, not when the web browser checks for it. This gives us a better interface, as the UI can instantly reflect new data, and better resource consumption, as the client doesn’t have to continuously poll the server.

There are two or three common ways of implementing Comet. Lift uses a mechanism called “long polling”, which implements Comet using plain old AJAX. As soon as the web page loads, the web browser sends an XMLHTTP request to the server. Instead of replying immediately the server keeps the connection around until it has information to push back. When information is available, the web server responds to the HTTP request, and the browser processes the response and immediately makes another request.  In other words, long polling uses HTTP’s pull mechanism to simulate push communication. This is all well and good, but it immediately raises two issues: how do we manage a large number of open, but idle, connections without swamping the server, and what programming model do we use to manage the additional complexity of Comet applications.

Handling many idle open connections is relatively simple. The traditional model is to use one thread per request, but this doesn’t scale when many requests are idle for long periods. All modern operating systems provide a scalable event notification system, such as epoll or kqueue, allowing a single thread to simultaneously monitor many connections for data. The JVM provides access to these systems via the Selector abstraction in the NIO package. All this is taken care of in the web framework, so the application programmer does not need to be aware of it. (Note that other languages present the same facilities in different ways. Erlang, for example, presents all IO operations as blocking, but the implementation uses the same scalable non-blocking OS services as the JVM. Erlang can do this as it doesn’t use as many resources per thread as the JVM does. This is an appealing choice as it provides a uniformity not found on the JVM, but impacts how Erlang handles multicore.)
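For the curious, here is roughly what that looks like at the NIO level: a single-threaded sketch (toy code, nothing like what Lift or Jetty actually do internally) in which one Selector watches every connection and the thread only wakes up when a socket has something for it.

import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.{SelectionKey, Selector, ServerSocketChannel, SocketChannel}

// Toy single-threaded server: one Selector watches all connections.
object SelectorSketch {
  def main(args: Array[String]): Unit = {
    val selector = Selector.open()
    val server   = ServerSocketChannel.open()
    server.socket().bind(new InetSocketAddress(8080))
    server.configureBlocking(false)
    server.register(selector, SelectionKey.OP_ACCEPT)

    while (true) {
      selector.select()                              // sleep until some channel is ready
      val keys = selector.selectedKeys().iterator()
      while (keys.hasNext) {
        val key = keys.next(); keys.remove()
        if (key.isAcceptable) {
          val client = server.accept()               // new browser connection
          client.configureBlocking(false)
          client.register(selector, SelectionKey.OP_READ)
        } else if (key.isReadable) {
          val channel = key.channel().asInstanceOf[SocketChannel]
          val buffer  = ByteBuffer.allocate(4096)
          if (channel.read(buffer) == -1) channel.close()
          // A Comet server would parse the request here and then simply
          // hold on to the channel until there is something to push back.
        }
      }
    }
  }
}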

More relevant to the application programmer is the programming model used for Comet, and this is where actors come in. An actor is basically a thread with the important restriction that it only communicates with the outside world via messages. To ask an actor to do something, you send it a message. This is rather like a method call, except that the actor queues the message and processes it asynchronously. When an actor wants to communicate with another resource, it sends that resource a message. Since actors never share state with each other, there is never a need to lock resources to avoid concurrent access. This is a great model because all the complexities of programming with locks disappear. If you are interested in more information on the actor model in Scala try here for the original papers, here for the Akka framework, and here for a bit on Lift’s actors.

Actors are a natural fit for Comet. On the server each Comet connection is handled by a Comet actor, whose job it is to manage communication with a connected browser. Each actor is bound to a single user’s session, but actors persist across web requests. We can asynchronously send an actor messages (whether the user is looking at the web page or not), and have the actor buffer them for transmission to the browser. This means we’ve got almost all of our file upload functionality straight out of the box, without having to do any particularly tricky development.

We put a proof-of-concept of the file uploader on Github. The basic structure of the code is:

  • When a file is uploaded it is handed off to a thread for processing, and a Comet actor is started to communicate with the client.
  • The processing thread periodically sends messages to the actor, informing it of progress on the file upload.
  • The Comet actor in turn communicates progress to the client.

The great thing about this arrangement is that the user can navigate away from the page without aborting the file upload, and if they later return to the page they will get a progress update. It makes for a very pleasant UI.
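If you just want the shape of the message flow before digging into the Github code, here is a stripped-down sketch using plain old Scala actors; the proof-of-concept uses Lift’s Comet actors and real progress reporting, so everything here, names included, is simplified for illustration.

import scala.actors.Actor
import scala.actors.Actor._

case class Progress(percent: Int)
case object Finished

// Stand-in for the Comet actor: in the real code this would push
// partial updates to the connected browser instead of printing.
class UploadStatus extends Actor {
  def act() {
    loop {
      react {
        case Progress(p) => println("push to browser: " + p + "% imported")
        case Finished    => println("push to browser: import complete"); exit()
      }
    }
  }
}

object FileUploadSketch {
  def main(args: Array[String]): Unit = {
    val status = new UploadStatus
    status.start()

    // Stand-in for the processing thread: it chews through the file
    // and reports progress to the actor as it goes.
    actor {
      for (percent <- List(25, 50, 75, 100)) {
        Thread.sleep(200)          // pretend to validate and import a chunk
        status ! Progress(percent)
      }
      status ! Finished
    }
  }
}

In the real thing the Comet actor buffers these messages for whichever browser connection is currently attached, which is what lets the user navigate away and pick the progress back up later.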

Posted in Code, Functional Programming, Scala, Web development

26 Nov 2010

by Noel

Interested in a Book on Racket?

If you are interested in a book on Racket (formerly PLT Scheme) I’d appreciate it if you could fill out this survey. Thanks!

Posted in Code, Racket

19 Nov 2010

by Noel

A Reflection on Working in a Team

Being in the office more frequently has given me more opportunity to work in a team; for better or worse (and mostly worse, in my opinion) the PhD students in my lab tended to work alone. An interesting example of the benefits of team working arose a few days ago. We are in the process of shifting Kahu from its current colo server to a VM. As part of this move we want to benchmark the two servers, to make sure the new VM isn’t going to give us an unexpected performance hit. To do the benchmarking we needed to log more performance statistics, and adding logging to our code was the task Dave and I were working on. I started with a grand plan: we would write three libraries, one for logging, one for benchmarking, and one for logging benchmark data. I remember feeling frustrated as we discussed it as the plan was clear in my mind and I thought I could easily get on and code it alone. However, as we talked more it became apparent the existing tools we had were good enough for most of what we wanted. In the end we wrote a single macro and were done. From three libraries down to four lines of code is a pretty good reduction in effort.

Posted in Code