
Posts in the ‘Myna’ category

7 Aug 2013

by Dave

Writing Documentation using Grunt and Jekyll

Running a startup is a lot of work. All manner of tasks constantly compete for the team’s attention, demanding a careful dance to keep on top of everything. We haven’t nailed all the footwork to this dance yet, but we have learned that choosing the right tool for each job can drastically simplify the choreography.

We recently rewrote a large portion of Myna in the march towards version 2 of the platform. Most of the customer-facing software – the dashboard, client libraries, HTTP API, and so on – was completely new and required new documentation. Our existing help pages were in need of an overhaul, so we decided to move them out into their own project using a build system based on Grunt and Jekyll.

The new documentation is still a work-in-progress – you can watch its evolution on the Myna website and its repository on Github (yes, it’s open source – another experiment we’re trying). We’re really happy with the way it’s all working out, so we’ve published the build system as a separate project that you can use to bootstrap your own documentation. Go forth, fork, and profit!

Why Jekyll?

Our old documentation was implemented as a set of templates in the Play 2 web app that runs our marketing site and original customer dashboard. Publishing the app requires a PhD in SBT, Mongo and Redis, and if you’re writing documentation on the train (as we are prone to do) it isn’t uncommon to have your plans abruptly terminated by one of SBT’s frequent unavoidable urges to download the whole internet (inadvisable in the middle of signal-free rural England).

We considered moving to a CMS such as WordPress, but our support team are all developers with their own preferred editors and IDEs. Writing documentation is painful enough; forcing them to do it in a tiny WYSIWYG editor embedded in a web site seemed like torture. Also, CMSs require internet connections… it kinda goes with the territory.

Grunt and Jekyll, in contrast, run completely offline, and have a number of other advantages too. Plugins like grunt-contrib-watch provide instant previews via Livereload, and Jekyll’s syntax highlighting (provided by Pygments) can highlight any syntax you throw at it (including, to my amazement, HTTP).
Jekyll isn’t perfect for the job, and we had to work around a few issues. Fortunately, none of them proved insurmountable:

Versioning

We’ve run into versioning problems many times before, some requiring pretty serious workarounds. In fact, versioning issues are endemic across all software development platforms. In this toolchain we’re relying on lots of components: Grunt, five Grunt plugins, Ruby, and Jekyll. Fortunately, versioning is pretty much a solved problem these days: NPM and Bower are great package managers for Node, and Bundler normalizes not only the version of Jekyll we’re using, but also the version of Ruby itself.
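
For the curious, here’s a minimal sketch of the Bundler half of that pinning – the version numbers are illustrative, not necessarily the ones we use:

    # Gemfile – a minimal sketch; version numbers are illustrative
    source "https://rubygems.org"

    ruby "1.9.3"               # Bundler pins the Ruby version itself

    gem "jekyll", "~> 1.0"     # and the Jekyll version
    gem "pygments.rb"          # syntax highlighting for code samples

The NPM half works the same way: the Grunt plugins are pinned in package.json, so a plain npm install reproduces the toolchain on a fresh machine.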

Static Assets (Say “NO” to Plain CSS)

I may catch some flak for this, but CSS is a silly language riddled with missing features and bizarre design decisions. There was no way we were going to start a new documentation project without tools like Less CSS and Twitter Bootstrap to support us. And if we’re compiling and minifying our CSS, we might as well do the same for Javascript. Jekyll doesn’t support either process out-of-the-box.

One way of solving these issues would be to use Jekyll plugins – there are many candidates available on Github. However, we prefer to use Grunt for this kind of thing, running Jekyll via grunt-exec and Bundler, and using grunt-contrib-watch and grunt-contrib-connect for preview functionality.
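
Here’s a minimal sketch of what that Gruntfile looks like – the paths and port number are illustrative rather than our exact setup:

    // Gruntfile.js – a minimal sketch of the workflow described above
    module.exports = function (grunt) {
      grunt.initConfig({
        exec: {
          // Run Jekyll through Bundler so the pinned versions are used
          jekyll: { command: 'bundle exec jekyll build' }
        },
        connect: {
          // Serve the generated site locally for previews
          site: { options: { base: '_site', port: 4000 } }
        },
        watch: {
          content: {
            // Rebuild on changes and trigger Livereload in the browser
            files: ['**/*.md', '**/*.html', '!_site/**'],
            tasks: ['exec:jekyll'],
            options: { livereload: true }
          }
        }
      });

      grunt.loadNpmTasks('grunt-exec');
      grunt.loadNpmTasks('grunt-contrib-connect');
      grunt.loadNpmTasks('grunt-contrib-watch');

      grunt.registerTask('default', ['exec:jekyll', 'connect', 'watch']);
    };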

Content Navigation

Jekyll has built-in support for cataloguing and paginating blog posts, but it can’t natively generate navigation for a hierarchical documentation site. Fortunately, this was easy to work around with a couple of custom plugins: one to create a table of contents for the sidebar, and one to generate next and previous buttons at the bottom of each page.
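
To give a flavour of the approach, here’s a minimal sketch of the table-of-contents half, assuming each page’s front matter carries a title and a numeric weight for ordering (our actual plugins differ in the details):

    # _plugins/toc_generator.rb – a minimal sketch, not our exact plugin
    module Jekyll
      class TocGenerator < Generator
        safe true

        def generate(site)
          # Collect the pages that declare a "weight" in their front
          # matter, order them, and expose the result as site.toc.
          toc = site.pages
                    .select  { |page| page.data["weight"] }
                    .sort_by { |page| page.data["weight"] }
                    .map     { |page| { "title" => page.data["title"], "url" => page.url } }
          site.config["toc"] = toc
        end
      end
    end

The sidebar template iterates over site.toc, and the next/previous buttons fall out of looking up the current page’s position in the same list.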

Authenticating Users

The main navigation bar on Myna changes when users log in. Ideally we’d like to keep this consistent across the main web site, the blog, and the documentation. Our solution is to build a small Javascript app that monitors the user’s login details and rewrites the navbar on demand. This is a work-in-progress and isn’t part of the Github repo above.
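
The gist is something like the following sketch – the /api/session endpoint is hypothetical, standing in for whatever the app actually calls:

    // A minimal sketch – the /api/session endpoint is hypothetical
    function updateNavbar() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/api/session', true);
      xhr.onload = function () {
        var navbar = document.getElementById('navbar');
        if (xhr.status === 200) {
          // Signed in – show the user's account details
          var user = JSON.parse(xhr.responseText);
          navbar.innerHTML = '<span>Signed in as ' + user.email + '</span>';
        } else {
          // Signed out – show a login link instead
          navbar.innerHTML = '<a href="/login">Sign in</a>';
        }
      };
      xhr.send();
    }

    document.addEventListener('DOMContentLoaded', updateNavbar);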

Conclusion

If you like the idea of writing documentation in Markdown, you can get started in two minutes by cloning our Github repo and following the instructions in the README. We’d love to hear from you if you find our system useful, and we welcome pull requests with improvements.


19 Oct 2012

by Laura

Meanwhile, at Untyped HQ…

… crickets

As you probably know, for the past few weeks, we’ve been concentrating hard on Myna. We’ve been accepted onto the Oxygen Accelerator program at Birmingham Science Park, Aston, so we’re hard at work setting up companies, opening bank accounts, and other such menial tasks.

In the land of Untyped, all we can see is myna birds.

In case you haven’t done so already, pop on over to the Myna blog, where you can keep up to date with the latest happenings (and anecdotes from life on Oxygen), or follow the Twitter account. Also note that Myna is still in free beta, so if you’d like to try it out, go ahead and sign up. All feedback is gratefully received – drop us a line via the Myna contact page.

Thanks, and have a great weekend!


6 Jun 2012

by Noel

Myna for WordPress available now!

Cliff Seal of Pardot has released v0.1 of his WordPress plugin for our web content optimiser, Myna!

Cliff’s plugin lets you test and optimise the content of your WordPress posts and pages. Why not try it on your web site?


30 Aug 2011

by Noel

What is Hacker News Worth?

Twelve thousand hits, some thirty emails, and over a dozen new beta testers. That’s what happened when a blog post of ours spent ten hours on the Hacker News frontpage. It was definitely fun getting all that attention, despite the rush of traffic taking our little server off the web for a while. (Installing WP-Cache brought it back.)

Myna is the system described in the blog post, and we’re accepting beta users right now. If you’re interested in content optimisation on your website, and want better results than A/B testing will deliver, do take a look. Obviously getting this surge of traffic from HN is incredibly valuable to us. However, I don’t have any suggestions for repeating the event: when I submitted the blog post to HN some months ago it disappeared without a trace. Certainly being active answering questions on HN helped keep it on the front page, and that position netted us a fairly steady thousand hits an hour.

If you’re one of the people who read our blog post, thanks for the interest! It’s very exciting for us to know that our idea for improving content optimisation resonates with so many people, and we’re looking forward to getting Myna out of beta and seeing where it takes us.


11 Feb 2011

by Noel

Stop A/B Testing and Make Out Like a Bandit

This is the blog post that led to Myna. Sign up now and help us beta test the world’s fastest A/B testing product!

Were I a betting man, I would wager this: the supermarket nearest to you is laid out with fresh fruit and vegetables near the entrance, and dairy and bread towards the back of the shop. I’m quite certain I’d win this bet enough times to make it worthwhile. This layout is, of course, no accident. By placing essentials in the corners, the store forces shoppers to traverse the entire floor to get their weekly shop. This increases the chance of an impulse purchase and hence the store’s revenue.

I don’t know who developed this layout, but at some point someone must have tested it and it obviously worked. The same idea applies online, where it is incredibly easy to change the “layout” of a store. Where the supermarket might shuffle around displays or change the lighting, the online retailer might change the navigational structure or wording of their landing page. I call this process content optimisation.

Any prospective change should be tested to ensure it has a positive effect on revenue (or some other measure, such as clickthroughs). The industry standard method for doing this is A/B testing. However, it is well known in the academic community that A/B testing is significantly suboptimal. In this post I’m going to explain why, and how you can do better.

There are several problems with A/B testing:

  • A/B testing is suboptimal. It simply doesn’t increase revenue as much as better methods.
  • A/B testing is inflexible. You can’t, for example, add a new choice to an already running test.
  • A/B testing has a tedious workflow. To do it correctly, you have to make lots of seemingly arbitrary choices (p-value, experiment size) to run an experiment.

The methods I’m going to describe, which are known as bandit algorithms, solve all these problems. But first, let’s look at the problems of A/B testing in more detail.

Suboptimal Performance

Explaining the suboptimal performance of A/B testing is tricky without getting into a bit of statistics. Instead of doing that, I’m going to describe the essence of the problem in a (hopefully) intuitive way. Let’s start by outlining the basic A/B testing scenario, so there is no confusion. In the simplest situation there are two choices, A and B, under test. Normally one of them is already running on the site (let’s call that one A), and the other (B) is what we’re considering replacing A with. We run an experiment and then look for a significant difference, where I mean significance in the statistical sense. If B is significantly better we replace A with B; otherwise we keep A on the site.

The key problem with A/B testing is it doesn’t respect what the significance test is actually saying. When a test shows B is significantly better than A, it is right to throw out A. However, when there is no significant difference the test is not saying that B is no better than A, but rather that the data does not support any conclusion. A might be better than B, B might be better than A, or they might be the same. We just can’t tell with the data that is available*. It might seem we could just run the test until a significant result appears, but that runs into the problem of repeated significance testing errors. Oh dear! Whatever we do, if we stick exclusively with A/B testing we’re going to make mistakes, and probably more than we realise.

A/B testing is also suboptimal in another way — it doesn’t take advantage of information gained during the trial. Every time you display a choice you get information, such as a click, a purchase, or an indifferent user leaving your site. This information is really valuable, and you could make use of it in your test, but A/B testing simply discards it. There are good statistical reasons to not use information gained during a trial within the A/B testing framework, but if we step outside that framework we can.

* Technically, the reason for this is that the probability of a type II error increases as the probability of a type I error decreases. We control the probability of a type I error with the p-value, and this is typically set low. So if we drop option B when the test is not significant we have a high probability of making a type II error.

Inflexible

The A/B testing setup is very rigid. You can’t add new choices to the test, so you can’t, say, test the best news item to display on the front page of a site. You can’t dynamically adjust what you display based on information you have about the user — say, what they purchased last time they visited. You also can’t easily test more than two choices.

Workflow

To set up an A/B experiment you need to choose the significance level and the number of trials. These choices are often arbitrary, but they can have a major impact on results. You then need to monitor the experiment and, when it concludes, implement the results. There are a lot of manual steps in this workflow.

Make out like a Bandit

Algorithms for solving the so-called bandit problem address all the problems with A/B testing. To summarise, they give optimal results (to within constant factors), they are very flexible, and they have a fire-and-forget workflow.

So, what is the bandit problem? You have a set of choices you can make. On the web these could be different images to display, or different wordings for a button, and so on. Each time you make a choice you get a reward. For example, you might get a reward of 1 if a button is clicked, and a reward of 0 otherwise. Your goal is to maximise your total reward over time. This clearly fits the content optimisation problem.

The bandit problem has been studied for over 50 years, but only in the last ten years have practical algorithms been developed. We studied one such paper in UU. The particular details of the algorithm we studied are not important (if you are interested, read the paper – it’s very simple); here I want to focus on the general principles of bandit algorithms.

The first point is that the bandit problem explicitly includes the idea that we make use of information as it arrives. This leads to what is called the exploration-exploitation dilemma: do we try many different choices to gain a better estimate of their reward (exploration) or try the choices that have worked well in the past (exploitation)?

The performance of an algorithm is typically measured by its regret, which is the average difference between its actual performance and the best possible performance. It has been shown that the best possible regret increases logarithmically with the number of choices made, and modern bandit algorithms are optimal (see the UU paper, for instance).
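
To make this concrete, here’s a minimal sketch of one classic algorithm, UCB1, which achieves logarithmic regret by adding a confidence bonus to each choice’s estimated reward. It’s purely illustrative – not necessarily the algorithm Myna uses:

    // A minimal UCB1 sketch – illustrative, not necessarily Myna's algorithm
    function Ucb1(numArms) {
      this.counts = [];  // times each arm (choice) has been played
      this.totals = [];  // total reward received from each arm
      for (var i = 0; i < numArms; i++) {
        this.counts.push(0);
        this.totals.push(0);
      }
    }

    // Pick the arm with the highest mean reward plus confidence bonus.
    // The bonus shrinks as an arm is played more, so the algorithm
    // explores uncertain arms and exploits well-understood ones.
    Ucb1.prototype.select = function () {
      var plays = 0;
      for (var i = 0; i < this.counts.length; i++) {
        if (this.counts[i] === 0) return i;  // play every arm once first
        plays += this.counts[i];
      }
      var best = 0, bestScore = -Infinity;
      for (var j = 0; j < this.counts.length; j++) {
        var mean  = this.totals[j] / this.counts[j];
        var bonus = Math.sqrt(2 * Math.log(plays) / this.counts[j]);
        if (mean + bonus > bestScore) {
          bestScore = mean + bonus;
          best = j;
        }
      }
      return best;
    };

    // Record the reward for an arm: e.g. 1 for a click, 0 otherwise.
    Ucb1.prototype.update = function (arm, reward) {
      this.counts[arm] += 1;
      this.totals[arm] += reward;
    };

Each time a visitor arrives you call select() to decide what to show, and update() when you observe the outcome – the information is used immediately, which is exactly what A/B testing throws away.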

Bandit algorithms are very flexible. They can deal with as many choices as necessary. Variants of the basic algorithms can handle addition and removal of choices, selection of the best k choices, and exploitation of information known about the visitor.

Bandits are also simple to use. Many of the algorithms have no parameters to set, and unlike A/B testing there is no need to monitor them — they will continue working indefinitely.

Finally, we know bandits work on the web, as much of the current research on them is coming out of Google, Microsoft, Yahoo!, and other big Internet companies.

So there you have it. Stop wasting time on A/B testing and make out like a bandit!

Join Our Merry Band

Finally, you probably won’t be surprised to hear we are developing a content optimisation system based on bandit algorithms. I am giving a talk on this at the Multipack Show and Tell in Birmingham this Saturday.

We are currently building a prototype, and are looking for people to help us evaluate it. If you want more information, or would like to get involved, get in touch and we’ll let you know when we’re ready to go.

Update: In case you missed it at the top, Myna is our content optimisation system based on bandit algorithms and we’re accepting beta users right now!
