Thijs Scheepers
02 Feb 2017

Introducing… our first product: Keeping

Categories: Dutch, Time Tracking, Keeping

We proudly present Keeping, the better way to track working time and make it insightful. Our brand-new application is of course not the first time tracker, but it is a complete solution built for Dutch entrepreneurs by Dutch entrepreneurs. The application is flexible in how you set up your time-tracking process, and it works just as well on a smartphone or tablet. Full native apps for iOS and Android are in the pipeline.

This article is a copy of the article at

The idea for a genuinely Dutch time tracker arose when we looked back, with furrowed brows, on the past six years: the years in which we built our company. Label305 has benefited enormously from software like Google Apps, Basecamp, GitHub and MoneyBird. We work with these tools daily and have experienced the immense value of well-designed software first-hand. Alongside these fine pieces of craftsmanship, we naturally also used a time-tracking solution.

We have seen and used all sorts of things: from Excel, to international SaaS products, to desktop applications, to unwieldy Dutch accounting suites. Many good solutions focus on simple billable hours but do not go much further than that. Others are very complete but a nightmare to work with.

The Keeping interface

Today we introduce the first beta version of Keeping. This version is already fairly complete, but much more is on the way. In the coming weeks, a number of blog posts will describe how Keeping's simple time tracking can help you with various problems:

Keeping is already a very complete package, and always free for the self-employed (ZZP'ers)! We are still hard at work on its development: new features, plugins, applications and an API will be rolled out over the coming year. Start using Keeping today and experience effortless time tracking for yourself. During the beta, the entire application is free! 🎉

Written by: Thijs Scheepers
Thijs Scheepers
10 Jan 2017

Tensorflow and AI at a web development consultancy

Categories: Artificial Intelligence, Machine Learning, Tensorflow

In 2016, not a day went by without some major AI story on the popular tech blogs. It's a hot topic, no one can deny it. I won't bore you with yet another reference to AlphaGo, but cool stories about generative audio models, AI-generated paintings and neural machine translation keep coming. Most of these models are not directly applicable by developers. However, as developers, we can't stay oblivious to the underlying techniques that make this new kind of computing possible. That is why we started building something useful for one of our clients.

Tl;dr: As a web development consultancy, we built a neural suggestion system for human translators in a couple of weeks. We did so by using Tensorflow and free open data. The resulting system is fast and gives good results.

Predictive models are becoming more prevalent in web applications. End-users now expect applications to become smarter and learn from their interactions. The Googles and Facebooks of the world have figured out how to do this splendidly. On the other hand, we have to admit, a small consultancy like ours really hadn't, only a year ago. So because of all this, and maybe because we were just a little hyped, we wanted to start offering shiny AI to clients as well. However, we didn't really know where to start. These predictive models still felt as much like "hocus pocus" to us as they may do to you right now.

We could just have started integrating artificial intelligence by letting one of our applications consume some new machine learning API, like one from Google Cloud or AWS. But these solutions are often domain-specific and not very flexible. So instead, we truly embraced AI, made a plan and started building models of our own. This is the story of how we built our first AI system for one of our clients, back in May of last year.

A Neural Translation Aid

Fairlingo is a Dutch startup and a sharing economy platform for human translations. If you are a translator you can earn an income by translating documents on the site. If you want to have something translated, relatively cheaply and quickly, you can submit your document to the site and have the quality guarantees Fairlingo provides. Or as a startup hipster would say: “It’s like Uber for translation”.

When I started reading some interesting material on neural machine translation and studying the documentation of the TensorFlow machine learning library, it dawned on me that we could fairly easily build a suggestion system for Fairlingo's translators. After convincing our client, and after four weeks of hacking, research and implementation, we ended up with a prototype that looked like this:

Accepting only the first suggestion

This picture shows Fairlingo’s interface, where we translate from Dutch into English. The system suggests the words in auto-complete fashion. The suggestions pop up fast, and the resulting translation is accurate. The model underneath it is powered by a large recurrent neural network (RNN).

A simple model architecture

So, there are several steps involved in building such a system. First we need to define an architecture for our model. When you are prototyping a model, the same rules apply as when you are prototyping a web application: start simple, and do not add bells and whistles immediately. Otherwise you'll quickly get in over your head and lose track of things.

In order to construct an architecture in TensorFlow, you define a computation graph. It is a bit like defining a program before running it with actual data. The computation graph is a programming paradigm that is central to deep learning models. This video introduces the concept well.
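To make the paradigm concrete, here is a toy illustration in plain Python of the define-then-run idea (this is not the TensorFlow API, just the shape of the concept): we first build a graph of operations, and only later push data through it.

```python
# Toy define-then-run "graph": nodes are functions built up front,
# evaluation happens only when a node is called with real data.

def placeholder(name):
    # A symbolic input: evaluating it just looks up the fed value.
    return lambda feed: feed[name]

def constant(value):
    return lambda feed: value

def mul(a, b):
    return lambda feed: a(feed) * b(feed)

def add(a, b):
    return lambda feed: a(feed) + b(feed)

# Graph definition: nothing is computed here.
x = placeholder("x")
y = add(mul(x, constant(2.0)), constant(1.0))  # y = 2x + 1

# Execution: data flows through the graph.
print(y({"x": 3.0}))  # 7.0
```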

Because we would like to give suggestions for the next word given the previous words, we start off with a simple RNN architecture. This is a neural network that passes along a memory vector over each time step. At each time step you enter a specific token; in our case, tokens are word-based. The model's task at each time step is to predict the next token. The memory is represented by the output of an RNN unit such as an LSTM or a GRU.
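In code, the recurrence is just a loop that threads a memory value through the time steps. A minimal sketch in plain Python, with made-up scalar weights standing in for the real weight matrices and LSTM/GRU gates:

```python
import math

def rnn_step(x, h):
    # One simplified recurrent step: combine input x with previous memory h.
    # A real LSTM/GRU has gates and weight matrices; this keeps only the idea.
    w_x, w_h = 0.5, 0.8  # illustrative fixed weights
    return math.tanh(w_x * x + w_h * h)

def run_rnn(tokens):
    h = 0.0          # initial memory
    states = []
    for x in tokens:  # one step per token
        h = rnn_step(x, h)
        states.append(h)
    return states     # each state "remembers" everything seen so far

states = run_rnn([1.0, 0.0, 1.0])
```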

Language model RNN architecture

Both Tensorflow and Theano have the scan() function which can be used to pass the appropriate output state along for each time step. This article has some good examples on how to use it. Furthermore, Tensorflow also has its own RNNCell which is an alternative way to implement an RNN.

The actual input of the network is a large one-hot vector, i.e. a vector where all entries have a zero value except for one. This large vector has the size of the chosen vocabulary, e.g. 50,000. This means that the 50,000 most common words will be used in the vocabulary. All other words will be assigned to a special <unk> token. Additionally, we add a special <eos> token to indicate the end of a sentence. So in the end, each word in the vocabulary is represented by an index in this large one-hot vector.
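As a toy illustration of this mapping (a five-word vocabulary instead of 50,000, with made-up words), the encoding could look like this:

```python
# Build a toy vocabulary; a real system keeps the 50,000 most frequent words.
vocab = ["<unk>", "<eos>", "the", "cat", "sat"]
word_to_index = {w: i for i, w in enumerate(vocab)}

def encode(word):
    # Out-of-vocabulary words map to the special <unk> token.
    return word_to_index.get(word, word_to_index["<unk>"])

def one_hot(index, size=len(vocab)):
    # All zeros except a single 1 at the word's index.
    v = [0] * size
    v[index] = 1
    return v

print(encode("cat"))           # 3
print(encode("dinosaur"))      # 0, i.e. <unk>
print(one_hot(encode("cat")))  # [0, 0, 0, 1, 0]
```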

Before entering the RNN, the one-hot vector will be transformed into an embedding. These word embeddings represent the token in an n-dimensional vector, where n is way smaller than 50,000. n is 1024 in our case. These embedding vectors allow the network to capture the relationship between words. It is pretty neat and maybe sounds a bit complicated—luckily Tensorflow has a very simple method for adding an embedding layer to the network’s architecture.
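Because the one-hot vector contains a single 1, multiplying it with the embedding matrix simply selects one row, which is exactly what an embedding lookup does. A toy sketch with n = 3 instead of 1024 (the numbers are made up):

```python
# Toy embedding matrix: one n-dimensional row per vocabulary entry.
# Here the vocabulary has 5 entries and n = 3; the real model used 50,000 x 1024.
embeddings = [
    [0.1, 0.0, 0.2],   # <unk>
    [0.0, 0.9, 0.1],   # <eos>
    [0.4, 0.4, 0.1],   # the
    [0.7, 0.2, 0.5],   # cat
    [0.3, 0.8, 0.6],   # sat
]

def embed(one_hot_vector):
    # one-hot x matrix == row selection, i.e. an embedding lookup.
    n = len(embeddings[0])
    return [sum(x * row[j] for x, row in zip(one_hot_vector, embeddings))
            for j in range(n)]

# Selecting index 3 ("cat") yields that row of the matrix.
assert embed([0, 0, 0, 1, 0]) == embeddings[3]
```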

So, these embedding vectors are the actual input of the RNN. Its output can then be used to calculate probabilities for the next token. This is often done using the softmax function. It results in probabilities for each of our 50,000 tokens. The only thing left for us to do is to sort the tokens and suggest the ones with the highest probability to the user.
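The softmax and ranking steps can be sketched in plain Python (the logits are made-up scores standing in for the RNN output, over the same toy five-word vocabulary):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_suggestions(logits, vocab, k=2):
    # Sort tokens by probability and keep the k most likely ones.
    probs = softmax(logits)
    ranked = sorted(zip(vocab, probs), key=lambda p: p[1], reverse=True)
    return [word for word, _ in ranked[:k]]

vocab = ["<unk>", "<eos>", "the", "cat", "sat"]
logits = [0.1, 0.2, 1.5, 3.0, 0.7]  # made-up scores from the RNN output
print(top_suggestions(logits, vocab))  # ['cat', 'the']
```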

Now that we have defined an architecture, we need to give the model its parameters. Using an optimizer, we can train the model's parameters to minimize the error between the predicted token and the actual token. This optimization procedure is called stochastic gradient descent and is one of the reasons neural networks work as well as they do.
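The update rule itself is simple; here it is on a one-parameter toy problem (a single prediction with a hand-derived gradient and an arbitrary learning rate, not the full network):

```python
# Minimize (w * x - target)^2 for a single example with plain gradient descent.
def sgd_step(w, x, target, lr=0.1):
    error = w * x - target
    grad = 2 * error * x   # d/dw of (w*x - target)^2
    return w - lr * grad   # step against the gradient

w = 0.0
for _ in range(50):
    w = sgd_step(w, x=1.0, target=3.0)
# w is now close to 3.0, the value that makes the error zero.
```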

So now we have a model that can predict the next word given the previous words. Such a model is often called a language model, and is what powers our smartphone keyboards. However, such a model is sadly not able to translate anything yet, since it can't take the source language into account.

A more elaborate model architecture

So now that we have built a fully functioning language model and understand the basics of recurrent neural networks for natural language processing (NLP), we can expand our architecture into a so-called sequence-to-sequence network.

Sequence-to-sequence RNN architecture

These networks can encode a sequence of tokens into a memory vector (or hidden state) in much the same way as a language model does. So in essence, at each time step you have a representation not just of the input word, through the word embedding, but also of all previous words, through the memory vector. A sequence-to-sequence model is based on the idea that we can encode the entire sentence and then use the latest memory vector to decode that representation into another sequence. This decoding happens token for token, by feeding the previously predicted token back in as input at the next time step.
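The token-for-token decoding loop can be sketched as follows; the trained network is replaced by a stub that emits a fixed sentence, just to show the control flow:

```python
def greedy_decode(next_token, max_len=20):
    # next_token(prev, step) stands in for the trained decoder: given the
    # previously produced token, it returns the most probable next one.
    output = []
    prev = "<eos>"  # decoding conventionally starts from a special marker
    for step in range(max_len):
        token = next_token(prev, step)
        if token == "<eos>":
            break            # the model signalled the end of the sentence
        output.append(token)
        prev = token          # feed the prediction back in as the next input
    return output

# A stub "model" that emits a fixed sentence, to show the control flow.
canned = ["the", "cat", "sat", "<eos>"]
print(greedy_decode(lambda prev, step: canned[step]))  # ['the', 'cat', 'sat']
```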

An important thing to mention is that the neural network has separate weights for the encoding and decoding steps. Furthermore, for a simple model, it is a good idea to reverse the source sentence, as shown in the figure above. More elaborate bidirectional approaches are even better, but let's leave those out for now.

So I hope you now see where I’m going with this. The actual suggestions produced by our system are the most probable words at a specific decoding step from our sequence to sequence model. Such a model can also be used to predict entire translations by simply predicting until the <eos> token is seen. Actually, Google Translate is starting to shift to this approach.

One of the big advantages of this model, especially for suggestion systems, is that we can enter our own input into the decoder, and thus ignore the previous suggestions. This is exactly what we need to compensate for the different prefix an end-user might have entered. To clarify: looking at the architecture figure, we could enter something different than X or Y into the decoder, and the model will adjust its predictions accordingly.
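In a sketch, compensating for the user's prefix means forcing the decoder through the typed tokens before letting it predict freely (again with a stub instead of the trained network; in the real model the forced tokens also update the hidden state):

```python
def decode_with_prefix(next_token, prefix, max_len=20):
    # The decoder is forced through the user's own tokens first; in the real
    # model this updates the hidden state, here it just sets the last input.
    output = list(prefix)
    prev = prefix[-1] if prefix else "<eos>"
    for step in range(len(prefix), max_len):
        token = next_token(prev, step)
        if token == "<eos>":
            break
        output.append(token)
        prev = token
    return output

# Stub model that would, on its own, produce "the dog sat".
canned = ["the", "dog", "sat", "<eos>"]
model = lambda prev, step: canned[step]
# The user typed "the cat"; decoding continues from that prefix instead.
print(decode_with_prefix(model, ["the", "cat"]))  # ['the', 'cat', 'sat']
```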

Adjusting the input and getting even better suggestions

Data and training

A neural network needs training data like Cookie Monster needs cookies. But as a small consultancy with a client that does not have gigabytes of data, where do we get more? Luckily, open parallel corpora are freely available all over the internet, and we used those to extend our training data. One obvious candidate was the OpenSubtitles corpus, where you can get almost any language pair. Omnomnom🍪, thank you pirates!

An obvious downside is the specific domain of each dataset. A system trained on only subtitles will have great suggestions for when you are translating subtitles, but not so much for anything else. We have to start somewhere though, and as more data from Fairlingo itself comes in, we could use that to enhance the system even further.

We trained our models on an Nvidia GeForce GTX 1080 Ti, which is a high-end consumer gaming card. When selecting a GPU, pay specific attention to the amount of VRAM the card has. This is important because the card needs to hold the entire model in its memory. The more VRAM, the larger the model you can train, and the better it will eventually perform.

So, we let it spin for a couple of days (roughly 60 hours), and the results for our Dutch-to-English system are pretty good, as you can see.

Serving the suggestions

Because TensorFlow has Python bindings, we can implement the entire model in Python. That way we can easily and efficiently couple the whole thing to a Tornado web server and serve our suggestions directly to clients over a WebSocket. This allows for a fast connection from the client straight into the GPU without much overhead. The Fairlingo application itself is written in PHP on a Laravel backend, but by exposing the Tornado WebSocket on a different endpoint we can keep the suggestion system separate from the main application.

In a production environment, it is important to combine multiple incoming requests into a single batch in order to utilize the GPU at its full capacity. This will always be a trade-off between speed and utilization.
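A micro-batching collector can be sketched like this (the batch size and the model call are illustrative stand-ins; a production version would also flush on a timeout rather than wait indefinitely):

```python
# Collect incoming requests into one batch before running the model once,
# trading a little latency for much better GPU utilization.

def run_model(batch):
    # Stand-in for one GPU forward pass over a whole batch of sentences.
    return [f"suggestion for {req}" for req in batch]

class Batcher:
    def __init__(self, max_batch=4):
        self.max_batch = max_batch
        self.pending = []

    def submit(self, request):
        self.pending.append(request)
        if len(self.pending) >= self.max_batch:
            return self.flush()  # batch is full: run the GPU now
        return None              # otherwise wait for more work (or a timeout)

    def flush(self):
        batch, self.pending = self.pending, []
        return run_model(batch)

b = Batcher(max_batch=2)
assert b.submit("sentence 1") is None  # still waiting for more requests
print(b.submit("sentence 2"))          # full batch: one model call for both
```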

A big disadvantage is the number of language pairs Fairlingo offers. With our current method, we would need to create and train a model for each specific language pair in order to make the system available to every translator. Google has tackled this problem by creating an interlingua representation, with which they can have just one encoder and one decoder model per language.

The last hurdle for running large deep learning models in production is the actual use of GPUs. If you spin up a GPU machine in the cloud, you are easily paying hundreds of dollars per month. Managing the machines yourself, as we did, is cheaper but a hassle. That is why I'm excited to see whether Google's new TPUs will become available through Google Cloud directly in the coming years. These chips offer interesting new possibilities for running models in production without managing expensive GPU servers. Coupling them through the gRPC-compatible TensorFlow library should be easy. I imagine they would make it possible to scale a production system much like we scale web servers on AWS or Google Cloud. We will have to wait and see how this unfolds.

Testing it with users

User study at Vertaalbureau Perfect

We have successfully created a suggestion system. But does it actually help translators do their job any better? We asked 6 translators to test our new suggestion system. During the user study, we recorded and measured each keystroke the translators made, to deduce which words were entered by hand and which through the suggestion system. We found that the translators entered 23% of all characters through the suggestion system, that they did not slow down considerably, and that the system did not compromise the translation quality.

This may sound grim, but compared with other suggestion systems that researchers have tested in the past, it is actually pretty good. The impact on translation speed was minimal, and all but one translator liked the system and would use it if it were available to them.

To verify that translation quality was not compromised, we had people rank different translations, some of them produced with the system enabled and some without. CrowdFlower is a great website that lets you set up these kinds of experiments.

Wrap up

So after four weeks we managed to implement and test a suggestion system that incorporates a neural network and runs on a GPU. In our experience, four weeks is not out of the ordinary for a big feature. So what is stopping you or your company from starting to play around with these things? I can only encourage everyone to start informing their clients of the exciting new possibilities.

One final catch: we all know the programmer's fallacy when it comes to time estimation. This is sadly even more the case when building models. You do not know beforehand whether it will work, and if it doesn't, whether that is because of a slight error or simply the wrong architecture. No unit test will give you a definite answer. It really should be approached as research at first. But when it works, it is awesome.

So what's next for AI at Label305? We will continue offering AI services and want to make it one of our key services in the coming years, next to web, native mobile and UX. We believe there should be an expert on our management team for each of our key services, which means we will have to find an expert on AI. Well… suffice it to say, that is not very easy: these people seem to be in short supply. So I'm very fortunate that Label305 is doing great and that my three co-founders allowed me two years to get a master's degree in Artificial Intelligence at the UvA. A co-founder not dropping out, but starting his studies, is a rare thing, I know 😉.

But as you have seen, even though I’m now planning to get one, you definitely do not need a degree in Artificial Intelligence to start building these kinds of systems yourself.

As a true member of the artificial intelligence research community, we wrote a paper on our project and user study. You can find the paper over here: Interactive Neural Translation Assistance for Human Translators (PDF). The work on the user study was done in collaboration with the Institute for Logic, Language and Computation at the University of Amsterdam. We had great help from Philip Schulz, who is a PhD candidate there.

Written by: Thijs Scheepers
Joris Blaak
12 Dec 2016

Connecting a serial device to the web

Categories: Nodejs, JavaScript

At Label305 we have several projects that require us to connect some piece of hardware to either a mobile device or a computer. Since our focus is on developing appealing interfaces to control these devices, we have done quite a bit of research on how to do this efficiently.

One of our latest projects had us connecting a device over a serial port to the web, so that people could record and share activities. To get this up and running we researched different techniques using technologies we had experience with. So let me share some of those with you.

Note: This article was also published by Joris at Medium

Approach 1: Chrome App (note: these are deprecated!)

Chrome apps provide you with a wrapper exposing some APIs which are normally not available to web applications.

In this case we are focusing on the serial API, which has functions such as getDevices and connect (as you would expect). The trick, however, is to attach it to a webpage without having to pack your entire interface. You can do this by including your page inside of a webview and stretching it to fill up your app.

Now you're able to send messages to your webview using postMessage, which you can listen for from your web application with window.addEventListener('message', callback).

Approach 2: WebUSB

This technology would be awesome to use for your connections. Check out the WebUSB RFC as well as the Arduino example code. It allows you to connect to a device directly through APIs available in Chrome. Chrome will show a pop-up asking for access to a certain device; after you grant it, you get a callback in the form of a promise with the device you can connect to. To play around with this, you have to switch some flags in Chrome, after which you can get it to work.

Unfortunately it is still a bit too early to get this up and running for serial devices (on Linux, at least). Your serial drivers will probably get first call on claiming the interface, after which your browser won't have any option to claim it. But even if you get the connection up and running, at the time of writing you have to write your own serial driver.

Besides this, note that your existing device has to be registered with a currently non-existent public repository of allowed hosts. If you are developing a new device, you can include a protocol that explicitly broadcasts allowed hosts. Read more about this in the security section of the RFC.

Approach 3: Companion app

This requires your end-user to install a little app guiding them through the connection process, after which it can expose the connection through, for example, a socket.

Since we are experienced web developers, an app built with web technologies is easiest for us to maintain in the future. So we opted for an Electron app, which allows us to use the node-serial library. You can then expose a socket on a localhost port of your choosing using, for example, Socket.IO. Since sockets are not as tied down by CORS rules in the browser, you can check whether the companion application is running and implement your own protocol for talking to it.

Note that you have to think about what you expose, also from a security standpoint. We chose a custom protocol that let us send bytes directly to the connected device, while the companion app verified that we were indeed talking to the device the application was designed for.

They say hardware is hard, and connecting to it is no walk in the park either. It is useful to read up on protocols and strategies before committing to a technology.

Written by: Joris Blaak
Niek Haarman
08 Nov 2016

A dive into Async-Await on Android

Categories: Android, Kotlin, Android-App-Development

In a previous article I provided a glimpse into the world of async-await on Android. Now it's time to dive a little deeper into this upcoming functionality in Kotlin 1.1.

Note: This article was also published by Niek at Medium

What is async-await for?

When dealing with long-running operations like network calls or database transactions, you need to make sure you schedule this work on a background thread. If you forget to do this, you may end up blocking the UI thread until the task is finished. During that time, the user cannot interact with your application.

Unfortunately, when you schedule a new task in the background, you cannot use its result directly. Instead, you will have to use some sort of callback. When that callback is invoked with the result of the operation, you can continue with what you wanted to do, for example running another network request.

This easily devolves into what people call a 'callback hell': multiple nested callbacks, all waiting to be invoked when some long-running task has finished.

    fun retrieveIssues() {
        githubApi.retrieveUser() { user ->
            githubApi.repositoriesFor(user) { repositories ->
                githubApi.issueFor(repositories.first()) { issues ->
                    textView.text = "You have issues!"
                }
            }
        }
    }

This snippet of code does three network requests, and finally posts a message to the main thread to update the text of some TextView.

Fixing this with async-await

With async-await, you can program that same function in a more imperative way. Instead of passing a callback to the function, you can call a suspension function await which lets you use the result of the task in a way that resembles normal synchronous code:

    fun retrieveIssues() = asyncUI {
        val user = await(githubApi.retrieveUser())
        val repositories = await(githubApi.repositoriesFor(user))
        val issues = await(githubApi.issueFor(repositories.first()))
        textView.text = "You have issues!"
    }

This snippet of code still does three network requests and updates a TextView on the main thread, and still does not block the UI!

Wait.. what?!

If you use AsyncAwait-Android, you’re given a couple of functions. Two of them are async and await.

The async function enables the use of await and changes the way method results are handled. When entering the function, each line of code is executed synchronously until a suspension point is reached. In this case, this is a call to await. That is all async does! It does not move any code to a background thread.

The await function enables things to get asynchronous. It receives an ‘awaitable’ as a parameter, where ‘awaitable’ is some asynchronous operation. When await is called, it registers with the awaitable to be notified when the operation has finished, and returns from the asyncUI method. When the awaitable has completed, it will execute the remainder of the method, passing the resulting value to it.

The magic

This all seems magic, but there’s no real magic involved. Instead, the Kotlin compiler transforms the coroutine (that’s what the function passed to async is called) into a state machine. Each state represents a piece of code from the coroutine. A suspension point (the call to await) denotes the end of a state. When an awaited task has finished, the next state is invoked, and so on.

If we take a simpler version of our code snippet before, we can see what states are created. Remember that each call to await denotes a suspension point:

   fun retrieveIssues() = async {
       println("Retrieving user")
       val user = await(githubApi.retrieveUser())
       println("$user retrieved")
       val repositories = await(githubApi.repositoriesFor(user))
       println("${repositories.size} repositories")
   }

For this coroutine there are three states:

  • Initial state, before any suspension point
  • After the first suspension point ( await(githubApi.retrieveUser()) )
  • After the second suspension point ( await(githubApi.repo...) )

This code is compiled to the following state machine (pseudo-byte code):

    class <anonymous_for_state_machine> {
        // The current state of the machine
        int label = 0

        // Local variables for the coroutine
        User user = null
        List<Repository> repositories = null

        void resume (Object data) {
            if (label == 0) goto L0
            if (label == 1) goto L1
            if (label == 2) goto L2

          L0:
            println("Retrieving user")
            // Prepare for await call
            label = 1
            await(githubApi.retrieveUser(), this)
               // 'this' is passed as a continuation
            return

          L1:
            user = (User) data
            println("$user retrieved")
            label = 2
            await(githubApi.repositoriesFor(user), this)
            return

          L2:
            repositories = (List<Repository>) data
            println("${repositories.size} repositories")
            label = -1
            return
        }
    }

When entering the state machine, label == 0 and the first block of code is executed. When an await is reached, the label is updated, and the state machine is passed to the await call. Execution returns from the resume method at this point.

When the task passed to await has finished, await invokes resume(data) on the state machine, and the next piece of code is executed. This is continued until the last state is reached.

Exception handling

If an awaitable terminates with an exception, the state machine is notified of that exception. In fact, the resume method actually takes in an extra Throwable parameter. Each time a new state is executed, it first checks if the Throwable isn’t null. If it isn’t, it is thrown.

This way, you can use a regular try / catch clause in your coroutine:

    fun foo() = async {
        try {
            await(someTask()) // an awaitable that may throw
        } catch(t: Throwable) {
            // handle it as you would in synchronous code
        }
    }


await does not ensure that the awaitable is run on a background thread. Instead, it merely registers a listener with the awaitable to be notified when it has finished. It is the task of the awaitable to make sure computation happens on a proper thread. For example, you may pass a retrofit.Call<T> to await. At that point, enqueue() is invoked on the parameter and a callback is registered. Retrofit makes sure that the network call is made on a background thread:

     suspend fun <R> await(
         call: Call<R>,
         machine: Continuation<Response<R>>
     ) {
         call.enqueue(
             { response -> machine.resume(response) },
             { throwable -> machine.resumeWithException(throwable) }
         )
     }

For convenience, there is one version of await that does move its task to a background thread. This takes in a function () -> R, which is scheduled on a background thread:

    fun foo() = async<String> {
        await { "Hello, world!" }
    }

async, async<T> and asyncUI

There are three flavors of async :

  • async : does not return anything (like Unit or void)
  • async<T> : returns a value of type T
  • asyncUI : does not return anything

When using async<T>, you need to return a value of type T in the coroutine. async<T> itself returns a Task<T> which itself is, as you might have guessed, an awaitable. This way you can await on other async functions:

    fun foo() = async {
       val text = await(bar())
    }

    fun bar() = async<String> {
       "Hello world!"
    }

Furthermore, asyncUI ensures that the continuation (i.e. the next state) is called on the main thread. If you use async or async<T>, the continuation will be called on the thread on which the callback was invoked:

    fun foo() = async {
        // Runs on calling thread
        await(someIoTask()) // someIoTask() runs on an io thread
        // Continues on the io thread
    }

    fun bar() = asyncUI {
        // Runs on main thread
        await(someIoTask()) // someIoTask() runs on an io thread
        // Continues on the main thread
    }

Wrapping up

As you can see, coroutines provide great possibilities and may increase the readability of your code if used right. Coroutines are currently available in Kotlin 1.1-M02, and the async-await functionality described in this article can be found in my library on GitHub.

This article is inspired by Async and Await by Stephen Cleary and the informal design description of coroutines. I highly recommend reading these if you want to know more. If you think you've read enough for the day, you can also watch this talk by Andrey Breslav.

Written by: Niek Haarman
Alexander van Brakel
28 Sep 2015

User Tests: Improving Fairlingo Through Validated Learning

Categories: User-testing, Validated-learning, Quality-assurance

Hey everyone, today I’m back to share a little about the importance of user testing, and how we’re using it to improve Fairlingo! Additionally, a little bit about validated learning, and its place in the development of a successful product. Let’s dig in!

User Testing: How important is it, really?

It is hard to overstate the importance of testing any product before releasing it. At Label305, we write automated tests, ensuring the quality of our code. Before any change to the code is released to production, tests are automatically run so we know everything still works as it should. If not, we’ll know it and fix it. If the tests pass, it can be released to production.

But there is one thing such tests cannot check: whether someone actually understands the user interface and enjoys using it. Computer says yes, user says no. Even a very pleasant-looking interface could turn out to be frustrating to use, delivering a big hit to the product's adoption. Not something you want to happen to your expertly developed new product as it enters the marketplace for the first time. So, to ensure the quality of the interface, and indeed the product, you need to test it with real people. There is really no substitute for getting real people in a room, having them use the product, and seeing what happens.

This is exactly what is currently happening at Fairlingo, an innovative online translation platform under development here at Label305. At the time of this writing, Fairlingo has been tested by real translators for a few weeks. So far this has surfaced issues that would otherwise have gone unnoticed until the public release. Most of these were small and seemed obvious in hindsight, but lots of small things can greatly detract from the overall experience.

What this has to do with validated learning

Validated learning is a term used in the lean startup scene, originating in the book The Lean Startup. But what it means applies more universally and can be summarized in the following steps:

  1. Make something with specific goals in mind, goals that can be tested by valid metrics (set goals, choose metrics, and create)
  2. Get your product into the hands of some of your intended audience and let them use your product, and measure whether your product reaches the goals you have set for it (test, measure, and learn)
  3. If not, use your metrics as input for improvement for step 1 (evaluate and validate)
  4. If so, your product is viable for release!

So, in an even smaller nutshell: you pre-test your product before releasing it, in such a way that you can reliably assess whether the product needs more work before release, or whether it’s even worthy of extended development. For software products, user tests are an important part of this process. After all, if your product passes muster with a sample of your intended audience, you can rightfully be more confident that it can be released to the public.

Fewer relevant tests == Greater real risks

Not testing a product before release is a very risky move, and sadly sometimes even a fatal one. After all, without user testing any existing issues will be found AFTER release, and may thus frustrate a far larger group of people. And these people, unlike a test group, are not motivated to be patient with your product. Frustrating a large group of users early on can mean the death of your product. You don’t get a second chance to make a first impression.

Testing on the cheap(er): Put your product in beta

At Label305 we meet a lot of startups with great ideas, but none have money to burn. To save money in the short run, they often cut user testing out of the budget. However, user testing does not need to be expensive. It can even be free! An easy way to limit the potential problems of an untested product is to go for a release with restricted access: simply put, a beta stage where really interested users get early access and try it out before the public release. Depending on the product it can be very easy to find some early adopters eager to try your product. If not, all it takes is some extra elbow grease to drum up a small group of people willing to test, and off you go. Even if it’s just one person from the outside having a go with the product, any issues he or she finds can be fixed before a larger release. And that will save you money and many headaches in the long run.

Written by: Alexander van Brakel
Thijs Scheepers
17 Feb 2015

Code coverage for the Laravel Behat Extension

Categories: Laravel, Mink, Behat, coverage

The Laravel Behat Extension is a great and minimal way to test your Laravel 5 application using Behat 3. It doesn't require an external server and does not connect through a driver like Goutte. Besides being faster and more minimal, this approach brings another benefit: because the Laravel Behat Extension loads the application container in the same process as the Behat tests, we can measure our code coverage.

Laravel Behat Extension

Jeffrey Way from Laracasts has developed a simple but powerful extension for Behat 3 to execute Behat tests on a Laravel 5 app without the use of an external server.

He has made some tutorial videos on the extension, and I recommend checking them out. They are hosted on Laracasts, which is a paid service but definitely worth it.

Code Coverage

Once you have set up your Laravel 5 application with the Laravel Behat Extension you can include our simple CodeCoverage trait. Place it in your features/bootstrap directory and add use CodeCoverage; to your FeatureContext. Make sure the directory whitelist and blacklist are set correctly.

trait CodeCoverage {

    /**
     * @var PHP_CodeCoverage
     */
    protected static $coverage;

    /**
     * @BeforeSuite
     */
    public static function setupCoverage()
    {
        if (self::isCodeCoverageEnabled()) {
            $filter = new PHP_CodeCoverage_Filter();
            $filter->addDirectoryToBlacklist(__DIR__ . '/../../vendor');
            $filter->addDirectoryToWhitelist(__DIR__ . '/../../app');

            self::$coverage = new PHP_CodeCoverage(null, $filter);

            self::$coverage->start('Behat Test');
        }
    }

    /**
     * @AfterSuite
     */
    public static function writeCoverageFiles()
    {
        if (self::isCodeCoverageEnabled()) {
            self::$coverage->stop();

            // Use Clover to analyze your coverage with a tool
            if (getenv('CODE_COVERAGE_CLOVER')) {
                $writer = new PHP_CodeCoverage_Report_Clover;
                $writer->process(
                    self::$coverage,
                    __DIR__ . '/../../clover-behat.xml'
                );
            }

            // Use HTML to take a look at your test coverage locally
            if (getenv('CODE_COVERAGE_HTML')) {
                $writer = new PHP_CodeCoverage_Report_HTML;
                $writer->process(
                    self::$coverage,
                    __DIR__ . '/../../coverage-behat'
                );
            }

            // use the text report to append the coverage results
            // to stdout at the end of the test run
            // so you can view them from the command line
            $writer = new PHP_CodeCoverage_Report_Text(75, 90, false, false);
            fwrite(STDOUT, $writer->process(self::$coverage, true));
        }
    }

    private static function isCodeCoverageEnabled()
    {
        return getenv('CODE_COVERAGE') ||
            getenv('CODE_COVERAGE_HTML') ||
            getenv('CODE_COVERAGE_CLOVER');
    }
}

View the file as a gist.

The trait uses PHPUnit’s code coverage library, which in turn requires Xdebug. Add the following lines to your composer.json to include the code coverage library.

    "require-dev": {
        "phpunit/php-code-coverage": "~2.0"
    }

Scanning for code coverage makes your tests run much slower, and you don’t want to run code coverage every time you run Behat. That is why the trait checks for certain environment variables. To run Behat with code coverage use: $ CODE_COVERAGE=on ./vendor/bin/behat. Or, if you want a generated HTML page, use: $ CODE_COVERAGE_HTML=on ./vendor/bin/behat.

Integrate with CI

At Label305 we have integrated this code coverage approach into a private project which runs CI builds on Travis, and we do not use any kind of special code coverage service. We read the results right from the logs.

However, you could also use a service like Coveralls. If you want to integrate with Coveralls you will need a Clover document. Run $ CODE_COVERAGE_CLOVER=on ./vendor/bin/behat to run the Behat tests and generate the clover-behat.xml document, which you can send to Coveralls or a similar service.

Written by: Thijs Scheepers
Thijs Scheepers
23 Jan 2015

Handle authenticated users when using Behat and Mink with Laravel 4

Categories: Laravel, Mink, Behat, Session

Behat is a great tool for testing your Laravel application. At Label305 we use Behat and Mink extensively to test several applications. But when using Mink drivers like Selenium to test your application, it can take ages for all tests to finish. With the simple trick described below you can speed up your tests by creating a logged-in state before the browser/client is fired up.

Preparing Application State

When executing Behat it is smart to create application state while evaluating the Given ... statements without using the browser, for example by filling the database directly. This way you prevent testing the same things in the browser twice.

When we just started out with Behat we created application state for our new scenarios by calling parts of previously implemented scenarios. So if a scenario required a registered user, the test would actually go to the registration form, fill it out and register the user. This adds up quickly when you have 100+ scenarios and most of them require a registered user.

  Scenario: Some scenario
    Given there is a user with email ""
    And I am logged in as ""
    When I do something
    Then I should see something
    But I should not see something else

Database State

Implementing a step like Given there is a user with email "" by populating the database is not that difficult, of course. Just execute a couple of database queries.

class FeatureContext extends MinkContext {

    /**
     * @Given /^there is a user with email "([^"]*)"$/
     */
    public function thereIsAUserWithEmail($email)
    {
        User::where('email', $email)->forceDelete();

        User::create(array(
            'email' => $email,
            'password' => Hash::make('password'),
        ));
    }
}

When using Eloquent models to query the database you need to have a booted application container. You can make sure this has happened by adding the following method to your FeatureContext.

class FeatureContext extends MinkContext {

    /**
     * @static
     * @BeforeSuite
     */
    public static function bootstrapLaravel()
    {
        putenv('APP_ENV=behat'); // So this application has its own environment
        $app = require_once __DIR__ . '/../../../bootstrap/start.php';
        $app->boot();
    }
}


It is important that the application running from Behat is using the right environment. Make sure detectEnvironment() is set up to give the application a separate environment. I’ve used putenv() and getenv() in this example.
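In Laravel 4 that could look roughly as follows in bootstrap/start.php; this is only a sketch, and the closure body and 'local' fallback are illustrative rather than a prescribed setup.

```php
// bootstrap/start.php (sketch): let the APP_ENV variable, set via
// putenv() in the Behat bootstrap, override normal environment detection.
$env = $app->detectEnvironment(function () {
    return getenv('APP_ENV') ?: 'local';
});
```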

Session State

Now we will look at the step Given I am logged in as "". When you are using Behat without an external server this is as simple as calling Auth::login($user);. You could use this approach when using the Behat Laravel Extension. The major disadvantage is that you can’t use JavaScript.

In our case, the Behat process is separated from the server because we wanted to use Selenium. However, we want the user to be logged in without having to fill out the login form for every scenario (which takes time, especially when this is done 100+ times). How to accomplish this? By using something that looks like session hijacking, but without the hacking part.

Authentication state is stored in the session, and a user specifies which session he or she is using by providing a cookie. We wanted to create a session in the Behat process and let Selenium use that session to execute its steps. So we gave Selenium a cookie, omnomnom.

class FeatureContext extends MinkContext {

    /**
     * @Given /^I am logged in as "([^"]*)"$/
     */
    public function iAmLoggedInAs($email)
    {
        // Destroy the previous session
        if (Session::isStarted()) {
            Session::regenerate();
        } else {
            Session::start();
        }

        // Login the user and since the driver and this code now
        // share a session this will also login the driver session
        $user = User::where('email', $email)->firstOrFail();
        Auth::login($user);

        // Save the session data to disk or to memcache
        Session::save();

        // Hack for Selenium
        // Before setting a cookie the browser needs to be launched
        if ($this->getSession()->getDriver() instanceof \Behat\Mink\Driver\Selenium2Driver) {
            $this->visit('/');
        }

        // Get the session identifier for the cookie
        $encryptedSessionId = Crypt::encrypt(Session::getId());
        $cookieName = Session::getName();

        // Set the cookie
        $minkSession = $this->getSession();
        $minkSession->setCookie($cookieName, $encryptedSessionId);
    }
}


This does not work out of the box, however. You will have to make sure that session data can be shared between the Behat process and the server process. So you can’t use the array session driver. Using the file driver should be fine; make sure both the environment of the server and that of Behat are using that setting.
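For example, a behat-specific session config could look something like this. It is only a sketch: the config path and storage location are illustrative, not the exact files from our project.

```php
<?php
// app/config/behat/session.php (sketch): force a session driver whose
// data both the server and the Behat process can read.
return array(
    'driver' => 'file',
    'files'  => storage_path().'/sessions',
);
```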

Now for the tricky part. The Illuminate\Session\SessionServiceProvider has a method setupDefaultDriver() which makes sure the application always uses the array session driver when executing the application from the command line. So we need to extend Illuminate\Session\SessionServiceProvider, create our own service provider and swap them out in config/app.php. But in order to keep things clean, make sure this only happens in the behat environment (set up above in the bootstrapLaravel() method).

<?php namespace YourApp\Providers;

use Illuminate\Session\SessionServiceProvider;

class BehatSessionServiceProvider extends SessionServiceProvider {

    protected function setupDefaultDriver()
    {
        // Do nothing
        // Allows command line execution to save sessions
    }
}


So now you’re all set up to start using Selenium, Goutte or the other decoupled Mink drivers while keeping your tests running relatively fast.

Written by: Thijs Scheepers
Alexander van Brakel
02 Dec 2014

Gamification in action

Categories: Gamification, Concept development

Here at Label305 we’re always on the look-out for innovative ways to create a compelling digital experience, no matter the product. With projects ranging from designing apps for tourist hotspots, to developing digital tools that track and improve sports performance, we’re always facing the exciting challenge of how to make things more effective, and more engaging. And, if it’s not too much trouble, more fun, too ;).

Tl;dr: Gamification has tremendous potential for making countless tasks more engaging and fun, but it has to be done right or not at all.

The case of Fairlingo

When Sam van Gentevoort of Vertaalbureau Perfect (a fast-growing translation service) came to us with an idea for an online platform that crowdsources translations, we got really excited. This would be a platform that gives everyone the opportunity to turn their translation skills into cash, from stay-at-home moms to exceedingly qualified professional translators. An idea innovative in its own right, yet we couldn’t help but want more. How could we make Fairlingo as much fun as possible, while ensuring translation quality? This is where we dove into the fascinating world of gamification. Indispensable in helping me find the answers were Jesse Schell’s excellent book The Art of Game Design: A Book of Lenses, and Kevin Werbach and Dan Hunter’s For the Win: How Game Thinking Can Revolutionize Your Business. These books opened a new world to me, and I quickly got acquainted with a vital and necessary change of perspective.

Not users, but players

The first thing that has to be done when considering gamification is throwing out the term ‘user’, and bringing in the term ‘player’. This seems trivial, but I assure you, it’s not. Below is a short comparison of both in terms of the cause of use or play, what they want out of the product or game, and what they need for the experience to be what they’re looking for.


An illustration of a user
  • Motivation: Perform tasks to reach a goal
  • Wants: As few obstacles as possible. Tools that simplify tasks
  • Needs: Most effortless path to goal


An illustration of a player
  • Motivation: Be entertained, to have fun
  • Wants: Enjoyment, rewarding experience, obstacles to overcome
  • Needs: Challenges, positive feedback

The term ‘user’ triggers a certain kind of thinking. When considering the term ‘user’, thoughts like ‘the highest priority is fun’ or ‘exciting challenges should be built in’ are not the first things that come to mind. However, when you think of a ‘player’, chances are those thoughts come running to the front of the line.

You also need to know what makes something a game. How is it fundamentally different from, say, work, a job? Essentially it’s the element of ‘voluntariness’. A player cannot be forced to play in the true sense of the word. You can drag someone into a game and make them go along, but then they’re not really players. Players choose to go along with a game because it’s fun. They will keep coming back if the game is also intrinsically rewarding to them. Keeping in mind that you need more than just fun is important, or you will find high adoption of your game paired with equally high drop-out rates. Ensuring the experience is intrinsically rewarding is the key to keeping your players coming back for more.

Points, Penalties, and balancing

After getting these fundamentals down it was time to move on to the real stuff. The people on the platform will translate texts and check each other’s work for mistakes. In doing so they make money, but we made it more fun by also introducing a points system. The points system represents a player’s skill and experience. Points are given for doing good work, even more for excellent work, and the fewer mistakes you make the more points you get to keep. The more points you get, the closer you are to the next level, where the player will get access to more difficult and better-paying texts, as well as be offered more work.

But how many points should you get, and when? How many points should a player lose for misspelling a word, or missing a deadline? This was pretty tricky to balance. From an experience point of view, a simple rule of gamification is that a player should feel like a winner most of the time. So, most events should be rewarding to the player. Obviously we’d have to give the players points most of the time and take them away fewer times. Clear enough, but that still leaves the balancing problem. After all, from a business perspective you can’t make players feel like winners without taking their performance into account.

Last but not least was the question of the number of points we’d give. Should a player get 100 or 1,000 points for translating an average paragraph of text? This is important, because size matters, at the very least from the perspective of perception. Small amounts are, well, small, but easy for our brains to keep track of. Large amounts seem impressive, but are more difficult to work with, and can make a single point seem rather worthless.

The solution

An example of a points scale based on percentages

I decided to sidestep the ‘size’ problem by drawing up a points scale based on percentages. This made it a lot easier, as I could just ask myself what the single best thing was a player could do. That event would represent the highest amount of points: 100%. Similarly, the single worst thing a player could do would represent the highest penalty. Then I just placed all other points-related events along the scale, which resulted in a balanced distribution of points in a very visual way.

You’ve probably noticed the big gap in the middle. This exists to emphasize the bonuses given for exemplary skill. Players will not try to do things flawlessly if it’s easier to get the same amount of points by doing something a lot less difficult twice.
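To make the percentage idea concrete, here is a tiny, purely illustrative sketch. The event names, percentages and maximum are invented for this example and are not Fairlingo's real values:

```php
<?php
// Anchor the scale at the single best (+100%) and single worst (-100%)
// events; every other event is expressed as a share of those anchors.
function pointsFor($share, $maxPoints)
{
    return (int) round($share * $maxPoints);
}

$maxPoints = 1000; // invented maximum: the single best event is worth 1000

$events = array(
    'flawless_translation' =>  1.00, // best possible event
    'paragraph_translated' =>  0.30,
    'minor_spelling_error' => -0.05,
    'missed_deadline'      => -1.00, // worst possible event
);

foreach ($events as $event => $share) {
    printf("%s: %+d points\n", $event, pointsFor($share, $maxPoints));
}
```

Because everything is expressed relative to the same anchors, changing $maxPoints rescales every reward and penalty at once without disturbing the balance.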


Applying gamification is more complex than you might expect at first, but do it right and you could see people lining up to do something in the context of a game that they would normally stay far away from. Not everything lends itself to gamification, however. But whenever it does, it holds the potential for the most rewarding user experience possible.

Written by: Alexander van Brakel
Thijs Scheepers
26 Nov 2014

Setup logs and monitoring for Laravel applications on Heroku

Categories: Laravel, Heroku, Papertrail, Bugsnag, NewRelic

Whether you are bringing a Laravel application into production or are still in active development, it is always good to have logging services in place so you can see what's going on and where problems lie. Furthermore, it is important that you are notified when things go sideways in production applications. That's why we need monitoring and alerting. We have experimented with Heroku in the past couple of weeks, so we have put together a small guide to setting up these services on Heroku. I was amazed how easy it all was.

Heroku Logging for Laravel

Setting up basic logging is really easy, once you know how. Heroku picks up everything that is written to stdout and puts it in its logs, which you can access by running heroku logs. To make your Laravel app write all logs to stdout you need to modify app/start/global.php.

/*
|--------------------------------------------------------------------------
| Application Error Logger
|--------------------------------------------------------------------------
*/

Log::useFiles('php://stdout');
/* Log::useFiles(storage_path().'/logs/laravel.log'); // remove this line */

These lines use Monolog, which is already included in Laravel, to write everything to stdout. You can remove the line that writes the log to logs/laravel.log.

Setting up Papertrail

If you want to do more with your logs you can forward them to a service like Papertrail. With Papertrail you can set alerts for when something bad happens. To set up Papertrail run: heroku addons:add papertrail. Yeah, that’s right, a single command. And since Laravel already logs to stdout these logs will now also appear in Papertrail. You can access Papertrail from the Heroku Dashboard.

Setting up Bugsnag

With Papertrail you can now search your raw logs, but we also want to group errors, see how many times a particular error occurred and view nice Laravel stack traces. If you have used services like Crashlytics for crash log analysis on mobile apps, you know how valuable this can be. Bugsnag is basically Crashlytics for the web. It gives you a nice overview of all unresolved internal server errors and uncaught exceptions. To set up Bugsnag you need to make small modifications to your Laravel application.

$ heroku addons:add bugsnag
$ composer require "bugsnag/bugsnag-laravel:1.*"
$ php artisan config:publish bugsnag/bugsnag-laravel
# afterwards commit the changes and push them up to heroku

Modify app/config/app.php and add the Bugsnag service provider and facade. If you take a look at the code of BugsnagLaravelServiceProvider you will see that it makes sure all exceptions are handled correctly and sent to Bugsnag.

return array(
    // ...
    'providers' => array(
        // ...
        'Bugsnag\BugsnagLaravel\BugsnagLaravelServiceProvider',
    ),
    'aliases' => array(
        // ...
        'Bugsnag' => 'Bugsnag\BugsnagLaravel\BugsnagFacade',
    ),
);

Modify app/config/packages/bugsnag/bugsnag-laravel/config.php to set the API key. The heroku addons:add command automatically added the BUGSNAG_API_KEY environment variable.

return array(
    'api_key' => getenv('BUGSNAG_API_KEY'),
);

When you have several larger applications you could use Bugsnag’s default plan instead of the Heroku Addon, it looks like that is more affordable. When you switch you only need to change the value of the BUGSNAG_API_KEY environment variable.

Setting up NewRelic (with Nginx)

With NewRelic you can actively monitor your application's performance when it is in production. See which requests are slow and which requests trigger the most database queries. In addition, NewRelic is also a great tool for basic profiling while your application is in development. You should take a look at their transaction traces feature. NewRelic has even optimized their software for Laravel, so we can view these transaction traces per route.

Setting up NewRelic is again as easy as executing a single command: heroku addons:add newrelic. You can also set up alerting with NewRelic from the Heroku Dashboard.

When we started using NewRelic we noticed that the transaction traces per route only worked if we used Nginx and PHP-FPM, it didn’t work with Apache. To use Nginx modify your Procfile.

web: vendor/bin/heroku-php-nginx -C ops/heroku/nginx.conf public

The -C argument points to the nginx.conf file we will create in a minute. The last argument points to the location Nginx should use as root of your application, in the case of a Laravel application this is the public folder.

Next, create an ops/heroku/nginx.conf file.

try_files $uri $uri/ @rewrite;

location @rewrite {
    rewrite ^/(.*)$ /index.php?_url=/$1;
}

The try_files line is the important one: it first checks whether a static file exists, and if not, it passes the request on to PHP.


If you use a central chat system like HipChat or Slack you can hook up Papertrail, Bugsnag and NewRelic to send alerts to your chatrooms. This is what we’ve done. It's a great way to keep your e-mail inbox clean and keep all notifications for a specific project grouped.

All these services are free to use when you have a small application with small logs, so they are free to use during development. We have no large production applications running on Heroku yet, but going by the pricing tables the costs could go up quickly. With setup being this easy, you should take a look at the information these services give you and decide how valuable that information is to you.

Written by: Thijs Scheepers
Olav Peuscher
01 Oct 2014

Our new website

Categories: Design, Webdevelopment, UI

You may have noticed that we have a brand new website. After having our old website for a year and a half, it was time to rebuild it completely. This process took longer than expected, since the website is more than just a few pages. It represents us, explains the way we work and showcases the projects we're proud of. The way we work has changed a lot over the past year and a half. With the new website we needed to put our new process on paper and make it concrete, and that was tougher than expected.

Tl;dr: We built and designed our new website with Jekyll, kept the feel of our old website and described our new work process.

A new look with a familiar feeling

Building a website for your own shop brings a lot of questions to mind. Not only design-related questions, but also more business-related questions. Building a new website is fun, but you are aware of the importance of the website. Within Label305, the function of the website was very clear: “Show potential clients, potential employees and peer companies what we do, what we are capable of and who we are, while maintaining the feel and recognition of our old website.”

Flat design with detailed icons

In my opinion, the elements that ‘made’ the old site were the grungy illustrations and details, the font Intro, and our blazin’ blue and roaring red colors. My brother Xander and I both wanted to abandon the grungy thing from the old website, while maintaining its feel. What we mean by “the feel of our old website” is the combination of a certain degree of flat design and more cartoon-like illustrations.

I added the yellow color to our color palette and transformed the round borders under the old website’s sections into sleek repeated triangles and rotated squares. The “Intro Book Alt” font was also added to the Label305 style to give us the opportunity to vary more with the fonts. I shot some pictures of our office and projects, something that was totally missing on the old website. However, there was still something missing in the design and overall style: the recognizable Label305ness. Alex van Brakel and I started to make vector icons for every step in our process and for our services. These icons became some kind of “small art works”: icons with a lot of detail.

The combination of screen-filling images, sleek borders, cool colors and detailed icons made the overall style feel very familiar, the style I describe as “Label305’s style”, all while having a design that does not look like the old website at all.


The development of our website was done the same way we work with clients. We started with a session with four people and determined the goals and the raw concept of the website. Wireframes were made and a lot of ideas were captured. Based on this session, a backlog with to-do’s was filled. The new website was made using Jekyll, which is great for static websites, and Sass CSS.

During development, every built feature was reviewed in pull requests and merged into the development branch. I focused on building parts that are usable, consistent and can be improved in the near future, while already being useful right now. This way I was able to get into a creative workflow without focusing on one single element of the website. Photoshop was not touched during the process and every step was an improvement, which is very satisfying.


The old website looked kinda cool, but when it came to animations it was a bit static. There was one CSS slider with images, and the other parts of the website were just present. Animations are cool and can be an addition to your website, but a website is not a showcase of all the animations you can make. Keeping this in mind, I focused on combining animations that have the same direction, creating unity. For example, the header animation combined with the parallax titles: both vertical. All animations on the website move vertically upwards, focusing on scrolling further to the bottom.

Responsive web design

Our website’s styling is optimized for both tablet and phone viewports, which was a real challenge. Bootstrap helped a lot, but I really needed custom CSS to create specific styles for specific screen sizes. It took some effort, but it is so cool when your website looks good at all sizes. The site now looks different on laptop, tablet and phone, but the readability is always good.


The new Label305 website represents who we are and what we do a lot better. The implementation is also much more open to further additions and changes, since there is a lot more consistency in the design. Jekyll is a cool tool for static websites. And as a front-end developer, I really learned a lot during the development of this website.

By the way, the source code of the whole website can be found on GitHub.

Written by: Olav Peuscher
Thijs Scheepers
03 Sep 2014

The switch, from CakePHP to Laravel

Categories: Laravel, CakePHP, Framework, PHP

CakePHP is a good framework, and we have experienced that it works well with modern development practices like test-driven development, dependency management, cloud deployment, migrations, caching and rapid development. Cake has been our main framework for more than 3 years now and it has been awesome. We have learned a lot, but it is time to move on.

Tl;dr: Laravel will be our main web development framework for the next year or so. We are going to switch to Laravel and we have a couple of reasons why.

  1. Laravel enables true “Rails-like” development for PHP and goes beyond just that.
  2. Laracasts is the most awesome learning resource out there, right now!
  3. We believe Laravel has a bright future.

After going over our considerations I will discuss the consequences of the switch and how it will affect the development of our rapid back-end creation library Auja.

Beyond true “Rails-like” development

CakePHP was of course heavily inspired by Rails. Back in the day, things like a command-line utility and migrations were fairly new to the PHP community, and they seemed awesome. However, while we tried to use these features we often ran into problems.

  • The structure of migrations was difficult to understand.
  • I18N translations were a pain.
  • Executing commands in the command-line on Windows wasn’t always easy.
  • Difficult Travis CI configuration.
  • We’ve tried using CakePHP with Composer but it wasn’t always trivial to load plug-ins or the framework itself.

Because of these problems we regrettably often didn’t use features like migrations and Composer, mainly because a deadline was always right around the corner. It left us with a bad feeling that the code architecture could have been better if we had just had the time to tackle these problems.

We played around with Rails two years ago and noticed it didn’t have these kinds of problems. Dependency management was particularly nice with Bundler and gems. However, Rails had a whole other class of issues. Rails installation took literally more than an hour on computers that were not that old. And deployment was not trivial.

When we started experimenting with Laravel our mouths fell open. Laravel was basically all the goodness of Rails and PHP combined—no deployment pains, a great extendable command-line utility, very good dependency management integration, painless Travis CI configuration and great migrations.

Laracasts

Rails has CodeSchool and RailsCasts; CakePHP never had a learning resource with that level of polish. So we learned the framework by reading the documentation. I know some of you may disagree, but the truth is: “documentation is boring” and nobody is going to read all relevant pages before starting a project. Furthermore, documentation does not explain why things are the way they are and does not often lead to a deeper understanding.

Laracasts was a real eye-opener. It showed everyone who was unfamiliar with Laravel all the relevant information to get a project going immediately. Our designers now know their way around Laravel, thanks to Jeffrey Way. And this is not because we’ve forced them to learn this stuff. No, they actually find it very interesting.

Our developers learned a lot of new things too. Sometimes it is very difficult to explain why a developer should test their application or use a certain pattern. Laracasts is great at showing why things are the way they are, giving developers new insights.

Bright future

Laravel is booming, especially in the Netherlands. We have seen how easy it is and are confident other developers will pick it up. If you search on Google Trends for Laravel and CakePHP you can see that Laravel has surpassed CakePHP as a popular search term. The Laravel framework is focused on the future and backwards compatibility isn’t an issue—yet. Therefore it can get the most out of all the new PHP language features as well as support HHVM.

Auja and the future of our Cake libraries

Earlier this year we decided that Label305 will be a company that embraces open source. We even made it one of our core values. We wanted to open source our interface library Auja, with which you can create a back-end for a web application extremely fast. But our first core value has to do with ease of use, and using the library frankly wasn’t that easy for the developer. The library was written for our very specific way of working and was therefore useless for most projects out there. Now that we are making the switch to Laravel we are confronted with that very same problem ourselves: the CakePHP library isn’t useful for a Laravel project.

That is why we’ve started working on Auja v3.0. The new version will have separate JavaScript, PHP and Laravel components and a new protocol so it will be easy to write support for another framework should that be needed in the future. We are planning to open source it. More on Auja v3.0 in a month or two.


We are open to new technologies and always want to try different kinds. So we gladly work on your PHP, Node.js, Python and Ruby projects. We will of course still support all CakePHP projects and continue to follow the framework. But for new web applications with a relatively standard architecture we will use Laravel.

Written by: Thijs Scheepers