The multi-armed bandit test is one of the primary methods that growth teams deploy when they set up an experiment. Below, I will explain this complex method in simple terms and describe some inspiring case studies.

Growth teams have several experimentation methods in their toolkits. One of the more complex “flavors” of testing is the multi-armed bandit. Some readers of this blog (shout-out to Christina and Henk) told me that it is hard to find an easy-to-understand explanation of this method. This article should fill that gap.


Some Background: A/B Testing

Before I delve into the nitty-gritty of multi-armed bandit testing, I will take a short detour. Because multi-armed bandit tests are more complex versions of A/B tests, you first need a solid understanding of what A/B testing is.

Also known as a “split-run test,” an A/B test is a controlled experiment in which two variants of a single variable are compared by measuring subjects’ responses to variant A against variant B. This determines which of the two variants is more effective.
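To make this concrete, here is a minimal Python sketch of how the outcome of an A/B test might be evaluated. The visitor and conversion numbers are made up, and the two-proportion z-test is just one common way to check whether a difference is likely to be real:

```python
import math

# Hypothetical results: visitors and conversions for each variant.
visitors_a, conversions_a = 1000, 120   # variant A: 12.0% conversion
visitors_b, conversions_b = 1000, 150   # variant B: 15.0% conversion

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Two-proportion z-test: is the difference likely to be real?
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

With these (entirely hypothetical) numbers, variant B converts at 15% versus 12% for A, and the p-value of roughly 0.05 suggests the difference is probably not a fluke.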

This method is one of the keys to Netflix’s success story. Almost all of its decisions are based on data analytics and A/B testing. Through split-run testing, Netflix selects the best artwork for its content, sometimes resulting in 20% to 30% more viewing for that title.

Growth hackers can do the same. A simple example: through split-runs, they test the effectiveness of your pricing by showing different rates to different groups of website visitors (segmented by customer type). For each group, the rate that yields the highest profit is determined, and each group is then shown the price calculated for it. Variations in price based on time of day are also possible.
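As a toy illustration (all numbers hypothetical, and using revenue per visitor as a stand-in for profit), picking the best price for one segment might look like this:

```python
# Hypothetical split-run price test for one customer segment:
# each price was shown to an equal share of visitors.
results = {          # price: (visitors, purchases)
    9.99:  (500, 60),
    12.99: (500, 48),
    14.99: (500, 35),
}

def expected_profit(price):
    visitors, purchases = results[price]
    # Revenue per visitor shown this price (ignoring costs for simplicity).
    return price * purchases / visitors

best_price = max(results, key=expected_profit)
print(f"Best price for this segment: {best_price} "
      f"({expected_profit(best_price):.2f} per visitor)")
```

Note that the cheapest price does not automatically win: here the middle price earns the most per visitor, because the drop in conversions is smaller than the gain in price.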


So What Are Multi-armed Bandit Tests?

Multi-armed bandit tests are “smarter,” more complex versions of A/B tests. How do they work? Machine-learning algorithms dynamically allocate more traffic to variations of a website that perform well, while assigning less traffic to variants that perform worse.

These tests produce results faster than A/B tests because, within a given period, they steer an ever-larger share of traffic toward the “winning” alternatives. In this way, less time is wasted on testing a poorly performing option (as happens with A/B tests, which keep splitting traffic evenly until the experiment ends).
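To give a feel for the mechanics, here is a minimal sketch of one simple bandit strategy, epsilon-greedy. The variant names and conversion rates are made up, and real testing tools often use more sophisticated strategies (such as Thompson sampling), but the traffic-shifting idea is the same:

```python
import random

# A minimal epsilon-greedy bandit: three page variants ("arms") with
# hypothetical true conversion rates the algorithm does not know.
true_rates = {"A": 0.10, "B": 0.12, "C": 0.15}
epsilon = 0.1                       # fraction of traffic kept for exploring
shown = {arm: 0 for arm in true_rates}
converted = {arm: 0 for arm in true_rates}

def observed_rate(arm):
    return converted[arm] / shown[arm] if shown[arm] else 0.0

for visitor in range(10_000):
    if random.random() < epsilon:
        arm = random.choice(list(true_rates))     # explore: random variant
    else:
        arm = max(true_rates, key=observed_rate)  # exploit: best so far
    shown[arm] += 1
    if random.random() < true_rates[arm]:         # simulate the visitor
        converted[arm] += 1

for arm in true_rates:
    print(arm, shown[arm], f"{observed_rate(arm):.1%}")
```

After 10,000 simulated visitors, most traffic has flowed to variant C, the best performer, while the small epsilon fraction kept checking the alternatives; that is exactly the dynamic reallocation described above.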


Multi-armed Bandit Test Examples

Multi-armed bandit algorithms are not only used in testing, but also in everyday applications. Take, for example, these inspiring cases:

  • Netflix’s famous personalized recommendations are based on a multi-armed bandit algorithm.
  • The website of The Washington Post shows the combinations of articles you are most likely to click on.
  • Wireless networks use a multi-armed bandit algorithm to determine which route is optimal and the most energy-efficient.

You may be wondering when you will meet a multi-armed bandit in your business. Because of their speed, these tests are particularly suitable for situations where rapid results are required, for example, testing variations on news headlines. Your growth team may also deploy this method to test short-running campaigns.


A Final Word

Growth hacking can mean the difference between failing, surviving, and thriving. Through our experienced partners and staff, RevelX combines more than 100 years of growth-hacking wisdom. If you want to know more about this inspiring subject, don’t hesitate to reach out to me. You are always welcome for a cup of coffee at our office.
