When you Google something, you are served tailored results created using data that the search engine has collated over time. Perform the same search again and you may well be served a different – or differently ordered – set of results. Although your preferences may have evolved infinitesimally in the interim, the disparity could be because Google is running a test and you’re its lab rat. Multiply the number of users, add in a control group – and you’ve instantly created a giant focus group. This process is called A/B testing and Big Tech uses it in many different ways to improve conversion rates and consequently, revenues.
What’s A/B testing and why do I care?
Think of it as applied science: you measure the effect of the changes you make to a website or its features. A/B tests work by showing two versions of the same item to two different sets of customers. So 50% of customers get version A, and the rest get version B. Then you measure the impact of each – for example, the number of items bought – and evaluate the difference.
A/B tests tell you what your customers want and can be applied to anything from headlines to user experience.
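To make this concrete, here is a minimal sketch of how such a test could be wired up in Python: users are deterministically bucketed into variant A or B, and the difference in conversion rates is then evaluated with a two-proportion z-test. The function names and example figures are illustrative assumptions, not something prescribed here.

```python
import hashlib
from math import sqrt
from statistics import NormalDist

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into 'A' or 'B' (50/50 split)."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def evaluate(conversions_a: int, visitors_a: int,
             conversions_b: int, visitors_b: int):
    """Compare conversion rates with a two-sided two-proportion z-test."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_a, rate_b, p_value

# Illustrative numbers: 12,000 visitors per variant, B converts slightly better.
print(assign_variant("customer-42", "homepage-headline"))
print(evaluate(480, 12_000, 552, 12_000))  # ~4.0% vs ~4.6%, p ≈ 0.02
```

In practice the assignment would live in your website or app and the evaluation in your analytics pipeline, but the division of labour stays the same: a stable bucketing rule plus a statistical comparison of the outcomes.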
This is why Google and Facebook each conduct over 10,000 A/B tests per year: they help measure the business impact of strategic decisions. Blindly adding new features based on a gut feeling about their revenue impact is like crossing your fingers and making a wish; it won’t work. If you can’t measure performance, does the feature even have the impact you think it does? Or are you distracting your users from their objective?
Let’s say an eCommerce website produces an inspiring magazine to generate sales, incurring significant outlay on staff and production. A/B testing can easily demonstrate the impact of such a magazine compared to, say, an email campaign. In short, you can run an experiment to determine the impact of any change.
How do I get a shiny new A/B testing tool?
This being the third decade of the 21st century, there are a number of plug-and-play tools available for A/B testing. These are great for businesses that can work within the way the tool functions and the limits it imposes. Typically, readymade A/B testing solutions come with their own analytics, run on their own statistical engines, and bring their own technical limitations.
So they need to be integrated into your website, your own analytics, and your local systems, right down to, for example, your company’s single sign-on process. These are factors you need to weigh up before you even start thinking about price points.
But can I build my own A/B testing tool?
Yes, you could indeed build your own customized A/B testing platform. If you’ve got the time (about three months for a basic tool), this is the best option when you have highly specific requirements. The great thing about having your own A/B testing program is that you control your tool’s destiny.
You set your own priorities and add any extra features you want. With a vendor-sold solution, by contrast, you have to wait for missing features to appear on the roadmap and then be implemented by the vendor, if that happens at all. That means outsourcing an important part of your business strategy and relying on someone else to deliver it.
The benefits of a custom A/B testing tool
Building a custom tool allows better integration with your existing products, websites, apps, and analytics, even when those come from competing vendors. That matters because some integration work is always required, and a bought tool might not integrate as cleanly as you’d like.
Another advantage of a custom experimentation tool is control over the bucketing algorithm. Suppose you run a 50/50 test to determine the best headline. Half your customers see a weaker headline that they may not click on, so you lose that traffic. Instead, you can shift traffic sequentially towards the version that gets more clicks, so most people end up seeing the best headline and clicking on it. You don’t have to wait for statistical significance; you can let your bucketing algorithm keep serving the best experience to your customers.
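As a minimal sketch of such an adaptive bucketing strategy, the snippet below uses Thompson sampling (a multi-armed bandit technique) to shift traffic towards the better-performing headline as clicks come in. The class name, click rates, and traffic volume are illustrative assumptions, not details from any real experiment.

```python
import random

class ThompsonBucketer:
    """Adaptive bucketing for two variants via Thompson sampling.

    Instead of a fixed 50/50 split, each request samples from a Beta
    distribution per variant, so traffic gradually drifts towards the
    variant with the higher observed click-through rate.
    """

    def __init__(self, variants=("A", "B")):
        # One clicks/skips counter per variant, i.e. a Beta(1, 1) prior.
        self.stats = {v: {"clicks": 0, "skips": 0} for v in variants}

    def choose(self) -> str:
        samples = {
            v: random.betavariate(s["clicks"] + 1, s["skips"] + 1)
            for v, s in self.stats.items()
        }
        return max(samples, key=samples.get)

    def record(self, variant: str, clicked: bool) -> None:
        self.stats[variant]["clicks" if clicked else "skips"] += 1

# Simulated traffic: headline B has the better true click rate (6% vs 4%).
true_rates = {"A": 0.04, "B": 0.06}
bucketer = ThompsonBucketer()
for _ in range(10_000):
    variant = bucketer.choose()
    bucketer.record(variant, random.random() < true_rates[variant])
print(bucketer.stats)
```

Running the simulation, the large majority of the 10,000 impressions end up on headline B, which is exactly the behaviour described above: the weaker variant stops wasting traffic long before a classical significance test would conclude.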
With your own tool, you can easily add new features by developing the software in-house, something a readymade solution simply doesn’t allow.
How long does building an A/B testing tool take?
A question we’re often asked concerns timelines. The answer depends on what you want. You can have a basic A/B testing tool within about three months. With more custom integrations, that can stretch to six months or a year.
Xebia can help set up the tool initially, and your own software team can take it over and add new features. And because you’ve got the expertise in-house, it becomes part of your website, your app, and your data infrastructure. So you can keep developing it indefinitely, in line with changing business demands.
How about a case study?
Take an eCommerce platform with nearly 20 million products. Bol.com needed a measurable way to understand and analyze user behavior. Its team was already running A/B tests, but it was recording ideas, tracking experiments, and capturing and archiving analytics across many different systems.
Xebia stepped in to help create a single custom experimentation tool that could unite these functions, integrate with internal APIs, and embed validated learning throughout the entire company, while bol.com maintained control of its data and statistical analysis.
The result is new insight that bol.com can now use to increase its revenue, enabling teams to learn more and faster from user behavior and, in turn, improve conversion and user experience across different platforms. And since business and technical users can use the same tool, they’re speaking a common language, which delivers benefits across the value chain.
Do you want to read the entire bol.com case study?