4+ years of A/B testing at Yellowgrape: 3 powerful lessons

Nick Schaperkotter
Team Lead CRO & Design
27/3/25

It has now been more than four years since we started large-scale CRO and A/B testing in January 2021. After several customers asked whether we also offered CRO, we decided to add it to Yellowgrape's services. More than four years later, we recently crossed the 600 A/B test mark: a good moment to look back at the impact we have made with our customers and the insights we have gained. And are there conclusions about web optimization we can draw after all these tests?

CRO in numbers

Before we get to the conclusions, we would like to share some statistics about our tests, starting with the success rate: 46% across more than 600 A/B tests, a percentage we are proud of. We count a test as successful when it shows a significant increase in its primary KPI. The success rate alone is not the deciding metric, because negative test results have value too: a significant loser tells you just as clearly what not to roll out, without the risk of a site-wide launch. That said, we are of course happy that we can offer such a high success rate to our customers!
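To make "a significant increase" concrete: for a conversion-style KPI, one common procedure is a two-proportion z-test on control versus variant. The sketch below is not necessarily the method we use internally (Bayesian evaluation is an equally common choice), and the traffic numbers are made up for illustration:

```typescript
// Two-proportion z-test for an A/B test on conversion rate.
// One-sided question: does the variant (B) convert significantly better than control (A)?

// Standard normal CDF via the Abramowitz & Stegun erf approximation (7.1.26).
function normalCdf(z: number): number {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t;
  const erf = 1 - poly * Math.exp(-x * x);
  return 0.5 * (1 + (z >= 0 ? erf : -erf));
}

// Returns the one-sided p-value for "B beats A".
function abTestPValue(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): number {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / se;
  return 1 - normalCdf(z);
}

// Hypothetical numbers: 4.0% vs 4.6% conversion over 10,000 visitors each.
const p = abTestPValue(400, 10_000, 460, 10_000);
console.log(p < 0.05 ? `winner (p = ${p.toFixed(3)})` : `no winner (p = ${p.toFixed(3)})`);
```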

Conclusion 1: the same tests can have different results at different web shops

A common question we get is: "Can't you just share some best practices so we can immediately implement everything that works?" Our answer is always that it is not that simple: similar tests regularly produce different results at different webshops. That is why we insist on testing within your own webshop and target group; only then can you be sure an adjustment produces the desired result.

A good example is adding a sticky filter to the mobile category page, so that visitors no longer have to scroll back up when they want to add or change a filter. We have run this test with several customers, with varying results. What we consistently see is that interactions with the filters increase significantly, so we clearly change behavior with this. However, that does not always translate into more transactions: if your filter options are not set up correctly or do not work in a user-friendly way, making the filter bar sticky will actually have a negative effect.
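For reference, the variant itself is a small change. A minimal sketch of how an A/B testing tool might inject it on the mobile category page; the `.filter-bar` selector is hypothetical and will differ per shop:

```typescript
// Variant code injected on the mobile category page.
// '.filter-bar' is a hypothetical selector; use your shop's actual filter element.
const filterBar = document.querySelector<HTMLElement>('.filter-bar');
if (filterBar) {
  filterBar.style.position = 'sticky'; // stays visible while the product list scrolls
  filterBar.style.top = '0';
  filterBar.style.zIndex = '100';      // keep it above the product cards
}
```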

Conclusion 2: minimizing the navigation menu in the checkout is always a winner

And now we are going to contradict ourselves. While many of our tests show varying results, there are certainly tests with a consistently high success rate. One type of experiment has worked for us every single time: minimizing the navigation menu in the checkout. We still often see the full navigation menu displayed in the checkout, including the search bar and all product categories. These are unnecessary exit options: a visitor who has reached the checkout should be distracted as little as possible, with the focus on completing the transaction. That is why we always recommend minimizing the navigation menu there. And with success: out of 6 tests, it won 6 times! Give it a try!
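To sketch what "minimizing" means in practice: on checkout URLs, hide the exit options and keep only the essentials. The `/checkout` path and the selectors below are hypothetical; adapt them to your platform:

```typescript
// Hide distracting navigation elements on checkout pages.
// Selectors and URL path are hypothetical; adjust to your shop's markup.
if (window.location.pathname.startsWith('/checkout')) {
  const exitOptions = ['.main-nav .categories', '.search-bar'];
  for (const selector of exitOptions) {
    document.querySelectorAll<HTMLElement>(selector)
      .forEach((el) => { el.style.display = 'none'; });
  }
}
```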

Conclusion 3: hypotheses supported by qualitative research perform better

Because we document all our tests in an extensive backlog, we are able to run meta-analyses across them. For each test we record the page type, device and CRO factors, as well as the form of substantiation used when formulating the hypothesis. (We do not yet document the optimization techniques used, but we will start doing so in 2025!) Back to the substantiation of the tests: the meta-analysis shows that the success rate of substantiated tests is significantly higher than that of tests where substantiation was missing and the hypothesis was the result of a brainstorm. At the same time, we have to admit that many tests still fell into the latter category, so there is work to be done on our side. Here we mainly want to focus on qualitative research (surveys, interviews and user testing), because we specifically see that these data sources lead to successful hypotheses. Of course, we also keep looking at quantitative sources such as web analytics data!
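The meta-analysis itself needs nothing heavier than grouping the backlog by substantiation source. A minimal sketch; the record shape and the example entries are invented and much simpler than an actual backlog:

```typescript
// Hypothetical backlog record; a real backlog has more fields (page type, device, CRO factor).
type ABTest = {
  name: string;
  substantiation: 'qualitative research' | 'web analytics' | 'brainstorm';
  significantWin: boolean;
};

// Success rate per substantiation source.
function successRateBySource(backlog: ABTest[]): Map<string, string> {
  const buckets = new Map<string, { wins: number; total: number }>();
  for (const test of backlog) {
    const b = buckets.get(test.substantiation) ?? { wins: 0, total: 0 };
    b.total += 1;
    if (test.significantWin) b.wins += 1;
    buckets.set(test.substantiation, b);
  }
  const rates = new Map<string, string>();
  for (const [source, { wins, total }] of buckets) {
    rates.set(source, `${((100 * wins) / total).toFixed(0)}% (${wins}/${total})`);
  }
  return rates;
}

// Example with made-up entries:
const backlog: ABTest[] = [
  { name: 'sticky filter', substantiation: 'qualitative research', significantWin: true },
  { name: 'minimal checkout nav', substantiation: 'web analytics', significantWin: true },
  { name: 'bigger CTA', substantiation: 'brainstorm', significantWin: false },
];
console.log(successRateBySource(backlog));
```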

In summary, we are very proud of the results we have achieved for our customers so far, and we look forward to testing even more, for even more customers. Hopefully we can cross the 1,000 A/B test mark before long. Need help with CRO or A/B testing? Be sure to contact us!