Establishing a Culture of A/B Testing
I’m continuing a series of short videos about A/B testing and how to improve it at your company. The transcripts for the videos are below:
How many rounds of testing should a company plan for before they are successful?
I never advise companies that they need to plan for a certain number of rounds of testing. Instead, I focus on a few key principles. First, it’s really important to have a system in place that lets your company run a high volume of tests. If you run only one or two tests a month, your likelihood of success is really low: most tests have a 10-20% success rate. So you need to run more tests and increase your testing velocity to get anything successful through the pipeline. Second, when you do start to see success, figure out what made those tests successful. Say you run ten tests and two are successful; you then want to focus on why those two succeeded. What about them made them work? What changes did you make that customers responded to? Did the win do more than improve a metric at the top of the funnel, and actually translate to a purchase or to retention? If you feel good about the results after asking those questions, it may be time to double down in those areas.
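To make the velocity point concrete, here is a quick back-of-the-envelope sketch. It assumes each test is an independent trial; the 10-20% success rates come from the point above, while the monthly test volumes are illustrative assumptions, not recommendations:

```python
# Probability of seeing at least one winning test, given a per-test
# success rate p and n tests run in a period (independence assumed).
def p_at_least_one_win(p: float, n: int) -> float:
    """1 minus the chance that all n tests fail."""
    return 1 - (1 - p) ** n

for p in (0.10, 0.20):          # per-test success rates from the transcript
    for n in (2, 10, 30):       # illustrative monthly test volumes
        print(f"success rate {p:.0%}, {n} tests: "
              f"P(at least one win) = {p_at_least_one_win(p, n):.0%}")
```

At a 10% success rate, two tests a month give you roughly a one-in-five chance of any win, while ten tests push that near two-in-three, which is why volume matters so much.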
So, for example, let’s say you test a lot of variables on a landing page, and the only ones that end up mattering are how many form fields you put on the page and where the button is placed. I would then double down on things like the form placement, the inputs in those fields, and what the button looks like, because those are clearly what’s making customers decide to engage or not engage with your product. To be clear, these are just examples; I’m not saying these are the variables that will matter for your product.
What are some of the keys to getting better at A/B testing?
Three key things. Number one is what I said at the very beginning: make sure your tests are hypothesis driven. Don’t test things just because they seem like interesting ideas, as in “Hey, we had a cool concept. Let’s just test it.” Base your tests on reasoning like “We know our customers like our product because it does this, therefore we’re going to test something similar, or something that will make our customers more engaged with our product.” Test concepts that are likely to increase how much customers use your product.
Second, figure out how to increase the velocity of your tests over time. If you’re only running a small number of tests, you’re unlikely to see much success in A/B testing, and testing can become very frustrating. So decide: can you run enough tests to be successful with A/B testing, or do your tests need to be more user focused? Do you need to do a lot more background research before making these bigger investments in future work and in changing things?
And last, create a culture around this within your team. Make it clear to people that it’s okay to have new ideas, make mistakes, and generate a variety of concepts. If you think about every new feature or concept as a test, people are less scared of failure. The idea is that if you had a good hypothesis and knew what metric you were trying to move, then even if the test failed, you had at least done good thinking about why you were running it. Whereas if you punish failure, you’ll never have a culture or a company that embraces testing and advances that way.
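Putting the hypothesis-driven point into practice means deciding up front which metric a test should move and then checking whether the observed difference is real. Here is a minimal sketch of one standard way to do that, a two-proportion z-test on conversion rates; the visitor and conversion counts are hypothetical numbers, not from the transcript:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference in conversion rates between
    control (a) and variant (b), using the pooled proportion."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: 500 conversions from 10,000 control visitors
# versus 570 conversions from 10,000 variant visitors.
z = two_proportion_z(500, 10_000, 570, 10_000)
print(f"z = {z:.2f} (|z| > 1.96 is significant at the 5% level, two-sided)")
```

Agreeing on this kind of check before the test runs keeps the conversation about the hypothesis and the metric, not about whose idea failed.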