Back to Basics: A/B Testing for Marketers

Inspiration

Sep 28, 2022

Updated Sep 28, 2022

Most of us are avid A/B testers without even thinking about it.

Do we prefer the blue shirt or the white shirt?
Chocolate cupcakes or lemon zest?
Paper or plastic?

Any side-by-side comparison to select the preferred option is its own little A/B test.

When it comes to digital advertising, however, A/B testing becomes a whole lot more serious. A/B tests can make or break an e-commerce site design. They can spell the difference between the success and failure of a digital ad campaign.

On the surface, this appears obvious, but how many marketers truly understand how to build, execute, and learn from a refined A/B testing strategy? How many of us are actually willing to take the time to make it work for our brands and clients?

It’s tempting to use our instincts to decide which call-to-action or headline will attract the most people. But the only way to ensure the greatest impact and performance of our creative is with the ongoing incremental improvements of rigorous A/B testing. 

Put simply: More A/B tests = better Return on Ad Spend.

So, how do we put this all into action? 

Here are some steps to make your A/B tests a breeze.

Have a Clear Hypothesis

As we learned in our middle school science class, any well-designed experiment starts with a hypothesis.

This is nothing more than a logical statement of what we expect to happen if we change something about a campaign. Formulating our hypothesis is as simple as naming the variable we want to test and stating the expected outcome.

A few examples of this type of hypothesis formulation might be:

  1. Adding the brand’s logo in the upper left of the ad (variable) will increase clicks (outcome)

  2. Using an image of a smiling face (variable) will drive more clicks than a product image (outcome)

  3. Removing the phone number and address fields (variables) will increase email subscriptions (outcome)

After your hypothesis is ready, it’s time to test your ads to understand what works best.
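A lightweight way to keep hypotheses honest is to write each one down as a structured record before the test starts. Here's a minimal sketch in Python; the field names and values are purely illustrative and not part of any testing tool:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A minimal record of what we're changing and what we expect to happen."""
    variable: str          # the single element being changed
    expected_outcome: str  # the change we expect to see, and in which direction
    primary_metric: str    # what we'll actually measure

# Example 1 from the list above, written as a record
logo_test = Hypothesis(
    variable="brand logo added in the upper left of the ad",
    expected_outcome="click-through rate increases",
    primary_metric="CTR",
)
```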

Create a “control” and a “challenger”

Now, with the independent variable set in stone, the dependent variable you expect to change, and your desired outcome defined, it's time to create a "control" and a "challenger." These go hand in hand with your hypothesis: the control is the unchanged version of the test.

If you're testing whether adding a logo increases clicks, the control is the current, logo-less design.

Next, the challenger is what you’ll test against the control, whether that’s the logo added or a new image.
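In practice, the split between control and challenger is handled by the ad platform or testing tool, but the underlying idea is simple random assignment. Below is a hypothetical sketch of a 50/50 split done by hashing a user ID, so the same user always sees the same version; the function and experiment names are our own:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "logo-test") -> str:
    """Deterministically assign a user to 'control' or 'challenger'.

    Hashing the user ID together with the experiment name gives a stable
    50/50 split: the same user always lands in the same group for this test.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number in 0..99
    return "control" if bucket < 50 else "challenger"

# Example: route a few users
for uid in ["user-101", "user-102", "user-103"]:
    print(uid, "->", assign_variant(uid))
```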

Look for Big Winners

The version, A or B, that performs better on the metric you're measuring is called the "winner." Once the test is complete, marketers can apply what they've learned about the element they tested and roll out the winning version to improve ROI.

With a large enough sample size, winners are easy to identify: even a small difference between Version A and Version B can be shown to be statistically significant rather than a product of random chance.
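"Statistically significant" has a precise meaning here: the gap between the two versions is unlikely to be explained by random noise alone. One standard way to check this for click-through rates is a two-proportion z-test; the sketch below uses made-up numbers purely for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, impressions_a, clicks_b, impressions_b):
    """Return (z statistic, two-sided p-value) for the difference in CTR."""
    p_a = clicks_a / impressions_a
    p_b = clicks_b / impressions_b
    # Pooled click rate under the null hypothesis that A and B perform the same
    p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: A got 1,050 clicks and B got 1,200, each on 100,000 impressions
z, p = two_proportion_z_test(1_050, 100_000, 1_200, 100_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```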

The key to finding winners is making sure the sample size is large enough to reveal real trends. The smaller the difference you want to detect between the two versions, the larger the sample size needs to be.
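How large is "large enough"? A rough estimate can be made up front from the baseline conversion rate and the smallest lift you care about detecting. This sketch uses the standard sample-size formula for comparing two proportions at 95% confidence and 80% power; the baseline and lift figures are only examples:

```python
from math import ceil

def sample_size_per_group(baseline_rate, expected_rate,
                          z_alpha=1.96,   # 95% confidence (two-sided)
                          z_beta=0.84):   # 80% power
    """Approximate impressions needed per version to detect the given lift."""
    p1, p2 = baseline_rate, expected_rate
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(numerator / (p1 - p2) ** 2)

# Example: baseline CTR of 1.0%, and we want to detect a lift to 1.2%
print(sample_size_per_group(0.010, 0.012))
# A smaller lift (1.0% -> 1.05%) needs a much larger sample per version:
print(sample_size_per_group(0.010, 0.0105))
```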

Limit Tests to only a few Things (Headline, Image, CTA)

It’s worth testing a handful of elements in an ad: the headline, the image, and the CTA (call to action). These are the parts that most influence an ad’s performance. The headline is the first thing a person notices, so it’s important to be picky about the font, the length, and where it sits on the page.

The CTA, meanwhile, is what prompts a user to take action. A/B testing lets you measure how effective a CTA is depending on where it sits in the ad, its color, or its font size. With these experiments, we can determine which version has the best chance of driving the most conversions.
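One simple way to keep each test focused is to compare the control and challenger definitions before launch and confirm that exactly one element differs. A hypothetical sketch follows; the ad fields and values are invented for illustration:

```python
# Hypothetical ad definitions; the field names are illustrative only.
control = {
    "headline": "Save 20% on your first order",
    "image": "product_shot.png",
    "cta": "Shop Now",
}
challenger = {
    "headline": "Save 20% on your first order",
    "image": "smiling_customer.png",   # the one element we're testing
    "cta": "Shop Now",
}

changed = [key for key in control if control[key] != challenger[key]]
assert len(changed) == 1, f"Test one element at a time, but these differ: {changed}"
print("Testing variable:", changed[0])
```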

Test both variations simultaneously

Does the time of day affect the user traffic on a website? What about the time of year? Fewer people will visit a gardening site in the winter than in the summer, of course!

Timing has a big impact on the outcome of any marketing campaign, and a change in performance may simply reflect the month it ran in. Testing Version A in the first half of the year and Version B in the second is unreliable; both versions have to run at the same time, or the results could be flawed.

But there are still exceptions, such as if you’re testing timing itself—for example, finding the best time to send out mass emails. This type of test is a great method of finding out when users are most engaged and how your target market behaves throughout the year.
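To see why sequential testing misleads, here is a small, entirely synthetic simulation: the underlying click rate drifts with the season, and version B is in fact no better than version A, yet running them back to back makes B look like a winner:

```python
import random

random.seed(42)

def simulate_clicks(rate, impressions):
    """Simulate clicks at a given click probability."""
    return sum(random.random() < rate for _ in range(impressions))

# Synthetic seasonal baseline: traffic converts better in the second half of the year.
first_half_rate, second_half_rate = 0.010, 0.014

# Sequential test: A runs Jan-Jun, B runs Jul-Dec. Both ads are identical in quality.
clicks_a = simulate_clicks(first_half_rate, 50_000)
clicks_b = simulate_clicks(second_half_rate, 50_000)
print("Sequential:", clicks_a, "vs", clicks_b)    # B looks better purely due to timing

# Simultaneous test: both versions run all year, with traffic split 50/50 in each half.
clicks_a = simulate_clicks(first_half_rate, 25_000) + simulate_clicks(second_half_rate, 25_000)
clicks_b = simulate_clicks(first_half_rate, 25_000) + simulate_clicks(second_half_rate, 25_000)
print("Simultaneous:", clicks_a, "vs", clicks_b)  # any difference here is just noise
```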

Give the A/B test enough time to produce useful data

The sample size is directly affected by how long the test runs, especially when you're testing something like a web page, which doesn't have a fixed, finite audience. It's important to give your test enough time to receive a significant number of views, so it builds a sizable data set. Otherwise, it will be hard to find meaningful differences between the two variations.

So, how long do you need to wait?

Well, it depends on what you're trying to accomplish and your advertising budget. A good rule of thumb is that you need at least a few thousand "events" before drawing any conclusions. So if you want to understand which version of an ad drives the most clicks, you'll need to run enough impressions for one or both ads to generate at least 1,000 clicks, though more is generally better. Large global ad verification vendors like Nielsen or Millward Brown require 50,000 events or more before they consider results reliable.
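Turning that rule of thumb into a plan is simple arithmetic: divide the target number of clicks by the click-through rate you expect to get the impressions each version needs, then divide by daily traffic to estimate the runtime. The CTR and traffic figures below are placeholders, not benchmarks:

```python
from math import ceil

def impressions_needed(target_clicks, expected_ctr):
    """Impressions required for one ad version to collect the target clicks."""
    return ceil(target_clicks / expected_ctr)

def days_needed(target_clicks, expected_ctr, daily_impressions_per_version):
    """Rough runtime estimate for the test, per version."""
    return ceil(impressions_needed(target_clicks, expected_ctr) / daily_impressions_per_version)

# Example: 1,000 clicks per version at an assumed 0.8% CTR and 10,000 impressions/day
print(impressions_needed(1_000, 0.008))   # 125,000 impressions per version
print(days_needed(1_000, 0.008, 10_000))  # about 13 days per version
```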

Validate Results

Analyze your results! Make sure the control and challenger groups were measured accurately and treated identically apart from the variable under test. If your A/B test fails to show a clear result, revisit the setup in case the data was too thin or inconclusive. If the outcome is reliable, draw your conclusions.

Lather, Rinse, Repeat

A single A/B test will certainly help you better understand your campaign results. To get real breakout results, however, marketers should be running tests on an ongoing basis. Each successive round of testing can further improve campaign performance.

The key to doing this successfully is to create a process, weekly, bi-weekly, or otherwise, that ensures you take all of the steps each time. By making A/B testing a regularly scheduled part of your campaign management workflow, you can ensure you're always improving and getting the best possible results out of your ad campaigns.

Conclusion

Used consistently, A/B testing can generate outsized gains for your marketing campaigns. If you're interested in learning how to better scale your creative testing, visit Viewst.com to find tools to get more out of your ad creative and design.

Author
Founder, CEO at Viewst

Victoria is the CEO at Viewst. She is a serial entrepreneur and startup founder. She worked in investment banking for nine years in international fund sales, trading, and portfolio management before switching to her own startup. In 2017 Victoria founded Profit Button (a new kind of rich media banner); the project grew to 8 countries on 3 continents in 2 years. In 2019 she founded Viewst. The company now has clients in 43 countries, including the USA, Canada, England, France, Brazil, Kenya, and Indonesia.
