If you conduct email campaigns, you may have heard of A/B tests. If you have not yet dared to try them, here are some suggestions to guide you through the steps of A/B testing.
What is A/B testing?
This is a technique that pits several variants of a concept against each other to assess which performs best. You can do A/B testing on various types of campaigns (a website, banner advertising, etc.), but the technique is particularly well suited to email campaigns. In fact, most email campaign tools already include A/B testing features (e.g., MailChimp or Carma).
You can even run A/B/C/D tests and beyond, but let’s stick to A/B testing for this article: the more groups you test, the larger the mailing list you need.
Leading by example: you have a list of 500 subscribers to your email campaign. You want to test specific elements of your campaign, so you send your two variants to 24% of your mailing list (120 subscribers): 60 receive version A and 60 receive version B. After 2-3 hours, the version that collected the best statistics is sent to your remaining 380 subscribers.
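The split described above is easy to automate. Here is a minimal sketch in Python; the function name, the 24% test fraction, and the example addresses are illustrative assumptions, not part of any particular email platform:

```python
import random

def ab_split(subscribers, test_fraction=0.24):
    """Split a mailing list into two equal test groups and a holdout.

    The winning variant is later sent to the holdout group.
    Illustrative sketch; the 24% fraction mirrors the example above.
    """
    shuffled = subscribers[:]      # copy so the original list is untouched
    random.shuffle(shuffled)       # randomize to avoid ordering bias
    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2
    group_a = shuffled[:half]
    group_b = shuffled[half:half * 2]
    holdout = shuffled[half * 2:]
    return group_a, group_b, holdout

# With 500 subscribers and a 24% test fraction:
subscribers = [f"user{i}@example.com" for i in range(500)]
a, b, rest = ab_split(subscribers)
print(len(a), len(b), len(rest))  # 60 60 380
```

Shuffling before splitting matters: if your list is sorted by signup date or region, slicing it without randomizing would bias the two test groups.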
Why conduct A/B testing?
At my previous job, in a marketing research firm, the phrase “you cannot improve what you do not measure” was pretty much the company slogan. Measuring your email open rates, click rates, and conversion rates to optimize your campaigns for your audience is crucial! For a benchmark, you can read my previous blog post on the subject.
Have you always wondered what will work best in your email campaign: image X or image Y?
Try A/B testing!
Testing what exactly?
You can test several aspects of an email campaign:
Testing the subject line: you can run several experiments, such as an emotional subject (I like it) vs. a rational one (Increase your sales), or a passive subject (The new XY range is available) vs. an active one (Get the new XY range now). To assess which variant performed best, look at the email open rate first.
Testing the content: there are several things you can vary, such as the calls to action, the details of a specific offer, the typeface, the colors, the images, the layout, etc.
You can even test the date and time of dispatch, or the sender name your email appears to come from.
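Whatever you test, picking the winner comes down to comparing one metric per variant. A minimal sketch, using hypothetical open counts (the variant labels and numbers here are invented for illustration):

```python
def open_rate(opens, sent):
    """Open rate as a percentage of emails sent."""
    return 100.0 * opens / sent

# Hypothetical numbers, not real campaign data:
variants = {
    "A (passive subject)": {"sent": 60, "opens": 14},
    "B (active subject)": {"sent": 60, "opens": 21},
}

for name, stats in variants.items():
    rate = open_rate(stats["opens"], stats["sent"])
    print(f"{name}: {rate:.1f}% open rate")

winner = max(
    variants,
    key=lambda v: open_rate(variants[v]["opens"], variants[v]["sent"]),
)
print("Winner by open rate:", winner)
```

Most email platforms compute this for you automatically; the point is simply that each test needs one clearly chosen metric (open rate for subject lines, click rate for content) before you send.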
Now, you’re probably asking: “Yes, but where do I start? What are the best practices?” It depends on the type of business and its customers, the type of communication, the frequency of communication, etc.
How does it work?
As mentioned earlier, many tools already integrate A/B testing features, and they are usually quite simple to use. First, choose the number of variants (to start, keep it simple and stick to a two-variant A/B test).
Then choose what to test: the email subject, the sender name (in this case, you), or the date and time of delivery. The available options vary from one platform to another.
I strongly advise you to test one item at a time so that you can directly attribute any effect to a specific change.
A concrete example: an email campaign from Exo B2B
We ran an email campaign for the Exo B2B account about our latest case study. Since our audience is heavily composed of VPs and presidents of companies in various industries, we wanted to know whether inserting a statistic in the subject line would have a positive impact on the open rate and, ultimately, on the click rate.
Subject A: Download the inbound B2B case study: double your qualified visits
Subject B: Inbound B2B case study: how to generate 300 leads in 5 months!
Psst! To download the case study (in French), click here!
50 subscribers received campaign A and 50 others received campaign B.
After 3 hours, we got:
So, the conclusion: using a statistic in an email subject line seems to have a positive impact on the open rate and click rate of email campaigns. However, to draw more robust conclusions, further tests should be performed over the next few email campaigns to validate this hypothesis with more certainty.
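Why is re-testing needed? With only 50 recipients per variant, even a visible gap in open rates can easily be noise. A quick way to sanity-check a result is a two-proportion z-test; the sketch below uses hypothetical counts, since the article does not publish its raw numbers:

```python
from math import sqrt

def two_proportion_z(opens_a, n_a, opens_b, n_b):
    """Two-proportion z-statistic for comparing two open rates.

    A |z| above roughly 1.96 suggests the difference is significant
    at the 95% confidence level.
    """
    p_a, p_b = opens_a / n_a, opens_b / n_b
    p = (opens_a + opens_b) / (n_a + n_b)          # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    return (p_b - p_a) / se

# Hypothetical: 12/50 opens for A vs. 18/50 for B looks like a big gap,
# yet the z-statistic stays below 1.96:
z = two_proportion_z(opens_a=12, n_a=50, opens_b=18, n_b=50)
print(f"z = {z:.2f}")  # z = 1.31
```

In this invented example the gap (24% vs. 36%) is not statistically significant at the 95% level, which is exactly why repeating the test over several campaigns, as suggested above, is the safer way to validate the hypothesis.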