A/B testing: image placement

If you are not new to my website, you'll know I am not the biggest fan of A/B testing. Does that mean you should never do it? No.

When creating an actionable A/B test, I set out to answer the following questions:

- What do I want to test?
- Why do I want to test it?
- How much time is it going to take me to run this test?
- What am I going to do with the results?

With the above answered, I can feel confident moving forward with a solid testing strategy, knowing that I'm not wasting time.

Introducing: A/B testing the placement of an image within an email.

[Image: Email A/B test]

What do I want to test?

In an upcoming reactivation campaign aimed at winning back subscribers who have cancelled or expired, I want to test whether the placement of an image (one that looks like a video and links out to a video) results in higher conversion. In Variant A, with the image at the top, I am worried that more subscribers will click the image, leave the email, and fall out of the funnel. In Variant B, with the copy before the image, I am curious whether more subscribers will be introduced to the main message first and thus be more primed to convert.

Why do I want to test it?

I want to know how the placement of an image within an email affects engagement and conversion.

How much time is it going to take me to run this test?

This is a simple test, as the content will remain exactly the same in both variants of the email. The only thing changing between the two is where I place the image. I would ballpark the extra work at about 20 minutes.

What am I going to do with the results?

I would like to translate what I learn into future email design decisions, specifically how much weight we give to images.

This is a fairly loose outline of how I became confident that this would be an easy test to run. I will note that I would like to repeat this test additional times depending on which variant wins.

My audience for this campaign is just over 100k email subscribers, all of whom have recently engaged with email. It can be difficult to assess whether test results are significant with a much smaller audience.
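To make the audience-size point concrete, here is a minimal sketch (not from the original campaign) of the standard two-proportion sample-size estimate. The 4% baseline conversion rate and the one-point lift are hypothetical placeholder numbers, purely to illustrate why a small list makes significance hard to reach.

```python
# Rough sample-size check for a two-proportion A/B test.
# Assumed numbers: 4% baseline conversion, hoping to detect a lift to 5%.
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Approximate subscribers needed per variant for a two-sided z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_beta = norm.ppf(power)            # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical baseline and target conversion rates
needed = sample_size_per_variant(0.04, 0.05)
print(f"~{needed:,.0f} subscribers per variant")  # roughly 6,700 per variant here
```

With smaller lifts or lower baseline rates, the required audience per variant grows quickly, which is why a list of 100k engaged subscribers is a comfortable starting point.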

My hypothesis

My hypothesis is that Variant B will see a statistically significant lift in our bottom-of-funnel conversion event because of:

- Copy before the image priming users with the main message
- An educational video users can click through to via the image, should they want more information
- A CTA button immediately following

Whereas with the traditional layout of Variant A, I think more users will immediately click the image and fall out of the journey.

The results

For the results, I'll be using this calculator tool to determine statistical significance.
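For readers who prefer to run the math themselves, the sketch below reproduces what such a calculator typically does under the hood: a two-sided two-proportion z-test. The counts here are made-up placeholder numbers, not this campaign's actual results.

```python
# Two-proportion z-test, the same math most online significance calculators use.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for conversions in two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))                        # two-sided p-value
    return z, p_value

# Hypothetical counts: ~50k subscribers per variant
z, p = two_proportion_ztest(conv_a=2100, n_a=50000, conv_b=2180, n_b=50000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is above 0.05 here, so no significant difference
```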

After running this test multiple times across a variety of days and email topics (to rule out that the results were a fluke), every test came back without statistical significance: image placement did not impact how subscribers interacted with the email. Subscribers clicked the emails at the same rate in both variants, and there was no significant difference in the bottom-of-funnel conversion event we were ultimately looking for.
