
Learning and improving are essential to getting the best results from your business, and you can't learn if you don't run tests. On the Email Marketing side of your business, the most important metrics are Open and Click Rates. Here's one simple fact that's often overlooked: whether you have 100,000 subscribers with a 10% open rate or 30,000 subscribers with a 30% open rate, roughly the same number of people will open your emails. This is why testing and improving Subject Lines is so important.
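The arithmetic behind that comparison can be sketched in a couple of lines (the numbers here are the illustrative ones from the paragraph above, not real account data):

```python
def effective_reach(subscribers: int, open_rate: float) -> int:
    """Expected number of subscribers who actually open a campaign."""
    return round(subscribers * open_rate)

print(effective_reach(100_000, 0.10))  # 10000
print(effective_reach(30_000, 0.30))   # 9000
```

A bigger list does not automatically mean a bigger audience for your message; the open rate is what turns list size into real reach.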
Klaviyo offers an option to A/B test your campaigns. In one campaign, with a fraction of your subscribers, you can test different Subject Lines, Preview Text or different email content.
The simplest way to describe the process is this: you create your campaign as usual, and then in the "Message Content" part of the campaign you add another variation. The original content becomes the "A" variation and the new content becomes the "B" variation. You can choose how much of your target audience to use for the test and how long the test should run before Klaviyo sends the winning variation to the rest of the audience.
Let’s go deeper into this.
As described above, first you need to set your campaign as usual. If you need help with creating and configuring your campaign you can find our tutorial on “How to Send or Schedule a Campaign in Klaviyo” very helpful. Follow the steps in the tutorial and configure your campaign.
Assuming you have your campaign set and you want to add a second variation for testing, you can do that in the “Message Content” part of the campaign.
At the bottom of the content page, click on the "Create A/B Test" button.

This will clone the content and create 2 identical variations as shown below.

Now you can see A and B variations which at this point are identical. You can change either of these variations to create the different versions of the campaign that will be tested. In this example, we’re going to leave the variation A with its original content as it is and change the B variation.
In the B variation, click on the “Edit Variation” button.

This will get you to the content section of the B variation. Here is where you can edit all the elements you want to test against variation A. You can change the Subject Line, Preview Text, From name, From/Reply-to Email or the content of the email itself.
In this example we’ve changed the Subject Line of the B variation. After you finish editing the B variation, click on “MESSAGE CONTENT” in the top right corner as shown below.

This will bring you back to the previous window where now you can see the different Subject Line for variation B.

Now that the variations part is ready, you need to configure the rules for the A/B testing.

Winning Metric: Open Rate vs Unique Clicks
The first thing you need to set is the rule for determining the winner between the testing variations.
If you’re testing a different Subject Line or Preview Text you need to select the “Open Rate” option. This is because the open rate in campaigns depends mostly on the first thing people notice when an email lands in their inbox, which is the Subject Line.
If you’re testing different content in the two variations you need to select the “Unique Clicks” option. The variation with the most unique clicks will be the winner and the rest of the subscribers will receive that variation.
NOTE: When you're testing email content, make sure the Subject Line and Preview Text are the same for both variations. This way, a nearly equal number of people will open each email and see the content. From that point, the variation with the better content will collect more unique clicks.
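To make the "unique clicks" metric concrete, here is a minimal sketch of how such a winner could be determined. The click events and subscriber IDs are made up for illustration; Klaviyo tracks this for you automatically:

```python
# Hypothetical click events as (variation, subscriber_id) pairs.
# The same subscriber clicking twice counts only once per variation.
clicks = [
    ("A", "s1"), ("A", "s1"), ("A", "s2"),
    ("B", "s3"), ("B", "s4"), ("B", "s4"), ("B", "s5"),
]

unique_clickers: dict = {}
for variation, subscriber in clicks:
    unique_clickers.setdefault(variation, set()).add(subscriber)

winner = max(unique_clickers, key=lambda v: len(unique_clickers[v]))
print(winner)  # B (3 unique clickers vs 2)
```

Note that variation A received three clicks in total but only two unique clickers, which is why B wins.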
The next part is configuring how large a sample of your target audience you want to use for this test.
Understanding the benefits of larger vs smaller test groups
By default, this is set to 100%, but you can adjust it to your needs. If you leave it at 100%, the two variations will be sent to your entire audience, 50% each. While deciding how much of your audience to use for the test, keep a couple of simple things in mind. A larger test group gives you more reliable results; using too few people might give you false results. Let's explain how that could happen.
To explain things more clearly, let's say your audience for this campaign is 1,000 subscribers and you choose to test with only 5% of them. In that case, only 50 people will be used for the test. Since there are two variations, each variation will be sent to 25 people. Even though the test groups are randomly selected, there's a good chance one of them ends up with a few more engaged subscribers. If that happens, even a worse Subject Line can win simply by having more loyal subscribers in its test group. This is why it's important for your test groups to be large enough that the difference in subscriber quality between them is negligible.
You need to find a balance: give the test groups enough people to get a clear winner between the two variations, while maximizing the number of people who will receive the winning variation after the A/B test is done.
Configuring the time frame for the A/B test run
The third part of these settings is configuring the time for which the A/B test will run. You can configure these settings to send the test emails and wait between 1 and 96 hours for results before determining the winning variation and sending that variation to the rest of your audience.
Giving the test more time will get you better results. On the other hand, if you give it too much time, you might miss the perfect window for sending your campaign. Let’s explain this.
Let's say you generally send campaigns in the morning and your subscribers are used to that. If you schedule the campaign to be sent at 10:00 AM with a 15-hour time frame for the A/B test, only the testing group will receive the email at 10:00 AM. The rest of the subscribers will receive the winning variation 15 hours later, at 1:00 AM, when most of your subscribers are not online to check their emails.
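The send-time arithmetic from the example above can be checked with a few lines (the date is arbitrary and only there to make the example runnable):

```python
from datetime import datetime, timedelta

send_time = datetime(2024, 5, 7, 10, 0)        # campaign scheduled at 10:00 AM
test_window = timedelta(hours=15)              # A/B test waiting period
winner_sent = send_time + test_window          # when the winner goes out
print(winner_sent.strftime("%Y-%m-%d %H:%M"))  # 2024-05-08 01:00
```

Anything past roughly a 13-hour window pushes a 10:00 AM campaign's winning variation into the middle of the night.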
If you want to maximize the testing time and your campaign is not day- or date-sensitive, you can set the waiting time to 24 hours, which will send the winning variation at the same time the next day.
In the opposite case, where you give the test too little time (e.g. 1 hour), people in the worse variation may happen to be more active during that hour and make it win.
Generally speaking, 3-4 hours is enough time to collect data and determine the real winner between the two variations. To make sure you're giving the test enough time, check several of your previous campaigns and see in what time frame after sending most of the opens happened. Use that time frame as a basis for the waiting period of your A/B tests.
In the last part of these settings, you can choose whether a different UTM parameter should be added to the URLs of each variation. This is helpful if you're using analytics tools such as Google Analytics and want to see the performance of each variation separately.
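To show what per-variation UTM tagging looks like in practice, here is a sketch that appends hypothetical UTM parameters to a link. The parameter values (`klaviyo`, `variation-a`) and the URL are made up for illustration; the actual values Klaviyo appends are set by the platform:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_utm(url: str, **params: str) -> str:
    """Append UTM query parameters to a URL, preserving any existing ones."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update(params)
    return urlunparse(parts._replace(query=urlencode(query)))

url_a = add_utm("https://example.com/sale", utm_source="klaviyo",
                utm_medium="email", utm_content="variation-a")
print(url_a)
# https://example.com/sale?utm_source=klaviyo&utm_medium=email&utm_content=variation-a
```

With a distinct `utm_content` value per variation, your analytics tool can break down traffic and conversions by variation even after the campaign is sent.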
After you finish configuring the A/B test, click on the “Save & Schedule Campaign” button in the bottom right corner to proceed with sending or scheduling your campaign.
