It’s a common question: Why is split testing important?
Consider your sales funnel when split testing. All your PPC and SEO efforts are aimed at attracting more visitors to the top of the funnel.
You would assume that more visitors to your website automatically equals more customers, but here’s the problem: only about 3 out of every 100 people (globally) make a purchase. The other 97% represent wasted spend.
Thankfully, there’s a way to broaden the funnel, so that more of your visitors convert into customers. It’s called conversion rate optimization. I talk more about broadening the conversion funnel here.
The definition of optimization is the action of making the best or most effective use of a situation or resource. When you launch a new website or landing page, you’re never completely sure what’s going to work and win the most conversions. That’s why we split-test: we create different variants of pages and ads to see which performs better.
What Is Split Testing?
Split testing allows marketers to compare two different versions of a web page or ad — a control (the original) and a variation — to determine which performs better, with the goal of boosting conversions.
Below is an example from Facebook, but the same principle applies to all types of ads.
When running a test, traffic to the page is split among the different versions and their performance is tracked. Whichever version converts best by the end of the test wins.
Ideally, there will be only one difference between the two pages, so the tester can understand the reason behind the change in performance.
Split testing can mean either split URL testing or A/B testing. Split URL testing differs from A/B testing in that the different versions are housed on different URLs; it’s commonly used when the page designs are very different from each other.
Split-Testing with Ads
As shown in the above image from Facebook, split testing divides your audience into random, non-overlapping groups who are shown ad sets with identical creative.
This randomization helps ensure the test is conducted fairly, because other factors won’t skew the results of the group comparison. It also ensures each ad set is given an equal chance in the auction.
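Under the hood, this kind of person-based, non-overlapping assignment is usually done by hashing a stable user ID rather than flipping a coin per visit. Here’s a minimal sketch of the idea in Python — the function name and the salting scheme are illustrative, not Facebook’s actual implementation:

```python
import hashlib

def assign_group(user_id: str, experiment: str, n_groups: int = 2) -> int:
    """Deterministically bucket a person into one of n non-overlapping groups.

    Hashing a stable user ID (rather than a cookie) keeps the assignment
    consistent across devices and sessions; salting the hash with the
    experiment name keeps different tests independent of one another.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_groups

# The same person always lands in the same group for a given test,
# so no one sees both versions:
group = assign_group("user-42", "cta-color-test")
```

Because the hash is deterministic, re-running the assignment for the same person and test always yields the same group, which is what makes cross-device measurement possible.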
Each ad set tested has one distinct difference, called a variable. Your variable can be different audience types, placements, or delivery optimizations.
To get the most accurate results from split testing, test only one variable at a time. For example, if you test two different audiences against each other, you can’t also test two delivery optimizations simultaneously, because you wouldn’t know for sure which change affected the performance.
Split testing is based on people, not cookies, and gathers results across multiple devices.
The performance of each ad set is measured according to your campaign objective and is then recorded and compared. The best performing ad set wins.
What Do These Variables Do?
Here are some initial variables to test:
- Target audience – you can choose different audiences and test them against each other.
- Delivery optimization – you can run a split test with one ad set optimized for conversions and one optimized for link clicks.
- Placements – you can select automatic or customized placements to define where you want your ads to appear. Your best bet is to test custom placements against automatic placements instead of custom versus custom.
Again, only one variable can be tested at a time.
Here are some visual elements to test:
- Ad copy – test size, color, shadowing, font, and of course, the ad copy itself
- Border inclusion – does your ad stand out better with a white background or will it draw more attention with a black vs. colored border?
- Call to action (CTA) – test buttons vs. links
- Promotion – free shipping or discount offers, such as “save 20%” vs. “save $20”
- Price points – $0.99 vs. $1 or “lowest price guarantee”
- Headlines – this is the most vital part of your ad, so definitely test your headlines. Consider stating a fact, asking a question, highlighting a feature, or addressing a pain point.
You can never be 100% sure which change is going to move the needle in the right direction, so it’s always good to have a list of future tests ready to go. In some instances, I’ve seen a button color change provide a substantial conversion rate lift, while on other ads it hardly made a difference and a copy change mattered most.
Split Testing on Landing Pages
Ideally, we’d like to test a small variation on landing pages so that we can home in on what changed the conversion rate. The major problem with that ideal is that each test often requires hundreds or thousands of visitors to declare a winner. That’s why we like to start our clients with two completely different designs, so we can get a bigger result much quicker.
As we’ll discuss below, people claim that a particular button color produces the most conversions and that there’s one ideal number of fields to use on your form. But their business is not your business.
If you have a horrible conversion rate, a button color change isn’t going to save that. You may have to change your offer or the language surrounding your offer.
1) Start with a reason to test.
Is there a drop-off at the second step? Are people not scrolling down the page? Does the page load slowly? Did you create a user poll and get data back on this? Find the problem first, so that you can find the correct solution.
2) Create a hypothesis.
Based on the answers you get to those questions, you’ll (hopefully) be able to determine what needs to change. For example, we had a client that was not converting well. We put a Hotjar poll on the page asking, “What’s keeping you from getting your bearing today?” The answer chosen most often was “not trustworthy.”
We went ahead and added testimonials, logos of certifications and awards, and images of the factory. This increased conversions by 30%.
In our next test, we created a two-step form asking qualifying questions on the first step and personal questions (name, phone, and email) on the second step. This saw a 420% increase in conversions at 100% confidence. We talk more about the power of two-step landing pages here.
3) Calculate your sample size.
There’s a way to calculate sample size manually, but it involves some serious math. Optimizely’s calculator is great for this.
Here’s what you’ll have to input for it to spit out an accurate sample size:
Baseline conversion rate: What’s the conversion rate of your original page? The higher it is, the fewer visits you’ll need before you can declare a winner of your test.
Minimum detectable effect: The minimum relative change in conversion rate you want to be able to detect. A 20% minimum detectable effect means that, at the end of the test, you can only be confident that a lift or drop in conversion rate higher than 20% is a result of your adjustments. The lower your minimum detectable effect, the more visits you’ll need before you can conclude your test.
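Calculators like Optimizely’s use the standard two-proportion power formula under the hood. If you’re curious what the math looks like, here’s a rough Python sketch — the function name is mine, and the z-values are hardcoded for the most common significance and power choices:

```python
import math

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant for a two-proportion test.

    baseline: conversion rate of the control, e.g. 0.03 for 3%
    mde: minimum detectable effect as a relative lift, e.g. 0.20 for 20%
    Uses the normal approximation; z-values for the common alpha/power
    choices are hardcoded to avoid extra dependencies.
    """
    z_alpha = {0.05: 1.96, 0.10: 1.645}[alpha]    # two-sided significance
    z_power = {0.80: 0.842, 0.90: 1.282}[power]
    p1 = baseline
    p2 = baseline * (1 + mde)                     # rate you hope to detect
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# A 3% baseline with a 20% minimum detectable effect needs roughly
# 14,000 visitors per variant:
n = sample_size_per_variant(0.03, 0.20)
```

Notice how quickly the number grows as the minimum detectable effect shrinks — halving the MDE roughly quadruples the traffic you need, which is why small-tweak tests take so long to conclude.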
Luckily, Unbounce takes care of this for us. We know that we want to be at least at a 90% confidence before we decide on a winner. However, if you’re noticing that a new variant is already losing, but there’s still low confidence, lower the weight of the variant until it runs its course — the weight being the percentage of people who will see this variant.
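The confidence figure a tool like Unbounce reports is essentially a two-proportion z-test comparing the control and variant conversion rates. Here’s a minimal sketch of that calculation — the function name and example numbers are my own, and real tools may use slightly different methods:

```python
import math

def confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Confidence (1 minus the two-sided p-value) that two conversion
    rates genuinely differ, via a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Normal CDF via erf, so no scipy dependency is needed
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value

# Control: 300 conversions from 10,000 visitors (3.0%)
# Variant: 360 conversions from 10,000 visitors (3.6%)
c = confidence(300, 10_000, 360, 10_000)
```

With these example numbers the confidence comes out above the 90% threshold mentioned above, so the variant could be declared a winner; with smaller samples the same 0.6-point gap would not clear the bar.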
4) QC your testing.
Make sure that things like traffic sources and referring ads are the same for both pages and that other variables that could affect your test are eliminated to the best of your ability. Does your landing page look the same in every browser? Is your CTA button working? Are all the links in your ads correct? Once you’ve made sure of this, send traffic to your control page and your variant.
5) Rinse and repeat.
How did it go? Did you see an increase or a decrease in conversions for your variant? If there’s an increase, promote your variant as the champion and start a new test based on the data gathered from your last one. There’s always something to test, something to improve.
Here’s a split testing example from one of our clients below. We were getting a lot of drop-off on the first step, so we moved the company name field to the second step, and this gave us a nice 69% lift. When a win like this happens, you’ll want to roll it out across your other pages and see if it helps.
What Are the Best Things to Test?
If you search “split testing case study,” you’ll find so many blog posts claiming that a sacred button color produces the most conversions or that there’s an exact number of fields to use on your form. However, consider split testing as a weight loss diet: what works for some will not automatically work for you.
There isn’t really any “best thing” to test. Some things move the needle more than others, but there are no universal winners. I can’t say this enough: what works for one page or ad may not work for another. It depends on many factors, including your demographic.
One page will do well with a green button while another page does horribly with a green button.
Ted Vrountas from Instapage says it well:
“If marketer 1 tested a red button against an orange one on her landing page, and found the red one to produce more conversions, then she’s correct about red being the right choice for her landing page. If marketer 2 tested a green button vs. a red one on his landing page, and he found green to beat red, then green is the right choice for him.”
This doesn’t prove that the second marketer’s test is better and that green is better than red. The first marketer could test green against red and find that the red button is still a winner.
“The impact of button color on conversions is heavily dependent on a number of things — like your audience and the color of the rest of the page, for instance — all of which vary from business to business.”
Again, what works for someone else may not work for you. That’s why all your tests should be based off data.
Some Things You Can Test
When it comes to landing page split testing, there are tons of elements to be tested for better conversion rates.
Hero Section

This is a very important section, as it’s the first thing your leads will see. Generally, you’ll have your headline, subhead, and form (if you’re creating a lead-gen page) in the hero section. Have you tried a new headline that better represents your unique value proposition? What about switching the subhead and headline copy? A particular hero image might evoke a better response than your current image. Have you tried changing that out?
Directional cues such as arrows and line of sight are useful in pointing users to your CTA.
Here’s a split testing example from Dell.
The result? The second version, with directional cues focusing on the CTA and form, decreased bounce rate by 27%, and 36% more visitors completed the form.
Call to Action (CTA)

What’s the language that you’re using for your CTA? Are you addressing exactly what the customer will be getting, or does it say something vague and useless like “continue”? Although the Dell example is a good example of using line of sight, it’s not a good example of CTA wording.
It would be frustrating for customers to arrive at the second step and see the same button. It could lead them to think, “Wait, that first form told me I was going to get my offer… how many more of these before I really get it?” Having “Send My Offer” on the second step reassures customers that they really will get what they want on this step.
You can also try out different copy on each repeat CTA button of your landing page to see which encourages the click. So, if the button on your form says “Get My Offer,” you could label the repeat button lower on the page “Get My Highest Rate.” Different language, but the same premise.
If you’re finding many people prefer “Get My Highest Rate”, why not update your main CTA button to that language?
Have you found that a certain color works better for you over others? I’ve found that green buttons work great most of the time, but not every time of course. As you grow in your split testing, you’ll find what works best for you.
Forms

If you have a lead gen page, we’ll always stand by this: two-step, two-step, two-step. You can see why we stress this here.
Aside from that, try removing or adding fields. Limit your dropdown options if there are just too many to choose from.
In our own original research, we’ve found that radio buttons tend to get better results than dropdowns. The reason is that radio buttons show the options plain as day, while a dropdown hides them behind an additional click. The quicker you can get people through the form, the better, so limit extra clicks when possible.
Here’s a split testing example below from Marketing Experiments.
Have you tried asking for a zip code vs. state and city? Have you updated the copy of your form to explain what the user will receive in this demo, trial, or free guide?
You can test a sticky form that scrolls as the user scrolls down the page, or you can try a stationary form that lives at the top or bottom of the page, with additional CTAs that scroll to the form when clicked.
Social Proof / Testimonials
We know that having social proof on a page can really help boost conversions, as it adds legitimacy to the brand or product.
Consider your audience. If you have logos of past clients, do you want to show big business clients, small business clients, or a mix of both? Who are you trying to sell to?
Think about how many testimonials you have. Is one enough to really seal the deal? Would three customers’ quotes be more beneficial? Are these testimonials short and vague along the lines of “Great stuff! Would recommend.” or are these testimonials more in-depth covering the customer’s problem and the solution your business provided in a professional and helpful way? These are the questions that need to be considered when presenting social proof.
Another useful tool is putting a name and face to a testimonial. If I’m seeing a quote from someone with no name and no photo, I may call bull. Making your customer feel that this is real and legitimate is very important in claiming that conversion. Try taking this one step further and use video testimonials, if you have them. Viewing rather than reading could be more useful to the potential customer since they can see the real person in action speaking of the product or service that you provide.
Change the Offer
When all else fails, go back to the drawing board and change offers. If you’re offering a free trial, but the temperature of your traffic is a little bit colder, meaning they’re just interested in learning more information first before signing up, you may want to consider giving a free demo instead of a free trial. An even lower ask is gated content. Do you have any free guides, e-books, or case studies about your service that you can give away in exchange for a name and email address?
Wrap Up on Split Testing
There’s no “correct thing” to test, but using tools such as Hotjar to gather data will help you create tests that are more likely to lift your conversion rates.
There are always best practices, but remember that what works for one company may not work for you, because you’re different, and that’s OK. With enough research and time, followed by analysis and testing, you should start to see more traffic, more conversions, and ultimately more sales.
What hold-ups have you run into? What have you learned as you test on your own? Do you have any questions we didn’t cover in this post? We’d love to hear about your split testing experience — leave a comment below.