A/B Testing vs Multiple Variant Testing: And the Winner Is…?


During the 2016 Rio Olympic Games, Mahe Drysdale rowed 2,000 meters (1.24 miles) in just 6 minutes and 41 seconds.

However, despite his impressive performance, the world record-holder nearly lost the race.

[Video: Drysdale Rows to Nail-Biting Single Sculls Win in Rio 2016 Olympics]

In one of the closest finishes in Olympic history, Drysdale won by mere millimeters.

In contrast, Great Britain’s Men’s Eight team rowed the same distance in just 5 minutes and 29 seconds—over 70 seconds faster than Drysdale’s time!

What’s more, the Brits won by more than a half second.

[Image: Great Britain’s Men’s Eight rowing win]

That might not seem like a huge margin, but in the Olympics, a half second is a big deal.

So, why was Britain’s team so much faster than Drysdale?

The answer is simple: they had more oars in the water.

Now, at this point, you might be thinking, “This is all well and good, Jake, but what does rowing have to do with online marketing?”

Well, it turns out that conversion rate optimization (CRO) is a lot like rowing.

The more oars you have in the water, the faster you’ll make it to your goal and the more likely you are to beat out the competition.

The Secret is Testing Multiple Variants

Over the years, CRO seems to have become synonymous with A/B testing in the minds of many marketers.

Now, there’s nothing inherently wrong with this. A/B testing is a form of conversion rate optimization. You have a page and you want it to perform better, so you change something and see if it improves your results.

But here’s the thing: A/B testing isn’t the only way to do CRO.

It might not roll off the tongue as nicely as “A/B testing”, but if you’ve got enough traffic, A/B/C/D/etc. testing can produce meaningful results much more quickly.

For example, Optimizely recently studied and reported on the factors that defined the world’s best testing companies.

Guess what the 4 biggest factors were?

  1. Testing the things that drive the most revenue
  2. Testing every change
  3. Testing to solve real problems
  4. Testing multiple variants simultaneously

Does #4 surprise you?

Apparently, the most effective CRO doesn’t come from A/B testing—it comes from testing multiple variants.

Essentially, A/B testing is like the Mahe Drysdale of CRO. It works and it can even deliver amazing results.

But, it’s only two oars in the water—there’s no way it can compete with an 8-man team.

To put this in more concrete terms, according to Optimizely, just 14% of A/B tests significantly improve conversion rates. On the other hand, tests with 4 variants improve conversion rates 27% of the time.

So, if you test 4 variants, you are roughly 90% more likely to improve your conversion rate than if you just ran an A/B test (27% is about 1.9 times 14%). And yet 65% of CRO tests are—you guessed it—A/B tests!

Why Testing Multiple Variants Works Better

Basically, there are two reasons why multiple variant testing outperforms A/B testing: 1) it’s faster and 2) it allows you to test more variants under the same testing conditions.

Multiple Variant Testing is Faster

Sure, you can test the same things with a series of A/B tests as you can with a multiple variant test—it just takes a lot longer.

When you run an A/B test, you can really only learn one thing from your test. Your variant will perform better than, the same as, or worse than your original.

And that’s it, that’s all you can learn.

Now, if you’re smart about your A/B testing strategy, your results can teach you a lot about your audience and make your future tests smarter, but you’re still only learning one thing from each test.

On the other hand, with multiple variant testing, you can try out several ideas at the same time. That means you can simultaneously test multiple hypotheses.

So, instead of just learning that a hero shot with a smiling woman outperforms a shot of a grumpy man, you can also see if a grumpy woman image drives more results than the grumpy man pic or if a happy man outshines them all.

Or, you can try multiple combinations, like a new headline or CTA in combination with either the smiling woman or the grumpy man.

Running all of these tests simultaneously will allow you to optimize your page or site much more quickly than you could with a long series of A/B tests.

Plus, running a test with multiple variants will greatly improve the odds that a single test will deliver at least one positive result, allowing you to start getting more from your website sooner.
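
To see why, here’s a back-of-the-envelope illustration in Python. It assumes each variant has an independent chance of winning (an assumption we’re making for illustration, not something Optimizely claims), so treat it as a sketch of the intuition rather than a precise model:

  # Purely illustrative: assume each challenger variant independently has the
  # 14% win rate Optimizely reports for a single A/B variant.
  p_single_win = 0.14

  # Chance that at least one of three challengers beats the control:
  p_any_of_three = 1 - (1 - p_single_win) ** 3
  print(f"{p_any_of_three:.0%}")  # ~36% under independence; real variants
  # overlap, which is consistent with Optimizely's lower observed 27%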

Multiple Variant Testing is More Reliable

Another problem with successive A/B tests stems from the fact that the world changes over time.

For example, if you’re in eCommerce and run your first A/B test in October and your second in November, how do you know your results aren’t being skewed by Black Friday?

Even if your business isn’t seasonal, things like shifts in your competitors’ marketing strategies, political changes, or a variety of other variables can make it difficult to directly compare the results of successive A/B tests.

As a result, sometimes it can be hard to know if a particular A/B testing variant succeeded (or failed) because of factors outside of your control or even knowledge. The more tests you run, the murkier your results may become.

However, with a multiple variant test, you are testing all of your variants under the same conditions. That makes it easy to compare apples-to-apples and draw valid, reliable conclusions from your tests.

What Does Testing Multiple Variants Look Like in Real Life?

To show you just how testing multiple variants can improve your CRO results, let me share an experience we recently had with one of our clients.

The client wanted to get site traffic to their “Find Your Local Chapter” page, so we decided to add a “Find Your Local Chapter” link to the client’s footer. That way, the link would be seen by as many people as possible.

Makes sense, right?

So, we put together something that looked like this:

[Image: footer variant 1 — the plain “Find Your Local Chapter” link]

At first, we figured we would just put the link in the footer and run a test to see if the link made a difference.

But then, we started wondering if there was a way to make the link even more noticeable. After all, getting traffic to this page was a big deal to the client, so it made sense to emphasize the link.

With that in mind, we added color to the link:

[Image: footer variant 2 — the link with added color]

Now, this idea seemed logical, but at Disruptive, we believe in testing, not gut instinct, so we figured, “Hey, we’ve got enough traffic to test 3 variants, let’s take this even further!”

The problem was, the client’s site was a designer’s dream—modern and seamless. To be honest, we had a bit of trouble selling them on the idea that a page element that interrupted that seamless flow was worth testing.

But, eventually, we convinced them to try the following:

[Image: footer variant 3 — a more prominent, interruptive “Find Your Local Chapter” element]

It was very different from anything the client had tried on the page before, but we decided to run with the idea and include it in our test.

A few weeks and 110,000 visitors later, we had our winner:
[Image: results of the three-variant footer test]

Not surprisingly, adding the “Find Your Local Chapter” link increased page visits by over 60% for every variant—that’s an awesome win, right?

But here’s the thing. Had we stuck with our original, strict A/B test, we would only have discovered that adding the link increased traffic by 63%.

On the other hand, by including a couple of extra variants in the same test, we were able to discover that—contrary to the client’s belief—the more our link “interrupted” the site experience, the more traffic it drove to the chapter page.

Sure, we might have reached the same conclusion with several more tests, but we achieved these results much more quickly and reliably than we would have with an A/B testing series.

Should You Test Multiple Variants?

When it comes to testing multiple variants, there’s only one real reason not to use it: your boat is too small.

[Image: rowing fail]

Think about it: if the entire British Men’s Eight team had tried to cram onto Mahe Drysdale’s boat, they never would have made any forward progress.

The same idea applies to CRO.

As great as multiple variant testing is, if you don’t have enough traffic, a test could take months or years to complete.

In fact, in true multivariate testing—where you test to see how a large number of subtle changes interact to generate your conversion rate—you want at least 100,000 unique visitors per month (for more information on multivariate testing, check out this great article).

On the other hand, you need far less traffic to simultaneously test multiple page variants.

To see how long a multiple variant test would take on your site, try VWO’s free sample size and test duration calculator. If the time frame makes sense for your business, go for it!
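
If you’d rather sanity-check the math yourself, the back-of-the-envelope version is: visitors needed per variation (from any sample size calculator) times the number of variants, divided by your daily traffic. Here’s a minimal sketch in Python (all the traffic figures are hypothetical):

  def test_duration_days(visitors_per_variation, num_variants, daily_visitors):
      """Rough test length: total sample required divided by daily traffic."""
      return visitors_per_variation * num_variants / daily_visitors

  # Hypothetical numbers: 10,000 visitors needed per variation, 4 variants,
  # and a site getting 2,000 visitors a day:
  print(test_duration_days(10_000, 4, 2_000))  # 20.0 days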

Conclusion

Whether it’s Olympic rowing or CRO, the more oars you have in the water, the better your results will be.

Although it may be tempting to limit CRO to A/B testing, testing multiple variants will allow you to improve your conversion rates more quickly and reliably than you could with a series of A/B tests.

You’ve heard my two cents, now it’s your turn.

Have you tried multiple variant testing? What was your experience like? Did any of the data in this article surprise you?

About the Author: Jacob Baadsgaard is the CEO and fearless leader of Disruptive Advertising, an online marketing agency dedicated to using PPC advertising and website optimization to drive sales. His face is as big as his heart and he loves to help businesses achieve their online potential. Connect with him on LinkedIn or Twitter.




7 Reasons Your Site Isn’t Ready for A/B Testing


You’ve invested a lot of time and effort into perfecting your website and you want to get the maximum return from that investment. To achieve that goal, you’ve studied dozens of blogs on conversion optimization techniques. You’ve pored over countless CRO case studies, and you have a few tools to help you run A/B tests.

Before you start split testing to get those conversion gains, pause for a second. I don’t think you’re quite ready yet.

There are plenty of free tools to help you test your optimization – not to mention paid options from Optimizely to OptinMonster that’ll help you explore different facets of your site’s performance – so just about anyone can run A/B tests. But it’s not a matter of simply understanding how to do it.

The problem is that your site just isn’t there yet. A/B testing isn’t for everyone, and if it’s not done at the right time with the right conditions, you might end up accumulating a lot of false data that does more harm than good. Before you invest anything in testing and extensive optimization, consider these seven points:

1. The Traffic Volume Isn’t There

[Image: Google Analytics showing very low traffic numbers] If this is what your traffic numbers look like, don’t bother A/B testing.

There’s no doubt that A/B testing can be highly useful for businesses that want to improve their conversion rates. Having said that however, a lot of businesses shouldn’t bother with A/B testing.

Small businesses that are trying to grow, startups, e-commerce businesses in their early years and other micro businesses simply don’t have the traffic and transactions to accurately perform A/B tests. It takes a significant amount of traffic to provide accurate, measurable results.

In a post for ConversionXL, Peep Laja provided an example using Evan Miller’s sample size calculator: you enter your baseline conversion rate, then the lift you want to detect.

[Image: Evan Miller’s sample size calculator]

You can see from this image that in order to detect a 10% lift, the tool recommends at least 51,486 visitors per variation.

If the traffic isn’t there yet, you can still optimize your site based on audience data you’ve gathered, but A/B tests won’t be helpful for a while and they might produce false information.
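
If you’re curious where numbers like that come from, here’s a minimal sketch of the standard two-proportion sample size formula in Python. The 3% baseline is our hypothetical, and exact results vary a little between calculators, so treat this as illustrative rather than a reproduction of Evan Miller’s tool:

  from statistics import NormalDist

  def sample_size_per_variation(baseline, relative_lift, alpha=0.05, power=0.80):
      """Visitors per variation for a two-sided test of two proportions."""
      p1 = baseline
      p2 = baseline * (1 + relative_lift)
      z_a = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
      z_b = NormalDist().inv_cdf(power)          # 0.84 for 80% power
      p_bar = (p1 + p2) / 2
      top = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
             + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
      return round(top / (p2 - p1) ** 2)

  # Hypothetical: a 3% baseline conversion rate, detecting a 10% relative lift
  print(sample_size_per_variation(0.03, 0.10))  # roughly 53,000 per variation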

2. You Don’t Have Anything to Test

A lot of websites function as a general brochure for a company with minimal conversion points. If you run a B2B site or you have a freshly-created site with little more than a contact form and an opt-in, then it’s too early in the game to start running concurrent A/B tests.

[Image: a freshly-installed WordPress site] If your site is content-lite, then it’s probably too soon to start running tests.

Even if the volume of traffic is adequate to run accurate tests, you may not see a significant lift from a general opt-in or estimate request form. For most businesses, the amount of effort and cost that would go into designing variations for the tests just to get a small lift around micro conversions isn’t worth it.

The same applies to newer e-commerce stores.

Your time would be better spent in your analytics, setting up goal tracking, creating marketing campaigns, and developing your content offers and resources. A/B testing can come later, once you have more to offer and traffic has grown substantially.

3. You’re Not Sure What Matters

Do you know what the choke points, leaks, and sticking points are in your funnel? I’m referring to the places where you’re losing prospective customers, as well as where you’re gaining the most.

Before you can run any kind of tests, you have to understand what matters, because some elements are more important than others.

For example: a marketing agency is driving visitors to their estimate request page. They spend a significant amount of time optimizing that page with A/B testing variations and micro changes. After extensive testing, they find that their efforts made very little difference with virtually no impact on their conversions.

Instead, they should have looked for mistakes in their funnel leading up to that page. Maybe the content that led the visitor to that point was where the changes needed to be made. Maybe the search intent of the customer didn’t match the content they found.

Another example: a brand selling shoes online puts a great deal of effort into optimizing and testing product pages, only to realize that the lift in conversion was insignificant. Instead, they could find ways to improve the average order value or review their funnel in Kissmetrics to find the biggest leaks where customers are dropping off and fix those problems instead.

[Image: a funnel report with a drop-off opportunity highlighted] Don’t know where to test? Find where you’re losing customers (and money) with the Kissmetrics Funnel Report.

If you randomly try to test what you think matters, then you’ll only be wasting time.

One study from Forrester showed that 60% of firms surveyed saw improvements in their website when they used a data-driven approach to design. It’s important to take the time to research what really matters to your business so you know what to optimize and where to make changes.

4. You’re Copying Content

While a competitor’s site (or any site, for that matter) might have an attractive design that your customers will probably engage with, don’t waste time testing if you’ve played copycat.

Any tests you run after replicating their design and content will be wasted. If the solution were as simple as copying what we thought worked well for our competitors (or even conversion case studies), then every e-commerce website would function exactly like Amazon.

The fact is, websites are highly contextual and they should relate to both the audience and what you’re promoting. Wal-Mart and Whole Foods are in the same business of selling food products, but they cater to completely different audiences and sell vastly different products.

If I stacked up my own services against another marketing agency offering identical services, there would still be contextual differences in how we market, how we service customers, the channels we use to engage them, and how we direct traffic to our sites.

You need to make sure your website is designed specifically for you, your channels, your audience, etc. before investing in testing.

5. The Data Isn’t There

The more capable you are with analytics tools like Kissmetrics or Google Analytics, the better off you’ll be. But, if the extent of your knowledge consists of checking traffic quantities, referral sources, time on page and bounce rates, then you’re only scraping the surface.

[Image: Google Analytics acquisition data] If you don’t know what data you need to monitor while A/B testing, then testing is a waste of time.

You have to approach your testing and analytics with a problem so you can find an answer in the data. That way, you can identify issues and confirm what aspects you need to change.

Learning a bit more about your analytics can tune you into:

  • How site elements or offers are performing
  • How your content is performing and whether it is keeping people engaged
  • What people are doing on your site and the routes they typically take
  • Where people are landing, as well as where they’re leaving
  • Where your funnel is losing money

The data won’t specifically tell you how to fix problems; it’s just a starting point where you can discover actionable insights. Without that data, and without the ability to interpret it, A/B testing is pointless.

6. Your Site Has Usability Issues

When was the last time you tested your website in a browser other than the one you typically use? Have you tried going through your entire site on a mobile device?

Have you ever performed a full usability test with a variety of browsers and devices?

This is something a lot of marketers don’t consider when they start A/B tests. Ignoring usability issues, tech problems, and bugs is a huge mistake, though. Even minor bugs and slow load times can dramatically impact your conversion rates.

Just a one second delay in load time can drop conversion rates by as much as 7%.

You won’t get accurate results from A/B testing if segments of your audience are bailing due to usability issues. Some of your audience may never make it to your conversion point, and even if they do, their progress could be hindered by bugs or load times that will ultimately skew your results.

This misinterpretation could lead to changes and further variations of elements that are actually part of your winning, optimized design.

7. You Don’t Know Your Audience

Audience research should be one of the first steps of any marketing strategy. If your goal is to drive lots of traffic to your site with content marketing and paid advertising, I would hope you’ve done some measure of audience research.

Without it, you’re shooting blindly into the darkness and hoping to score a bullseye.

Researching and defining your target audience gives you in-depth information about who you’re targeting, such as their pain points, interests, behaviors, demographics info, and more. That information helps you craft compelling copy, winning headlines, and attention-grabbing offers.

[Image: a buyer persona] A target customer profile. How well do you know your target market?

Without it, you’ll resort to guessing what to change about your copy, headlines, offers, and calls-to-action. Every variation you test will be just as random as the one before it, and you likely won’t see any significant change in performance.

Know who you’re marketing to before you make a large investment in A/B testing.

Testing isn’t for Everyone

While there’s a wealth of articles and advice online telling you to test everything you do and to A/B test every variation, you don’t have to. For many startups and growing online businesses, there just isn’t enough traffic early on to create an accurate sample with measurable results.

Focus on growing your business for now. As your traffic grows and you learn more about your customers, you can start testing variations to go after those micro wins.

Do you use A/B testing on your site or landing pages right now? Have you found issues with the quality of your results? Share your thoughts with me in the comments below.

About the Author: Aaron Agius is an experienced search, content and social marketer. He has worked with some of the world’s largest and most recognized brands to build their online presence. See more from Aaron at Louder Online, their Blog, Facebook, Twitter, Google+ and LinkedIn.




The Three Metrics You Need to Know Before You Waste Any Time on A/B Tests


It’s hard to deny that split testing (also known as A/B testing) is changing the face of marketing. According to Unbounce, 44% of online businesses are using split test software. And software products like Unbounce and Visual Website Optimizer are making it ever easier. Split testing, done right, with good context, can put a stop to all the guesswork, anecdotal conclusions, and correlation/causation errors that can abound in marketing circles.

But it’s not without risks: split tests are expensive to run, requiring investment in both software and the staff or consultants to run the tests. Not to mention the opportunity cost of the time not spent exploiting other profit levers in your business.

All of which underscores the importance of testing the right metrics in your business, and the potential cost in time and resources of testing the wrong ones.

While I can’t speak for all businesses, what I’ve seen again and again with clients and peers is businesses gravitating toward what’s easy to test – landing pages, checkout pages, email subject lines, and sales pages (all of which can be extremely important in the right context) – rather than what’s important.

That’s why one of the most meaningful changes you can make in your business is to implement a process for identifying which parameters to test and optimize. Below are 3 metrics you need to know before you spend one more minute split testing.

1. List-to-Sale Conversion Rate

What if I told you one simple calculation would tell you whether to optimize any conversion metrics between an opt-in and a sale, or to look elsewhere? That’s what the list-to-sale benchmark gives you. “List-to-sale” is the percentage of buyers of your product or service over a given time period relative to the number of opt-ins to your email list for the same period.

Say in a given month you get 1,000 opt-ins to your email list, and in that same month, you make 55 sales of your flagship product. Wondering whether you should go with a webinar funnel instead of an email onboarding sequence? Whether to incorporate video into your sales page? Whether to change the color of your “buy now” button?

The answer to all of them is “no”, and I didn’t even need to look inside your funnel. Why? With 55 sales from 1,000 opt-ins, you’re converting at a staggering 5.5% list-to-sale.

To calculate, just take the sales in the last 30 days and divide those by opt-ins over the same time period.

list-to-sale rate = sales in the last 30 days ÷ opt-ins over the same 30 days

Some readers will notice that this calculation ignores the sales cycle (since it takes days to weeks and several touchpoints to make a sale, we should really be comparing this month’s buyers to an earlier month’s opt-ins). You can control for this with a simple average:

  • Take the last 4 months, and average the opt-ins over the first 3
  • Then average the sales over the last 3.
  • Then perform the same percentage calculation. (Sales divided by opt-ins)

For example, say you’re calculating in August:

  • First you’d average the monthly opt-ins for April, May, and June. Let’s just say the average is 1500.
  • Then you’d average the sales from May, June, and July, in order to leave a 30-day lag. Let’s say that average came out to 75.
  • Divide the sales by the opt-ins and you’d get 5%.
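
Here’s that lagged calculation as a quick sketch in Python (the monthly figures are made up so the averages match the example above):

  def lagged_list_to_sale(opt_ins, sales):
      """opt_ins and sales are four consecutive months of counts, oldest first.
      Average opt-ins over the first three months and sales over the last
      three (leaving the 30-day lag), then divide."""
      avg_opt_ins = sum(opt_ins[:3]) / 3
      avg_sales = sum(sales[1:]) / 3
      return avg_sales / avg_opt_ins

  # April-July: opt-ins average 1,500; May-July sales average 75
  opt_ins = [1400, 1500, 1600, 1550]  # hypothetical monthly opt-ins
  sales = [60, 70, 75, 80]            # hypothetical monthly sales
  print(f"{lagged_list_to_sale(opt_ins, sales):.1%}")  # 5.0%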

The benchmark you should be aiming for? 1-2%. Below 1%, go nuts with split testing parts of your funnel. At or above 1%, look elsewhere.

Above 2%, and I’d seriously consider raising your prices. In the hypothetical case of the 5% from above, I’d immediately double the price.

Next, and especially if your list-to-sale conversion is at-or-above the 1-2% benchmark, it’s time to look at your traffic.

2. Opt-in Conversion Rate

The vast majority of businesses I work with have list-to-sale conversion rates closer to benchmarks than their opt-in conversion rates. Put another way, if they’re wasting any resources split-testing their funnel or sales copy, they’re completely ignoring the sizable cohort of website visitors who never even see the offer because they bounce off the site.

As with list-to-sale conversions, you can do a back-of-the-napkin calculation for opt-ins. Just count your new subscribers from the last 30 days and divide that by total website visitors over the same period.

The benchmark to aim at for opt-in conversion is 10%.

If you’ve never calculated your opt-in rate before, my guess is you’ll be astonished at how low it is. I’ve seen it as low as 1-2%.

Luckily, there’s a simple strategy to improve it:

  • Find the individual opt-in rates of your biggest webpages and your 10 most popular content pieces. (If you’re using a plugin like SumoMe or OptinMonster, you can set up the software to tell you your opt-ins for each page.)
  • Look for the “outliers” – content pages often perform worse than home and about pages.
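
If you’d rather script the outlier hunt than eyeball it, here’s a minimal sketch in Python (the page names and counts are hypothetical; in practice you’d pull per-page visitors and opt-ins from your analytics or opt-in plugin):

  def worst_opt_in_pages(page_stats, n=3):
      """page_stats maps URL -> (visitors, opt_ins); returns the n pages
      with the lowest opt-in rate, i.e. where to focus first."""
      rates = {url: opt_ins / visitors
               for url, (visitors, opt_ins) in page_stats.items() if visitors}
      return sorted(rates.items(), key=lambda kv: kv[1])[:n]

  stats = {
      "/": (20_000, 2_400),                 # 12% opt-in rate
      "/about": (8_000, 640),               # 8%
      "/blog/popular-post": (12_000, 180),  # 1.5% - the outlier
  }
  print(worst_opt_in_pages(stats))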

Once you’ve identified the worst performers, run through this simple checklist (ordered from lowest-hanging fruit to more subtle):

  1. Can readers find your opt-in offer, or is it buried below the fold or ¾ of the way down a blog post?
  2. Are you giving your visitors only one thing to do on each page or post, or are you offering 3 different giveaways on various parts of your page? The page below is not the type of page you want to create if you’re looking to increase opt-ins.

    [Image: a blog post with too many competing CTAs] Don’t give your readers more than one choice when optimizing for opt-ins.

  3. Is your opt-in offer not just well written, but well copywritten? Does it specify exactly who it’s for, describe a clear, specific benefit, and emphasize the urgency of opting in? (Even high-performing opt-ins can usually be improved.)
  4. Are you requiring your subscribers to double opt-in? This will lower your opt-in conversions. Many founders I’ve talked to like double opt-in because it seems more “polite”. In my opinion, making somebody go off the page to get the freebie they just gave their email address for, let alone wait up to 20 minutes for it to arrive in their inbox, isn’t particularly polite. When I give my email address to get a lead magnet, I want it now – not after reconfirming my email address and waiting 20 minutes for the email.

Split-test ninjas, take note: if you’ve read this far and your opt-in rate is indeed garbage, there’s ample opportunity to split test:

  • Two versions of a homepage with different opt-in copy/design.
  • Two versions of an exit-pop on a popular content piece.

Go nuts.

3. Traffic

If you’re among the extremely lucky minority with list-to-sale conversions at-or-above 2%, and opt-in conversions at-or-above 10%, and you’ve raised your prices, I have some disappointing (although kind of good) news: split testing is not a good fit for your business.

Here’s the question to ask: Are your monthly sessions at least 50% of your list size? (i.e. if your list has 2,000 subscribers, are you getting at least 1,000 uniques per month?) If not, you need a traffic strategy. Don’t waste your time A/B testing anything.

While I’m a conversions expert and not a traffic expert, here’s a quick decision tree:

  • Determine your market size. If you could 5x your traffic, are there enough people in your market to support it?
  • Implement a content/syndication/guest-post strategy ASAP. It’s practically the only guaranteed winner across all verticals, but it can take up to a year to bear fruit.
  • Consider hiring a paid traffic expert for one month to test customer acquisition costs from various paid sources. Choose the most profitable and double down while you wait for organic traffic to grow.

Bottom line: the same month spent split testing two opt-in offers on a homepage, landing page, or content page, could provide a 2-4x increase in revenue (by, say, improving an opt-in conversion rate from 1% to 4%), while the same time and money spent trying to boost an already maxed-out sales conversion rate would have a much smaller return.
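
If it helps to see all three checks in one place, here’s one way to wire them up in Python. It’s a sketch of the decision order described in this article (the thresholds are the benchmarks above; the function and its messages are just illustrative):

  def where_to_focus(list_to_sale, opt_in_rate, monthly_sessions, list_size):
      """Apply the three benchmarks in the order the article presents them."""
      if list_to_sale < 0.01:
          return "Split test your funnel and sales copy (benchmark: 1-2%)."
      if opt_in_rate < 0.10:
          return "Split test opt-in offers and pages (benchmark: 10%)."
      if monthly_sessions < 0.5 * list_size:
          return "You need a traffic strategy - don't A/B test anything yet."
      return "Benchmarks met - raise prices; split testing isn't a good fit."

  # Hypothetical business: 2% list-to-sale, 4% opt-in rate,
  # 30,000 monthly sessions, 20,000 subscribers:
  print(where_to_focus(0.02, 0.04, 30_000, 20_000))
  # -> "Split test opt-in offers and pages (benchmark: 10%)."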

That’s why a little context can save you thousands.

About the Author: Nate Smith is a direct-response copywriter and funnel expert who helps businesses scale by exploiting their most powerful profit levers. Nate is founder of 8020MarketingGuy.com.


