
Adaptive Multivariate Testing – Does It Work?

February 6, 2009

If you like tinkering with your website, then you have probably heard of A/B or multivariate testing. These let you quickly test new things on your website, such as copy, images, call-to-action buttons, and placement, and see which combination leads to more conversions. A/B testing essentially pits two versions against each other that could be completely different, whereas multivariate testing assesses multiple areas of the page and tests all possible combinations. So, if you had 3 versions of a headline, 3 versions of an image, and 3 versions of a button, you would have 27 possible combinations in a multivariate test.
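You can sanity-check that combinatorics in a few lines of Python (the variant names here are just placeholders for illustration):

```python
from itertools import product

# Hypothetical page elements, each with 3 variants
headlines = ["Headline A", "Headline B", "Headline C"]
images = ["Image A", "Image B", "Image C"]
buttons = ["Button A", "Button B", "Button C"]

# A full-factorial multivariate test tries every possible combination
combinations = list(product(headlines, images, buttons))
print(len(combinations))  # 3 * 3 * 3 = 27
```

Each additional element or variant multiplies the total, which is exactly why multivariate tests get traffic-hungry so quickly.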

Traditionally, multivariate testing has meant that each possible combination gets equal play; that is, each combination is displayed equally often to your traffic. Once a visitor is exposed to one of the test combinations, they are given a cookie so that each time they return while the experiment is still in progress, they will see the same combination. Google Website Optimizer is considered a traditional multivariate testing tool; its algorithm determines, based on how many combinations you have, how many visitors are required to statistically declare a winner. Their model shoots for a 12% minimum improvement in conversion rate at an 80% confidence level.
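To get a feel for why a traditional test needs so much traffic, here is a back-of-the-envelope sample-size estimate using the standard two-proportion normal approximation. To be clear, this is generic textbook math with a made-up 2% baseline rate, not Google's actual algorithm:

```python
import math
from statistics import NormalDist

def visitors_per_combination(base_rate, rel_lift, confidence=0.80, power=0.80):
    """Rough two-proportion sample-size estimate (normal approximation).
    Illustrative textbook math only -- not Google Website Optimizer's formula."""
    p1 = base_rate                      # conversion rate of the original
    p2 = base_rate * (1 + rel_lift)     # rate after the relative improvement
    z_a = NormalDist().inv_cdf(confidence)  # one-sided confidence z-score
    z_b = NormalDist().inv_cdf(power)       # power z-score
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * variance / (p2 - p1) ** 2)

# e.g. a 2% baseline conversion rate and a 12% relative lift
n = visitors_per_combination(0.02, 0.12)
print(n)
```

On numbers like these you land in the tens of thousands of visitors per combination, and with 27 combinations the total traffic requirement balloons accordingly.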

Now we have what’s called adaptive multivariate testing, which is offered by a company called Hiconversion. They use the same multivariate setup, but instead of giving equal play to each possible combination, they only keep testing page combinations that consistently perform in producing conversions. They claim that their methodology dramatically reduces the amount of traffic required to reach a statistically correct “winner”. This real-time adaptation also reduces the number of leads or sales that a typical test loses along the way.

This is sort of how Google AdWords works when you opt for the “auto-optimize” feature, which, instead of displaying your ads evenly, displays the best-performing ads more often.

Well, I personally was never a believer in AdWords’ optimization feature. I thought it determined a “winner” way too early, so I’ve always turned the feature off and done my own split testing within the ad groups themselves.

I initially had the same disbelief about the Hiconversion tool. First of all, how could you really determine a winner if all combinations were not played evenly? If one was played more than another, then of course that combination would win!

So I began to ponder the mathematics involved (I know… that’s what I do). First let me say that I do not know how the Hiconversion algorithm works (I did ask them, but they said they would have to shoot me), so this is just my rationalization.

Let’s imagine that our multivariate test has 100 possible combinations. As the testing starts, each combination gets equal play, or 1%. The algorithm can quickly see which combinations are starting to get higher conversion rates, until it reaches a point where each begins to test itself. (Bear with me.) So if a few combinations are “starting” to look like good performers, the algorithm might say, “OK, you got 5 conversions on 100 plays in X time… let’s see if you can get the same 5 conversions or better over the next 100 plays in the same time period.” If the combination meets or exceeds the mini-test, it moves up to the next “level”, where it is now played 3% of the time; if it doesn’t, and let’s say it falls to 4 or 3 conversions, then perhaps it stays among all of the other combinations that are tested at 1% play. The combinations in the experiment keep getting tested in this manner until there are only a few left, and a “winner” prevails. All of this happens very quickly, and in real time. It’s kind of a survival-of-the-fittest scenario, because the combinations that can’t leap to the next levels actually get played less and less over time, since the ones that are performing are eating up their play time.
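My guessed-at “promotion ladder” can be sketched in a few lines of Python. To be clear, this is purely my speculation, not Hiconversion’s actual algorithm; the conversion rates, round sizes, and weight-tripling rule are all invented for illustration:

```python
import random

def adaptive_allocate(true_rates, rounds=200, plays_per_round=100, seed=1):
    """Toy sketch of the guessed 'promotion ladder' -- NOT Hiconversion's
    real algorithm. Every combination starts with weight 1; one that converts
    at least as well as it did in its previous block of plays gets promoted
    (weight tripled), otherwise it stays put and is gradually crowded out."""
    random.seed(seed)
    weights = [1.0] * len(true_rates)
    last_score = [None] * len(true_rates)
    for _ in range(rounds):
        total = sum(weights)
        for i, rate in enumerate(true_rates):
            # Play time is proportional to the combination's current weight
            plays = max(1, round(plays_per_round * weights[i] / total))
            conversions = sum(random.random() < rate for _ in range(plays))
            score = conversions / plays
            # The "mini-test": match or beat your previous block to level up
            if last_score[i] is not None and score >= last_score[i] and score > 0:
                weights[i] *= 3
            last_score[i] = score
    return weights

# Combination 3 converts twice as often; over time it should tend to
# accumulate the most weight and therefore the most play.
w = adaptive_allocate([0.02, 0.02, 0.04])
```

The key property is the compounding: each promotion multiplies a combination’s share of traffic, so the laggards’ play time shrinks toward nothing without ever being explicitly switched off.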

Anyway, that’s my take on how it might be working, but again, I didn’t design the tool. My ultimate experiment would be to run identical experiments in Google Website Optimizer and Hiconversion and see if they arrive at the same result. Of course, I can’t experiment with all of our clients’ leads and sales, so maybe I’ll try this on my own site one day.


  • Richard Hearne says:


    Just wondered if you used their product, and if so how you found it?

    You don’t seem to have subscribe-to-comments enabled, so perhaps you could reply to my mail? If so, many thanks in advance.

    Best rgds

  • Derek says:

    Interesting idea, but this model is severely flawed, as it is susceptible to time bias: time-of-day, day-of-week, holidays, news events, etc.

    For example, if the algorithm automatically adjusts what percentage of the time each version is shown after looking only at Monday’s traffic, then the decision it makes is based solely on user preferences for Mondays, not for other days of the week.

    Similarly, suppose you are a flower or chocolates website. If the algorithm looks at traffic from Feb 1st to 13th, adjusting several times in that timeframe, it probably zeroes in on what it thinks is the “best” version. But come Feb 14th (Valentine’s Day), everything changes, because the big “event” has come and gone and the shopper dynamics are now totally different. Where are you then? You basically need to start over.

    These are some of the fundamental issues with self-optimizing “autopilot” systems. Even when we fly airplanes, we don’t let the autopilot take us the whole way; there are many times when a human needs to take control of the stick.
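Derek’s day-of-week objection can be made concrete with a little arithmetic. The per-day conversion rates and visitor counts below are invented purely for illustration:

```python
# Hypothetical per-day conversion rates for two page variants
rates = {
    "A": {"Mon": 0.05, "other": 0.02},  # A shines on Mondays only
    "B": {"Mon": 0.03, "other": 0.04},  # B is better the rest of the week
}

visitors_mon, visitors_rest = 100, 600  # 100 visitors/day over a 7-day week

# A greedy optimizer that locks in a winner after watching only Monday's
# traffic would pick A. Expected weekly conversions for each choice:
greedy_pick = visitors_mon * rates["A"]["Mon"] + visitors_rest * rates["A"]["other"]
better_pick = visitors_mon * rates["B"]["Mon"] + visitors_rest * rates["B"]["other"]
# Despite losing on Monday, B wins the week overall.
```

With these made-up numbers, the Monday-trained pick yields 17 conversions a week while the true weekly winner yields 27, which is the time-bias trap in miniature.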


  • DM, thank you for sharing your opinion about adaptive testing methodology.

    This is not just an ‘interesting idea’. It is a battle-tested solution used by a broad spectrum of companies over a period of more than two years.

    During that time the method has consistently delivered speedy results, eliminated testing risk, and proved very easy to use.

    The algorithm is designed to take into account the time-varying nature of website visitors. It collects combination statistics over different day parts and days of the week. Its decisions are driven by a relational statistical model, not by how an individual combination performs. As a result, it can effectively adapt to changes in visitor behavior.

    Now, let’s talk about your hypothetical ‘February 14th chocolate or flower sales’ example that, in your mind, kills the adaptive methodology.

    Assuming that you are right about visitors behaving completely differently on February 14th than on any other day, your options are:

    * Option 1 – Adaptive optimization: The system will adapt itself to the new situation, but because of the short-burst nature of the change, the adaptive response will be sub-optimal. Useful, but not perfect.
    * Option 2 – Traditional multivariate testing: These methods will not react to the change at all.
    * Option 3 – Let the human take control of the ‘stick’: This is a guessing game that cannot guarantee consistent results.

    Today, most companies manage their websites by applying best practices and ‘gut feel’ measures. The end result is that 98% of general web traffic is wasted.

    That is why we recommend a solution like ours that merges optimization science with good marketing insight. There is no denying that an optimization test is only as good as the input provided by the marketing person. Our tool gives you the ability to easily test many promotional ideas while helping you consistently produce measurable increases in online marketing ROI.


    President & CEO
    Hiconversion, Inc.

  • Jake Muney says:

    Thank you for the intelligent critique. My neighbour and I were just setting up to do some research on this. I am very glad to see such great information being shared freely.

  • Adam says:

    Hi, interesting article, but without fully testing the “losing” treatments, aren’t you in danger of producing false positives? How can you statistically prove a treatment to be a loser if you haven’t sampled it enough times? Doesn’t this work in reverse of GWO’s principle of flood-testing the losers to get them out of the way more quickly?

