A few days ago we learned about 7 mistakes I've made in A/B testing.
The last, but not least, was the confusion between correlation and causality: testing a modification on a landing page and assuming that any change in the conversion rate is caused by that single change.
It may sound confusing and theoretical, so let's look at an almost-real case study.
We are the CRO expert at a prominent testing-tool company, and we are trying to improve the "Sign ups" conversion rate.
This is our Landing Page:
After some brainstorming (note the lack of any hypothesis or supporting data), we decide to test a bigger Hero Shot. As we are not sure how much bigger it should be, we test the control page against 2 options: 15% and 25% bigger.
Variation 1 - 15% bigger
Variation 2 - 25% bigger
A few days later, we reach statistical significance, and the results show that neither variation improved on the original version.
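To make "statistical significance" concrete, here is a minimal sketch of the kind of check a testing tool runs under the hood: a two-proportion z-test comparing the control's conversion rate with a variation's. All the numbers below (10,000 visitors per branch, 5% vs 4% conversion) are hypothetical, chosen only to illustrate the calculation.

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test for the difference between two conversion rates.

    conv_a, conv_b: number of conversions in each branch.
    n_a, n_b: number of visitors in each branch.
    Returns the z statistic and the two-tailed p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf, doubled for a two-tailed test
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: control converts 500/10000, the bigger-hero variation 400/10000
z, p = z_test_two_proportions(500, 10000, 400, 10000)
print(f"z = {z:.2f}, p-value = {p:.4f}")
```

A negative z with a p-value below 0.05 would mean the variation's drop in conversions is statistically significant, which is exactly the situation described above. Note that significance only tells us the difference is unlikely to be noise; it says nothing about *why* the variation lost.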
We look at the data, scratch our heads, and say... "OK, the Hero Shot's size affects our conversion rate, since the test proved that a bigger picture decreases conversions. So... let's make all our hero shots smaller?" Absolutely NOT.
That's not what the test results show. Reading test results that way leads to confusion and slow progress. What the test showed was:
There was a correlation between the Hero Shot's size and conversions in this test.
Which doesn't mean that:
- The drop in conversion rate was caused by the picture's size.
- Making the Hero Shot smaller would improve conversions.
- The picture's size was the only variable tested.
Someone may say: "Hold on... the Hero Shot's size was the only change we made. It's the only variable altered in the test."
Of course not. Changing the picture's size affected many other variables, such as:
- The position of the boxes below the Hero Shot.
- The position and size of the right-side claims.
- A vertical scroll bar appeared.
- Loading time increased.
- Most importantly: the position of the Call To Action button changed.
In this case, the variable "position of the Call To Action" may have affected conversion more than the "Hero Shot size" variable did.
Had we planned the test with a hypothesis, or mapped out the variables affected by our change, we would have noticed that. Then we would have tried to isolate the variable with a multivariate test (MVT) like this:
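The idea behind an MVT is a full-factorial design: every combination of every factor level becomes its own variation, so each factor's effect can be estimated while the others vary independently. A minimal sketch, using hypothetical factor names for the example above:

```python
from itertools import product

# Hypothetical factors for a full-factorial MVT; the names and levels
# are illustrative, matching the landing-page example in the text.
factors = {
    "hero_shot_size": ["original", "+15%", "+25%"],
    "cta_position": ["above_fold", "below_fold"],
}

# Every combination of levels becomes one variation to split traffic across.
variations = list(product(*factors.values()))
for v in variations:
    print(v)
print(f"{len(variations)} variations")  # 3 sizes x 2 positions = 6
```

The catch, as the next paragraph shows, is that the factor count multiplies: each new variable you try to isolate doubles or triples the number of variations, and the traffic each one receives shrinks accordingly.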
But... do you see what we are doing here?
In trying to isolate the "Hero Shot size" variable, we modified the "Testimonial position" variable, so the whole process starts again. Most of the time, we can't isolate one variable without affecting others.
So, can we ever be sure of what caused the conversion increase?
99% of the time, NO. I'm not a statistics expert, and I need your help here (please comment!), but as far as I know, determining the exact cause of a change in conversion rate is nearly impossible.
As we already know, thousands of variables affect conversion. To get the best results from our tests, our best bet is to follow the scientific method and be disciplined with our hypotheses.
As long as we keep track of our hypotheses, we'll be able to pin down more precisely the causes of the increases in our tests.
Always remember: testing is simple. Finding things to test is simple. Knowing what and how to test is not.
Need some debate here!!