Balance exploration vs. optimization - consider big levers.

Posted: Mon Dec 23, 2024 4:23 am
by robiulhasan
Ask yourself, "What will make this test successful?" You should identify the single best KPI to evaluate a test.

Typically, your evaluation metric will be a composite metric, like return on marketing investment. It should be predictive of long-term outcomes.

The Harvard Business Review has an excellent writeup detailing a number of challenges Bing faced in determining which metric to prioritize.

While at first it might seem that revenue generated would be a good metric, the team found they could inflate revenue simply by adding more ads. Unfortunately, that metric didn't capture the costs of those revenue gains - namely a worse customer experience and, ultimately, fewer people using Bing.

The team came up with an alternative. They wanted to put the customer experience first, so the primary goal became minimizing the number of queries needed to complete any one session while maximizing the number of sessions per user.

There is a constant tension between exploring new possibilities and exploiting previous findings.

With AB testing, it can be tempting to shy away from big redesigns and instead focus only on smaller optimizations. However, the huge breakthroughs are more likely to come from big levers - large changes in your offer or design.

Make sure you leave room for these larger experiments.
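One common way to formalize this explore/exploit tension is a multi-armed bandit. As a minimal sketch (not from the original post - the epsilon value, conversion rates, and trial count below are all hypothetical), an epsilon-greedy policy spends a small fraction of traffic exploring variants and the rest exploiting the current best estimate:

```python
import random

def epsilon_greedy(estimates, epsilon=0.1, rng=random):
    """Pick a variant index: explore a random arm with probability
    epsilon, otherwise exploit the arm with the best estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))                       # explore
    return max(range(len(estimates)), key=estimates.__getitem__)   # exploit

# Hypothetical simulation: three variants with unknown conversion rates.
rng = random.Random(42)
true_rates = [0.05, 0.12, 0.07]   # assumed, for illustration only
counts = [0] * 3
rewards = [0.0] * 3
for _ in range(5000):
    # Current conversion-rate estimate per arm (0 until first sample).
    est = [rewards[i] / counts[i] if counts[i] else 0.0 for i in range(3)]
    arm = epsilon_greedy(est, epsilon=0.1, rng=rng)
    counts[arm] += 1
    rewards[arm] += 1.0 if rng.random() < true_rates[arm] else 0.0

best = max(range(3), key=counts.__getitem__)
```

With enough traffic, most pulls concentrate on the best-converting arm, while the 10% exploration budget keeps sampling the others - the same balance the post recommends between safe optimizations and big levers.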


3. Optimize your sample sizes
One common misconception is that your control and test audience sizes need to be the same.

The truth is, your control represents the winner of all previous AB tests. It is a tried-and-true experience that reliably produces sales.

Because of this, it is most common to dedicate more traffic to the control group, while looking for improvements with a smaller percentage of traffic.
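To see what an unequal split costs you statistically, here is a minimal sketch using the standard two-proportion z-test sample-size formula. The baseline rate, target lift, and 80/20 split are hypothetical, and the z-values are hardcoded for a two-sided alpha of 0.05 at 80% power:

```python
import math

def sample_sizes(p_control, p_variant, ratio=4.0,
                 z_alpha=1.959964, z_power=0.841621):
    """Required group sizes for a two-proportion z-test when the
    control gets `ratio` times the variant's traffic.
    Default z-values: two-sided alpha = 0.05, power = 0.80."""
    # Pooled rate under the allocation ratio.
    p_bar = (ratio * p_control + p_variant) / (ratio + 1)
    q_bar = 1 - p_bar
    num = (z_alpha * math.sqrt((1 + 1 / ratio) * p_bar * q_bar)
           + z_power * math.sqrt(p_control * (1 - p_control) / ratio
                                 + p_variant * (1 - p_variant)))
    n_variant = (num / (p_control - p_variant)) ** 2
    return math.ceil(ratio * n_variant), math.ceil(n_variant)

# Hypothetical example: 4% baseline conversion, hoping to detect a
# lift to 5%, with an 80/20 traffic split (ratio = 4).
n_control, n_variant = sample_sizes(0.04, 0.05, ratio=4.0)
```

Note that a 50/50 split minimizes the *total* sample needed, so skewing traffic toward the control trades some efficiency for keeping most visitors on the proven experience - a trade-off worth making deliberately.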