A/B Testing of Recommendation Blocks

A/B tests are used to determine the effectiveness of various recommendation elements: title, appearance, placement, or data source.

In the A/B testing tab, you can start a new test and later view its results.

To run a new test, click Start A/B testing. You will be taken to the Parameters tab, where you can specify the test conditions.

You can also start a test from the Parameters tab by clicking the button of the same name.

Parameters

First, specify the number of variants to test, distribute their weights, and enter a description.

Weight is the share of the audience that will see the corresponding variant. You can distribute weights evenly by clicking the button of the same name, or set the ratio manually by hovering over the number in the row. Variants with zero weight won't be displayed on the site.

By default, an A/B test has two variants; this is the required minimum. To add another, click Add variant. You can test up to 8 variants.
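
To make the weight mechanics concrete, think of the split as a weighted random assignment of visitors to variants. Below is a minimal sketch of that idea in Python; the function and variable names are hypothetical, not the platform's actual implementation.

```python
import random

def assign_variant(variants, weights):
    """Pick a variant with probability proportional to its weight.

    `weights` are audience shares, e.g. [50, 30, 20]. A variant with
    zero weight is never chosen, matching the rule that zero-weight
    variants are not shown on the site.
    """
    return random.choices(variants, weights=weights, k=1)[0]

# Example: an even 50/50 split between two title variants
print(assign_variant(["Title A", "Title B"], [50, 50]))
```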

Next, specify the parameter to be tested. We recommend testing one parameter at a time.

Placement. Click the arrow at the end of the title to choose a variant from the available placements.

Appearance. As with placements, click the arrow at the end of the title to choose a variant from the available appearances.

Title. Enter the title variants.

Data source. Click the current data source to replace it with the variant to be tested.

Click Save once all the settings are configured.

If no A/B test is running, the Parameters tab displays only the recommendation's parameters.

A/B Testing Report

In the general list, each recommendation that has been A/B tested is marked with a corresponding button. Hover over it to preview the results in brief.

The button's color depends on the test status (complete or in progress) and the results for the two tracked indicators, CTR and Conversion (a sketch of this mapping follows the list):

  • Green: testing is complete, and the winners for both CTR and Conversion are determined.
  • Grey and yellow: there is not enough data to determine the winner for CTR; the results for Conversion are the same.
  • Yellow: the results for both CTR and Conversion are the same; testing is in progress.
  • Green and yellow: the winner for CTR is determined; the results for Conversion are the same.
  • Yellow and green: the results for CTR are the same; the winner for Conversion is determined.
  • Grey: there is not enough data to determine the winner for either indicator.
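
Reading the legend above, each half of the button reflects the state of one indicator. Here is a small sketch of that mapping; the state names are our own shorthand, not the product's.

```python
# Hypothetical state names: "winner", "tie", "insufficient_data"
COLOR_BY_STATE = {
    "winner": "green",            # a winning variant is determined
    "tie": "yellow",              # the variants' results are the same
    "insufficient_data": "grey",  # not enough data yet
}

def button_colors(ctr_state, conversion_state):
    """Return the (CTR half, Conversion half) colors of the button."""
    return COLOR_BY_STATE[ctr_state], COLOR_BY_STATE[conversion_state]

print(button_colors("winner", "tie"))  # ('green', 'yellow')
```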

There are several ways to open the detailed report:

  1. Click the corresponding recommendation in the general list and go to A/B testing.
  2. Click A/B testing in the general list.
  3. Click View test in the preview.

First, the report shows the winner for both indicators. To be declared the winner, a variant must perform better with a confidence of at least 90%, and the gap between the variants' confidence intervals must be over 5%. Testing will continue until this minimum threshold is reached.
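
As an illustration of such a check, the sketch below computes 90% confidence intervals for two conversion rates using the normal approximation and tests whether one interval clears the other by more than an absolute 5%. This is our simplified reading of the criteria; the platform's actual statistics may differ.

```python
import math

Z_90 = 1.645  # z-score for a 90% confidence level

def conf_interval(conversions, visitors):
    """90% confidence interval for a conversion rate (normal approximation)."""
    p = conversions / visitors
    se = math.sqrt(p * (1 - p) / visitors)  # standard error of the proportion
    return p - Z_90 * se, p + Z_90 * se

def has_winner(a, b, min_gap=0.05):
    """True when one variant's interval clears the other's by more than min_gap.

    `a` and `b` are (conversions, visitors) pairs; min_gap is our reading
    of the 5% threshold as an absolute difference in rates.
    """
    lo_a, hi_a = conf_interval(*a)
    lo_b, hi_b = conf_interval(*b)
    return lo_a - hi_b > min_gap or lo_b - hi_a > min_gap

# Variant B (30% conversion) clearly beats variant A (12%)
print(has_winner((120, 1000), (300, 1000)))  # True
```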

If there is not enough data, you can finish the test manually by clicking Choose Variant A and finish test (the variant that has been performing better is shown in bold).

If different variants win for CTR and Conversion, the overall winner is the variant that performs better for Conversion, as sales are more important than clicks.
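
In code form, that priority rule could look like the following sketch (names are hypothetical):

```python
def overall_winner(ctr_winner, conversion_winner):
    """Pick the overall winner when CTR and Conversion disagree.

    Conversion takes priority, since sales matter more than clicks.
    Either argument may be None if that indicator has no winner yet.
    """
    return conversion_winner if conversion_winner is not None else ctr_winner

print(overall_winner("Variant A", "Variant B"))  # 'Variant B'
```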

The Results section shows the indicators for each variant; the winner's indicators are highlighted in green. Hover over any indicator to see its confidence interval, a measure of the accuracy of testing. The more people who see the recommendations and perform the target action, the narrower the interval and the lower the standard error. If the standard error percentage is high, wait until enough data has been collected.
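
For intuition about why more traffic lowers the standard error, here is a worked example assuming the usual formula for a proportion's standard error, sqrt(p * (1 - p) / n); this is an illustration, not necessarily the platform's exact calculation.

```python
import math

def standard_error(conversions, visitors):
    """Standard error of a conversion rate: sqrt(p * (1 - p) / n)."""
    p = conversions / visitors
    return math.sqrt(p * (1 - p) / visitors)

# Ten times more traffic at the same 10% rate shrinks the error ~3.2x
print(standard_error(10, 100))    # ~0.030
print(standard_error(100, 1000))  # ~0.0095
```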

Activity dynamics visualizes the results as histograms. Hover over the graph to see the details.

At the bottom, there is a history of completed tests with the date of testing and the winner.