A/B Testing of Recommendation Blocks
A/B tests are used to determine the effectiveness of various recommendation elements: title, appearance, placement, or data source.
In the A/B testing tab, you can start a new test and later view its results.
To run a new test, click Start A/B testing. You'll be directed to Parameters to specify test conditions.
You can also start a test from the Parameters tab by clicking the button of the same name.
Parameters
First, specify the number of variants to be tested, distribute weight, and enter a description.
Weight is the share of the audience that will see the corresponding variant. You can distribute it evenly by clicking the button of the same name, or set the ratio manually by hovering over the number in the row. Variants with zero weight won't be displayed on the site (a sketch of such a weighted split follows below).
By default, two variants are set for A/B testing. This is the required minimum. To add a new one, click Add variant. You can test up to 8 variants.
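To make the weighting concrete, here is a minimal sketch of how a weighted audience split could work. It is an illustration only, not the platform's actual mechanism; the user ID, function name, and weight format are assumptions.

```python
import hashlib

def assign_variant(user_id: str, weights: dict[str, int]) -> str | None:
    """Assign a user to a test variant proportionally to the weights.

    Variants with zero weight are never shown. Hashing the user ID
    keeps the assignment stable across visits. All names here are
    illustrative, not the platform's actual API.
    """
    active = {name: w for name, w in weights.items() if w > 0}
    if not active:
        return None  # nothing to display
    total = sum(active.values())
    # Map the user ID to a stable bucket in [0, total).
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % total
    for name, w in active.items():
        if bucket < w:
            return name
        bucket -= w
    return None  # unreachable

# Example: variants A/B/C with weights 50/30/20 (a variant at 0 would be skipped).
print(assign_variant("user-42", {"A": 50, "B": 30, "C": 20}))
```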
Next, specify the parameter to be tested. We recommend testing one parameter at a time.
Placement. Click the arrow at the end of the title to choose the variant from the available placements.
Appearance. As with Placement, click the arrow at the end of the title to choose the variant from the available appearances.
Title. Enter the title variants.
Data source. Click the corresponding data source to replace it with the variant to be tested.
Click Save after all settings are done.
If no A/B test is running, the Parameters tab displays only the recommendation's parameters.
A/B Testing Report
Each recommendation that has been A/B tested is labeled with the corresponding button in the general list. Hover over it to preview the results in brief.
Depending on the current status (complete or in progress) and the results for the two tested indicators (CTR and Conversion), the button has different colors (a sketch of this mapping follows the list):
- Green: testing is complete, both winners (CTR, Conversion) are determined.
- Grey and yellow: there is not enough data to determine the winner for CTR; the results for Conversion are the same.
- Yellow: the results for both CTR and Conversion are the same; testing is in progress.
- Green and yellow: the winner for CTR is determined; the results for Conversion are the same.
- Yellow and green: the results for CTR are the same; the winner for Conversion is determined.
- Grey: there is not enough data to determine the winner for any indicator.
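The badge is effectively a two-part indicator: one color for CTR and one for Conversion. As a hypothetical illustration of this mapping (the status names below are invented, not the product's API):

```python
def badge_colors(ctr_status: str, conversion_status: str) -> tuple[str, str]:
    """Derive the two-part badge color from per-indicator test status.

    Statuses are illustrative: 'winner' (a winner was determined),
    'tie' (results are the same), 'insufficient' (not enough data).
    """
    color = {"winner": "green", "tie": "yellow", "insufficient": "grey"}
    return color[ctr_status], color[conversion_status]

# E.g. the winner for CTR is found while Conversion results are tied:
print(badge_colors("winner", "tie"))  # ('green', 'yellow')
```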
There are several ways to open the detailed report:
- Click the corresponding recommendation in the general list and go to A/B testing.
- Click A/B testing in the general list.
- Click View test in the preview.
First, the report shows the winner for both indicators. To become a winner, a variant must perform better with a confidence of at least 90%, and the gap between the variants' results must exceed 5%. Testing continues until these thresholds are reached (a sketch of such a check follows below).
If there is not enough data, you can end the test by clicking Choose Variant A and finish test (the variant that has been performing better is shown in bold).
If different variants win for CTR and Conversion, the variant that performs better for Conversion is declared the winner, as sales are more important than clicks.
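The exact statistics behind the winner check are not documented, but the rules above (at least 90% confidence and a gap over 5%) can be illustrated with a minimal two-proportion z-test sketch; the choice of test and all names here are assumptions:

```python
from math import erf, sqrt

def pick_winner(a_conv: int, a_views: int, b_conv: int, b_views: int,
                confidence: float = 0.90, min_lift: float = 0.05) -> str | None:
    """Declare a winner between two variants, or None if inconclusive.

    Illustrative only: the better variant needs at least 90% confidence
    (two-sided z-test) and a relative lift above 5%.
    """
    p_a, p_b = a_conv / a_views, b_conv / b_views
    if min(p_a, p_b) == 0:
        return None
    pooled = (a_conv + b_conv) / (a_views + b_views)
    se = sqrt(pooled * (1 - pooled) * (1 / a_views + 1 / b_views))
    if se == 0:
        return None
    z = abs(p_a - p_b) / se
    conf = erf(z / sqrt(2))                 # two-sided confidence level
    lift = abs(p_a - p_b) / min(p_a, p_b)   # relative gap between variants
    if conf >= confidence and lift > min_lift:
        return "A" if p_a > p_b else "B"
    return None  # keep testing until the thresholds are met

# 5,000 views each; variant B converts noticeably better:
print(pick_winner(a_conv=200, a_views=5000, b_conv=260, b_views=5000))  # B
```

Run once per indicator, such a check can name different winners for CTR and Conversion; per the rule above, the Conversion result would take precedence.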
Results show the indicators for each variant; the winner's indicators are highlighted in green. Hover over any indicator to see its confidence interval, a metric that reflects the accuracy of the test. The more people who saw the recommendations and performed the target action, the higher the confidence and the lower the standard error. If the standard error percentage is high, wait until enough data is collected.
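As an illustration of how the standard error shrinks as data accumulates, here is a normal-approximation sketch for a CTR interval; the formula and function name are assumptions, not the report's documented method:

```python
from math import sqrt

def ctr_interval(clicks: int, views: int, z: float = 1.645) -> tuple[float, float, float]:
    """Normal-approximation confidence interval for a CTR.

    Returns (ctr, standard_error, half_width) at ~90% confidence
    (z = 1.645). More views shrink the standard error, which is why
    the report asks you to wait while the error is still high.
    """
    p = clicks / views
    se = sqrt(p * (1 - p) / views)
    return p, se, z * se

# 120 clicks out of 3,000 views:
p, se, hw = ctr_interval(120, 3000)
print(f"CTR = {p:.2%} +/- {hw:.2%} (SE = {se:.2%})")
```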
Activity dynamics visualizes the results in the form of histograms. Hover over the graph to see details.
At the bottom, there is a history of completed tests with the date of testing and the winner.