Scenario 1: Streaming platform series launch
Goal: Promote a new series for two weeks, targeting users who haven’t watched it yet, while running an A/B test to measure the impact.

Step 1: Define the audience
Create two segments to separate users who have and haven’t engaged with the series.

Segment: “Hasn’t watched Series X”
- Go to Ranking > User Segments and click New Segment.
- Add a condition:
  - Type: `item_interaction`
  - Item ID: the series item ID (e.g. `series_x_ep1`)
  - Operator: `interactions <`
  - Value: `1`
- Name it “Hasn’t watched Series X” and save.
Segment: “Started not finished Series X”
- Create another segment.
- Add two conditions (AND logic):
  - Type: `item_interaction`, Item ID: `series_x_ep1`, Operator: `interactions >`, Value: `0`
  - Type: `item_interaction`, Item ID: `series_x_ep10` (the finale), Operator: `interactions <`, Value: `1`
- Name it “Started not finished Series X” and save.
Step 2: Create the rules
Rule: “Pin Series X for new viewers”
- Go to Ranking > Rules Engine and click New Rule.
- Type: `pin`. Priority: 80.
- Add a condition: field = `segment_id`, select “Hasn’t watched Series X”.
- Action: pin the series into the top 3 positions.
- Schedule: set start date to the campaign launch and end date to two weeks later.
- Save.
- Create another rule.
- Type: `boost`. Priority: 70.
- Add a condition: field = `segment_id`, select “Started not finished Series X”.
- Action: boost the series episodes with a high factor.
- Same schedule as above.
- Save.
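Expressed as data, the two rules might look like this. The rule names, segment IDs, and key names below are placeholders, and the boost factor is a stand-in for whatever “high” means in your catalog:

```python
from datetime import datetime, timedelta

launch = datetime(2025, 6, 1)  # hypothetical campaign start date

# Hypothetical rule payloads; key names and IDs are illustrative assumptions.
pin_rule = {
    "name": "Pin Series X for new viewers",
    "type": "pin",
    "priority": 80,
    "conditions": [{"field": "segment_id", "value": "hasnt_watched_series_x"}],
    "action": {"pin_positions": [1, 2, 3]},  # top 3 positions
    "schedule": {"start": launch, "end": launch + timedelta(weeks=2)},
}

boost_rule = {
    "name": "Boost Series X for partial viewers",
    "type": "boost",
    "priority": 70,
    "conditions": [{"field": "segment_id", "value": "started_not_finished_series_x"}],
    "action": {"boost_factor": 3.0},  # placeholder for a "high factor"
    "schedule": pin_rule["schedule"],  # same two-week window
}
```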
Step 3: Run an A/B test
- Go to A/B Testing and click New Experiment.
- Name: “Series X launch campaign”.
- Description: “Hypothesis: pinning Series X for non-viewers increases completion rate.”
- Variants:
- Control (50%): no config overrides (standard recommendations).
  - Treatment (50%): config overrides with `include_rule_ids` set to the IDs of the two rules above.
- Create the experiment and set status to Running on launch day.
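An equivalent experiment payload could look like the sketch below. The rule IDs are placeholders, and the key names are assumptions rather than a documented schema:

```python
# Hypothetical experiment payload; key names and rule IDs are illustrative.
experiment = {
    "name": "Series X launch campaign",
    "description": "Hypothesis: pinning Series X for non-viewers "
                   "increases completion rate.",
    "variants": [
        {"name": "control", "traffic_pct": 50, "config_overrides": {}},
        {"name": "treatment", "traffic_pct": 50,
         "config_overrides": {"include_rule_ids": ["rule_pin_1", "rule_boost_2"]}},
    ],
}

# Traffic percentages should cover all users.
assert sum(v["traffic_pct"] for v in experiment["variants"]) == 100
```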
Step 4: Measure results
- During the campaign, go to the experiment’s Results tab.
- Click Refresh metrics periodically.
- Compare CTR and conversion rate between Control and Treatment.
- Check the lift percentage to see if the campaign is working.
- After two weeks, set the experiment status to Completed. The rules auto-deactivate via their schedule.
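The lift percentage is typically the relative improvement of Treatment over Control. As a sanity check on what the Results tab reports, you can compute it yourself from the two rates:

```python
def lift(control_rate: float, treatment_rate: float) -> float:
    """Percentage lift of Treatment over Control."""
    if control_rate == 0:
        raise ValueError("control rate must be non-zero")
    return (treatment_rate - control_rate) / control_rate * 100

# Example: Control CTR 4.0%, Treatment CTR 5.0%
print(round(lift(0.040, 0.050), 1))  # 25.0 -> a +25% lift
```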
Scenario 2: E-commerce flash sale weekend
Goal: Boost sale items for a weekend, show premium items to high-value customers, suppress out-of-stock products, and test whether manual merchandising rules outperform pure ML.

Step 1: Create the high-value customer segment
- Go to Ranking > User Segments and click New Segment.
- Add a condition:
  - Type: `computed`
  - Field: `total_events`
  - Operator: `greater_than`
  - Value: `100`
- Name it “High-value customers” and save.
Step 2: Create the rules
Rule: “Flash sale boost”
- Go to Ranking > Rules Engine and click New Rule.
- Type: `boost`. Priority: 60.
- Add a condition: field = `category`, operator = `equals`, value = `sale`.
- Action: boost with a high factor.
- Schedule: Friday 18:00 to Sunday 23:59.
- Create another rule.
- Type: `boost`. Priority: 80 (higher than the sale boost so it takes precedence).
- Add conditions:
  - field = `segment_id`, select “High-value customers”
  - field = `tier`, operator = `equals`, value = `premium`
- Action: boost premium items strongly.
- Same weekend schedule.
- Create a third rule.
- Type: `filter`. Priority: 100 (highest, so it always applies first).
- Add a condition: field = `stock_status`, operator = `equals`, value = `out_of_stock`.
- Action: exclude matching items.
- No schedule (always active).
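To see why the priorities matter, here is a minimal sketch of rules applied in descending priority order. This illustrates the ordering only, not the platform’s actual engine, and the item fields and factors are placeholders:

```python
# Illustration of descending-priority rule application; not the real engine.
items = [
    {"id": "sale_item", "category": "sale", "tier": "standard",
     "stock_status": "in_stock", "score": 1.0},
    {"id": "premium_item", "category": "regular", "tier": "premium",
     "stock_status": "in_stock", "score": 1.0},
    {"id": "gone_item", "category": "sale", "tier": "standard",
     "stock_status": "out_of_stock", "score": 1.0},
]

def rank(items, user_is_high_value):
    # Priority 100 (filter): drop out-of-stock items before anything else.
    ranked = [dict(i) for i in items if i["stock_status"] != "out_of_stock"]
    for i in ranked:
        # Priority 80 (boost): premium items for high-value customers.
        if user_is_high_value and i["tier"] == "premium":
            i["score"] *= 3.0
        # Priority 60 (boost): sale items for everyone.
        if i["category"] == "sale":
            i["score"] *= 2.0
    return sorted(ranked, key=lambda i: i["score"], reverse=True)

print([i["id"] for i in rank(items, user_is_high_value=True)])
# -> ['premium_item', 'sale_item']
```

For a VIP user the premium item wins (score 3.0 beats 2.0); for everyone else the sale boost dominates, and the out-of-stock item never appears in either case.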
Step 3: Set up the pure ML experiment
- Go to Ranking > Pipeline Config.
- Create or note your default pipeline (all stages enabled).
- Consider creating a second pipeline with the rules stage disabled for pure ML ranking.
- Go to A/B Testing and create an experiment:
- Control (50%): default pipeline.
- Treatment (50%): pipeline with rules stage disabled.
- Set to Running on Friday.
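Conceptually, the two variants are two pipelines that differ only in whether the rules stage runs. The stage names below are assumptions about how a pipeline might be described, purely for illustration:

```python
# Hypothetical pipeline definitions; stage names are illustrative.
default_pipeline = {
    "name": "default",
    "stages": ["candidate_retrieval", "ml_ranking", "rules"],
}
pure_ml_pipeline = {
    "name": "pure-ml",
    "stages": ["candidate_retrieval", "ml_ranking"],  # rules stage disabled
}
```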
Step 4: Monitor the sale
- Open Analytics and watch served volume.
- Filter by user ID to spot-check that VIP users see premium items.
- Refresh experiment metrics throughout the weekend.
- After the sale, complete the experiment and compare conversion rates.
- The sale rules auto-deactivate after Sunday — no cleanup needed.
Scenario 3: Content freshness and diversity
Goal: Ensure recommendations always include recent content and don’t over-represent a single category.

Step 1: Create rules
Rule: “Boost new content”
- Type: `boost`. Priority: 50.
- Condition: field = `published_days_ago`, operator = `less_than`, value = `7`.
- Action: boost by a moderate factor.
- No schedule (always active).
- Create another rule.
- Type: `diversify`. Priority: 40.
- No conditions (applies to all results).
- Action: limit to a maximum of 3 items per `category` value.
- No schedule (always active).
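A diversify action of this kind is usually a greedy pass over the ranked list: keep items in order, but skip any item whose category has already appeared 3 times. A minimal sketch, assuming that semantics:

```python
from collections import defaultdict

def diversify(items, key="category", max_per_value=3):
    """Keep at most `max_per_value` items per distinct key value,
    preserving the original ranking order."""
    counts = defaultdict(int)
    kept = []
    for item in items:
        if counts[item[key]] < max_per_value:
            counts[item[key]] += 1
            kept.append(item)
    return kept

ranked = [{"id": n, "category": "news"} for n in range(5)]
ranked.append({"id": 9, "category": "sports"})
print([i["id"] for i in diversify(ranked)])  # [0, 1, 2, 9]
```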
Step 2: Verify with Explainability
- Go to Ranking > Explainability.
- Enter a test user ID and an item ID for a recently published item.
- Confirm the “Boost new content” rule shows matched.
- Try an older item and confirm it shows no match.
Scenario 4: Gradual feature rollout
Goal: Roll out a new set of ranking rules to 10% of users first, then expand.

Step 1: Deploy rules as inactive
- Create your new rules but leave them inactive (toggle off).
- Note their rule IDs.
Step 2: Create a staged experiment
- Go to A/B Testing and create an experiment:
- Control (90%): no config overrides.
  - Treatment (10%): config overrides with `include_rule_ids` set to the new rule IDs.
- Set to Running.
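What makes “expand later” safe is deterministic bucketing: a user’s variant depends only on their ID and the split, so with threshold-style bucketing, raising the Treatment share only moves Control users into Treatment, never the reverse. A sketch under that assumption (the platform’s actual assignment method may differ):

```python
import hashlib

def bucket(user_id: str, treatment_pct: int) -> str:
    """Deterministically assign a user to a variant (illustrative sketch;
    the platform's real bucketing may differ)."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "treatment" if h < treatment_pct else "control"

# The same user always lands in the same bucket, and raising treatment_pct
# never moves a treatment user back to control.
assert bucket("user-42", 10) == bucket("user-42", 10)
```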
Step 3: Monitor and expand
- Check experiment metrics after a few days.
- If metrics look good, edit the experiment and adjust traffic: Control 50%, Treatment 50%.
- Continue monitoring. When confident, set the experiment to Completed, activate the rules for everyone, and delete the experiment.
Combining features
These scenarios demonstrate a pattern: segments define who, rules define what, scheduling defines when, pipelines define how, and experiments measure whether it works.

| Feature | Role |
|---|---|
| User Segments | Target specific user cohorts |
| Rules Engine | Override rankings with business logic |
| Rule scheduling | Time-bound campaigns |
| Pipeline Config | Control which processing stages run |
| A/B Testing | Measure impact with traffic splits |
| Explainability | Debug and verify before launch |
| Analytics | Monitor outcomes in production |

