These walkthroughs show how the ranking platform features work together to solve real problems. Each scenario can be completed entirely within the console — no engineering support required.

Scenario 1: Streaming platform series launch

Goal: Promote a new series for two weeks, targeting users who haven’t watched it yet, while running an A/B test to measure the impact.

Step 1: Define the audience

Create two segments to separate users who have and haven’t engaged with the series.
Segment: “Hasn’t watched Series X”
  1. Go to Ranking > User Segments and click New Segment.
  2. Add a condition:
    • Type: item_interaction
    • Item ID: the series item ID (e.g. series_x_ep1)
    • Operator: interactions <
    • Value: 1
  3. Name it “Hasn’t watched Series X” and save.
Segment: “Started but didn’t finish Series X”
  1. Create another segment.
  2. Add two conditions (AND logic):
    • Type: item_interaction, Item ID: series_x_ep1, Operator: interactions >, Value: 0
    • Type: item_interaction, Item ID: series_x_ep10 (finale), Operator: interactions <, Value: 1
  3. Name it “Started not finished Series X” and save.
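The two segment definitions above can be sketched as data. This is an illustrative model only — the condition keys, operator names, and matching logic are assumptions, not the console’s actual schema.

```python
# Illustrative sketch of the two segments (field names are assumptions).
SEGMENTS = [
    {
        "name": "Hasn't watched Series X",
        "conditions": [
            {"type": "item_interaction", "item_id": "series_x_ep1",
             "operator": "interactions_less_than", "value": 1},
        ],
    },
    {
        "name": "Started not finished Series X",
        "conditions": [  # AND logic: both must hold
            {"type": "item_interaction", "item_id": "series_x_ep1",
             "operator": "interactions_greater_than", "value": 0},
            {"type": "item_interaction", "item_id": "series_x_ep10",
             "operator": "interactions_less_than", "value": 1},
        ],
    },
]

def matches(segment, interaction_counts):
    """Check a user's per-item interaction counts against every condition."""
    for cond in segment["conditions"]:
        count = interaction_counts.get(cond["item_id"], 0)
        if cond["operator"] == "interactions_less_than":
            if not count < cond["value"]:
                return False
        else:  # interactions_greater_than
            if not count > cond["value"]:
                return False
    return True
```

Note that the two segments are mutually exclusive by construction: any interaction with `series_x_ep1` moves a user out of the first segment and into candidacy for the second.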

Step 2: Create the rules

Rule: “Pin Series X for new viewers”
  1. Go to Ranking > Rules Engine and click New Rule.
  2. Type: pin. Priority: 80.
  3. Add a condition: field = segment_id, select “Hasn’t watched Series X”.
  4. Action: pin the series into top 3 positions.
  5. Schedule: set start date to the campaign launch and end date to two weeks later.
  6. Save.
Rule: “Boost continuation for partial viewers”
  1. Create another rule.
  2. Type: boost. Priority: 70.
  3. Add a condition: field = segment_id, select “Started not finished Series X”.
  4. Action: boost the series episodes with a high factor.
  5. Same schedule as above.
  6. Save.
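To make the pin rule concrete, here is a minimal sketch of its shape and effect. The rule keys, the campaign dates, and the pinning behavior shown are assumptions for illustration, not the platform’s export format.

```python
from datetime import date

# Hypothetical rule definition mirroring the console steps above.
PIN_RULE = {
    "name": "Pin Series X for new viewers",
    "type": "pin",
    "priority": 80,
    "conditions": [{"field": "segment_id", "value": "hasnt_watched_series_x"}],
    "action": {"item_id": "series_x_ep1", "max_position": 3},
    "schedule": {"start": date(2025, 6, 1), "end": date(2025, 6, 15)},  # example dates
}

def is_active(rule, today):
    """A scheduled rule only applies between its start and end dates."""
    s = rule["schedule"]
    return s["start"] <= today <= s["end"]

def apply_pin(rule, ranked_ids):
    """Ensure the pinned item appears within the top-N window.

    Items already inside the window are left where they are.
    """
    item = rule["action"]["item_id"]
    max_pos = rule["action"]["max_position"]
    if item in ranked_ids and ranked_ids.index(item) >= max_pos:
        rest = [i for i in ranked_ids if i != item]
        ranked_ids = rest[:max_pos - 1] + [item] + rest[max_pos - 1:]
    return ranked_ids
```

The schedule check is what makes Step 5 of the experiment hands-off: once the end date passes, the rule simply stops matching.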

Step 3: Run an A/B test

  1. Go to A/B Testing and click New Experiment.
  2. Name: “Series X launch campaign”.
  3. Description: “Hypothesis: pinning Series X for non-viewers increases completion rate.”
  4. Variants:
    • Control (50%): no config overrides (standard recommendations).
    • Treatment (50%): config overrides with include_rule_ids set to the IDs of the two rules above.
  5. Create the experiment and set status to Running on launch day.
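The experiment above can be summarized as a single config. The schema and the rule IDs below are illustrative placeholders; the key point is that only the treatment variant carries the `include_rule_ids` override.

```python
# Hypothetical experiment definition; rule IDs are placeholders.
EXPERIMENT = {
    "name": "Series X launch campaign",
    "description": ("Hypothesis: pinning Series X for non-viewers "
                    "increases completion rate."),
    "variants": [
        {"name": "control", "traffic": 0.5, "config_overrides": {}},
        {"name": "treatment", "traffic": 0.5,
         "config_overrides": {"include_rule_ids": ["rule_pin_x", "rule_boost_x"]}},
    ],
    "status": "draft",  # flip to "running" on launch day
}

# Traffic allocations must cover all users exactly once.
assert sum(v["traffic"] for v in EXPERIMENT["variants"]) == 1.0
```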

Step 4: Measure results

  1. During the campaign, go to the experiment’s Results tab.
  2. Click Refresh metrics periodically.
  3. Compare CTR and conversion rate between Control and Treatment.
  4. Check the lift percentage to see if the campaign is working.
  5. After two weeks, set the experiment status to Completed. The rules auto-deactivate via their schedule.
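The lift percentage in step 4 is the relative improvement of Treatment over Control. A quick sketch with made-up numbers shows the arithmetic:

```python
# Sketch of CTR and relative lift; the counts are invented for illustration.
def ctr(clicks, impressions):
    """Click-through rate: clicks per impression."""
    return clicks / impressions if impressions else 0.0

def lift_pct(control, treatment):
    """Relative lift of treatment over control, in percent."""
    return (treatment - control) / control * 100 if control else 0.0

control_ctr = ctr(400, 10_000)    # 4.0%
treatment_ctr = ctr(500, 10_000)  # 5.0%
print(round(lift_pct(control_ctr, treatment_ctr), 1))  # prints 25.0
```

A positive lift on CTR alone is not proof the campaign worked; since the stated hypothesis is about completion rate, compare that metric as well before declaring success.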

Scenario 2: E-commerce flash sale weekend

Goal: Boost sale items for a weekend, show premium items to high-value customers, suppress out-of-stock products, and test whether manual merchandising rules outperform pure ML.

Step 1: Create the high-value customer segment

  1. Go to Ranking > User Segments and click New Segment.
  2. Add a condition:
    • Type: computed
    • Field: total_events
    • Operator: greater_than
    • Value: 100
  3. Name it “High-value customers” and save.

Step 2: Create the rules

Rule: “Flash sale boost”
  1. Go to Ranking > Rules Engine and click New Rule.
  2. Type: boost. Priority: 60.
  3. Add a condition: field = category, operator = equals, value = sale.
  4. Action: boost with a high factor.
  5. Schedule: Friday 18:00 to Sunday 23:59.
Rule: “VIP exclusive items”
  1. Create another rule.
  2. Type: boost. Priority: 80 (higher than the sale boost so it takes precedence).
  3. Add conditions:
    • field = segment_id, select “High-value customers”
    • field = tier, operator = equals, value = premium
  4. Action: boost premium items strongly.
  5. Same weekend schedule.
Rule: “Suppress out-of-stock”
  1. Create a third rule.
  2. Type: filter. Priority: 100 (highest — always applies first).
  3. Add a condition: field = stock_status, operator = equals, value = out_of_stock.
  4. Action: exclude matching items.
  5. No schedule (always active).
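The three priorities are what make this scenario safe: a sketch of the ordering, assuming (as the document implies) that higher-priority rules apply first. Rule shapes are illustrative.

```python
# The three sale-weekend rules, ordered by priority (higher runs first).
RULES = [
    {"name": "Flash sale boost",       "type": "boost",  "priority": 60},
    {"name": "VIP exclusive items",    "type": "boost",  "priority": 80},
    {"name": "Suppress out-of-stock",  "type": "filter", "priority": 100},
]

ordered = sorted(RULES, key=lambda r: r["priority"], reverse=True)
print([r["name"] for r in ordered])
```

Because the filter has the highest priority, out-of-stock products are removed before either boost runs, so the sale and VIP boosts can only promote items that are actually purchasable.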

Step 3: Set up the pure ML experiment

  1. Go to Ranking > Pipeline Config.
  2. Create or note your default pipeline (all stages enabled).
  3. Consider creating a second pipeline with the rules stage disabled for pure ML ranking.
  4. Go to A/B Testing and create an experiment:
    • Control (50%): default pipeline.
    • Treatment (50%): pipeline with rules stage disabled.
  5. Set to Running on Friday.
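The two pipelines differ in exactly one stage. The stage names below are assumptions (the console may use different labels); the sketch only illustrates the shape of the comparison.

```python
# Illustrative pipeline configs; stage names are assumptions.
DEFAULT_PIPELINE = {
    "name": "default",
    "stages": {"retrieval": True, "ml_ranking": True, "rules": True},
}
PURE_ML_PIPELINE = {
    "name": "pure_ml",
    "stages": {"retrieval": True, "ml_ranking": True, "rules": False},
}

def enabled_stages(pipeline):
    """List the stages that will actually run for this pipeline."""
    return [stage for stage, on in pipeline["stages"].items() if on]
```

Keeping every other stage identical is what makes this a clean test: any metric difference between variants is attributable to the rules stage alone.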

Step 4: Monitor the sale

  1. Open Analytics and watch served volume.
  2. Filter by user ID to spot-check that VIP users see premium items.
  3. Refresh experiment metrics throughout the weekend.
  4. After the sale, complete the experiment and compare conversion rates.
  5. The sale rules auto-deactivate after Sunday — no cleanup needed.

Scenario 3: Content freshness and diversity

Goal: Ensure recommendations always include recent content and don’t over-represent a single category.

Step 1: Create rules

Rule: “Boost new content”
  1. Type: boost. Priority: 50.
  2. Condition: field = published_days_ago, operator = less_than, value = 7.
  3. Action: boost by a moderate factor.
  4. No schedule (always active).
Rule: “Diversify by category”
  1. Type: diversify. Priority: 40.
  2. No conditions (applies to all results).
  3. Action: limit results to at most 3 items per category.
  4. No schedule (always active).
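A diversify rule of this kind is typically a single ordered pass over the ranked list. A minimal sketch, assuming the simplest behavior (items beyond the cap are dropped; a real implementation might demote them instead):

```python
from collections import Counter

def diversify(items, max_per_category=3):
    """Keep ranked order, but stop taking items from a category once
    that category has hit the cap."""
    taken = Counter()
    out = []
    for item in items:
        cat = item["category"]
        if taken[cat] < max_per_category:
            taken[cat] += 1
            out.append(item)
    return out
```

Because the pass preserves order, the top-ranked items in each category survive and only the over-represented tail is trimmed.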

Step 2: Verify with Explainability

  1. Go to Ranking > Explainability.
  2. Enter a test user ID and an item ID for a recently published item.
  3. Confirm the “Boost new content” rule shows matched.
  4. Try an older item and confirm it shows no match.

Scenario 4: Gradual feature rollout

Goal: Roll out a new set of ranking rules to 10% of users first, then expand.

Step 1: Deploy rules as inactive

  1. Create your new rules but leave them inactive (toggle off).
  2. Note their rule IDs.

Step 2: Create a staged experiment

  1. Go to A/B Testing and create an experiment:
    • Control (90%): no config overrides.
    • Treatment (10%): config overrides with include_rule_ids set to the new rule IDs.
  2. Set to Running.
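A 90/10 split like this is usually implemented with deterministic hash-based bucketing, so a given user always lands in the same variant. The platform’s actual assignment method is not documented here; this is a common-pattern sketch only.

```python
import hashlib

def assign_variant(user_id, experiment_id, treatment_pct=10):
    """Deterministically bucket a user: same inputs, same variant.

    Hashing user and experiment together keeps bucket assignments
    independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "treatment" if bucket < treatment_pct else "control"
```

Determinism matters for the expansion in Step 3: raising `treatment_pct` from 10 to 50 only moves control users into treatment, so no one who has already seen the new rules is silently moved back.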

Step 3: Monitor and expand

  1. Check experiment metrics after a few days.
  2. If metrics look good, edit the experiment and adjust traffic: Control 50%, Treatment 50%.
  3. Continue monitoring. When confident, set the experiment to Completed, activate the rules for everyone, and delete the experiment.

Combining features

These scenarios demonstrate a pattern: segments define who, rules define what, scheduling defines when, pipelines define how, and experiments measure whether it works.
  • User Segments: target specific user cohorts
  • Rules Engine: override rankings with business logic
  • Rule scheduling: time-bound campaigns
  • Pipeline Config: control which processing stages run
  • A/B Testing: measure impact with traffic splits
  • Explainability: debug and verify before launch
  • Analytics: monitor outcomes in production