
Events are the signals your users generate when they interact with your product: views, clicks, purchases, shares, and any other action you decide is meaningful. NeuronSearchLab uses these signals to learn each user’s preferences and improve ranking quality over time. This guide explains how to define event types, set their relative importance, and organise them into templates for training. The key idea is:
  • the Events page is where you define the training recipe
  • a template is that recipe
  • starting training from a template creates a new trained version
  • review, approval, naming, and promotion happen later on the Models page

Event types

An event type maps a stable type string (sent in your API or SDK calls) to a human-readable name and a weight. The weight tells the training process how much to value one signal relative to others.
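Conceptually, each definition is a lookup from the stable type string to its display name and weight. A minimal sketch, assuming an illustrative in-memory shape (this is not the platform's API; the object layout and the zero-weight fallback for unknown types are our own):

```javascript
// Illustrative only: how an event-type definition relates the stable
// type string (sent from code) to a display name and a training weight.
const eventTypes = {
  click: { name: "Click", weight: 10 },
  purchase: { name: "Purchase", weight: 100 },
};

// The type string in an SDK call selects the weight used at training time.
function weightFor(type) {
  const def = eventTypes[type];
  return def ? def.weight : 0; // illustrative fallback for unknown types
}
```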

Add an event type

1. Open Events

2. Name the event: enter a name in the Event name field, for example click, purchase, or video_complete.

3. Set a signal weight: set a weight between 1 and 100. Higher weight means the training process treats that signal as more informative.

4. Add the event: click Add event.
A common starting point:

  Name          Weight
  impression    1
  click         10
  add_to_cart   30
  purchase      100
You can adjust weights at any time. Changes take effect on the next training run.

Send events from your application

Pass the event type when tracking actions via the SDK:
await sdk.trackEvent({
  type: "click",
  userId: "user-123",
  itemId: "itm_7f3a2c9e",
  metadata: { placement: "homepage" },
});
Or via the REST API:
curl -X POST https://api.neuronsearchlab.com/v1/events \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '[{
    "type": "click",
    "user_id": "user-123",
    "item_id": "itm_7f3a2c9e",
    "context_id": "101",
    "occurred_at": 1777478400,
    "click": {}
  }]'
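For applications that call the REST endpoint directly from JavaScript, the same batch payload can be assembled programmatically. This is a sketch based on the curl example above: the endpoint and field names come from that example, while buildEvent and sendEvents are illustrative helper names.

```javascript
// Build one event object matching the REST payload shown above.
// Field names (type, user_id, item_id, occurred_at) follow the curl example.
function buildEvent(type, userId, itemId, occurredAt) {
  return {
    type,
    user_id: userId,
    item_id: itemId,
    occurred_at: occurredAt, // Unix timestamp, as in the example
    [type]: {}, // per-type detail object, empty for a bare event
  };
}

// Sketch of sending a batch; requires a valid access token.
// The API accepts a JSON array, as the curl example shows.
async function sendEvents(events, accessToken) {
  const res = await fetch("https://api.neuronsearchlab.com/v1/events", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(events),
  });
  return res.ok;
}
```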

Signal templates

A template captures a specific combination of event types, weights, and training thresholds. Saving a template lets you reproduce a training run exactly or switch between different configurations without losing your settings.

Create a template

1. Configure event signals: on the Events page, configure your event types and set the thresholds described below.

2. Name the template: enter a name in the Template name field.

3. Choose a status: set the status to Draft (not yet used for training) or Published (ready to train from).

4. Save the template: click Save template.

Training thresholds

Two thresholds control when a model is trained:
  • Per-signal threshold: the minimum number of events for each individual event type before that signal is included in training. Set this to avoid training on noise from rarely used signals.
  • Minimum total events: the minimum total number of events across all signals before training will proceed. This prevents a model from training on too little data to generalise.
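The console applies these checks for you, but the logic can be sketched as a simple gate. The inputs and function name here are illustrative:

```javascript
// counts: events observed per type, e.g. { click: 1200, purchase: 40 }
// perSignalMin: minimum events for a signal to be included in training
// totalMin: minimum total events across all signals before training proceeds
function trainingGate(counts, perSignalMin, totalMin) {
  // Signals below the per-signal threshold are excluded as noise.
  const included = Object.entries(counts)
    .filter(([, n]) => n >= perSignalMin)
    .map(([type]) => type);
  // Training proceeds only once total event volume is sufficient.
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  return { included, canTrain: total >= totalMin };
}
```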

Train from a template

1. Select a template: choose a saved template from the template list.

2. Configure training options:
  • Epochs: how many passes through the training data (default 5, range 1-50).
  • Batch size: number of events processed together (64, 128, 256, or 512).
  • Learning rate: step size for gradient updates (default 0.001).

3. Start training: click Start training. The console creates a new trained version from that template and shows the run status as training progresses.
This step does not require you to already have a model record. New accounts can train immediately.
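As a sanity check on the option ranges above, here is a sketch validator. The bounds come from this guide; the function name is illustrative, and the batch-size default of 128 is an assumption, since the guide states defaults only for epochs and learning rate.

```javascript
const BATCH_SIZES = [64, 128, 256, 512]; // allowed values per this guide

// Returns a list of problems; an empty list means the options are within
// the documented ranges. Defaults for epochs and learning rate come from
// the guide; the batch-size default of 128 is illustrative.
function validateTrainingOptions({
  epochs = 5,
  batchSize = 128,
  learningRate = 0.001,
} = {}) {
  const errors = [];
  if (!Number.isInteger(epochs) || epochs < 1 || epochs > 50) {
    errors.push("epochs must be an integer between 1 and 50");
  }
  if (!BATCH_SIZES.includes(batchSize)) {
    errors.push("batch size must be 64, 128, 256, or 512");
  }
  if (!(learningRate > 0)) {
    errors.push("learning rate must be a positive number");
  }
  return errors;
}
```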

What happens after training

After a run finishes:
  • a new trained version appears on the Models page
  • the system gives it a default name based on the template and the run identifier
  • your team can add a human-friendly label and description
  • you can approve it
  • you can promote it to a serving target
If you retrain later, the platform creates another new version. It does not overwrite the earlier one.

Monitoring training runs

Open Console > Training Jobs to see the status of each run, its duration, and the metrics reported at completion. From this page you can:
  • View final training metrics and run manifests.
  • Stop an in-progress run if needed.
For approval, version labelling, and promotion, continue in Console > Models.

Analytics: measuring signal quality

After deploying a model, monitor how different event types drive engagement in Console > Analytics. Track each event type as its own metric alongside recommendation serve counts to see which signals correlate with downstream outcomes. If high-weight events rarely occur, consider lowering their weight or broadening what you treat as a conversion signal. If impressions are high but clicks are low, adjust your context filters and ranking rules before retraining.
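One way to spot the "impressions are large but clicks are low" pattern is to compute each event type's rate relative to impressions. A minimal sketch, assuming a per-type counts object from your analytics export (the function name and input shape are illustrative):

```javascript
// counts: per-event-type totals, e.g. { impression: 50000, click: 400 }
// Returns each non-impression event type's rate per impression, so a
// low click rate against a large impression count stands out.
function ratesPerImpression(counts) {
  const impressions = counts.impression || 0;
  const rates = {};
  for (const [type, n] of Object.entries(counts)) {
    if (type === "impression") continue;
    rates[type] = impressions > 0 ? n / impressions : 0;
  }
  return rates;
}
```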