Passive feedback signals: learn what I like and don't like #1

Open
opened 2026-02-22 18:54:35 +00:00 by antialias · 0 comments
Owner

Problem

Right now the taste profile treats all listened tracks equally (weighted by play count and recency). But listening to a track doesn't mean I liked it — it might have come up in a playlist and I tolerated it, or I skipped it halfway through. The system has no way to distinguish "I love this" from "this was playing in the background" from "I hated this and skipped it."

Signals to consider

  • Skip behavior — if a track is skipped within the first 30 seconds, that's a strong negative signal. Music Assistant / HA may expose `media_position` or state transitions that indicate skips.
  • Full plays vs partial plays — a track played to completion is a stronger positive signal than one abandoned at 40%.
  • Repeat plays — already captured via play count, but could be weighted more heavily.
  • Thumbs up/down — explicit feedback via an API endpoint (`POST /api/feedback`), could be triggered by OpenClaw ("I don't like this song").
  • Queue additions — if the user manually queues a track after a recommendation plays, that's a strong positive signal.
  • Volume changes — turning volume up during a track vs down could be a signal (probably too noisy).

Design questions

  • How do we detect skips? Do we poll the speaker state, or does HA fire events we can hook into?
  • Should negative signals (skips) actively push the taste profile away from those embeddings, or just reduce their weight?
  • How do we handle the "background listening" case where someone puts on a playlist and walks away — every track gets full play but there's no active preference signal?
  • Should we maintain separate positive/negative embedding centroids instead of a single taste profile vector?
  • What's the right weight balance between implicit signals (play duration) and explicit signals (thumbs up/down)?

Impact

Without this, recommendations converge on "inoffensive middle ground" rather than reflecting actual preferences. This is the difference between a recommendation system that's okay and one that's genuinely useful.


Reference: antialias/haunt-fm#1