Siren

Distributor Compensation Modeling

How to pick the right pool percentage and metric point values so your distributor pays collaborators fairly. Worked examples for moving from flat-rate creator pay to performance-based revenue sharing.

Requires Siren Essentials

Last updated: April 10, 2026

Setting up a distributor requires picking a pool percentage and assigning point values to each metric you track. The mechanics are straightforward once you’ve done it a few times, but the first time you do it the numbers can feel arbitrary. This page walks through the math so you can pick values that make sense for your business instead of guessing.

The two levers

A distributor has two configuration decisions that drive payouts. Everything else is secondary.

The first is the pool percentage: what fraction of your relevant revenue goes into the distribution pool each period. The second is the point values you assign to each tracked event type. Together they determine how much each collaborator earns, and adjusting either one changes the outcome.

Pick the pool percentage first, because it’s the budget question. Pick the point values second, because they’re the fairness question.

Picking the pool percentage

This is the budget question. Ask it like this: how much am I willing to spend on this distributor each period, and what pool percentage produces that spend?

If you currently pay collaborators a fixed total each month, divide that total by your monthly relevant revenue to find the equivalent pool percentage. If your monthly subscription revenue is $50,000 and you currently pay 30 instructors a flat $500 each ($15,000 total), that’s a 30% pool. Starting at the equivalent percentage means your total payouts under the new model will roughly match your current spend. Nobody’s getting a budget surprise.
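The equivalent-pool-percentage calculation above can be sketched in a few lines. The figures match the worked example; substitute your own revenue and flat-rate numbers.

```python
# Find the pool percentage that matches your current flat-rate spend.
monthly_revenue = 50_000   # relevant revenue per period
collaborators = 30
flat_rate = 500            # current flat payment per collaborator

current_spend = collaborators * flat_rate           # $15,000
pool_percentage = current_spend / monthly_revenue   # 0.30, i.e. a 30% pool

print(f"Equivalent pool percentage: {pool_percentage:.0%}")
```

Starting from this number means the first period's total payout roughly matches what you already spend.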

After you’ve run the new model for a few periods, adjust the percentage based on how the pool feels relative to performance. If collaborators are earning noticeably less than before and you want to keep them happy, bump the percentage up. If the new model is producing payouts larger than the old budget and you can’t sustain that, bring it down. Even small adjustments move real money, because the pool scales linearly with the percentage: on $50,000 of monthly revenue, every percentage point is another $500 in payouts.

Picking metric point values

This is the fairness question. Different events have different difficulty and different value to your business, and the point values encode that judgment.

The trap to avoid is weighting every event the same. If you track lesson completions (easy, frequent) and course completions (harder, rarer), giving them equal weight would over-reward instructors whose students take shorter courses, because short-course instructors rack up more total completions per student.

A common starting ratio for an LMS is course completion = 10 points, lesson completion = 1 point. Ten lesson completions equal one course completion in the score. This matches the rough intuition that finishing a course is harder and more valuable than finishing a single lesson, without being so lopsided that lessons stop mattering.

Adjust the ratio based on what you actually want to reward. If you care more about retention than throughput, weight course completions higher, maybe 20:1 or 50:1. If you’re a podcast network paying based on listener engagement, you might weight a full episode listen at 10 points and a partial listen at 1 point. If you’re running a content site, you might weight a paid conversion at 100 points and a free signup at 1 point. Pick a ratio that matches your business, then adjust after the first period based on the actual distribution of payouts.
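One way to encode these weights is a simple mapping from event type to points, plus a score that sums the weighted counts. The event names here are illustrative, not a fixed Siren schema.

```python
# Metric point values: the 10:1 course-to-lesson starting ratio.
WEIGHTS = {
    "course_completion": 10,
    "lesson_completion": 1,
}

def score(event_counts: dict[str, int], weights: dict[str, int] = WEIGHTS) -> int:
    """Total points for one collaborator's event counts in a period."""
    return sum(weights[event] * count for event, count in event_counts.items())

# 50 lessons + 5 courses = 50 + 50 = 100 points
print(score({"lesson_completion": 50, "course_completion": 5}))  # 100
```

Tuning the ratio later is a one-line change to the mapping, which is part of why starting simple and adjusting after the first period works well.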

A worked example: moving from flat rate to performance-based

A course platform pays 30 instructors $500/month each. Monthly subscription revenue is $50,000. They want to move to a performance-based pool tied to student engagement so that instructors whose content actually gets used earn more.

Step 1: pool percentage. Current spend divided by relevant revenue is $15,000 / $50,000 = 30%. Start there.

Step 2: metric weights. Course completions matter more than lesson completions. Assign course completion = 10 points, lesson completion = 1 point. This is the starting ratio and can be tuned after the first period.

Step 3: calculate the first month under the new model. At the end of the month, imagine three instructors with these engagement totals:

  • Instructor A: 50 lesson completions + 5 course completions = 50 + 50 = 100 points
  • Instructor B: 200 lesson completions + 2 course completions = 200 + 20 = 220 points
  • Instructor C: 10 lesson completions + 0 course completions = 10 points

The pool is $50,000 × 30% = $15,000. Total points across all 30 instructors (rolling up everyone, not just these three) come to 3,300. That makes the per-point payout $15,000 / 3,300 ≈ $4.55 (rounded). Each instructor’s earnings are their score times the per-point rate:

  • Instructor A: 100 × $4.55 = $455
  • Instructor B: 220 × $4.55 = $1,001
  • Instructor C: 10 × $4.55 = $45.50
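The month-end calculation above, sketched end to end. Only three of the 30 instructors are shown individually; the remaining points are lumped into an "others" bucket so the totals match the worked numbers.

```python
# Step 3 from the example: pool, total points, per-point rate, payouts.
pool = 50_000 * 0.30                    # $15,000
points = {"A": 100, "B": 220, "C": 10, "others": 2_970}

total_points = sum(points.values())         # 3,300
per_point = round(pool / total_points, 2)   # $4.55, rounded

for name in ("A", "B", "C"):
    earnings = points[name] * per_point
    print(f"Instructor {name}: {points[name]} x ${per_point} = ${earnings:,.2f}")
```

Note that the per-point rate is rounded before multiplying, which is why Instructor B’s $1,001 comes out a dollar over the unrounded figure; in production you would round only the final payouts.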

Step 4: compare to the flat-rate model. Under the old system, A, B, and C would all have earned $500. Under the new one, A is slightly under, B is heavily over, and C is heavily under. This is the redistribution you wanted. Instructor B’s content is being engaged with more, so they earn more. Instructor C’s content barely gets used, so they earn less. The pool is the same size, but it lands where it belongs.

Communicating the change to collaborators

Before you flip the switch, show your collaborators the new model with worked examples. Pick a few real instructors from your data and walk them through what the new payout would have been last month. People handle change much better when they can see the math themselves, and it surfaces feedback early.

Run it in parallel for one full period if you can afford to. That means calculating what each collaborator would have earned under the new model, but still paying them the old flat rate for that period. At the end of the period, share the “what it would have been” numbers alongside the actual paycheck. The collaborators who would have earned more get an incentive to keep going. The ones who would have earned less get a heads-up and a chance to ask questions before their income actually changes.
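A parallel run boils down to a side-by-side comparison per collaborator: the flat rate actually paid versus the "what it would have been" amount under the new model. A minimal sketch, using the figures from the worked example:

```python
# Shadow calculation for the parallel-run period.
flat_rate = 500      # what each instructor is actually paid this period
per_point = 4.55     # per-point rate from the worked example

shadow = {"A": 100 * per_point, "B": 220 * per_point, "C": 10 * per_point}

for name, amount in shadow.items():
    delta = amount - flat_rate
    direction = "more" if delta > 0 else "less"
    print(f"Instructor {name}: ${amount:,.2f} under the new model "
          f"(${abs(delta):,.2f} {direction} than the flat rate)")
```

Sharing these deltas before the switch is what turns the change from a surprise into a conversation.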

Be ready to adjust the metric weights based on what you hear. The first ratio you pick is rarely the final one.

Iterating

After the first one or two periods on the new model, look at the actual payouts and decide if they match your intent. Two patterns are worth watching for.

If everyone’s earning roughly the same amount, the redistribution didn’t actually redistribute. This usually means your weight ratio between high-effort and low-effort events is too small. Try doubling the weight on the high-effort event and see what happens the next period.

If a small number of collaborators are eating most of the pool and the rest are earning almost nothing, you’ve got a winner-take-all dynamic. Check whether your high-weight metric is too easy to game (a single instructor flooding the system with a promotional push, for example) and consider either capping individual earnings or switching to a different distribution structure. If the concentration is persistent and not a one-off, the Performance Weighted Pool may not be the right fit, and you should look at Choosing a Distribution Structure to see the alternatives.
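A quick way to spot the winner-take-all pattern is to measure what share of the pool the top earners take. The payout list and the 60% threshold below are illustrative assumptions, not Siren defaults.

```python
# Concentration check: share of the pool going to the top earners.
payouts = [455.00, 1001.00, 45.50, 300.00, 120.00]  # illustrative figures

total = sum(payouts)
top_two = sorted(payouts, reverse=True)[:2]
top_share = sum(top_two) / total

print(f"Top 2 earners take {top_share:.0%} of the pool")
if top_share > 0.60:
    print("Concentrated: consider caps or a different distribution structure")
```

Run this after each period; a one-off spike is usually a promotional push, while a persistent high share is the signal to revisit the structure.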

For more on how distributors themselves work and how the pool calculation fits into the full event pipeline, see What are Distributors?.
