# ADR 002 — Simple weighted scoring, configurable via `config.yaml`

Date: 2026-05-10
Status: Accepted
## Context
We needed a way to turn raw participation counts (posts, commits, edits, messages, funding) into a single comparable signal per user. Several approaches were available, ranging from simple weighted sums to machine learning models.
## Options considered
| Option | Notes |
|---|---|
| Simple weighted sum | Transparent, adjustable, immediately understandable |
| Normalized weighted sum | More "fair" across platforms with different scales |
| Graph-based reputation | Models relationships, not just activity counts |
| Machine learning model | Could learn patterns, but opaque and hard to argue with |
| Equal weight per platform | Simpler but ignores meaningful differences in effort |
## Decision
Simple weighted sum with weights defined in `config.yaml`.
The formula is:

```
score = (forum_posts × w₁) + (github_commits × w₂) + (wiki_edits × w₃)
      + (matrix_messages × w₄) + (funding_activity × w₅)
```
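As a minimal sketch, the formula translates directly into a weighted sum over per-platform counts (the function and variable names here are illustrative, not taken from the actual codebase):

```python
def score(counts: dict[str, float], weights: dict[str, float]) -> float:
    """Simple weighted sum: each raw participation count times its platform weight."""
    return sum(counts.get(platform, 0) * w for platform, w in weights.items())

# Starting weights from this ADR; counts below are hypothetical.
weights = {"forum_posts": 0.2, "github_commits": 0.4, "wiki_edits": 0.2,
           "matrix_messages": 0.1, "funding_activity": 0.1}
print(score({"github_commits": 50, "forum_posts": 100,
             "wiki_edits": 20, "matrix_messages": 300}, weights))  # → 74.0
```

Platforms absent from `counts` (here, `funding_activity`) simply contribute zero, which keeps the sum traceable: every point in a score maps back to a raw count and a published weight.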
Starting weights (determined by founding vote per ADR 008; adjusted annually thereafter):
- github_commits: 0.4 — highest weight; commits represent sustained technical effort
- forum_posts: 0.2 — meaningful contribution but easier to produce at volume
- wiki_edits: 0.2 — documentation is undervalued; deliberately weighted equally to forum
- matrix_messages: 0.1 — low weight; high volume, hard to assess quality
- funding_activity: 0.1 — included but intentionally small; money ≠ participation
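In `config.yaml`, these starting weights might look something like the following (key names are assumed from the formula; the actual file layout may differ):

```yaml
# Platform weights (ADR 002). Per ADR 008, changes go through the
# community vote process, not direct edits to this file.
weights:
  github_commits: 0.4
  forum_posts: 0.2
  wiki_edits: 0.2
  matrix_messages: 0.1
  funding_activity: 0.1
```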
## Why not normalize?
Normalization (converting scores to percentages or z-scores) was deliberately deferred. Raw sums are easier to explain and trace. "Your score is 78.8" is harder to argue with than "your score is in the 94th percentile" when the dataset is small and the model is new.
Normalization can be added once the signal is validated socially.
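If normalization is added later, a z-score variant could be layered on top of the raw sums without changing the formula itself. This is a purely illustrative sketch, not part of the current design:

```python
import statistics

def z_scores(raw_scores: list[float]) -> list[float]:
    """Convert raw weighted sums to z-scores across the whole community."""
    mean = statistics.fmean(raw_scores)
    sd = statistics.pstdev(raw_scores)
    if sd == 0:
        return [0.0] * len(raw_scores)  # no spread: everyone scored identically
    return [(s - mean) / sd for s in raw_scores]

print(z_scores([10.0, 20.0, 30.0]))  # symmetric around 0
```

The trade-off the ADR names still applies: a z-score depends on everyone else's activity, so an individual score can shift even when that person's own contributions did not.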
## Consequences
- Scores are not directly comparable across communities with different activity volumes
- Platforms with high message counts (Matrix) could dominate if weight is raised carelessly
- Anyone who disagrees with the weights can fork `config.yaml` and run their own ranking (see Correctability over precision)
- Platform weights may not be changed via direct `config.yaml` edits; changes go through the community vote process defined in ADR 008