# Agency
A participation signal system for open source communities.
Agency makes existing human activity legible — who is contributing, where, and how much — so that people can act decisively, with standing that is visible to others.
It is not a governance engine and does not make decisions. But over time, it does confer a form of authority — because agency and authority are not separate things. Visibility of consistent participation becomes recognized legitimacy. The system makes that process explicit rather than leaving it implicit and contested.
OSArch is the reference community. If you are from another community, everything here is designed to be forked. Start with `config.yaml`.
## The core idea
In most open source communities, legitimacy is earned through participation — but that participation is rarely visible in one place. People hesitate to act because participation is real but untracked — and what isn't tracked can always be contested.
Agency addresses that by aggregating participation signals across platforms (forum, code, wiki, chat, funding) into a single ranked table. The goal is not a leaderboard — it is a coordination tool that reduces decision latency and makes distributed authority visible.
Over time, consistent participation recognized by this system becomes a genuine form of authority. That is not a side effect to be avoided — it is the point. Authority earned through sustained contribution is more legitimate than authority assigned by structure.
Legitimacy comes from participation, not structure.
## Quickstart
```sh
git clone <this repo>
cd agency
pip install -r requirements.txt
python main.py
```
Example output (sample data, not real community figures):
```
Rank  User        Score
------------------------------------
   1  moult        78.8
   2  aothms       56.1
   3  steverugi    28.0
   4  yorik        25.3
   5  theoryshaw   19.5
```
## How scores are calculated
Scores are a weighted sum of participation counts across platforms:
| Signal | Default weight | Rationale |
|---|---|---|
| GitHub commits | 0.4 | Sustained technical effort |
| Forum posts | 0.2 | Meaningful contribution, easier to produce at volume |
| Wiki edits | 0.2 | Documentation is undervalued; deliberately equal to forum |
| Matrix messages | 0.1 | High volume, hard to assess quality |
| Funding activity | 0.1 | Included but intentionally small — money ≠ participation |
Weights live in `config.yaml`. Change them, re-run, see a different ranking.
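As an illustration of what such a weights file might contain (a hypothetical sketch; the actual key names in `config.yaml` may differ), using the default weights from the table above:

```yaml
# Hypothetical shape — check the repo's config.yaml for the real keys
weights:
  github_commits: 0.4
  forum_posts: 0.2
  wiki_edits: 0.2
  matrix_messages: 0.1
  funding_activity: 0.1
```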
Disagreement with the weights is expected and welcome — that is the point.
A future direction is a lightweight community process for proposing and deciding weight changes — open to any active participant, with equal say regardless of agency rank.
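In code, the weighted sum described above can be sketched as follows. This is a minimal illustration, not the actual implementation (which lives in `src/scoring/score.py`); the signal key names here are assumptions.

```python
# Default weights, mirroring the table above.
# Signal names are hypothetical; the real data model may differ.
WEIGHTS = {
    "github_commits": 0.4,
    "forum_posts": 0.2,
    "wiki_edits": 0.2,
    "matrix_messages": 0.1,
    "funding_activity": 0.1,
}

def score(counts, weights=WEIGHTS):
    """Weighted sum of participation counts; unknown signals contribute 0."""
    return sum(weights.get(signal, 0.0) * n for signal, n in counts.items())

# 100 commits, 60 forum posts, 20 wiki edits, 30 chat messages:
print(score({"github_commits": 100, "forum_posts": 60,
             "wiki_edits": 20, "matrix_messages": 30}))  # 59.0
```

Because the score is a plain linear combination, changing a weight in the config rescales one signal's contribution without touching the others.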
## Adapting this for your community
- Fork the repo
- Replace `data/sample.json` with your community's participation data
- Adjust weights in `config.yaml` to reflect what your community values
- Add or remove signal types in `src/scoring/score.py` to match your platforms
- Run it
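The schema of `data/sample.json` is defined by the repo itself; as a purely hypothetical illustration of what per-user participation data could look like (field names here are assumptions, not the actual format):

```json
{
  "users": [
    {
      "name": "example_user",
      "signals": {
        "github_commits": 100,
        "forum_posts": 60,
        "wiki_edits": 20,
        "matrix_messages": 30,
        "funding_activity": 0
      }
    }
  ]
}
```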
Each fork is its own independent system. No central authority, no shared database.
## Project structure
```
agency/
  README.md           this file — start here
  main.py             entry point — run this
  config.yaml         scoring weights (the thing to argue about)
  requirements.txt    Python dependencies (just pyyaml for now)
  data/
    sample.json       participation data (currently hand-crafted mock data)
  src/
    collectors/       future: one module per data source
    scoring/
      score.py        weighted formula for a single user
      aggregate.py    applies score() across all users, returns ranked result
    outputs/
      table.py        renders output as a CLI table
  docs/
    ARCHITECTURE.md   how the system works and why
    decisions/        one file per significant decision (ADR format)
  CHANGELOG.md        running log of what changed, why, and what's next
```
## Documentation system
Every meaningful change to this project is documented in three places:
- `CHANGELOG.md` — what changed, why, and what it means going forward
- `docs/ARCHITECTURE.md` — how the system currently works, design principles, future directions
- `docs/decisions/` — one file per significant decision, recording options considered and reasoning
This is intentional. The goal is that a newcomer — human or AI — can read those three things and understand not just what the system does, but why it was built this way.
## Current state
This is an early prototype. Current data is hand-crafted mock data, not live API pulls. The scoring model is deliberately simple and unnormalized. Both of these are intentional starting points, not oversights.
See `docs/ARCHITECTURE.md` for known limitations and the reasoning behind them.
## Roadmap (rough order)
- First real data collector (OSArch forum API)
- Additional collectors (GitHub, wiki, Matrix, Open Collective, Funders)
- Normalized scoring
- Web-based dashboard
- Distributed / tamper-evident data layer (IPFS or similar)
## Note for AI agents
If you are an AI working in this codebase, you are welcome to update this README when the project changes in ways that affect how a visitor would understand it.
Keep this file at a high level — purpose, core idea, quickstart, and orientation.
Implementation details belong in docs/ARCHITECTURE.md and docs/decisions/.
If something in the README no longer reflects reality, correct it.
If a major new capability exists that a visitor would want to know about, add it.
When in doubt: would a first-time visitor need this to understand what Agency is and whether it is relevant to them? If yes, it belongs here. If no, it belongs elsewhere.
Before committing anything, check that `CHANGELOG.md` reflects all changes made in that session — not just code changes, but framing decisions, open questions added, and wording corrections. If the changelog is behind, update it before committing. Do not assume the human will catch it.
## Contributing
If you want to change how scores are calculated, start with `config.yaml`.
If you want to add a data source, add a collector in `src/collectors/`.
If you make a significant architectural decision, add an ADR in `docs/decisions/`.
The commit message template (`.gitmessage`) will prompt you to update the docs.