# Agency
A participation signal system for open source communities.
Agency makes existing human activity legible — who is contributing, where, and how much — so that people can act decisively, with standing that is visible to others.
It is not a governance engine and does not make decisions. But over time, it does confer a form of authority — because agency and authority are not separate things. Visibility of consistent participation becomes recognized legitimacy. The system makes that process explicit rather than leaving it implicit and contested.
OSArch is the reference community. If you are from another community, everything here is designed to be forked. Start with `config.yaml`.
## The core idea
In most open source communities, legitimacy is earned through participation — but that participation is rarely visible in one place. People hesitate to act because participation is real but untracked — and what isn't tracked can always be contested.
Agency addresses that by aggregating participation signals across platforms (forum, code, wiki, chat, funding) into a single ranked table. The goal is not a leaderboard — it is a coordination tool that reduces decision latency and makes distributed authority visible.
Over time, consistent participation recognized by this system becomes a genuine form of authority. That is not a side effect to be avoided — it is the point. Authority earned through sustained contribution is more legitimate than authority assigned by structure.
Legitimacy comes from participation, not structure.
## Quickstart

```shell
git clone <this repo>
cd agency
pip install -r requirements.txt
python main.py
```
Example output (sample data, not real community figures):

```text
Rank   User         Score
------------------------------------
1      moult         78.8
2      aothms        56.1
3      steverugi     28.0
4      yorik         25.3
5      theoryshaw    19.5
```
## How scores are calculated
Scores are a weighted sum of participation counts across platforms:
| Signal | Default weight | Rationale |
|---|---|---|
| GitHub commits | 0.4 | Sustained technical effort |
| Forum posts | 0.2 | Meaningful contribution, easier to produce at volume |
| Wiki edits | 0.2 | Documentation is undervalued; deliberately equal to forum |
| Matrix messages | 0.1 | High volume, hard to assess quality |
| Funding activity | 0.1 | Included but intentionally small — money ≠ participation |
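In code, that weighted sum amounts to something like the following. This is a minimal sketch, not the actual implementation; the real formula lives in `src/scoring/score.py`, and the signal key names here are assumptions.

```python
# Sketch of the weighted-sum score. Weights mirror the table above;
# the signal key names are illustrative, not the project's actual schema.
DEFAULT_WEIGHTS = {
    "github_commits": 0.4,
    "forum_posts": 0.2,
    "wiki_edits": 0.2,
    "matrix_messages": 0.1,
    "funding_activity": 0.1,
}

def score(counts: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted sum of per-platform participation counts; missing signals count as 0."""
    return sum(weights[signal] * counts.get(signal, 0) for signal in weights)

# Example: 150 commits, 60 forum posts, 20 wiki edits, 300 chat messages, 10 funding events
print(round(score({"github_commits": 150, "forum_posts": 60, "wiki_edits": 20,
                   "matrix_messages": 300, "funding_activity": 10}), 1))  # -> 107.0
```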
Weights live in `config.yaml`. Change them, re-run, see a different ranking.
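A hypothetical shape for those settings (key names are illustrative; check the actual `config.yaml` for the real structure):

```yaml
# Hypothetical config.yaml fragment -- key names are assumptions.
weights:
  github_commits: 0.4
  forum_posts: 0.2
  wiki_edits: 0.2
  matrix_messages: 0.1
  funding_activity: 0.1
```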
Disagreement with the weights is expected and welcome — that is the point.
A future direction is a lightweight community process for proposing and deciding weight changes — open to any active participant, with equal say regardless of agency rank.
## Adapting this for your community
- Fork the repo
- Replace `data/sample.json` with your community's participation data
- Adjust weights in `config.yaml` to reflect what your community values
- Add or remove signal types in `src/scoring/score.py` to match your platforms
- Run it
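The participation data might look something like this (a hypothetical shape; the real schema is whatever `data/sample.json` defines, so treat every field name here as an assumption):

```json
{
  "users": [
    {
      "name": "example_user",
      "github_commits": 150,
      "forum_posts": 60,
      "wiki_edits": 20,
      "matrix_messages": 300,
      "funding_activity": 10
    }
  ]
}
```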
Each fork is its own independent system. No central authority, no shared database.
## Project structure
```text
agency/
  README.md              this file — start here
  main.py                entry point — run this
  config.yaml            scoring weights (the thing to argue about)
  requirements.txt       Python dependencies (just pyyaml for now)
  .githooks/
    pre-commit           enforces CHANGELOG.md is staged with substantive changes
  data/
    sample.json          participation data (currently hand-crafted mock data)
  src/
    collectors/          future: one module per data source
    scoring/
      score.py           weighted formula for a single user
      aggregate.py       applies score() across all users, returns ranked result
    outputs/
      table.py           renders output as a CLI table
  docs/
    ARCHITECTURE.md      how the system works and why
    STYLE.md             documentation conventions for contributors (human and AI)
    decisions/           one file per significant decision (ADR format)
    sites/
      PROPOSAL_TEMPLATE.md   standard format for proposing a new data source
      CHANGE_TEMPLATE.md     standard format for modifying or retiring a source
      platforms/         one file per platform (API mechanics, signals, concerns)
      active/            one file per active data source (rationale, weight, status)
      proposed/          proposals under community consideration
      retired/           retired sources with reasons recorded
  CHANGELOG.md           running log of what changed, why, and what's next
```
## Documentation system
Every meaningful change to this project is documented in three places:
- `CHANGELOG.md` — what changed, why, and what it means going forward
- `docs/ARCHITECTURE.md` — how the system currently works, design principles, future directions
- `docs/decisions/` — one file per significant decision, recording options considered and reasoning
A fourth document, `docs/STYLE.md`, governs how all of the above are written — conventions for contributors (human and AI) to keep documentation consistent and community-agnostic.
This is intentional. The goal is that a newcomer — human or AI — can read those documents and understand not just what the system does, but why it was built this way.
## Current state
This is an early prototype. Current data is hand-crafted mock data, not live API pulls. The scoring model is deliberately simple and unnormalized. Both of these are intentional starting points, not oversights.
See `docs/ARCHITECTURE.md` for known limitations and the reasoning behind them.
## Roadmap (rough order)
- First real data collector (OSArch forum API)
- Additional collectors (GitHub, wiki, Matrix, Open Collective, Funders)
- Normalized scoring
- Web-based dashboard
- Distributed / tamper-evident data layer (IPFS or similar)
## Note for AI agents
If you are an AI working in this codebase, you are welcome to update this README when the project changes in ways that affect how a visitor would understand it.
Keep this file at a high level — purpose, core idea, quickstart, and orientation.
Implementation details belong in docs/ARCHITECTURE.md and docs/decisions/.
If something in the README no longer reflects reality, correct it.
If a major new capability exists that a visitor would want to know about, add it.
When in doubt: would a first-time visitor need this to understand what Agency is and whether it is relevant to them? If yes, it belongs here. If no, it belongs elsewhere.
Before committing anything, check that CHANGELOG.md reflects all changes made in
that session — not just code changes, but framing decisions, open questions added, and
wording corrections. If the changelog is behind, update it before committing. Do not
assume the human will catch it.
Exploratory work must not leak into commits. If you prototyped an approach that was
later rejected — a naming idea, a data structure, a scoring variant — grep the working
directory for any artifacts before staging. This includes: variable names, column headers,
config keys, doc sections, and template language. If the idea was rejected, none of it
should appear outside the ADR that records the rejection. Run `grep -r "<concept>" .`
before staging if you explored something non-trivial. If in doubt, read the ADR for the
decision and verify the working files match it exactly.
**Project structure diagram:** The diagram in the Project structure section of this file lists every file and directory a newcomer needs to know about. When you add a new file directly under `docs/` (e.g. `docs/STYLE.md`, `docs/CONTRIBUTING.md`),
check whether it appears in the diagram. If not, add it. The pre-commit hook will remind
you mechanically, but do not wait for it — check proactively before staging.
**Cross-linking across docs:** This project uses markdown anchor links to connect related content across files. When you add a new section to any document, ask yourself:
- Does any other document reference this topic loosely (e.g. "see ARCHITECTURE.md")? If so, update that reference to point to the specific section anchor.
- Does this new section belong to a principle, decision, or open question that another doc already links to? If so, verify the existing link still resolves correctly.
- New `###` headers in `docs/ARCHITECTURE.md` and `docs/decisions/` automatically become linkable anchors. Anchor format: lowercase, spaces → hyphens, punctuation removed (e.g. `### How should X?` → `#how-should-x`).
Loose references like "see ARCHITECTURE.md" or "see ADR 003" should be treated as technical debt — replace them with direct anchor links when you encounter them.
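The anchor rule above can be sketched as a small Python helper (an approximation for quick checks, not the exact algorithm markdown renderers use):

```python
import re

def github_anchor(header: str) -> str:
    """Approximate anchor for a markdown header: lowercase,
    punctuation dropped, whitespace collapsed to hyphens."""
    text = header.lstrip("#").strip().lower()
    text = re.sub(r"[^\w\s-]", "", text)    # drop punctuation
    return "#" + re.sub(r"\s+", "-", text)  # spaces -> hyphens

print(github_anchor("### How should X?"))  # -> #how-should-x
```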
## Contributing
One-time setup after cloning (enables commit hooks):

```shell
git config core.hooksPath .githooks
```

This enforces that `CHANGELOG.md` is updated alongside any substantive change.
Bypass with `git commit --no-verify` only if you have a good reason.
If you want to change how scores are calculated, start with `config.yaml`.
If you want to add a data source, use `docs/sites/PROPOSAL_TEMPLATE.md`.
If you want to change or retire a data source, use `docs/sites/CHANGE_TEMPLATE.md`.
If you make a significant architectural decision, add an ADR in `docs/decisions/`.
Before writing any documentation, read `docs/STYLE.md` — it covers changelog conventions, ADR format, and the principle that all docs should be written for any community, not just OSArch.
The commit message template (`.gitmessage`) will prompt you to update the docs.