Your profile
Login email — used to send OTP codes. Must be on an allowed company domain. Engineer records are migrated automatically when changed.
Used in dashboard tables, weekly reports, and the user-menu badge. Updates every engineer record this user owns.
How Product Engineers are evaluated
Two questions drive every PE row on the Dashboard. This page defines them so the rules are transparent and reproducible — there are no hidden judgments.
Justification — did this PE earn their seat this week?
Justified ✓ Yes
The PE put in enough authoring effort this week, spread across enough of their expected workdays, on substantive (non-chore) work, to defend their compensation at this domain's weight.
- Authored at least one commit this week — release-pipeline re-exports of older work do not count.
- Authoring covered a reasonable share of expected workdays (already excludes weekends, PTO, future days, and pre-domain-start days; scaled by part-time weight).
- Work was a real feature, improvement, bug fix, or refactor — not only chores or trivia.
- Effort wasn't a single concentrated burst masking an otherwise empty week.
Not justified ✗ No
Any one of these conditions makes the week Not justified:
- Zero authored commits. Hard rule — release-only weeks never justify a slot.
- Coverage below 50% of expected workdays (when ≥2 days were expected). Catches the "concentrated burst masking an otherwise empty week" pattern — a single big day doesn't replace a week of expected effort.
- All commits were chores or trivia (whitespace, dep bumps, formatting).
- Pipeline activity here reflects work authored in prior weeks being promoted; the PE didn't actually work on this domain this week.
- Slack presence was high but no code was authored — Slack alone doesn't rescue an empty authoring week.
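Taken together, the disqualifiers above can be sketched as a single predicate. This is a minimal illustration, not the real implementation; the `WeekStats` fields and `is_justified` name are assumptions introduced here:

```python
from dataclasses import dataclass

@dataclass
class WeekStats:
    authored_commits: int     # commits authored this week (release re-exports excluded)
    expected_workdays: float  # already clamped: no weekends/PTO/future/pre-start; part-time scaled
    active_days: int          # expected workdays on which commits were authored
    all_chores: bool          # True if every commit was a chore or trivia

def is_justified(w: WeekStats) -> bool:
    if w.authored_commits == 0:
        return False  # hard rule: release-only weeks never justify a slot
    if w.all_chores:
        return False  # chore- or trivia-only weeks don't count
    if w.expected_workdays >= 2 and w.active_days / w.expected_workdays < 0.5:
        return False  # concentrated burst masking an otherwise empty week
    return True
```

Note that the coverage check only applies when at least 2 days were expected, matching the rule above.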
What it does not mean
- It's not about what shipped this week. A giant production deploy authored last month is not this-week effort.
- It's not about overall domain health or pipeline throughput (see below).
- It's not a judgment of the PE overall — it's a one-week snapshot.
- It's not about the absolute value of the work — a one-line fix shipped to thousands of users is still trivial as a week of effort.
Score — how the 1–5 number is calculated
Every PE row carries a single Overall score (1–5) made of two independent sub-scores. The rule is intentionally simple: Overall = min(Meaningfulness, Engagement). The "weakest-link" model means a great week of meaningful work that only happened on one day still scores low — and consistent showing-up on trivial work also scores low. Both have to be there.
Meaningfulness (1–5) — what was actually built
LLM-judged from the diffs and commit categorization. Ignores how the work was distributed across the week — that's Engagement's job. Looks only at what shipped or was authored.
- 5 — Exceptional. Major feature delivery or significant product advancement.
- 4 — Solid. Clear progress with tangible new capabilities at any pipeline stage.
- 3 — Moderate. Some real features or meaningful fixes, even if not yet shipped.
- 2 — Minor. Small fixes or cosmetic improvements.
- 1 — Trivial. Whitespace, dependency bumps, formatting, or other chore-only work.
Engagement (1–5) — how consistently the PE showed up
Deterministic floor computed from coverage (meaningful_days / expected_workdays) and the longest gap between active days. The LLM may adjust the floor by at most ±1 with reasoning. The denominator is already clamped — it excludes weekends, PTO, future days, and pre-domain-start days, and is scaled by part-time weight.
- 5 — 100% coverage. Meaningful work on every expected workday, no gaps. A single missed expected day disqualifies a 5.
- 4 — ≥80% coverage. Longest gap ≤ 1 day, active on at least expected − 1 days.
- 3 — ~60–80% coverage. Gaps ≤ 2 days. Solid but not consistent.
- 2 — ~40–60% coverage or one longer gap. Partial engagement.
- 1 — <40% coverage, or only trivial chore commits sprinkled across the week.
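The deterministic floor can be sketched as a threshold ladder over coverage and longest gap. The thresholds follow the scale above; exact boundary handling and the function name are assumptions for illustration:

```python
def engagement_floor(coverage: float, longest_gap_days: int) -> int:
    """Deterministic Engagement floor; the LLM may adjust by at most ±1."""
    if coverage >= 1.0 and longest_gap_days == 0:
        return 5  # meaningful work on every expected workday, no gaps
    if coverage >= 0.8 and longest_gap_days <= 1:
        return 4
    if coverage >= 0.6 and longest_gap_days <= 2:
        return 3
    if coverage >= 0.4:
        return 2  # also catches high coverage with one longer gap
    return 1
```

Note how a week with high coverage but a 3-day gap falls past the 4 and 3 rungs and lands at 2, matching "~40–60% coverage or one longer gap."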
Overall = min(Meaningfulness, Engagement)
- Meaningfulness 5, Engagement 5 → Overall 5. Best possible week.
- Meaningfulness 5, Engagement 1 → Overall 1. Big-burst week with most days empty — the burst doesn't rescue the week.
- Meaningfulness 1, Engagement 5 → Overall 1. Showed up every day but only on trivia — consistency doesn't rescue the work.
- Meaningfulness 0 (no commits) → Overall 0. Engagement is irrelevant when nothing was authored.
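The weakest-link rule and its no-commits special case reduce to a two-line function. A minimal sketch; the function name is illustrative:

```python
def overall_score(meaningfulness: int, engagement: int) -> int:
    if meaningfulness == 0:
        return 0  # nothing authored: engagement is irrelevant
    return min(meaningfulness, engagement)  # weakest link wins
```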
Domain throughput — is the pipeline moving?
Domain throughput is a domain-level signal independent of any individual PE. It describes what landed across the pipeline (In Review → Testing → Staging → Production) this week, including release events. A domain can be healthy while an individual PE on it is Not justified, and vice versa.
Healthy
Normal flow: production output landed this week, or there's clear progression of substantive work moving stage-to-stage. The domain is shipping.
Stuck
Substantive work piled up in Testing or Staging with nothing reaching Production. Work is accumulating but not crossing the finish line — usually a release-cadence or QA-bottleneck signal.
No production
There is Testing or Staging activity this week, but zero commits reached Production. Lighter than Stuck — the pipeline is alive, just not delivering.
Idle
Little or no activity at any stage this week. The domain didn't visibly move.
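The four states can be sketched as a simple classifier over this week's stage activity. This is a stand-in under stated assumptions: the real signal inspects richer pipeline data, and the `substantive_pileup` flag here abstracts "substantive work piled up in Testing or Staging":

```python
def domain_throughput(production_commits: int,
                      testing_or_staging_commits: int,
                      substantive_pileup: bool) -> str:
    if production_commits > 0:
        return "healthy"        # production output landed this week
    if testing_or_staging_commits > 0:
        # pre-production activity but nothing crossed the finish line
        return "stuck" if substantive_pileup else "no production"
    return "idle"               # little or no activity at any stage
```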
Users & access
Owners can grant per-app roles. Contributors only see their own engineer record; admins manage settings; super-admins can also trigger analyses.