Team Health Dashboards

Pick a role to explore weekly engagement, productivity, and quality metrics for that team.

Active Domains
Total Output (this week)
Avg Meaningfulness
PEs At Risk
Output by Domain
Weekly Trend (all domains)

⚠ Unjustified Output This Week


    Your profile

    Email

    Login email — used to send OTP codes. Must be on an allowed company domain. Engineer records are migrated automatically when changed.

    Display name

    Used in dashboard tables, weekly reports, and the user-menu badge. Updates every engineer record this user owns.

    Account roles

    How Product Engineers are evaluated

    Two questions drive every PE row on the Dashboard. This page defines them so the rules are transparent and reproducible — there are no hidden judgments.

    Justification — did this PE earn their seat this week?

    Justified ✓ Yes

    The PE put in enough authoring effort this week, spread across enough of their expected workdays, on substantive (non-chore) work, to defend their compensation at this domain's weight.

    • Authored at least one commit this week — release-pipeline re-exports of older work do not count.
    • Authoring covered a reasonable share of expected workdays (already excludes weekends, PTO, future days, and pre-domain-start days; scaled by part-time weight).
    • Work was a real feature, improvement, bug fix, or refactor — not only chores or trivia.
    • Effort wasn't a single concentrated burst masking an otherwise empty week.

    Not justified ✗ No

    Any one of these conditions makes the week Not justified:

    • Zero authored commits. Hard rule — release-only weeks never justify a slot.
    • Coverage below 50% of expected workdays (when ≥2 days were expected). Catches the "concentrated burst masking an otherwise empty week" pattern — a single big day doesn't replace a week of expected effort.
    • All commits were chores or trivia (whitespace, dep bumps, formatting).
    • Pipeline activity here reflects work authored in prior weeks being promoted; the PE didn't actually work on this domain this week.
    • Slack presence was high but no code was authored — Slack alone doesn't rescue an empty authoring week.
    The first two rules are deterministic — they override the LLM's verdict. The rest can be applied by the LLM based on the week's pattern.
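The two deterministic rules can be sketched as a pre-check that short-circuits the LLM. This is a minimal illustration only; the function and parameter names are assumptions, not the app's actual API:

```python
def deterministic_justification(authored_commits, covered_days, expected_days):
    """Apply the two hard rules; return None when the LLM should decide.

    Illustrative sketch — names and signature are assumed, not the real code.
    """
    if authored_commits == 0:
        return False  # hard rule: release-only weeks never justify a slot
    if expected_days >= 2 and covered_days / expected_days < 0.5:
        return False  # concentrated burst masking an otherwise empty week
    return None       # the remaining, softer rules go to the LLM
```

Note that the coverage rule only fires when at least 2 days were expected, so a legitimate one-day week (e.g. heavy PTO) is never auto-failed by it.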

    What it does not mean

    • It's not about what shipped this week. A giant production deploy authored last month is not this-week effort.
    • It's not about overall domain health or pipeline throughput (see below).
    • It's not a judgment of the PE overall — it's a one-week snapshot.
    • It's not about the absolute value of the work — a one-line fix shipped to thousands of users is still trivial as a week of effort.

    Score — how the 1–5 number is calculated

    Every PE row carries a single Overall score (1–5) made of two independent sub-scores. The rule is intentionally simple: Overall = min(Meaningfulness, Engagement). The "weakest-link" model means a great week of meaningful work that only happened on one day still scores low — and consistent showing-up on trivial work also scores low. Both have to be there.

    Meaningfulness (1–5) — what was actually built

    LLM-judged from the diffs and commit categorization. Ignores how the work was distributed across the week — that's Engagement's job. Looks only at what shipped or was authored.

    • 5 — Exceptional. Major feature delivery or significant product advancement.
    • 4 — Solid. Clear progress with tangible new capabilities at any pipeline stage.
    • 3 — Moderate. Some real features or meaningful fixes, even if not yet shipped.
    • 2 — Minor. Small fixes or cosmetic improvements.
    • 1 — Trivial. Whitespace, dependency bumps, formatting, or other chore-only work.

    Engagement (1–5) — how consistently the PE showed up

    Deterministic floor computed from coverage (meaningful_days / expected_workdays) and the longest gap between active days. The LLM may adjust the floor by at most ±1 with reasoning. The denominator is already clamped — it excludes weekends, PTO, future days, pre-domain-start days, and is scaled by part-time weight.

    • 5 — 100% coverage. Meaningful work on every expected workday, no gaps. A single missed expected day disqualifies a 5.
    • 4 — ≥80% coverage. Longest gap ≤ 1 day, active on at least expected − 1 days.
    • 3 — ~60–80% coverage. Gaps ≤ 2 days. Solid but not consistent.
    • 2 — ~40–60% coverage or one longer gap. Partial engagement.
    • 1 — <40% coverage, or only trivial chore commits sprinkled across the week.
    Pure-chore days (categorized as "chore" by the LLM) do not count as meaningful days. Showing up to bump dependencies every day still scores 1.
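Under the bands above, the deterministic floor might look like the following. This is an illustrative sketch: the exact boundary handling for the "~60–80%" and "~40–60%" bands is approximate in the text, so the thresholds here are assumptions:

```python
def engagement_floor(coverage, longest_gap, expected_days, active_days):
    """Deterministic Engagement floor from the bands above (LLM may adjust ±1).

    coverage = meaningful_days / expected_workdays, with the denominator
    already clamped (weekends, PTO, future days, part-time weight).
    Boundary behavior for the approximate bands is assumed.
    """
    if coverage >= 1.0 and longest_gap == 0:
        return 5  # meaningful work on every expected workday, no gaps
    if coverage >= 0.8 and longest_gap <= 1 and active_days >= expected_days - 1:
        return 4
    if coverage >= 0.6 and longest_gap <= 2:
        return 3
    if coverage >= 0.4:
        return 2
    return 1
```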

    Overall = min(Meaningfulness, Engagement)

    • Meaningfulness 5, Engagement 5 → Overall 5. Best possible week.
    • Meaningfulness 5, Engagement 1 → Overall 1. Big-burst week with most days empty — the burst doesn't rescue the week.
    • Meaningfulness 1, Engagement 5 → Overall 1. Showed up every day but only on trivia — consistency doesn't rescue the work.
    • Meaningfulness 0 (no commits) → Overall 0. Engagement is irrelevant when nothing was authored.
    Overall and Justification answer different questions. Overall grades the quality of the week on a 1–5 scale. Justification answers a binary yes / no: did they earn their seat? A PE can score Overall 3 and still be Not justified (e.g. concentrated burst), or score Overall 2 and still be Justified (a quiet but consistent week of small but real fixes).
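The weakest-link rule itself reduces to a one-liner; a sketch, with the no-commits case modeled as Meaningfulness 0 per the table above:

```python
def overall_score(meaningfulness, engagement):
    """Overall = min(Meaningfulness, Engagement).

    A week with no authored commits is modeled as Meaningfulness 0,
    which forces Overall to 0 regardless of Engagement.
    """
    if meaningfulness == 0:
        return 0
    return min(meaningfulness, engagement)
```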

    Domain throughput — is the pipeline moving?

    Domain throughput is a domain-level signal independent of any individual PE. It describes what landed across the pipeline (In Review → Testing → Staging → Production) this week, including release events. A domain can be healthy while an individual PE on it is Not justified, and vice versa.

Healthy

    Normal flow: production output landed this week, or there's clear progression of substantive work moving stage-to-stage. The domain is shipping.

Stuck

    Substantive work piled up in Testing or Staging with nothing reaching Production. Work is accumulating but not crossing the finish line — usually a release-cadence or QA-bottleneck signal.

No production

    There is Testing or Staging activity this week but zero commits made it to Production. Lighter than stuck — the pipeline is alive, just not delivering.

Idle

    Little or no activity at any stage this week. The domain didn't visibly move.

    Domain throughput is judged on shipped pipeline activity, including release events — so a release of older work counts toward keeping a domain healthy, even if no PE on it is justified for the current week.
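A simplified classifier for the four states might look like this. It is illustrative only: it collapses "clear stage-to-stage progression" into the production check, and the input names are assumptions:

```python
def domain_throughput(production_commits, pre_production_commits, substantive_backlog):
    """Classify a domain's week into the four throughput states above.

    Sketch only — the real signal also considers stage-to-stage progression
    and release events, which this simplified version ignores.
    """
    if production_commits > 0:
        return "healthy"        # production output landed this week
    if pre_production_commits > 0:
        # pipeline is alive but nothing crossed the finish line
        return "stuck" if substantive_backlog else "no production"
    return "idle"               # little or no activity at any stage
```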

    Users & access

    Owners can grant per-app roles. Contributors only see their own engineer record; admins manage settings; super-admins can also trigger analyses.

    Dashboard Agent Deep Dive Settings
    Measurement model. Weekly per-agent score = min(Productivity, Quality, Engagement) — the same "weakest-link" model used for Product Engineers. Productivity = throughput (tickets resolved, public comments, worklog hours, resolution & first-response times). Quality = compliance % across the 18 checks in the HD Compliance Guide. Engagement = active days across expected workdays (BambooHR-aware).
    Data source: Jira project = "Help Desk" (backend sync not yet wired up — the cards below show the exact metrics the dashboard will populate from Jira).
    Active Agents
    Tickets Resolved (this week)
    Open Tickets (team)
    Team Compliance
    Median First Response
    Agents At Risk
    Tickets Resolved by Agent
    Team Compliance Trend (last 8 weeks)

    ⚠ Team-level alerts (from HD Compliance Guide)


      Per-Agent This Week

  Agent · Resolved / Open · First Response · Avg Resolution · Productivity · Quality · Engagement · Overall · Justified?

      What this deep-dive will show per agent

      1 · Productivity (1–5)

      • Tickets resolved — count of Solved + Closed transitions with this agent as assignee
      • Tickets touched — unique assigned tickets with ≥ 1 comment or status transition by agent
  • Public comments sent — jsdPublic: true comments authored this week
      • Worklog hours — sum of worklog entries
      • Avg resolution time — business hours from To Do → Solved
      • First response time — median & p95 business hours from ticket created → first agent public comment
      Normalized against team median; LLM assigns 1–5.

      2 · Quality (1–5) — compliance from the 18 checks

      • Compliance % = checks passed / checks run across agent's tickets
      • Timeliness — checks 7, 8, 9, 10, 14 (response SLA, stale Pending-on-Client, Solved age)
      • Documentation — checks 11, 17, 18 (resolution summary, escalation rationale, final public notification)
      • Hygiene — checks 1–6, 13, 16 (assignment, correct type/status/priority, escalation linkage & ETA)
  Maps to 1–5: ≥95% → 5, 90–95% → 4, 80–90% → 3, 65–80% → 2, <65% → 1.
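The band mapping is a straight threshold ladder. A sketch — the text leaves the exact boundary values ambiguous, so each boundary is assumed to fall in the higher band:

```python
def quality_score(compliance_pct):
    """Map compliance % (checks passed / checks run) to the 1–5 Quality band.

    Boundary values (exactly 95, 90, 80, 65) are assumed to round up
    into the higher band; the source text does not specify this.
    """
    if compliance_pct >= 95:
        return 5
    if compliance_pct >= 90:
        return 4
    if compliance_pct >= 80:
        return 3
    if compliance_pct >= 65:
        return 2
    return 1
```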

      3 · Engagement (1–5)

      • Active days — business days with ≥ 1 comment or transition
      • Meaningful days — days with ≥ 1 ticket solved OR a substantive public response
      • Expected workdays — from BambooHR (already wired), excludes PTO & holidays
      • Longest gap — longest run of consecutive business days with zero activity
      • Flag — late start / mid-week gap / burst / balanced

      4 · Overall & Justified?

      • Overall = min(Productivity, Quality, Engagement) (same weakest-link rule as PE)
      • Justified? Overall ≥ 3 and per-ticket violation rate ≤ 0.5
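Combined, the agent-side rule can be sketched as follows (names are illustrative; the violation-rate input is per-ticket, as stated above):

```python
def agent_justified(productivity, quality, engagement, violation_rate):
    """Agent weekly verdict: Overall = min of the three sub-scores;
    Justified when Overall >= 3 and per-ticket violation rate <= 0.5.
    Sketch only — names and signature are assumptions."""
    overall = min(productivity, quality, engagement)
    return overall >= 3 and violation_rate <= 0.5
```

Note the same weakest-link behavior as on the PE side: a single sub-score of 2 fails the week no matter how strong the other two are.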

      5 · Drill-downs (populated from Jira once wired)

      • Tickets owned this week (link list, status + last-activity)
      • Violations with specifics (which check failed, on which ticket, why)
      • Daily activity spark (comments + transitions)
      • 8-week trend: Resolved · Compliance % · Engagement score
      HD Data Source
      Jira project: Help Desk (HD)
      Cloud ID: 4cf5f3dc-…2612
      Ignored: HD-5102, HD-5010

      Backend sync not yet implemented. This panel will let you manage the agent roster, tune the compliance check toggles, and trigger a weekly sync from Jira.

      Planned compliance checks (from HD Team Compliance Guide)

      1. Unassigned "To Do" tickets
      2. Correct issue type (Access, Billing, Course, Cybersecurity, Feature, Integrations, Tech, Bugs, Other)
      3. Correct status (To Do / In Progress / Pending on Client / Escalated / Escalation Done / Solved / Closed)
      4. Assignee on active tickets
      5. Escalated tickets must have linked engineering issue (cf[11068])
      6. Escalated tickets must have valid ETA (cf[11608])
      7. Public response within 1 business day
      8. "Pending on Client" > 10 business days — needs follow-up
      9. "Pending on Client" > 15 business days — should be Solved
      10. "In Progress" > 1 business day — needs follow-up
      11. Solved tickets must be documented (cf[11740])
      12. Agent time logging (worklog)
      13. Escalated ticket with Done linked issue — should move to Escalation Done
      14. Stale "Solved" (> 5 business days)
      15. Open ticket distribution balance (team-wide)
      16. Priority field set and correct (HD Urgent / HD Normal)
      17. Escalation internal note required
      18. Solved ticket must have final public notification