
[aw][code health] _render_model_table in render_summary shows misleading zeros for Requests / Premium Cost for pure-active s [Content truncated due to length] #1191

@microsasa

Description


Root Cause

_render_model_table (called by render_summary) iterates session.model_metrics and displays mm.requests.count and mm.requests.cost directly:

for model_name in sorted(merged):
    mm = merged[model_name]
    table.add_row(
        model_name,
        str(mm.requests.count),   # ← always 0 for pure-active sessions
        str(mm.requests.cost),    # ← always 0 for pure-active sessions
        format_tokens(mm.usage.inputTokens),     # ← 0 (not tracked for active)
        format_tokens(mm.usage.outputTokens),    # ← real value
        format_tokens(mm.usage.cacheReadTokens), # ← 0 (not tracked for active)
        format_tokens(mm.usage.cacheWriteTokens),# ← 0 (not tracked for active)
    )

For pure-active sessions (no shutdown, has_shutdown_metrics=False), _build_active_summary creates a synthetic model_metrics entry as a placeholder:

active_metrics[model] = ModelMetrics(
    usage=TokenUsage(outputTokens=fp.total_output_tokens),
    # requests defaults to RequestMetrics(count=0, cost=0)
    # inputTokens / cache fields default to 0
)
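The consequence of that placeholder can be sketched with minimal stand-in dataclasses (hypothetical shapes mirroring the fields named in this issue, not the real copilot-usage types): every field the active session cannot know yet defaults to zero, and those zeros flow straight into the table.

```python
from dataclasses import dataclass, field

# Stand-ins for the structures described above (field names taken from the
# issue; the real definitions live in copilot-usage).
@dataclass
class RequestMetrics:
    count: int = 0
    cost: float = 0.0

@dataclass
class TokenUsage:
    inputTokens: int = 0
    outputTokens: int = 0
    cacheReadTokens: int = 0
    cacheWriteTokens: int = 0

@dataclass
class ModelMetrics:
    usage: TokenUsage = field(default_factory=TokenUsage)
    requests: RequestMetrics = field(default_factory=RequestMetrics)

# A pure-active session only knows its output tokens so far; everything
# else silently defaults to 0 and is rendered as "0", not "—".
placeholder = ModelMetrics(usage=TokenUsage(outputTokens=1200))
print(placeholder.requests.count, placeholder.requests.cost)  # 0 0.0
```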

The requests and token fields other than outputTokens are unknown for active sessions — they will only be populated by a session.shutdown event when the session ends. However, _render_model_table shows them as 0, which a user reads as "zero API requests" and "zero cache usage" — factually incorrect for a session still in progress.

Contrast with render_cost_view

render_cost_view already handles this correctly by using the show_requests flag:

show_requests = s.has_shutdown_metrics or not s.is_active
requests_display = str(mm.requests.count) if show_requests else "—"
premium_display  = str(mm.requests.cost)  if show_requests else "—"

render_summary's _render_model_table makes no such distinction, creating an inconsistency between copilot-usage summary and copilot-usage cost for active sessions.

Exact scenario triggering the bug

A session that is still running with a known model and at least one completed assistant message:

  • copilot-usage summary → Per-Model Breakdown table shows:
    model | 0 | 0 | 0 | 1.2K | 0 | 0 (misleadingly implies 0 requests)
  • copilot-usage cost → shows "—" for Requests and Premium Cost (correct)

Impact

Users inspecting copilot-usage summary during an active session see 0 Requests / 0 Premium Cost in the model breakdown, which they may misinterpret as "this model made no premium API calls" — when in fact the data is simply not yet available.

Fix

Extend _aggregate_model_metrics (or create a display-level helper) to accept an is_active / has_shutdown_metrics flag per session so that _render_model_table can substitute "—" for request-count and cost columns when rendering pure-active session entries. Alternatively, separate the display logic so that render_summary calls a variant of _render_model_table that suppresses the Requests and Premium Cost columns (or marks them as "—") for sessions with has_shutdown_metrics=False.
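A display-level helper along the lines proposed above could look like the following sketch (the helper name and keyword arguments are hypothetical; the show_requests expression is taken verbatim from render_cost_view):

```python
from types import SimpleNamespace

def format_request_cells(requests, *, is_active, has_shutdown_metrics):
    # Mirror render_cost_view's gating: request count/cost are only known
    # once a session.shutdown event has populated them.
    show_requests = has_shutdown_metrics or not is_active
    if show_requests:
        return str(requests.count), str(requests.cost)
    return "—", "—"

req = SimpleNamespace(count=0, cost=0)
# Pure-active session: data not yet available, so mask it.
print(format_request_cells(req, is_active=True, has_shutdown_metrics=False))
# Ended (or shutdown-reported) session: zeros are real, so show them.
print(format_request_cells(req, is_active=False, has_shutdown_metrics=True))
```

_render_model_table would then call this helper per row instead of stringifying mm.requests.count and mm.requests.cost directly, which keeps render_summary and render_cost_view consistent.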

Testing Requirement

Add a unit test in tests/copilot_usage/test_report.py that calls render_summary with a single pure-active session (is_active=True, has_shutdown_metrics=False, known model, non-zero output tokens) and asserts:

  1. The Per-Model Breakdown table contains the model name and the correct output-token value.
  2. The Requests and Premium Cost cells show "—" (not "0").
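The two assertions can be sketched against a stubbed row builder (all names, including the model string, are hypothetical; the real test would exercise render_summary from the report module and inspect the rendered table):

```python
def build_model_row(model, mm, *, is_active, has_shutdown_metrics):
    # Stub of the fixed row logic described in this issue, not the real code.
    show = has_shutdown_metrics or not is_active
    req = str(mm["requests"]["count"]) if show else "—"
    cost = str(mm["requests"]["cost"]) if show else "—"
    return [model, req, cost, str(mm["usage"]["outputTokens"])]

def test_pure_active_session_masks_request_columns():
    mm = {"requests": {"count": 0, "cost": 0}, "usage": {"outputTokens": 1200}}
    row = build_model_row("some-model", mm, is_active=True, has_shutdown_metrics=False)
    assert row[0] == "some-model" and row[3] == "1200"  # model + output tokens present
    assert row[1] == "—" and row[2] == "—"              # masked, not "0"

test_pure_active_session_masks_request_columns()
```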

Generated by Code Health Analysis


Labels

aw (Created by agentic workflow) · aw-dispatched (Issue has been dispatched to implementer) · code-health (Code cleanup and maintenance)
