Root Cause
_render_model_table (called by render_summary) iterates session.model_metrics and displays mm.requests.count and mm.requests.cost directly:
```python
for model_name in sorted(merged):
    mm = merged[model_name]
    table.add_row(
        model_name,
        str(mm.requests.count),                    # ← always 0 for pure-active sessions
        str(mm.requests.cost),                     # ← always 0 for pure-active sessions
        format_tokens(mm.usage.inputTokens),       # ← 0 (not tracked for active)
        format_tokens(mm.usage.outputTokens),      # ← real value
        format_tokens(mm.usage.cacheReadTokens),   # ← 0 (not tracked for active)
        format_tokens(mm.usage.cacheWriteTokens),  # ← 0 (not tracked for active)
    )
```
For pure-active sessions (no shutdown, has_shutdown_metrics=False), _build_active_summary creates a synthetic model_metrics entry as a placeholder:
```python
active_metrics[model] = ModelMetrics(
    usage=TokenUsage(outputTokens=fp.total_output_tokens),
    # requests defaults to RequestMetrics(count=0, cost=0)
    # inputTokens / cache fields default to 0
)
```
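Why the placeholder reads as zero can be seen from a minimal reconstruction of these types (field names are taken from the snippets in this report; the exact dataclass layout is an assumption):

```python
from dataclasses import dataclass, field

@dataclass
class RequestMetrics:
    count: int = 0
    cost: float = 0.0

@dataclass
class TokenUsage:
    inputTokens: int = 0
    outputTokens: int = 0
    cacheReadTokens: int = 0
    cacheWriteTokens: int = 0

@dataclass
class ModelMetrics:
    usage: TokenUsage = field(default_factory=TokenUsage)
    requests: RequestMetrics = field(default_factory=RequestMetrics)

# A pure-active placeholder: only outputTokens is known.
mm = ModelMetrics(usage=TokenUsage(outputTokens=1200))
# mm.requests.count and mm.usage.inputTokens are 0 here, but that 0
# means "not yet known" — the table nevertheless renders it literally.
```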
The requests and token fields other than outputTokens are unknown for active sessions — they will only be populated by a session.shutdown event when the session ends. However, _render_model_table shows them as 0, which a user reads as "zero API requests" and "zero cache usage" — factually incorrect for a session still in progress.
Contrast with render_cost_view
render_cost_view already handles this correctly by using the show_requests flag:
```python
show_requests = s.has_shutdown_metrics or not s.is_active
requests_display = str(mm.requests.count) if show_requests else "—"
premium_display = str(mm.requests.cost) if show_requests else "—"
```
render_summary's _render_model_table makes no such distinction, creating an inconsistency between copilot-usage summary and copilot-usage cost for active sessions.
Exact scenario triggering the bug
A session that is still running with a known model and at least one completed assistant message:
- copilot-usage summary → Per-Model Breakdown table shows: model | 0 | 0 | 0 | 1.2K | 0 | 0 (misleadingly implies 0 requests)
- copilot-usage cost → shows "—" for Requests and Premium Cost (correct)
Impact
Users inspecting copilot-usage summary during an active session see 0 Requests / 0 Premium Cost in the model breakdown, which they may misinterpret as "this model made no premium API calls" — when in fact the data is simply not yet available.
Fix
Extend _aggregate_model_metrics (or create a display-level helper) to accept an is_active / has_shutdown_metrics flag per session so that _render_model_table can substitute "—" for request-count and cost columns when rendering pure-active session entries. Alternatively, separate the display logic so that render_summary calls a variant of _render_model_table that suppresses the Requests and Premium Cost columns (or marks them as "—") for sessions with has_shutdown_metrics=False.
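One possible shape for the first option, as a minimal sketch. The row-building function, the _cell helper, and its parameters are illustrative names, not the existing API; only the show_requests condition mirrors what render_cost_view already does:

```python
def _cell(value, known: bool) -> str:
    """Render a numeric cell, or an em dash when the value is not yet known."""
    return str(value) if known else "—"

def build_model_row(model_name, mm, has_shutdown_metrics: bool, is_active: bool):
    """Build the request-derived cells of a Per-Model Breakdown row.

    Hypothetical helper: reuses the same condition render_cost_view
    applies, so pure-active sessions show "—" instead of a literal 0.
    """
    show_requests = has_shutdown_metrics or not is_active
    return [
        model_name,
        _cell(mm.requests.count, show_requests),
        _cell(mm.requests.cost, show_requests),
    ]
```

Routing the flag through the aggregation step keeps the summary and cost views consistent without duplicating the condition in two renderers.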
Testing Requirement
Add a unit test in tests/copilot_usage/test_report.py that calls render_summary with a single pure-active session (is_active=True, has_shutdown_metrics=False, known model, non-zero output tokens) and asserts:
- The Per-Model Breakdown table contains the model name and the correct output-token value.
- The Requests and Premium Cost cells show "—" (not "0").
Generated by Code Health Analysis