
[Good First Issue] Add SHAP explainability to segmentation predictions #22

@Oshgig

Description


Overview

ClimateVision currently produces segmentation masks with no explainability. NGOs and government agencies cannot trust black-box predictions for high-stakes environmental decisions. We need pixel-level attribution showing which spectral bands and spatial regions drove each classification.

Scope

  • Integrate SHAP (DeepExplainer) into the U-Net forward pass in src/climatevision/inference/pipeline.py
  • Create src/climatevision/governance/explainability.py with:
    • explain_prediction(model, image, analysis_type) → returns SHAP values + heatmap (see the sketch after this list)
    • Support for all 3 analysis types (deforestation, ice_melting, flooding)
    • Band-level attribution (e.g., "NIR contributed 47% to this forest pixel classification")
  • Add a GET /api/explain/{run_id} endpoint returning SHAP values as GeoJSON/PNG (a route sketch follows below)
  • Create notebooks/06_explainability.ipynb with visual examples
  • Add pytest tests verifying that explainability output shapes match prediction shapes (a test sketch follows the acceptance criteria)
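
Below is a minimal sketch of what explain_prediction could look like, assuming a PyTorch U-Net whose forward pass returns per-pixel class logits shaped (N, C, H, W). shap.DeepExplainer attributes a small set of scalar outputs, so the wrapper reduces each class map to its mean logit before explaining. The per-analysis class names and the background argument (a small batch of representative inputs used as the SHAP reference) are assumptions, not existing project code, and the return shape of shap_values varies across SHAP versions.

```python
# Sketch for src/climatevision/governance/explainability.py.
# Assumes a PyTorch U-Net returning logits of shape (N, C, H, W);
# class orderings below are hypothetical.
import numpy as np
import shap
import torch

ANALYSIS_CLASSES = {
    "deforestation": ["forest", "cleared"],
    "ice_melting": ["ice", "melt"],
    "flooding": ["dry", "flooded"],
}

class _ScalarizedUNet(torch.nn.Module):
    """Reduce each class's logit map to a scalar (mean over pixels) so
    DeepExplainer sees a tractable (N, C) output instead of (N, C, H, W)."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x).mean(dim=(2, 3))

def explain_prediction(model, image, analysis_type, background):
    """Return per-pixel/per-band SHAP values and band-level attribution.

    image: (1, bands, H, W) tensor; background: (K, bands, H, W) tensor
    of representative inputs used as the SHAP reference distribution.
    """
    wrapped = _ScalarizedUNet(model).eval()
    explainer = shap.DeepExplainer(wrapped, background)
    # Older SHAP versions return a list with one (1, bands, H, W) array
    # per class; newer versions may return a single stacked array.
    shap_values = explainer.shap_values(image)

    # Band-level attribution: each band's share of the total |SHAP| mass,
    # e.g. {"forest": array([0.47, ...])} meaning NIR contributed 47%.
    band_attribution = {}
    for cls_name, sv in zip(ANALYSIS_CLASSES[analysis_type], shap_values):
        band_mass = np.abs(sv).sum(axis=(0, 2, 3))
        band_attribution[cls_name] = band_mass / band_mass.sum()
    return shap_values, band_attribution
```

Mean-logit scalarization is one option among several; explaining individual pixels or super-pixel regions is also possible but much more expensive, since DeepExplainer's cost scales with the number of outputs explained.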
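And one possible shape for the explain endpoint, assuming the API is FastAPI (the issue doesn't name the framework). load_run and encode_heatmap_png are hypothetical helpers for fetching a cached prediction run and rasterizing SHAP values into a PNG overlay.

```python
# Hypothetical route sketch; the run store, framework choice, and helper
# functions are placeholders, not existing ClimateVision code.
import base64

from fastapi import APIRouter, HTTPException

from climatevision.governance.explainability import explain_prediction

router = APIRouter()

@router.get("/api/explain/{run_id}")
def explain_run(run_id: str):
    run = load_run(run_id)  # hypothetical: fetch model, image, metadata
    if run is None:
        raise HTTPException(status_code=404, detail="unknown run_id")
    shap_values, band_attribution = explain_prediction(
        run.model, run.image, run.analysis_type, run.background
    )
    png_bytes = encode_heatmap_png(shap_values)  # hypothetical rasterizer
    return {
        "run_id": run_id,
        "band_attribution": {
            cls: shares.tolist() for cls, shares in band_attribution.items()
        },
        "heatmap_png": base64.b64encode(png_bytes).decode("ascii"),
        # A GeoJSON variant could georeference the heatmap instead.
    }
```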

Acceptance Criteria

  • Every prediction can return a SHAP heatmap overlay
  • Band attribution is returned in API response
  • Notebook demonstrates explainability across all 3 analysis types
  • Tests pass in CI
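
A sketch of the shape-checking test from the scope list, using the hypothetical explain_prediction signature above; the fixture name, band count, and tile size are invented for illustration.

```python
# Hypothetical tests/test_explainability.py; shapes and fixtures are
# illustrative only.
import numpy as np
import torch

from climatevision.governance.explainability import explain_prediction

def test_shap_values_match_input_shape(unet_model):  # hypothetical fixture
    image = torch.randn(1, 6, 64, 64)       # (N, bands, H, W) dummy tile
    background = torch.randn(4, 6, 64, 64)  # small SHAP reference batch
    shap_values, band_attribution = explain_prediction(
        unet_model, image, "deforestation", background
    )
    # One attribution map per class, each matching the input shape
    # (and therefore the H x W extent of the prediction mask).
    for sv in shap_values:
        assert sv.shape == tuple(image.shape)
    # Band shares are normalized and sum to 1 for every class.
    for shares in band_attribution.values():
        assert np.isclose(shares.sum(), 1.0)
```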

Resources

  • src/climatevision/models/unet.py — U-Net architecture
  • src/climatevision/inference/pipeline.py — inference flow
  • SHAP library docs: https://shap.readthedocs.io/

Difficulty: Intermediate
Owner: Linda Oraegbunam (@obielin)
Labels: good first issue, governance, explainability, backend
