Warning
Pre-1.0 software: Lampa is currently in the early stages of development.
🚧 🚧 🚧
Expect frequent breaking changes (especially in CLI arguments), bugs, suboptimal code, and limited functionality.
But if you're feeling adventurous, feel free to try it; your feedback is highly appreciated! Please report any issues you encounter, and feel free to share your ideas in the Discussions tab, though I can't guarantee immediate prioritization.
Lampa is a small tool for comparing two releases: it generates overview reports that help you detect changes to third-party dependencies included in the build.
Download the latest version from the Releases page,

or use mise:

```shell
mise use -g ubi:dector/lampa
```

or, on Linux/macOS, use Homebrew:

```shell
brew tap dector/lampa
brew install lampa
```

Since Gradle is used to build Android projects, the only runtime dependency is Java.
All commands are executed inside the root folder of the Android project (unless you explicitly specify the path to the project).
Remember that you can always use `lampa help` if you forget something.
You will need this report later to build the comparative HTML report.

```shell
lampa collect
```

If the program finishes successfully, you can find the report file `report.lampa.json` in the project folder.

Be aware that, by default, the program does not overwrite an existing report. You can opt in to this behavior explicitly by adding the `--overwrite` flag:

```shell
lampa collect --overwrite
```

Other useful flags are:
- `--project <project-dir>` - specify the path to the project root explicitly.
- `--to-dir <out-dir>` - change the location of the report(s).
- `--variant <gradle-variant>` - specify the custom build variant that you use in Gradle. Might be useful if you have flavors etc.
- `--format html` / `--format json,html` - if you need only the HTML report, or both.
- `--file-name <report-file-name>` - customize the generated report filename (without extension).
```shell
lampa collect --format html
```

First, generate a JSON report for release 1 (e.g. `1.json`).
Then, generate a JSON report for release 2 (e.g. `2.json`).
Finally, generate the comparative report with `lampa compare`.
For example:
```shell
git checkout v0.28.0
lampa collect --to-dir build --file-name v0.28.0

git checkout v0.28.1
lampa collect --to-dir build --file-name v0.28.1

lampa compare build/v0.28.0.json build/v0.28.1.json build/diff.html
```

If you are working with local server tooling, you can verify that a running server responds on the control port:
```shell
lampa proxy ping
```

To check a custom port explicitly:

```shell
lampa proxy ping --port 46899
```

You can configure endpoint processors via the control API.
Supported kinds: `static` | `seq` | `js` (`static` by default).
Static example:
```shell
lampa proxy set \
  --endpoint /example \
  --response.status 200 \
  --response.content json \
  --response.body '{"ok":true}'
```

Minimal form:

```shell
lampa proxy set --endpoint /example --response.body 'hello'
```

Static optional repeatable headers:
```shell
lampa proxy set \
  --endpoint /example \
  --response.body 'hello' \
  --response.header 'X-Debug:1' \
  --response.header 'Cache-Control:no-store'
```

Static content presets:
- `json` -> `application/json`
- `text` -> `text/plain`
- `html` -> `text/html`
- `raw` -> no automatic `Content-Type`
If you pass --response.header 'Content-Type:...', it overrides the preset.
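To make the precedence concrete, here is a small Python sketch of the preset-to-`Content-Type` mapping and the header override described above. The names and structure are illustrative only, not Lampa's actual internals:

```python
# Hypothetical sketch of the preset resolution above; not Lampa's real code.
PRESETS = {
    "json": "application/json",
    "text": "text/plain",
    "html": "text/html",
    "raw": None,  # 'raw' sets no automatic Content-Type
}

def resolve_headers(content: str, headers: list[str]) -> dict[str, str]:
    """Start from the preset, then apply explicit --response.header values,
    so an explicit 'Content-Type:...' header overrides the preset."""
    result = {}
    preset = PRESETS.get(content)
    if preset is not None:
        result["Content-Type"] = preset
    for raw in headers:
        name, _, value = raw.partition(":")
        result[name.strip()] = value.strip()
    return result

print(resolve_headers("json", []))                         # {'Content-Type': 'application/json'}
print(resolve_headers("json", ["Content-Type:text/csv"]))  # {'Content-Type': 'text/csv'}
```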
Sequence processor (`--kind seq`) uses indexed per-step flags:

- `--response.body-N` (required per step)
- `--response.status-N` (optional, default `200`)
- `--response.content-N` (optional, default `text`)
- `--response.header-N` (optional, repeatable)

`N` is 1-based and must be contiguous (1, 2, 3, ...).
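As an illustration of how such indexed flags can be grouped into per-step configurations, here is a Python sketch that applies the defaults listed above. It mirrors the flag shape only; it is not Lampa's actual parser:

```python
import re

# Illustrative sketch: group --response.*-N flags into sequence steps,
# applying the documented defaults (status 200, content "text").
FLAG_RE = re.compile(r"--response\.(body|status|content|header)-(\d+)$")

def group_steps(args: list[str]) -> list[dict]:
    steps: dict[int, dict] = {}
    it = iter(args)
    for flag in it:
        m = FLAG_RE.fullmatch(flag)
        if not m:
            continue
        field, n = m.group(1), int(m.group(2))
        value = next(it)  # the flag's argument
        step = steps.setdefault(n, {"status": "200", "content": "text", "headers": []})
        if field == "header":
            step["headers"].append(value)
        else:
            step[field] = value
    # N is 1-based and must be contiguous (1, 2, 3, ...)
    if sorted(steps) != list(range(1, len(steps) + 1)):
        raise ValueError("step indices must be contiguous, starting at 1")
    return [steps[n] for n in sorted(steps)]

steps = group_steps([
    "--response.body-1", "temporary error",
    "--response.content-2", "json",
    "--response.body-2", '{"ok":true}',
])
```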
Minimal sequence:
```shell
lampa proxy set \
  --kind seq \
  --endpoint /flaky \
  --response.body-1 'temporary error' \
  --response.body-2 'recovered'
```

Practical full sequence:
```shell
lampa proxy set \
  --kind seq \
  --endpoint /flaky \
  --response.status-1 500 \
  --response.content-1 text \
  --response.body-1 'fail once' \
  --response.header-1 'X-Step:1' \
  --response.status-2 200 \
  --response.content-2 json \
  --response.body-2 '{"ok":true}' \
  --response.header-2 'Cache-Control:no-store'
```

JS processor example:
```shell
lampa proxy set \
  --kind js \
  --endpoint /dynamic \
  --script 'function handle(req){ return Response.json({ok:true, path:req.url}); }'
```

From file:

```shell
lampa proxy set --kind js --endpoint /dynamic --script-file ./handler.js
```

Common mistakes:
- Missing `--response.body-N` for a declared step.
- Index gaps, such as `--response.body-1` and `--response.body-3` without step 2.
- Invalid step suffixes (`-0`, negative, non-numeric).
- Malformed headers (must be `Name:Value`).
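These checks can be sketched in a few lines of Python. The sketch is illustrative only; Lampa's actual validation and error messages may differ:

```python
import re

# A valid step flag ends in -N where N is a positive integer (1-based);
# -0, negative, and non-numeric suffixes are rejected.
STEP_FLAG = re.compile(r"--response\.(?:body|status|content|header)-([1-9]\d*)$")

def flag_step(flag: str):
    """Return the step index for a valid indexed flag, else None."""
    m = STEP_FLAG.match(flag)
    return int(m.group(1)) if m else None

def valid_header(header: str) -> bool:
    """Headers must have the 'Name:Value' shape."""
    name, sep, _value = header.partition(":")
    return bool(sep) and bool(name.strip())

def contiguous(indices: set[int]) -> bool:
    """Declared steps must be exactly 1..len(indices), with no gaps."""
    return sorted(indices) == list(range(1, len(indices) + 1))
```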
Set default fallback passthrough processor:
```shell
lampa proxy set-default \
  --kind pass \
  --server http://localhost:3000
```

`set-default` accepts only `--kind pass` and requires the full upstream URL in `--server`.
Note: runtime configuration is in-memory only and is not persisted across server restarts.
For `seq` processors, the control API also supports introspection/reset:

```
GET /api/v0/proc/sequence?endpoint=/your-endpoint
POST /api/v0/proc/sequence/reset
```
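Conceptually, each `seq` endpoint keeps a step index that the reset endpoint rewinds. The Python sketch below illustrates that idea only; the exact processor semantics (for example, what is served once the sequence is exhausted) are assumptions here, not Lampa's documented behavior:

```python
# Conceptual model of per-endpoint sequence state; illustrative only.
class SequenceState:
    def __init__(self, bodies: list[str]):
        self.bodies = bodies
        self.index = 0  # 0-based next step, matching the reset payload's "index"

    def next_response(self) -> str:
        # Assumption for this sketch: once exhausted, keep serving the last step.
        body = self.bodies[min(self.index, len(self.bodies) - 1)]
        if self.index < len(self.bodies):
            self.index += 1
        return body

    def reset(self, index: int = 0) -> None:
        self.index = index

seq = SequenceState(["temporary error", "recovered"])
```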
Reset payload example:

```json
{"endpoint":"/your-endpoint","index":0}
```

You can fetch the latest in-memory proxy logs via the control CLI:
```shell
lampa proxy logs get-all -n 20
```

Custom control port:
```shell
lampa proxy logs get-all --port 46899 -n 50
```

Control API endpoint:
```
GET /api/v0/proxy/logs?n=N
```
Notes:
- logs are kept in memory only
- total log memory is capped at 30MB
- oldest entries are evicted first when cap is exceeded
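The eviction policy can be pictured as a byte-capped queue. The Python sketch below is conceptual: the 30MB figure comes from the notes above, and everything else (names, sizing by encoded length) is illustrative, not Lampa's real code:

```python
from collections import deque

# Conceptual sketch of the capped in-memory log store: when the byte cap
# is exceeded, the oldest entries are evicted first.
class CappedLog:
    def __init__(self, max_bytes: int = 30 * 1024 * 1024):
        self.max_bytes = max_bytes
        self.entries: deque[str] = deque()
        self.size = 0

    def append(self, entry: str) -> None:
        self.entries.append(entry)
        self.size += len(entry.encode())
        while self.size > self.max_bytes and len(self.entries) > 1:
            self.size -= len(self.entries.popleft().encode())  # evict oldest

    def last(self, n: int) -> list[str]:
        """Roughly what 'lampa proxy logs get-all -n N' returns: the N newest."""
        return list(self.entries)[-n:]

log = CappedLog(max_bytes=10)  # tiny cap for demonstration
for entry in ["aaaa", "bbbb", "cccc"]:
    log.append(entry)
```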
GitHub Action:
`dector/run-lampa@v1`
You can use this GitHub Action to integrate Lampa into your CI/CD pipeline.
See detailed instructions on GitHub Marketplace.
Production-ready example workflow
I will add this section later. For now, feel free to contact me directly or open a new discussion.
See CHANGELOG.md for a detailed history of changes.
The project is distributed under the MIT License.
The Protobuf schema from AOSP is covered by the Apache 2.0 license.

