This library gives you ready-to-use prompts for every tool in the Supermodel ecosystem. Copy any prompt into your AI coding assistant, CI pipeline, or terminal to get started immediately.

MCP Server

The Supermodel MCP server gives AI coding agents instant, sub-second access to your codebase’s call graph, dependency graph, and domain structure — without loading files into context.
I just cloned this repo and need to get up to speed. Use the Supermodel MCP tools to give me an architectural overview: what are the major domains, which files are the most depended-on, and what are the key entry points?
Use Supermodel to trace the full call chain starting from the `handleRequest` function. Show me every function it calls, and every function those call, up to 3 levels deep.
Before I rename or change the signature of `parseConfig`, use Supermodel to find every function that calls it so I know the full blast radius of this change.
Use Supermodel's domain graph tools to show me the subsystem boundaries in this codebase. Which modules are tightly coupled and which are cleanly separated?
I'm planning to refactor the `UserService` class. Use Supermodel to map everything that depends on it — direct callers, transitive dependents, and which domains it crosses — so we can plan the rollout safely.
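The call-chain and blast-radius prompts above all reduce to a depth-limited traversal over a call graph. A minimal sketch of that traversal (the graph data below is made up for illustration; in practice the edges come back from the MCP tools):

```typescript
// Depth-limited breadth-first walk over a call graph, as in the
// "trace handleRequest up to 3 levels deep" prompt above.
// The edges here are illustrative sample data, not real Supermodel output.
type CallGraph = Record<string, string[]>;

const sampleGraph: CallGraph = {
  handleRequest: ["parseConfig", "authenticate"],
  parseConfig: ["readFile"],
  authenticate: ["lookupUser"],
  lookupUser: ["queryDb"],
};

function trace(graph: CallGraph, root: string, maxDepth: number): string[] {
  const seen = new Set<string>([root]);
  let frontier = [root];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const fn of frontier) {
      for (const callee of graph[fn] ?? []) {
        if (!seen.has(callee)) {
          seen.add(callee);
          next.push(callee);
        }
      }
    }
    frontier = next;
  }
  seen.delete(root); // report callees only, not the entry point itself
  return [...seen];
}

console.log(trace(sampleGraph, "handleRequest", 3));
// → ["parseConfig", "authenticate", "readFile", "lookupUser", "queryDb"]
```

The same walk run over reversed edges gives the "find every caller of `parseConfig`" blast-radius query.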

TypeScript SDK

The TypeScript SDK (@supermodeltools/sdk) wraps the Supermodel API with automatic async polling, ZIP compression, and full type definitions.
Write a Node.js script using @supermodeltools/sdk that zips the current directory, submits it to the Supermodel dependency graph endpoint, polls until the job completes, and writes the result to dependency-graph.json.
Using @supermodeltools/sdk, generate a call graph for the repo at ./my-project. Log progress to the console and handle errors gracefully. Save the final graph as call-graph.json.
Write a script with @supermodeltools/sdk that generates a domain graph for my codebase and prints a summary of all discovered domains and the files that belong to each one.
Using @supermodeltools/sdk, submit my repo to all five Supermodel endpoints (dependency, call, domain, parse, supermodel) in parallel, wait for all to complete, and save each result to its own JSON file in an ./analysis directory.
Write a Node.js script for our GitHub Actions CI that uses @supermodeltools/sdk to generate a supermodel graph of the changed files on every pull request. The SUPERMODEL_API_KEY should come from an environment variable. Exit with code 1 if the API call fails.
Show me how to use @supermodeltools/sdk with an AbortSignal so the graph generation job automatically cancels if it takes longer than 60 seconds.
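The submit-then-poll pattern the SDK automates for you looks roughly like the sketch below. This deliberately does not use the real @supermodeltools/sdk (its exact method names may differ); `fetchStatus` is a stand-in for the HTTP status call and fakes completion on the third poll so the sketch runs offline:

```typescript
// Poll-until-done with cancellation, the pattern behind the SDK prompts
// above. fetchStatus is a fake stand-in for the real status endpoint.
type JobStatus = "pending" | "completed";

let polls = 0;
async function fetchStatus(jobId: string): Promise<JobStatus> {
  polls += 1;
  return polls >= 3 ? "completed" : "pending"; // fake API: done on 3rd poll
}

async function pollUntilDone(
  jobId: string,
  intervalMs: number,
  signal?: AbortSignal,
): Promise<JobStatus> {
  for (;;) {
    signal?.throwIfAborted(); // cancel cleanly when the timeout signal fires
    const status = await fetchStatus(jobId);
    if (status === "completed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// As in the AbortSignal prompt above: give up automatically after 60 seconds.
pollUntilDone("job-123", 10, AbortSignal.timeout(60_000)).then((status) => {
  console.log(status); // prints "completed"
});
```

`AbortSignal.timeout()` is standard Node.js, so the same shape applies whether the SDK accepts a signal directly or you wrap its promise yourself.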

Direct API (curl)

Use these prompts to interact with the Supermodel API directly from your terminal.
Show me the curl command to zip my current directory (excluding .git and node_modules), submit it to the Supermodel dependency graph endpoint with my API key, and save the jobId from the response.
Write a bash loop that polls the Supermodel API with the same idempotency key every 10 seconds until the job status is "completed", then pretty-prints the result with jq.
Give me the curl command to submit a call graph job to the Supermodel API at /v1/graphs/call with my API key and a zipped repo file.
Show me how to use curl to submit my repo to the Supermodel /v1/graphs/parse endpoint and retrieve the structured AST output.
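The polling loop the second prompt asks for has the shape sketched below. The real loop would replace the fake status check with a curl call plus jq; the header names and job-status path in the comment are assumptions, not confirmed API, so verify them against the Supermodel API reference:

```shell
# Shape of the poll-until-completed loop. In real use, the status check is:
#   STATUS=$(curl -s -H "Authorization: Bearer $SUPERMODEL_API_KEY" \
#     -H "Idempotency-Key: $KEY" "$API/v1/jobs/$JOB_ID" | jq -r '.status')
# (header names and the /v1/jobs path are assumptions -- check the API docs)
ATTEMPT=0
STATUS="pending"
while [ "$STATUS" != "completed" ]; do
  ATTEMPT=$((ATTEMPT + 1))
  # Fake status check so this sketch runs offline: completes on the 3rd poll.
  [ "$ATTEMPT" -ge 3 ] && STATUS="completed"
  echo "poll $ATTEMPT: $STATUS"
  # sleep 10   # real polling interval from the prompt; skipped offline
done
echo "done after $ATTEMPT polls"
```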

arch-docs GitHub Action

The arch-docs action auto-generates a full static architecture documentation website from your codebase on every push.
Add the supermodeltools/arch-docs GitHub Action to this repo so it generates and publishes an architecture documentation site to GitHub Pages on every push to main. The API key should come from a SUPERMODEL_API_KEY secret.
Set up the arch-docs GitHub Action to generate an architecture documentation preview on every pull request and post the site URL as a PR comment.
Configure the supermodeltools/arch-docs action to output the generated site into a ./docs/architecture directory instead of the default location.
Show me how to use the arch-docs action outputs (site_path, entity_count, page_count) in a subsequent GitHub Actions step to print a summary to the job log.
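A workflow answering the first and last prompts might look like the sketch below. The output names (`site_path`, `entity_count`, `page_count`) come from this section; the input name `api_key` and the `@v1` version tag are assumptions to check against the action's README:

```yaml
# Hedged sketch: the with: input name and version tag are assumptions.
name: arch-docs
on:
  push:
    branches: [main]
jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: archdocs
        uses: supermodeltools/arch-docs@v1   # version tag is a guess
        with:
          api_key: ${{ secrets.SUPERMODEL_API_KEY }}
      - name: Print summary
        run: |
          echo "Site: ${{ steps.archdocs.outputs.site_path }}"
          echo "Entities: ${{ steps.archdocs.outputs.entity_count }}"
          echo "Pages: ${{ steps.archdocs.outputs.page_count }}"
```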

dead-code-hunter GitHub Action

The dead-code-hunter action uses Supermodel’s static analysis to detect unreachable functions, unused exports, and orphaned files on every pull request.
Add the supermodeltools/dead-code-hunter GitHub Action to this repo so it runs on every pull request, posts findings as a PR comment, and fails the check if any high-confidence dead code is found.
Configure dead-code-hunter to fail the CI check only when there are more than 5 high-confidence dead code findings. Lower-confidence findings should be reported but not block merging.
Set up dead-code-hunter with ignore patterns so it skips files matching **/generated/**, **/vendor/**, and **/*.pb.go.
Configure dead-code-hunter as a scheduled GitHub Actions workflow that runs every Monday morning and opens a GitHub issue with the findings if any dead code is detected.
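Combining the PR trigger, ignore patterns, and high-confidence threshold from the prompts above gives a workflow like this sketch. The input names (`api_key`, `ignore`, `max_high_confidence`) are assumptions about the action's interface, not confirmed documentation:

```yaml
# Sketch only: input names and the version tag are assumptions.
name: dead-code-hunter
on: [pull_request]
jobs:
  hunt:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: supermodeltools/dead-code-hunter@v1   # version tag is a guess
        with:
          api_key: ${{ secrets.SUPERMODEL_API_KEY }}
          ignore: |
            **/generated/**
            **/vendor/**
            **/*.pb.go
          max_high_confidence: 5   # fail only above 5 high-confidence findings
```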

bigiron — Autonomous SDLC

bigiron is an autonomous software development lifecycle system that uses Supermodel code graphs to plan, validate, and implement changes across an 8-phase cycle.
Use bigiron's `factory run` command to implement the following feature: [describe your feature]. Before you start, run a health check so I can see the current state of the codebase.
Run `factory health` on this repo and give me a report on structural risks, overly coupled modules, and any subsystems that look like they need attention before I start adding new features.
Run `factory improve` on this codebase. Focus on reducing coupling between the auth and billing subsystems. Don't touch anything in the /legacy directory.
Before bigiron implements this change, use the call and dependency graphs to show me which existing tests will need to be updated and which modules are in the blast radius.
I've drafted a new `PaymentProcessor` class. Before we write any more code, use bigiron to validate that it fits the existing architecture and doesn't introduce circular dependencies or cross-domain violations.

mcpbr — MCP Benchmark Runner

mcpbr measures whether an MCP server actually improves LLM agent performance using controlled A/B comparisons against SWE-bench tasks.
Set up mcpbr and run the baseline benchmark suite for the Supermodel MCP server — one run with the MCP tools enabled and one without — and show me the performance comparison.
Add mcpbr to our GitHub Actions CI workflow so it runs benchmarks on every MCP server release and exits with code 1 if performance drops more than 10% compared to the previous run.
Show me how to run the full mcpbr benchmark suite inside Docker with pinned dependencies so results are reproducible across machines.
Configure mcpbr to post benchmark results and any detected regressions to our #eng-alerts Slack channel after each run.
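The "fail if performance drops more than 10%" gate in the CI prompt is just a relative comparison between two runs. A standalone sketch (the score values are illustrative, and mcpbr's real output format is not shown in these docs):

```typescript
// Regression gate from the CI prompt above: flag a run whose score falls
// more than 10% below the previous release's score. Scores are made up.
function regressed(previous: number, current: number, tolerance = 0.1): boolean {
  return current < previous * (1 - tolerance);
}

const previousScore = 0.42; // e.g. SWE-bench resolve rate from the last run
const currentScore = 0.36;

const drop = regressed(previousScore, currentScore);
console.log(drop ? "benchmark regression > 10%; CI should exit 1" : "ok");
```

In a real workflow, the step would read both scores from mcpbr's result files and set a nonzero exit code when `regressed` returns true.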

Uncompact — Context Reinjection for Claude Code

Uncompact automatically reinjects Supermodel architectural context back into Claude Code sessions after compaction events, keeping AI reasoning sharp across long sessions.
Install the Uncompact Claude Code plugin for this repo and configure it with my Supermodel API key so that architectural context is automatically reinjected after every compaction event.
Configure Uncompact to use a 3,000-token budget for reinjected context instead of the default 2,000 tokens, so more architectural detail is preserved after compaction.
After the last compaction event, what architectural context did Uncompact reinject? Summarize the domains, key entry points, and cross-subsystem relationships that were restored.
My Claude Code session seems to have lost track of the codebase architecture after a compaction. Use Uncompact to reinject the Supermodel context now, then re-summarize the current task with full architectural awareness.
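What a reinjection token budget implies can be sketched as fitting prioritized context items under a limit. This is not Uncompact's actual algorithm, just an illustration of why raising the budget from 2,000 to 3,000 tokens preserves more architectural detail (the item names and token counts are invented):

```typescript
// Greedy budget packer: keep priority-ordered context items while they fit.
// Illustrative only -- not Uncompact's real selection logic.
interface ContextItem {
  label: string;
  tokens: number;
}

function packWithinBudget(items: ContextItem[], budget: number): ContextItem[] {
  const kept: ContextItem[] = [];
  let used = 0;
  for (const item of items) { // items assumed pre-sorted by priority
    if (used + item.tokens <= budget) {
      kept.push(item);
      used += item.tokens;
    }
  }
  return kept;
}

const items: ContextItem[] = [
  { label: "domain map", tokens: 1200 },
  { label: "entry points", tokens: 700 },
  { label: "cross-domain edges", tokens: 1100 },
];

console.log(packWithinBudget(items, 2000).map((i) => i.label));
// → ["domain map", "entry points"]  (third item no longer fits)
console.log(packWithinBudget(items, 3000).map((i) => i.label));
// → all three labels fit at the larger budget
```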

Cross-Tool Workflows

Prompts that combine multiple Supermodel tools together.
I just joined this project. Use the Supermodel MCP server to give me an architectural overview, then generate an arch-docs site I can share with my team, and finally run a dead code check so we know what's safe to clean up.
Before merging this PR: (1) use Supermodel to check the blast radius of the changed functions, (2) run dead-code-hunter to see if any of the changes introduced orphaned code, and (3) check if the changes cross any domain boundaries that weren't crossed before.
Set up a GitHub Actions workflow that runs on every PR and does the following in parallel: generates a fresh arch-docs site, runs dead-code-hunter and posts findings as a comment, and uses the TypeScript SDK to diff the dependency graph against the base branch.
Use bigiron to refactor the data-access layer to remove all direct database calls from the API route handlers. Before each file change, use the Supermodel MCP tools to verify we're not breaking any callers. After the refactor, run dead-code-hunter to confirm no orphaned code was left behind.
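The parallel PR workflow in the third prompt has the skeleton below: three independent jobs, which GitHub Actions runs concurrently by default. Action inputs, version tags, and the diff script path are all placeholders to adapt:

```yaml
# Skeleton only: inputs, version tags, and the script path are assumptions.
name: pr-architecture-checks
on: [pull_request]
jobs:
  arch-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: supermodeltools/arch-docs@v1
        with:
          api_key: ${{ secrets.SUPERMODEL_API_KEY }}
  dead-code:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: supermodeltools/dead-code-hunter@v1
        with:
          api_key: ${{ secrets.SUPERMODEL_API_KEY }}
  graph-diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # need the base branch locally to diff against
      - run: node scripts/diff-dependency-graph.js   # hypothetical SDK script
        env:
          SUPERMODEL_API_KEY: ${{ secrets.SUPERMODEL_API_KEY }}
```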