Last Light
An open-source agent that looks after repos in maintenance mode. Triages issues, reviews PRs, chases down stale items — so you can focus on what's next.
- Issue triage: labels, deduplicates, and chases missing info — every 15 minutes, rain or shine.
- PR review: reads the diff, leaves honest feedback. Critical issues first, nits last. No feelings spared.
- Weekly report: a Monday-morning summary of what's piling up, what's gone stale, and what needs you.
- Autopilot: cron jobs run triage, reviews, and reports on their own. Set it up once, then move on.
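The cadences above map onto standard cron expressions. An illustrative sketch only — Last Light ships its own schedule configuration, and the job names here are hypothetical:

```
*/15 * * * *  triage    # every 15 minutes, rain or shine
0 9 * * 1     report    # Monday morning summary
```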
Built on Hermes Agent with a custom MCP server packing 28 GitHub tools. Authenticates as its own GitHub App — every action is clearly from the bot, not you.
1. List what you want watched in .hermes.md.
2. Cron jobs keep running — scanning for new issues, unreviewed PRs, and stale items.
3. The agent labels, reviews, closes duplicates, and writes reports.
4. Everything happens on GitHub — comments, labels, reviews. No extra dashboard to check.
Install Hermes Agent, then clone Last Light:

```shell
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
git clone https://github.com/cliftonc/lastlight.git
cd lastlight
```

Head to github.com/settings/apps/new and create an app with these permissions:
Generate a private key, save the .pem file into the lastlight folder, and install the app on your repos.
Copy the example config files:

```shell
cp .env.example .env
cp config.yaml.example config.yaml
```

Fill in your GitHub App credentials in .env:

```shell
GITHUB_APP_ID=123456
GITHUB_APP_PRIVATE_KEY_PATH=./your-app.private-key.pem
GITHUB_APP_INSTALLATION_ID=789012
```

Then add your repos to .hermes.md:
```markdown
## Managed Repositories

- yourname/your-repo
- yourname/another-repo
```

Last Light needs an LLM to think with. Set the provider and model in config.yaml:
```yaml
model:
  default: anthropic/claude-sonnet-4  # or any model your provider supports
  provider: openrouter                # routes to 100+ models via one key
```

Then add your API key to .env. Pick whichever provider you prefer:
OpenRouter: one API key, many models. OpenAI, Anthropic, Google, open-source — all through one endpoint.

```shell
OPENROUTER_API_KEY=sk-or-...
```

OpenAI: direct access to GPT and Codex models.

```yaml
provider: openai
# or: provider: openai-codex
```

Anthropic: direct access to Claude models.

```yaml
provider: anthropic
```

Bedrock: use models via your AWS account. Needs AWS credentials configured.

```yaml
provider: bedrock
```

Last Light runs commands through Hermes Agent's terminal. By default it uses the local backend — commands run directly on your machine. For isolation or remote execution, pick a different backend in config.yaml. See the Hermes configuration docs for full details:
Docker: sandboxed container with dropped capabilities, PID limits, and tmpfs. Needs Docker installed.

```yaml
terminal:
  backend: docker
```

Modal: cloud VM sandbox. Ephemeral, scalable, great for evals. Needs a Modal account. Set MODAL_TOKEN_ID and MODAL_TOKEN_SECRET in .env.

```yaml
terminal:
  backend: modal
```

SSH: run on a remote server. Persistent shell with connection reuse. Good for beefy hardware. Set TERMINAL_SSH_HOST and TERMINAL_SSH_USER in .env.

```yaml
terminal:
  backend: ssh
```

Daytona: managed cloud dev environment with stop/resume. Needs a Daytona API key.

```yaml
terminal:
  backend: daytona
```

Singularity: for HPC clusters and shared machines where Docker isn't available. Uses Apptainer/Singularity.

```yaml
terminal:
  backend: singularity
```

Local: commands run directly on your machine. No isolation — the agent has full access to your filesystem.

```yaml
terminal:
  backend: local
```

All container backends support resource limits:
```yaml
terminal:
  container_cpu: 1            # CPU cores
  container_memory: 5120      # memory in MB
  container_disk: 51200       # disk in MB
  container_persistent: true
```

Talk to Last Light from Discord, Slack, Telegram, and more. See the Hermes gateway docs for the full list. Run the setup wizard:
```shell
./lastlight gateway setup
```

Then run it as a persistent service:

```shell
./lastlight gateway          # run in foreground
./lastlight gateway install  # install as a system service
```

Discord: create a bot at the Discord Developer Portal. Enable Presence Intent and Message Content Intent under Privileged Gateway Intents, or the bot won't connect. Set in .env:
```shell
DISCORD_BOT_TOKEN=your-token
DISCORD_HOME_CHANNEL=channel-id
```

Slack: create an app at api.slack.com/apps with bot token scopes, then set in .env:

```shell
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...
```

Telegram: create a bot via @BotFather, then set in .env:

```shell
TELEGRAM_BOT_TOKEN=your-token
TELEGRAM_HOME_CHANNEL=chat-id
```

Webhooks: react to repo events in real time. Enable in .env and point your GitHub App's webhook URL at it. The endpoint must be publicly reachable — use a tunnel like ngrok or Cloudflare Tunnel if running locally.
```shell
WEBHOOK_ENABLED=true
WEBHOOK_PORT=8644
WEBHOOK_SECRET=your-secret
```

Start the agent:

```shell
./lastlight
```

That's it. You can also run single commands:
```shell
./lastlight chat -q "Review the latest PR on yourname/your-repo"
./lastlight chat -q "Triage open issues on yourname/another-repo"
```

Built-in slash commands:

- `/pr-review`: review a PR with structured, severity-ranked feedback
- `/issue-triage`: label, deduplicate, and prioritize issues
- `/repo-health`: generate a health report with metrics and action items
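One note on the webhook endpoint configured earlier: GitHub signs every delivery with your WEBHOOK_SECRET and sends the result in the X-Hub-Signature-256 header. The built-in listener presumably verifies this for you; if you ever put your own proxy in front, the standard check is a constant-time HMAC comparison — a stdlib-only sketch:

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Validate GitHub's X-Hub-Signature-256 header against the raw request body."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, so timing can't leak the secret
    return hmac.compare_digest(expected, signature_header)
```

Always compare against the raw bytes of the request body — re-serializing the JSON first changes the digest and the check will fail.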
Check the roadmap for what's coming next
The triage, review, and report jobs run automatically, out of the box.