I spent a full day getting Kamal to deploy from GitHub Actions with Akeyless as the secrets manager. It shouldn't have taken that long. The individual pieces each have decent documentation. But the intersection of the three has zero public content — at least none I could find. Every integration point had at least one undocumented pitfall.
For context, the three tools:
- Kamal — Rails' default deployment tool. Builds Docker images, pushes to a registry, and deploys to servers via SSH. Zero-downtime, no Kubernetes.
- GitHub Actions — GitHub's CI/CD platform. Runs workflows on ephemeral runners triggered by events (push, PR, manual dispatch).
- Akeyless — A secrets management platform. Stores credentials centrally and provides CLI/API access with fine-grained authentication (API keys, OIDC, etc.).
This post is the guide I wish I'd found before starting. It covers the full setup, the design decisions behind it, and — most importantly — every problem I hit and how I solved it.
Part 1: The Problem
The starting point
I have a Rails app deployed on DigitalOcean Droplets via Kamal. Two environments: staging and production. Secrets (database URLs, API keys, encryption keys) live in Akeyless, resolved at deploy time via the Akeyless CLI. The deploy workflow was simple: kamal deploy -d staging from my laptop. It worked.
Why move to CI
The problem: all deploys ran from my machine. No audit trail beyond git log. No option to gate deploys on passing tests. No way for someone else to deploy if I'm unavailable. I needed the same kamal deploy to run from GitHub Actions.
The three challenges
Running kamal deploy from a GitHub Actions runner means solving three problems:
- SSH access — Kamal connects to the servers via SSH. The runner needs a key that the Droplets trust.
- Secret resolution — Kamal's .kamal/secrets.* files call the Akeyless CLI with named profiles. Those profiles don't exist on the ephemeral runner.
- Docker registry auth — Kamal pushes images to GHCR. The runner needs a push token.
The naive approach to problem #2 — duplicate every secret as a GitHub Actions secret — doesn't scale. We have secrets in Akeyless as the single source of truth. Duplicating them means maintaining two sources, and Kamal's secrets files still try to call the Akeyless CLI. You'd have to rewrite the secrets files for CI, breaking the local flow.
The approach I chose: install the Akeyless CLI on the runner, authenticate via OIDC, and let the existing secrets files work unchanged.
What is OIDC and why it matters here
If you've ever clicked "Sign in with Google" on a website, you've used OIDC. OpenID Connect is a protocol that lets one system prove its identity to another without sharing passwords. Instead of credentials, the systems exchange short-lived tokens signed by a trusted party.
In our case, the two systems are GitHub Actions and Akeyless. The traditional approach would be to store Akeyless API credentials as GitHub Secrets — static strings that never change, sitting in a config screen, potentially leaked in logs, and shared across every workflow run. OIDC eliminates this entirely.
Here's how it works in practice:
- GitHub Actions generates a token — every workflow run can request a short-lived JWT (JSON Web Token) signed by GitHub's own OIDC provider. This token contains claims about the run: which repository triggered it, which branch, which user, etc.
- Akeyless verifies the token — Akeyless is configured to trust GitHub's signing keys (via a JWKS URL). When it receives the token, it validates the signature and checks the claims (e.g., "is this from my repository?").
- Akeyless issues its own token — if the claims match, Akeyless returns a temporary access token that the CLI can use to fetch secrets.
The result: the workflow authenticates to Akeyless without any stored credentials. The GitHub token is generated fresh on every run, lives for minutes, and is scoped to the specific workflow execution. If it leaks, it's already expired by the time someone could use it.
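You can see exactly what Akeyless gets to check by decoding a token's middle segment, which is just base64url-encoded JSON. The snippet below builds a hypothetical payload so it runs anywhere; in a real workflow run, the token comes from the `$ACTIONS_ID_TOKEN_REQUEST_URL` request shown later in the workflow.

```shell
# Build a hypothetical GitHub OIDC payload so the demo is self-contained.
PAYLOAD=$(printf '%s' '{"repository":"your-org/your-repo","ref":"refs/heads/main"}' \
  | base64 | tr -d '=\n' | tr '+/' '-_')
OIDC_TOKEN="fake-header.${PAYLOAD}.fake-signature"

# Take the middle segment, convert base64url back to base64, re-pad
# to a multiple of 4, and decode the claims JSON.
CLAIMS=$(printf '%s' "$OIDC_TOKEN" | cut -d. -f2 | tr '_-' '/+')
PAD=$(( (4 - ${#CLAIMS} % 4) % 4 ))
CLAIMS="${CLAIMS}$(printf '%*s' "$PAD" '' | tr ' ' '=')"
printf '%s' "$CLAIMS" | base64 -d
```

The `repository` claim in that JSON is what the sub-claim check in Step 2 matches against.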
Part 2: The Solution
Architecture overview
With OIDC understood, here's the full deploy flow from GitHub Actions:
```
GitHub Actions runner
│
├── 1. Gets OIDC token from GitHub
├── 2. Exchanges it for an Akeyless token (JWT auth)
├── 3. Runs `kamal deploy -d <environment>`
│   ├── Kamal evaluates .kamal/secrets.<env>
│   │   └── get-secret.sh detects AKEYLESS_TOKEN, uses --token
│   ├── Builds Docker image
│   ├── Pushes to GHCR
│   ├── SSHs to Droplet, pushes env, runs migrations
│   ├── Starts new container, health check passes
│   └── Swaps traffic, stops old container
│
└── Done
```
The setup has six steps. Each one includes the pitfalls I encountered inline — you'll hit them at exactly that point if you follow this guide.
Step 1: Adapt secret resolution for CI
How Kamal resolves secrets
Kamal's secret resolution is flexible by design. Each environment has a secrets file (.kamal/secrets.staging, .kamal/secrets.production) that Kamal evaluates as a shell script at deploy time. Each line is a KEY=VALUE assignment where the value can come from anywhere: a hardcoded string, an environment variable, or the output of a command. This is how Kamal supports different secrets managers — you wire up whatever CLI or tool you use.
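As a throwaway illustration of that evaluation model (a demo file, not a real Kamal secrets file), each value below comes from a different source: a literal, an environment variable, and a command's output.

```shell
# Hypothetical demo file showing the three value sources.
cat > /tmp/secrets.demo <<'EOF'
STATIC_TOKEN=hardcoded-value
HOME_DIR=$HOME
BUILD_YEAR=$(date +%Y)
EOF

# Evaluate it the way Kamal does: as a shell script, exporting
# every resulting assignment.
set -a
. /tmp/secrets.demo
set +a

echo "STATIC_TOKEN=$STATIC_TOKEN"
echo "BUILD_YEAR=$BUILD_YEAR"
```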
In my case, I use Akeyless. My .kamal/secrets.staging looks like this:
```shell
# Bootstrap credentials (for the running container to fetch runtime secrets)
AKEYLESS_ACCESS_ID=$(.kamal/scripts/require-env.sh AKEYLESS_ACCESS_ID_STAGING)
AKEYLESS_ACCESS_KEY=$(.kamal/scripts/require-env.sh AKEYLESS_ACCESS_KEY_STAGING)

# Application secrets (resolved via Akeyless CLI)
DATABASE_URL=$(.kamal/scripts/get-secret.sh -n /myapp/staging/database_url --profile myapp-staging)
REDIS_URL=$(.kamal/scripts/get-secret.sh -n /myapp/staging/redis_url --profile myapp-staging)
SECRET_KEY_BASE=$(.kamal/scripts/get-secret.sh -n /myapp/staging/secret_key_base --profile myapp-staging)
```
Notice the two groups. They serve different purposes and are resolved differently:
Application secrets (DATABASE_URL, REDIS_URL, SECRET_KEY_BASE, etc.) are fetched from Akeyless via get-secret.sh. These are infrastructure secrets the app needs to start — without a database URL, it crashes immediately. They're resolved once at deploy time, pushed to the server as an env file, and the container reads them as plain environment variables. They don't change until the next deploy.
Bootstrap credentials (AKEYLESS_ACCESS_ID, AKEYLESS_ACCESS_KEY) are different. They're Akeyless API keys that get injected into the running container. The app uses them at boot time to connect to Akeyless and fetch a second set of secrets — service-specific ones like New Relic API keys, email provider tokens, analytics IDs. These runtime secrets can rotate without a redeploy: the app reads them fresh from Akeyless every time it starts.
Here's where it gets confusing in the CI context: OIDC handles the first group, but not the second. The GitHub Actions runner authenticates to Akeyless via OIDC to resolve the application secrets — no static credentials needed for that. But the bootstrap credentials are not for the runner. They're for the container that will run on the server. The container is a plain Docker process on a Droplet — it has no OIDC, no GitHub context, no way to generate tokens. It needs static API keys.
So in the GitHub Actions workflow, the bootstrap credentials flow through three layers:
- Stored as GitHub Secrets (AKEYLESS_ACCESS_ID_STAGING, AKEYLESS_ACCESS_KEY_STAGING)
- Exported as env vars on the runner (so require-env.sh finds them)
- Captured by Kamal and pushed to the server as part of the container's env file
They're not used by the runner itself. GitHub Actions is just the vehicle that carries them from the GitHub Secrets store to the Droplet's env file.
Now let's look at how get-secret.sh resolves the application secrets. The --profile myapp-staging flag tells the Akeyless CLI which credentials to use for authentication.
What are Akeyless profiles?
An Akeyless CLI profile is a named credential set stored in ~/.akeyless/profiles/ on the deployer's machine. You configure them once:
```shell
akeyless configure --profile myapp-staging \
  --access-id <id> --access-key <key>
```
After that, any akeyless get-secret-value --profile myapp-staging call authenticates using those stored credentials automatically. It's like an SSH config entry — configure once, use everywhere.
Why profiles don't work in CI
In a GitHub Actions runner, those profiles don't exist. There's no ~/.akeyless/profiles/myapp-staging.toml sitting on the ephemeral runner. You could create them on every run, but then you'd need to store the Akeyless access keys as GitHub Secrets — static credentials, exactly what we want to avoid.
The OIDC approach solves this differently: instead of stored credentials, the runner authenticates with a short-lived token. But the Akeyless CLI's --profile flag and --token flag are different authentication paths. You can't use both at the same time — and the secrets files are hardcoded with --profile.
The bridge: get-secret.sh
The wrapper script get-secret.sh already existed in my setup. Its original purpose was to prevent Kamal from silently deploying broken secrets.
Here's the problem it solves: when the Akeyless CLI fails to resolve a secret (wrong profile, expired token, network error), it returns the literal string "null" to stdout with exit code 0. Kamal evaluates the secrets file as a shell script, so DATABASE_URL=$(.kamal/scripts/get-secret.sh ...) simply captures whatever the command outputs. If the command returns "null", Kamal sets DATABASE_URL=null and proceeds with the deploy. Your app starts, tries to connect to a database at address null, and crashes — but the error points to the database layer, not the secrets layer.
The wrapper intercepts this by checking both the exit code and the resolved value. If either indicates failure, it exits with a non-zero status. Since Kamal evaluates the secrets file as a shell script, a non-zero exit from any command substitution causes the entire evaluation to fail — and Kamal aborts the deploy before building or pushing anything.
```shell
#!/bin/sh
# Original get-secret.sh — aborts on failure instead of letting
# Kamal silently deploy "null" secrets.

STDERR_FILE=$(mktemp)
trap 'rm -f "$STDERR_FILE"' EXIT

VALUE=$(akeyless get-secret-value "$@" 2>"$STDERR_FILE")
STATUS=$?
ERROR_OUTPUT=$(cat "$STDERR_FILE")

if [ "$STATUS" -ne 0 ]; then
  echo "ERROR: akeyless get-secret-value failed (exit $STATUS) for: $*" >&2
  [ -n "$ERROR_OUTPUT" ] && echo "$ERROR_OUTPUT" >&2
  exit 1
fi

if [ "$VALUE" = "null" ] || [ -z "$VALUE" ]; then
  echo "ERROR: Secret resolution returned null/empty for: $*" >&2
  exit 1
fi

printf '%s\n' "$VALUE"
```
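The abort behavior rests on ordinary shell semantics: an assignment's exit status is that of its command substitution, so under errexit a single failing line stops the whole evaluation. A standalone sketch with a demo file:

```shell
# Demo secrets file: the second assignment fails, so evaluation under
# `sh -e` stops before the third line runs.
cat > /tmp/secrets.fail <<'EOF'
GOOD=$(echo ok)
BAD=$(sh -c 'echo "ERROR: secret resolution failed" >&2; exit 1')
NEVER=$(echo unreachable)
EOF

STATUS=0
sh -e /tmp/secrets.fail 2>/dev/null || STATUS=$?
echo "evaluation exit status: $STATUS"   # prints: evaluation exit status: 1
```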
To support CI, I extended it to detect the deploy context and adapt the authentication. The logic is:
- Check if AKEYLESS_TOKEN is set. The workflow exports this env var after OIDC authentication. If it's present, we're in CI. If not, we're running locally.
- In CI: strip --profile and inject --token. The secrets files pass --profile myapp-staging to every call, but profiles don't exist on the runner. The script iterates through the arguments, removes --profile and its value, and prepends --token $AKEYLESS_TOKEN instead. The OIDC token already carries the authentication — no profile needed.
- Locally: pass everything through unchanged. The --profile flag reaches the Akeyless CLI, which authenticates using the stored credentials. Nothing changes from the original behavior.
The argument stripping uses POSIX shift + set -- to rebuild the positional parameters safely, preserving arguments that contain spaces or special characters. Here's the extended version:
```shell
#!/bin/sh
# Extended get-secret.sh — supports both CI (OIDC token) and local (profiles).

STDERR_FILE=$(mktemp)
trap 'rm -f "$STDERR_FILE"' EXIT

if [ -n "$AKEYLESS_TOKEN" ]; then
  # CI: authenticated via OIDC — use explicit token, strip --profile flags.
  n=$#
  i=0
  SKIP_NEXT=false
  while [ $i -lt $n ]; do
    arg="$1"; shift; i=$((i + 1))
    if [ "$SKIP_NEXT" = true ]; then SKIP_NEXT=false; continue; fi
    case "$arg" in
      --profile) SKIP_NEXT=true ;;
      *) set -- "$@" "$arg" ;;
    esac
  done
  VALUE=$(akeyless get-secret-value --token "$AKEYLESS_TOKEN" "$@" 2>"$STDERR_FILE")
else
  # Local: use profile-based auth as configured.
  VALUE=$(akeyless get-secret-value "$@" 2>"$STDERR_FILE")
fi
STATUS=$?
ERROR_OUTPUT=$(cat "$STDERR_FILE")

if [ "$STATUS" -ne 0 ]; then
  echo "ERROR: akeyless get-secret-value failed (exit $STATUS) for: $*" >&2
  [ -n "$ERROR_OUTPUT" ] && echo "$ERROR_OUTPUT" >&2
  exit 1
fi

if [ "$VALUE" = "null" ] || [ -z "$VALUE" ]; then
  echo "ERROR: Secret resolution returned null/empty for: $*" >&2
  exit 1
fi

printf '%s\n' "$VALUE"
```
The .kamal/secrets.staging file stays identical in both contexts. No if CI then... branching, no separate secrets files. The wrapper absorbs all the context-switching.
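If you want to sanity-check the stripping loop without the Akeyless CLI installed, it can be exercised in isolation. This hypothetical harness prints each surviving argument in brackets, showing that values containing spaces survive intact:

```shell
# Hypothetical harness for the --profile stripping loop. Prints each
# surviving argument on its own line in brackets.
strip_profile() {
  n=$#
  i=0
  SKIP_NEXT=false
  while [ $i -lt $n ]; do
    arg="$1"; shift; i=$((i + 1))
    if [ "$SKIP_NEXT" = true ]; then SKIP_NEXT=false; continue; fi
    case "$arg" in
      --profile) SKIP_NEXT=true ;;
      *) set -- "$@" "$arg" ;;
    esac
  done
  for a in "$@"; do printf '[%s]\n' "$a"; done
}

# A made-up secret path with a space, to exercise quoting:
strip_profile -n "/myapp/staging/database url" --profile myapp-staging
# prints:
# [-n]
# [/myapp/staging/database url]
```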
Step 2: Configure Akeyless OIDC
This step happens in the Akeyless console — it's a one-time setup that tells Akeyless to trust GitHub Actions' OIDC tokens.
Create the JWT Auth Method
Navigate to Users & Auth Methods > + New > OAuth 2.0 / JWT and fill in:
| Field | Value |
|---|---|
| Name | GitHubActionsOIDC |
| JWKs URL | https://token.actions.githubusercontent.com/.well-known/jwks |
| Unique Identifier | repository |
| JWT TTL (in minutes) | 30 |
| Require Sub Claim on role association | Checked |
Two things that bit me here:
Pitfall: JWT TTL too short. I initially set it to 5 minutes — seems reasonable for a short-lived token. But a Docker build with a cold cache takes 5+ minutes. By the time Kamal resolved secrets after the build, the token had expired. The error: credentials have expired. Set it to 30 minutes — generous enough for slow builds.
Pitfall: Sub-claim checkbox is critical. "Require Sub Claim on role association" is not optional. Without it, any GitHub repository in the world could authenticate to your Akeyless and read your production secrets. It's the difference between "only my repo can deploy" and "anyone can read my database URL."
After creation, note the Access ID (format: p-xxxxxxxxxxxxxxxx). You'll need it in Step 4.
Create an Access Role and associate it
Create a role with read + list permissions on your secret paths (e.g., /myapp/staging/*, /myapp/production/*). Then associate the auth method with the role, adding a sub-claim:
| Sub-claim key | Sub-claim value |
|---|---|
| repository | your-org/your-repo |
Pitfall: Wrong repository name gives misleading errors. I initially typed the wrong repo name. Akeyless returned 404 for every secret — with no indication that it was a sub-claim mismatch, not a missing secret. The error looked identical to a permissions issue. Double-check the exact org/repo value.
Step 3: Set up SSH keys
Each environment needs a dedicated CI SSH key pair. The private key goes in GitHub Secrets; the public key gets installed on the Droplet.
Generate the keys
```shell
# Staging
ssh-keygen -t ed25519 -f ~/.ssh/kamal_staging_ci -N "" -C "github-actions-deploy-staging"

# Production
ssh-keygen -t ed25519 -f ~/.ssh/kamal_production_ci -N "" -C "github-actions-deploy-production"
```
Pitfall: Keys must not have a passphrase. The -N "" flag is essential. If the SSH key has a passphrase, ssh-agent in GitHub Actions hangs waiting for interactive input. The step shows Enter passphrase for (stdin): and never completes. No timeout, no error — just silence. Never use a developer's personal key (which likely has a passphrase) for CI.
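A quick preflight check before wiring a key into CI: `ssh-keygen -y` with an empty passphrase only succeeds on unencrypted keys. (The key path here is a throwaway example.)

```shell
# Generate a throwaway unencrypted key in a temp dir, then verify that
# ssh-keygen can derive its public key with an empty passphrase.
KEYDIR=$(mktemp -d)
ssh-keygen -t ed25519 -f "$KEYDIR/demo_ci_key" -N "" -C "demo" -q

if ssh-keygen -y -P "" -f "$KEYDIR/demo_ci_key" > /dev/null 2>&1; then
  echo "key is passphrase-free: safe for CI"
else
  echo "key requires a passphrase: ssh-agent will hang in CI"
fi

rm -rf "$KEYDIR"
```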
Install on the Droplets
```shell
ssh root@YOUR_STAGING_IP "cat >> ~/.ssh/authorized_keys" < ~/.ssh/kamal_staging_ci.pub
ssh root@YOUR_PRODUCTION_IP "cat >> ~/.ssh/authorized_keys" < ~/.ssh/kamal_production_ci.pub
```
Step 4: Configure GitHub
Go to Settings > Secrets and variables > Actions in your repository.
Variables (Variables tab)
| Variable | Purpose | Where to get it |
|---|---|---|
| AKEYLESS_OIDC_ACCESS_ID | Access ID of the JWT auth method | From Step 2 (p-xxx) |
Pitfall: Variable, not Secret. The Access ID is not sensitive — it identifies the auth method, not a credential. But if you put it in Secrets and reference it via ${{ vars.AKEYLESS_OIDC_ACCESS_ID }}, the value is empty and the OIDC step fails silently. GitHub Secrets are only accessible via ${{ secrets.* }}; ${{ vars.* }} only reads from Variables. This took an embarrassingly long time to debug.
Secrets (Secrets tab)
| Secret | Purpose |
|---|---|
| GHCR_PAT | GitHub Container Registry push token |
| DEPLOY_SSH_KEY_STAGING | CI SSH private key (from Step 3) |
| DEPLOY_SSH_KEY_PRODUCTION | CI SSH private key (from Step 3) |
| AKEYLESS_ACCESS_ID_STAGING | Runtime Akeyless credential for staging containers |
| AKEYLESS_ACCESS_KEY_STAGING | Runtime Akeyless credential for staging containers |
| AKEYLESS_ACCESS_ID_PRODUCTION | Runtime Akeyless credential for production containers |
| AKEYLESS_ACCESS_KEY_PRODUCTION | Runtime Akeyless credential for production containers |
The AKEYLESS_ACCESS_* secrets are not for the deploy itself (that uses OIDC). They're injected into the running containers so the app can fetch runtime secrets (New Relic API keys, email provider tokens, etc.) at boot time.
The deploy step exports only the target environment's credentials — a staging deploy sets the production pair to an empty string. This limits the blast radius if the runner is compromised.
Step 5: Write the workflow
Here's the complete GitHub Actions workflow. Each non-obvious decision is annotated, and the Akeyless CLI pitfalls are called out where they occur:
```yaml
name: Manual Deploy

on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Choose environment to deploy'
        required: true
        type: choice
        options:
          - staging
          - production

jobs:
  deploy:
    if: ${{ github.actor == 'your-username' }}
    permissions:
      contents: read
      packages: write  # push to ghcr.io
      id-token: write  # OIDC token for Akeyless
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up SSH agent
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ inputs.environment == 'staging'
            && secrets.DEPLOY_SSH_KEY_STAGING
            || secrets.DEPLOY_SSH_KEY_PRODUCTION }}

      - name: Add SSH known hosts
        run: |
          mkdir -p ~/.ssh
          ssh-keyscan YOUR_STAGING_IP >> ~/.ssh/known_hosts
          ssh-keyscan YOUR_PRODUCTION_IP >> ~/.ssh/known_hosts

      - name: Install Akeyless CLI
        run: |
          curl -sSf -o akeyless https://akeyless-cli.s3.us-east-2.amazonaws.com/cli/latest/production/cli-linux-amd64
          chmod +x akeyless
          ./akeyless configure --profile default \
            --access-id dummy --access-type access_key \
            2>/dev/null || true
          echo "$HOME/.akeyless/bin" >> "$GITHUB_PATH"

      - name: Authenticate with Akeyless via OIDC
        run: |
          set -euo pipefail
          OIDC_TOKEN=$(curl -sSf \
            -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
            "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=akeyless.io" \
            | jq -r '.value')
          if [ -z "$OIDC_TOKEN" ] || [ "$OIDC_TOKEN" = "null" ]; then
            echo "::error::Failed to obtain OIDC token"; exit 1
          fi
          echo "::add-mask::$OIDC_TOKEN"
          AKEYLESS_TOKEN=$(akeyless auth \
            --access-id "${{ vars.AKEYLESS_OIDC_ACCESS_ID }}" \
            --access-type jwt \
            --jwt "$OIDC_TOKEN" \
            --json true | jq -r '.token')
          if [ -z "$AKEYLESS_TOKEN" ] || [ "$AKEYLESS_TOKEN" = "null" ]; then
            echo "::error::Failed to authenticate with Akeyless"; exit 1
          fi
          echo "::add-mask::$AKEYLESS_TOKEN"
          echo "AKEYLESS_TOKEN=$AKEYLESS_TOKEN" >> "$GITHUB_ENV"

      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: '3.4'
          bundler: none

      - run: gem install kamal

      - uses: docker/setup-buildx-action@v3
        with:
          driver: docker-container

      - name: Deploy with Kamal
        env:
          CR_PAT: ${{ secrets.GHCR_PAT }}
          AKEYLESS_ACCESS_ID_STAGING: ${{ inputs.environment == 'staging'
            && secrets.AKEYLESS_ACCESS_ID_STAGING || '' }}
          AKEYLESS_ACCESS_KEY_STAGING: ${{ inputs.environment == 'staging'
            && secrets.AKEYLESS_ACCESS_KEY_STAGING || '' }}
          AKEYLESS_ACCESS_ID_PRODUCTION: ${{ inputs.environment == 'production'
            && secrets.AKEYLESS_ACCESS_ID_PRODUCTION || '' }}
          AKEYLESS_ACCESS_KEY_PRODUCTION: ${{ inputs.environment == 'production'
            && secrets.AKEYLESS_ACCESS_KEY_PRODUCTION || '' }}
        run: kamal deploy -d ${{ inputs.environment }}
```
The id-token: write permission
This single line is what enables the entire OIDC flow. Without it, $ACTIONS_ID_TOKEN_REQUEST_URL is not available and the OIDC step fails with a cryptic "Unable to get ACTIONS_ID_TOKEN_REQUEST_URL env variable" error.
The ssh-keyscan step
Pitfall: Don't hash hostnames. ssh-keyscan -H hashes the hostname in known_hosts. When Kamal later connects via SSH, it may resolve the host differently from the original scan. The hashed entry doesn't match, and SSH fails with "fingerprint does not match." Use ssh-keyscan without -H.
The Install Akeyless CLI step
Three pitfalls in this single step. Each one blocks the workflow completely:
Pitfall 1: The interactive wizard. On first use, the Akeyless CLI launches an interactive setup wizard. It asks you to configure a profile, choose your vault URL, and offers to relocate the binary. In a headless CI runner, this hangs the workflow indefinitely. No timeout, no error — just silence. The fix: pre-configure a dummy profile with ./akeyless configure. This satisfies the "first use" check. The profile is never used for authentication.
Pitfall 2: The binary relocation. That same configure command silently moves the akeyless binary from wherever it is to ~/.akeyless/bin/. If you downloaded it to the current directory, it's gone. The next step that runs akeyless auth fails with "command not found." The fix: add ~/.akeyless/bin to $GITHUB_PATH.
Pitfall 3: access-type jwt, not oauth2. In the Akeyless console, the auth method is called "OAuth 2.0 / JWT". Natural assumption: use --access-type oauth2. This fails with "invalid access type oauth2." The CLI only accepts --access-type jwt for this auth method type. The oauth2 type exists but is only valid in the --gateway-url context. The naming mismatch between console and CLI is confusing, and the error message doesn't hint at the correct value.
The OIDC authentication step
Both tokens (GitHub OIDC and Akeyless) are masked with ::add-mask:: so they never appear in logs, even with debug mode enabled. Both are validated against the empty string and "null" before proceeding. set -euo pipefail ensures any failure in the pipeline aborts the step immediately, and curl runs with -sSf: the -f flag makes curl return a non-zero exit code on HTTP errors instead of piping an error body into jq.
Step 6: Deploy and verify
First deploy: use staging
- Go to Actions > Manual Deploy > Run workflow
- Select environment: staging, click Run workflow
What to check in the logs
| Step | What to look for |
|---|---|
| Set up SSH agent | Identity added (no passphrase prompt) |
| Install Akeyless CLI | No interactive wizard output |
| Authenticate with Akeyless via OIDC | No ::error:: messages |
| Deploy with Kamal | Finished in ... seconds with exit status 0 |
Verify the app
```shell
curl -s -o /dev/null -w '%{http_code}\n' https://your-staging-domain/up
# Should print 200
```
Verify local deploy still works
After a successful CI deploy, run kamal deploy -d staging from your machine to confirm the get-secret.sh changes don't break the profile-based local flow.
Then production
Once staging is validated, repeat with environment: production.
Part 3: What I'd do differently
Start with the Akeyless CLI in CI before wiring up Kamal. I jumped straight to kamal deploy and had to debug multiple layers at once. A minimal workflow that just runs akeyless auth + akeyless get-secret-value for one secret would have caught the CLI wizard, binary relocation, and access-type issues in minutes instead of hours.
Set JWT TTL to 30 minutes from the start. 5 minutes seems safe but isn't — a Docker build with a cold cache easily exceeds that.
Stepping back: the final setup is clean — one workflow file, one wrapper script, no changes to Kamal's secrets files. Local deploys and CI deploys use the same configuration. Akeyless remains the single source of truth for secrets, accessed via OIDC (CI) or CLI profiles (local).
But getting there required navigating undocumented behavior in three different tools. The Akeyless CLI's interactive wizard, binary relocation, and naming mismatch between console and CLI. GitHub Actions' distinction between Secrets and Variables. SSH key hashing causing fingerprint mismatches.
None of these are bugs, exactly. They're integration seams — the places where one tool's assumptions don't match another's. I hope this guide saves you the day I spent finding them.
References
- Kamal: documentation and source repository
- GitHub Actions documentation: "About security hardening with OpenID Connect"
- Akeyless documentation: "OAuth 2.0 / JWT authentication"
- Akeyless documentation: "CLI reference"
- GitHub Action: "webfactory/ssh-agent" — used to load the deployer SSH key on the runner
- Kamal GitHub Discussion: "Secrets and GitHub Actions"
- Kamal GitHub Discussion: "How to use Kamal with GitHub Actions"
- Kamal GitHub Discussion: "Env vars not available in .kamal/secrets"
- Wikipedia: "OpenID Connect" — protocol overview