Initial release: 100-file Claude Code toolkit
20 specialized agents, 10 skills, 17 slash commands, 6 plugins, 12 hooks with scripts, 8 rule sets, 3 CLAUDE.md templates, 14 MCP server configs, and interactive setup installer.
plugins/deploy-pilot/.claude-plugin/plugin.json (new file, 6 lines)
@@ -0,0 +1,6 @@
{
  "name": "deploy-pilot",
  "version": "1.0.0",
  "description": "Deployment automation with Dockerfile generation, CI/CD pipelines, and infrastructure as code",
  "commands": ["commands/dockerfile.md", "commands/ci-pipeline.md", "commands/k8s-manifest.md"]
}
plugins/deploy-pilot/commands/ci-pipeline.md (new file, 64 lines)
@@ -0,0 +1,64 @@

# /deploy-pilot:ci-pipeline

Generate a GitHub Actions CI/CD workflow tailored to the current project.

## Process

1. Analyze the project to determine the pipeline requirements:
   - Detect the language and package manager from manifest files
   - Check for existing test commands in package.json scripts, Makefile, or pyproject.toml
   - Identify linting tools already configured (ESLint, Prettier, Ruff, golangci-lint, Clippy)
   - Look for existing Docker configuration
   - Check for deployment targets (Vercel, AWS, GCP, Kubernetes manifests)

2. Generate the workflow file at `.github/workflows/ci.yml` with these jobs:
### Lint Job
- Install dependencies with caching (npm ci, pip install, go mod download)
- Run the project's configured linter
- Run format checking (prettier --check, ruff format --check, gofmt)
- Run type checking if applicable (tsc --noEmit, mypy, pyright)
- Fail fast: this job should complete in under 2 minutes

### Test Job
- Run the full test suite with coverage reporting
- Upload coverage artifacts for later reference
- For Node.js: use a matrix strategy for Node 20 and 22
- For Python: use a matrix strategy for Python 3.11 and 3.12
- Set a timeout to prevent hung tests from blocking the pipeline
- Run tests in parallel if the framework supports it (pytest -n auto, vitest)

### Build Job
- Depends on lint and test passing
- Build the application artifacts
- For Docker projects: build the image and optionally push it to the registry
- For libraries: build the package and verify it can be published (dry run)
- Cache build outputs between runs

### Deploy Job (optional, triggered on the main branch only)
- Depends on build passing
- Deploy to the detected target environment
- Use environment protection rules and manual approval for production
- Include rollback instructions in job comments

3. Configure these workflow settings:
   - Trigger on pushes to main and on pull requests
   - Cancel in-progress runs when new commits are pushed to the same PR
   - Set appropriate permissions (contents: read; packages: write if publishing)
   - Use `actions/checkout@v4`, `actions/setup-node@v4`, or equivalent
   - Cache dependency directories (node_modules, the pip cache, ~/go/pkg/mod)
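For a Node.js project, the jobs and settings above might come together as the following sketch (the workflow name, script names, and artifact paths are illustrative assumptions, not part of the command's fixed output):

```yaml
name: CI

on:
  push:
    branches: [main]
  pull_request:

# Cancel superseded runs for the same branch or PR
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  lint:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm
      - run: npm ci
      - run: npm run lint
      - run: npx prettier --check .

  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    strategy:
      matrix:
        node: [20, 22]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
          cache: npm
      - run: npm ci
      - run: npm test -- --coverage
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-node-${{ matrix.node }}
          path: coverage/
```

The `concurrency` group keyed on `github.ref` cancels stale runs per PR, and both jobs run without secrets, so the pipeline also works on forks.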
4. Add status badge markdown for the README.

## Output

Write the workflow to `.github/workflows/ci.yml`. If a workflow already exists, present a diff of the changes. Include comments in the YAML explaining non-obvious configuration choices.

## Rules

- Pin all action versions to full SHA hashes, not just major versions, for supply-chain security
- Never store secrets directly in the workflow file; reference GitHub Secrets
- Keep the total pipeline under 10 minutes for a fast feedback loop
- Use `concurrency` groups to prevent redundant runs
- Set explicit timeouts on every job to prevent runaway costs
- Make the pipeline work for forks (no secrets required for lint and test)
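The SHA-pinning rule means each `uses:` line carries the full 40-character commit hash, with the human-readable tag kept as a comment; the hash below is a placeholder, not a real release commit:

```yaml
steps:
  # Pin to the action's full commit SHA; the comment records the intended tag
  - uses: actions/checkout@<full-40-char-sha>  # v4
```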
plugins/deploy-pilot/commands/dockerfile.md (new file, 64 lines)
@@ -0,0 +1,64 @@

# /deploy-pilot:dockerfile

Generate an optimized multi-stage Dockerfile for the current project.

## Process

1. Detect the project type and runtime:
   - Check for `package.json` (Node.js), `requirements.txt`/`pyproject.toml` (Python), `go.mod` (Go), `Cargo.toml` (Rust), `pom.xml`/`build.gradle` (Java)
   - Identify the framework (Next.js, FastAPI, Gin, Actix, Spring Boot, etc.)
   - Read the existing entrypoint script or start command
2. Select the appropriate base image:
   - Use official slim/alpine variants for smaller image sizes
   - Pin to a specific version tag; never use `latest`
   - For Node.js: `node:22-alpine` for runtime, `node:22` for build
   - For Python: `python:3.12-slim` for runtime
   - For Go: `golang:1.23-alpine` for build, `gcr.io/distroless/static-debian12` for runtime
   - For Rust: `rust:1.82-slim` for build, `debian:bookworm-slim` for runtime

3. Structure the Dockerfile with these stages:

### Stage 1: Dependencies
- Copy only the dependency manifests (package.json, go.sum, Cargo.lock, etc.)
- Install dependencies in a cached layer
- This stage changes only when dependencies change, maximizing cache hits

### Stage 2: Build
- Copy the source code
- Run the build command (compile, transpile, bundle)
- Run tests if a `--with-tests` flag is specified
- Prune dev dependencies after the build

### Stage 3: Runtime
- Start from a minimal base image
- Copy only the built artifacts and production dependencies
- Set a non-root user for security
- Configure a health check with an appropriate endpoint or command
- Set resource limits via labels where applicable
- Define the entrypoint and default command

4. Apply these optimizations:
   - Order COPY instructions from least to most frequently changed
   - Use `.dockerignore` patterns; suggest creating the file if it is missing
   - Set `NODE_ENV=production` or equivalent environment variables
   - Use `COPY --from=build` to minimize final image layers
   - Add `LABEL` instructions for maintainer, version, and description
   - Set `EXPOSE` for the application port
   - Use `tini` or `dumb-init` as PID 1 for proper signal handling in Node.js
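Put together, the three stages and the optimizations above might yield a Node.js Dockerfile along these lines (the port, `dist/` output directory, and `server.js` entrypoint are assumptions about the target project):

```dockerfile
# Stage 1: Dependencies - re-runs only when the manifests change
FROM node:22 AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: Build - compile, then prune dev dependencies
FROM node:22 AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NODE_ENV=production
RUN npm run build && npm prune --omit=dev

# Stage 3: Runtime - minimal image, non-root user, tini as PID 1
FROM node:22-alpine AS runtime
RUN apk add --no-cache tini
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
USER node
EXPOSE 3000
HEALTHCHECK CMD wget -qO- http://localhost:3000/healthz || exit 1
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["node", "dist/server.js"]
```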
5. Generate a companion `.dockerignore` if one does not exist, including:
   - `.git`, `node_modules`, `__pycache__`, `.env`, `*.log`
   - Build output directories, test fixtures, documentation
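A starting `.dockerignore` covering the patterns above (the `dist`, `coverage`, and `docs` entries stand in for whatever build-output, test, and documentation directories the project actually uses):

```
.git
node_modules
__pycache__
.env
*.log
dist
coverage
docs
```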
## Output

Write the Dockerfile to the project root. If a Dockerfile already exists, present a diff and ask before overwriting. Include inline annotations explaining each optimization decision.

## Rules

- Never hardcode secrets or credentials in the Dockerfile
- Always run as non-root in the final stage
- Keep the final image under 200 MB when possible
- Support both amd64 and arm64 architectures (avoid platform-specific binaries)
- Test the Dockerfile with `docker build --target runtime .` if Docker is available
plugins/deploy-pilot/commands/k8s-manifest.md (new file, 68 lines)
@@ -0,0 +1,68 @@

# /deploy-pilot:k8s-manifest

Generate production-ready Kubernetes manifests for the current application.

## Process

1. Gather application details:
   - Read the Dockerfile or build config to determine the container image and port
   - Check for existing k8s manifests in `k8s/`, `deploy/`, `manifests/`, or `kubernetes/` directories
   - Identify environment variables from `.env.example` or the application config
   - Determine resource requirements based on the application type

2. Generate a Deployment manifest:
   - Set `replicas: 2` as the default for high availability
   - Configure a rolling update strategy with `maxSurge: 1` and `maxUnavailable: 0`
   - Set resource requests and limits:
     - Web apps: 128Mi memory request, 256Mi limit; 100m CPU request, 500m limit
     - APIs: 256Mi memory request, 512Mi limit; 200m CPU request, 1000m limit
     - Workers: 512Mi memory request, 1Gi limit; 250m CPU request, 1000m limit
   - Add liveness and readiness probes:
     - HTTP probe for web services (GET /healthz)
     - TCP probe for non-HTTP services
     - Set initialDelaySeconds, periodSeconds, and failureThreshold appropriately
   - Mount environment variables from a ConfigMap and secrets from a Secret
   - Set the security context: runAsNonRoot, readOnlyRootFilesystem, drop all capabilities
   - Add a PodDisruptionBudget for graceful scaling
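As a sketch, a web-app Deployment generated by step 2 could look like the following (the `my-app` name, registry, and port 3000 are placeholders for detected values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app.kubernetes.io/name: my-app
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.0.0  # specific tag, never :latest
          ports:
            - containerPort: 3000
          resources:
            requests: {memory: 128Mi, cpu: 100m}
            limits: {memory: 256Mi, cpu: 500m}
          livenessProbe:
            httpGet: {path: /healthz, port: 3000}
            initialDelaySeconds: 10
            periodSeconds: 15
            failureThreshold: 3
          readinessProbe:
            httpGet: {path: /healthz, port: 3000}
            periodSeconds: 5
          envFrom:
            - configMapRef: {name: my-app-config}
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            capabilities: {drop: ["ALL"]}
```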
3. Generate a Service manifest:
   - ClusterIP service for internal communication
   - Match the Deployment's selector labels
   - Map the container port to the service port

4. Generate an Ingress manifest (if the app is externally accessible):
   - Use the `networking.k8s.io/v1` API version
   - Configure TLS with a placeholder for the cert-manager cluster issuer annotation
   - Set the appropriate path type (Prefix for SPAs, Exact for APIs)
   - Include a placeholder hostname that the user should customize

5. Generate supporting resources:
   - ConfigMap for non-sensitive environment variables
   - Secret manifest (with placeholder values) for sensitive configuration
   - HorizontalPodAutoscaler targeting 70% CPU utilization, scaling from 2 to 10 replicas
   - NetworkPolicy restricting ingress to only the necessary ports
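The HorizontalPodAutoscaler described in step 5 might be sketched as follows (the `my-app` Deployment name is a placeholder):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```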
6. Add namespace and label conventions:
   - Use the `app.kubernetes.io/name`, `app.kubernetes.io/version`, and `app.kubernetes.io/component` labels
   - Include a namespace definition or note that one should be created

## Output

Write manifests to a `k8s/` directory with separate files:
- `k8s/deployment.yaml`
- `k8s/service.yaml`
- `k8s/ingress.yaml`
- `k8s/configmap.yaml`
- `k8s/hpa.yaml`
- `k8s/networkpolicy.yaml`

Alternatively, offer a single `k8s/app.yaml` with all resources separated by `---`.

## Rules

- Always use specific image tags, never `latest`
- Set resource limits on every container to prevent noisy-neighbor issues
- Include comments explaining each non-obvious configuration choice
- Validate manifests with `kubectl apply --dry-run=client -f` if kubectl is available
- Use a `kustomize`-friendly structure if the project already uses kustomize
- Never include actual secret values; use placeholders with instructions to populate from a secret manager