From 22dcfcc5e1cf501d53f39833ffb8322db99e03eb Mon Sep 17 00:00:00 2001
From: Prakersh Maheshwari
Date: Sun, 15 Mar 2026 20:57:45 +0530
Subject: [PATCH] Use exact upstream provider names per review feedback

Replace "OpenAI" with "Codex" and list all 7 providers by their exact
names from the onWatch README. The <50 MB RAM claim is explicitly
documented upstream (~34 MB idle, ~43 MB under load).
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c913779..72fb066 100644
--- a/README.md
+++ b/README.md
@@ -135,7 +135,7 @@ One hundred twenty-one production-ready plugins that extend Claude Code with dom
 | [mutation-tester](plugins/mutation-tester/) | Mutation testing to measure test suite quality |
 | [n8n-workflow](plugins/n8n-workflow/) | Generate n8n automation workflows from natural language descriptions |
 | [onboarding-guide](plugins/onboarding-guide/) | New developer onboarding documentation generator |
-| [onWatch](https://github.com/onllm-dev/onwatch) | Open-source Go CLI that tracks AI API quota usage across 7 providers (Anthropic, OpenAI, GitHub Copilot, and more) with a background daemon, <50MB RAM, zero telemetry, and a Material Design 3 web dashboard |
+| [onWatch](https://github.com/onllm-dev/onwatch) | Open-source Go CLI that tracks AI API quota usage across 7 providers (Synthetic, Z.ai, Anthropic, Codex, GitHub Copilot, MiniMax, Antigravity) with a background daemon (<50 MB RAM), zero telemetry, and a Material Design 3 web dashboard |
 | [openapi-expert](plugins/openapi-expert/) | OpenAPI spec generation, validation, and client code scaffolding |
 | [optimize](plugins/optimize/) | Code optimization for performance and bundle size reduction |
 | [perf-profiler](plugins/perf-profiler/) | Performance analysis, profiling, and optimization recommendations |