Use exact upstream provider names per review feedback

Replace "OpenAI" with "Codex" and list all 7 providers by their
exact names from the onWatch README. The <50 MB RAM claim is
explicitly documented upstream (~34 MB idle, ~43 MB under load).
Prakersh Maheshwari
2026-03-15 20:57:45 +05:30
parent e8a6da7c3c
commit 22dcfcc5e1


@@ -135,7 +135,7 @@ One hundred twenty-one production-ready plugins that extend Claude Code with dom
 | [mutation-tester](plugins/mutation-tester/) | Mutation testing to measure test suite quality |
 | [n8n-workflow](plugins/n8n-workflow/) | Generate n8n automation workflows from natural language descriptions |
 | [onboarding-guide](plugins/onboarding-guide/) | New developer onboarding documentation generator |
-| [onWatch](https://github.com/onllm-dev/onwatch) | Open-source Go CLI that tracks AI API quota usage across 7 providers (Anthropic, OpenAI, GitHub Copilot, and more) with a background daemon, <50MB RAM, zero telemetry, and a Material Design 3 web dashboard |
+| [onWatch](https://github.com/onllm-dev/onwatch) | Open-source Go CLI that tracks AI API quota usage across 7 providers (Synthetic, Z.ai, Anthropic, Codex, GitHub Copilot, MiniMax, Antigravity) with a background daemon (<50 MB RAM), zero telemetry, and a Material Design 3 web dashboard |
 | [openapi-expert](plugins/openapi-expert/) | OpenAPI spec generation, validation, and client code scaffolding |
 | [optimize](plugins/optimize/) | Code optimization for performance and bundle size reduction |
 | [perf-profiler](plugins/perf-profiler/) | Performance analysis, profiling, and optimization recommendations |