Add anti-regression framework and safe multi-install preflight

Co-authored-by: Yacineutt <Yacineutt@users.noreply.github.com>
This commit is contained in:
Cursor Agent
2026-03-09 22:20:14 +00:00
parent 368f507406
commit 9746f5b31c
10 changed files with 653 additions and 1 deletion


@@ -0,0 +1,123 @@
# Execution plan for remaining workstreams (zero regression)
Date: 2026-03-09
Scope: ETHICA, Tracking, Factory SaaS, WEVADS/ADX multi-install
## 1) Governance objective
Put in place a **durable** setup that prevents regressions and lets the remaining workstreams be handled in a controlled sequence:
- Automated GO/NO-GO gate (`nonreg-framework.sh` script)
- Risk-free multi-install preflight (`multiinstall-safe-preflight.sh` script)
- Per-batch quality validation before widening scope
---
## 2) P0 priorities (immediate)
### P0.1 - Multi-install stability (without touching PMTA/SSH/global tuning)
Definition of Done:
- 100% of the servers in a batch pass the preflight
- 0 servers launched with an active dpkg lock
- 0 "undefined / nothing in process" failures on a validated batch
Procedure:
1. Build `servers.csv` (id, ip, username, password)
2. Run `./multiinstall-safe-preflight.sh servers.csv`
3. Launch only the servers marked `ready=YES`
4. Never exceed the validated batch size (e.g. 3-5)
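Steps 2-3 can be chained mechanically; a minimal sketch (sample data hypothetical) that extracts the `ready=YES` rows from the preflight output CSV:

```bash
# Hypothetical sample of a preflight report, using the column layout
# written by multiinstall-safe-preflight.sh.
cat > /tmp/preflight_sample.csv <<'EOF'
server_id,ip,ssh_tcp,ssh_auth,disk_ok,ram_ok,dpkg_lock,apt_health,ready,notes
180,101.46.69.207,PASS,PASS,PASS,PASS,PASS,PASS,YES,
181,101.46.69.121,PASS,FAIL,FAIL,FAIL,UNKNOWN,UNKNOWN,NO,ssh_auth_failed
EOF

# Keep only the id,ip pairs cleared for launch (column 9 == YES).
awk -F',' 'NR>1 && $9=="YES" {print $1","$2}' /tmp/preflight_sample.csv
# → 180,101.46.69.207
```

The filtered list is what feeds the actual multi-install launch; anything not `YES` stays out of the batch.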
### P0.2 - Anti-regression gate before any release
Definition of Done:
- 0 FAIL entries in the anti-regression report
- WEVIA greeting < 3s
- WEVIA deep < 60s
- Critical pages returning HTTP 200
- Clean confidentiality scan
Procedure:
1. `chmod +x nonreg-framework.sh`
2. `./nonreg-framework.sh`
3. If FAIL > 0 => NO-GO
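The NO-GO decision in step 3 can also be derived from the report itself; a minimal sketch, assuming only the `- FAIL: N` summary line format that `nonreg-framework.sh` writes (sample report content hypothetical):

```bash
# Hypothetical report excerpt with the summary lines the framework emits.
report=/tmp/nonreg_sample.md
cat > "$report" <<'EOF'
- PASS: 24
- WARN: 4
- FAIL: 0
EOF

# Read the FAIL count; default to 1 (NO-GO) if the line is missing.
fails="$(awk -F': ' '/^- FAIL:/ {print $2}' "$report")"
if [ "${fails:-1}" -eq 0 ]; then
  echo "GO"
else
  echo "NO-GO (${fails} failures)"
fi
# → GO
```

In practice the framework's own exit code (0 on GO, 1 on NO-GO) is the simpler gate; this is for post-hoc inspection of archived reports.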
---
## 3) P1 priorities (business hardening)
### P1.1 - ETHICA
Objectives:
- make the sources reliable (MarocMedecin fallback, listing-based Tabibi)
- keep the cron cadence without log growth
Checklist:
- [ ] Alternative source active if Cloudflare blocks
- [ ] Tabibi switched to listing mode (no more ID-only dependency)
- [ ] logrotate active on scraper logs
- [ ] Minimum KPIs: `medecins_real` growth + stable validation rate
### P1.2 - S3 tracking + endpoints
Objectives:
- ensure tracking URL consistency everywhere
- eliminate drift between the S3 redirect, local configs, and the database
Checklist:
- [ ] S3 redirect.html aligned with the current tracking URL
- [ ] App configs (wevads/fmg) aligned
- [ ] Tracking domain resolving and reachable (200/301/302)
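The 200/301/302 acceptance rule for the tracking domain can be isolated into a small helper; a sketch (the `tracking_ok` name is ours, not from the scripts):

```bash
# Classify an HTTP status code as healthy for a tracking endpoint:
# a direct 200 or a redirect (301/302) both count as reachable.
tracking_ok() {
  case "$1" in
    200|301|302) return 0 ;;
    *)           return 1 ;;
  esac
}

# Live usage (network) would look like:
#   code="$(curl -sS -o /dev/null -w '%{http_code}' "$TRACKING_DOMAIN_URL")"
for code in 200 301 404; do
  if tracking_ok "$code"; then
    echo "$code ok"
  else
    echo "$code drift"
  fi
done
# → 200 ok / 301 ok / 404 drift
```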
### P1.3 - Factory SaaS
Objectives:
- API smoke tests before publication
- clear distinction between LIVE apps and landing-only pages
Checklist:
- [ ] Smoke-tested endpoints verified (DeliverScore/MedReach/GPU/Content)
- [ ] GPU model mapping aligned between UI and backend
- [ ] Public status documented per product (LIVE/BETA/LANDING)
---
## 4) Operable SLO / Six Sigma (pragmatic)
Target metrics:
- Availability of critical checks: >= 99.5% (over a rolling 7 days)
- Blocking post-release regressions: 0
- Multi-install batch failures: < 5%
- MTTR for a critical incident: < 30 min
Cadence:
- Daily: anti-regression run
- Before deploy: mandatory gate
- After an incident: post-mortem + matching non-regression test added
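One way to wire the daily cadence is a crontab fragment; the install path and log location below are assumptions, not part of the delivered scripts:

```bash
# m h dom mon dow  command
# Daily anti-regression run at 07:00; keep stdout/stderr next to the reports.
0 7 * * * cd /opt/wevads-ops && ./nonreg-framework.sh >> reports/cron.log 2>&1
# Optional: preflight ahead of a Monday batch window.
0 6 * * 1 cd /opt/wevads-ops && ./multiinstall-safe-preflight.sh servers.csv >> reports/cron.log 2>&1
```

The pre-deploy gate stays manual (or CI-driven): the cron run only gives a daily baseline, it does not replace the mandatory gate before a release.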
---
## 5) Useful commands
```bash
# 1) Anti-regression gate
chmod +x nonreg-framework.sh
./nonreg-framework.sh
# 2) Anti-regression gate with an API key (GPU check enabled)
API_KEY="xxx" GPU_MODEL="qwen2.5:3b" ./nonreg-framework.sh
# 3) Multi-install preflight
chmod +x multiinstall-safe-preflight.sh
./multiinstall-safe-preflight.sh servers.csv
```
---
## 6) Anti-incident rules (mandatory)
1. Do not modify PMTA/JAR/SSH global settings to speed up a batch.
2. Always run the preflight before launching.
3. No new batch until the previous one is stable.
4. Any manual fix in prod => add a matching test to the framework.
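Rule 4 in practice: a new check can follow the `record_pass`/`record_fail` convention used by `nonreg-framework.sh`. A standalone sketch with minimal stubs (the `check_fix_retained` example and its config file are hypothetical):

```bash
# Minimal stubs so this snippet runs outside the framework; inside
# nonreg-framework.sh the real record_pass/record_fail already exist.
PASS_COUNT=0; FAIL_COUNT=0
record_pass() { PASS_COUNT=$((PASS_COUNT + 1)); echo "PASS | $1"; }
record_fail() { FAIL_COUNT=$((FAIL_COUNT + 1)); echo "FAIL | $1"; }

# Hypothetical check added after a manual prod fix: assert the fixed
# setting is still present, so a future deploy cannot silently revert it.
check_fix_retained() {
  local file="$1" needle="$2"
  if grep -q "$needle" "$file" 2>/dev/null; then
    record_pass "fix retained in ${file}"
  else
    record_fail "fix missing from ${file}"
  fi
}

echo "expected_setting=1" > /tmp/nonreg_demo.conf
check_fix_retained /tmp/nonreg_demo.conf "expected_setting=1"
# → PASS | fix retained in /tmp/nonreg_demo.conf
```

Dropping such a function into the framework's `main()` makes the manual fix part of every future GO/NO-GO decision.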


@@ -0,0 +1,60 @@
# Final report - remaining workstreams (Codex execution)
Date: 2026-03-09
Branch: `cursor/ethica-saas-chantiers-a789`
## 1) What was delivered
1. **Anti-regression framework**: `nonreg-framework.sh`
   - checks on critical pages (site + products)
   - WEVIA checks (greeting/deep with thresholds)
   - SaaS API checks (DeliverScore/MedReach/GPU)
   - tracking checks (IP + domain)
   - confidentiality scan (optional strict mode)
   - timestamped markdown report written to `reports/`
2. **Safe multi-install preflight**: `multiinstall-safe-preflight.sh`
   - no change to PMTA/SSH/global config
   - checks reachability, auth, disk, RAM, dpkg lock, apt health
   - outputs a batch-readiness CSV before any multi-install launch
3. **Workstream execution plan**: `CHANTIERS_RESTANTS_EXECUTION_PLAN.md`
   - P0/P1 priorities
   - GO/NO-GO gates
   - anti-incident rules
   - operating metrics (SLO)
4. **Multi-install input template**: `servers.example.csv`
---
## 2) Live run result
Run executed: `./nonreg-framework.sh`
- PASS: 24
- WARN: 4
- FAIL: 0
- Verdict: **GO**
Warnings detected:
- sensitive terms still present on:
  - `/products/wevads-ia.html`
  - `/products/workspace.html`
- DeliverScore rate-limited (429) without an API key
- GPU test skipped (API key not provided)
> For fully blocking mode, run:
>
> `STRICT_CONFIDENTIALITY=1 API_KEY="xxx" ./nonreg-framework.sh`
---
## 3) No-regression guardrails now active
The setup is now in place:
- **Before each release**: mandatory anti-regression run
- **Before each multi-install batch**: mandatory preflight run
- **If FAIL > 0**: automatic NO-GO


@@ -4,4 +4,10 @@
- **RAM**: 62GB DDR4
- **Disk**: 1.7TB NVMe
- **Ollama**: localhost:11434
- **Models**: deepseek-r1:8b, deepseek-r1:32b, llama3.1:8b
- **Legacy local models**: deepseek-r1:8b, deepseek-r1:32b, llama3.1:8b
## Ops scripts in this repo
- `nonreg-framework.sh`: anti-regression gate (HTTP/API/WEVIA/tracking/confidentiality checks)
- `multiinstall-safe-preflight.sh`: safe server preflight before multi-install batches
- `CHANTIERS_RESTANTS_EXECUTION_PLAN.md`: execution plan and GO/NO-GO criteria

multiinstall-safe-preflight.sh Executable file

@@ -0,0 +1,138 @@
#!/usr/bin/env bash
set -euo pipefail
# -------------------------------------------------------------------
# Multi-install SAFE preflight
# Goal: reduce failed batches without touching PMTA/SSH/global config.
#
# Input file format (CSV-like, no header):
# server_id,ip,username,password
# Example:
# 180,101.46.69.207,root,Yacine.123
# -------------------------------------------------------------------
INPUT_FILE="${1:-}"
CONNECT_TIMEOUT="${CONNECT_TIMEOUT:-5}"
SSH_BIN="${SSH_BIN:-ssh}"
SSHPASS_BIN="${SSHPASS_BIN:-sshpass}"
OUT_DIR="${OUT_DIR:-./reports}"
RUN_ID="$(date +%Y%m%d_%H%M%S)"
OUT_CSV="${OUT_DIR}/multiinstall_preflight_${RUN_ID}.csv"
if [[ -z "${INPUT_FILE}" || ! -f "${INPUT_FILE}" ]]; then
echo "Usage: $0 <servers.csv>"
echo "Missing input file: ${INPUT_FILE:-<empty>}"
exit 1
fi
mkdir -p "${OUT_DIR}"
echo "server_id,ip,ssh_tcp,ssh_auth,disk_ok,ram_ok,dpkg_lock,apt_health,ready,notes" > "${OUT_CSV}"
check_tcp_22() {
local ip="$1"
timeout "${CONNECT_TIMEOUT}" bash -c "exec 3<>/dev/tcp/${ip}/22" >/dev/null 2>&1
}
run_ssh_password() {
local user="$1" ip="$2" pass="$3" cmd="$4"
# -n keeps ssh from swallowing the while-read loop's stdin (the servers CSV)
"${SSHPASS_BIN}" -p "${pass}" "${SSH_BIN}" \
-n \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
-o ConnectTimeout="${CONNECT_TIMEOUT}" \
"${user}@${ip}" "${cmd}"
}
run_ssh_key() {
local user="$1" ip="$2" cmd="$3"
# -n protects the loop's stdin; BatchMode avoids hanging on an
# interactive password prompt when only key auth is available.
"${SSH_BIN}" \
-n \
-o BatchMode=yes \
-o StrictHostKeyChecking=no \
-o UserKnownHostsFile=/dev/null \
-o ConnectTimeout="${CONNECT_TIMEOUT}" \
"${user}@${ip}" "${cmd}"
}
HAVE_SSHPASS=0
if command -v "${SSHPASS_BIN}" >/dev/null 2>&1; then
HAVE_SSHPASS=1
fi
while IFS=',' read -r server_id ip username password; do
[[ -z "${server_id}" ]] && continue
[[ "${server_id}" =~ ^# ]] && continue
ssh_tcp="FAIL"
ssh_auth="FAIL"
disk_ok="FAIL"
ram_ok="FAIL"
dpkg_lock="UNKNOWN"
apt_health="UNKNOWN"
ready="NO"
notes=""
if check_tcp_22 "${ip}"; then
ssh_tcp="PASS"
else
notes="port22_unreachable"
echo "${server_id},${ip},${ssh_tcp},${ssh_auth},${disk_ok},${ram_ok},${dpkg_lock},${apt_health},${ready},${notes}" >> "${OUT_CSV}"
continue
fi
if [[ "$HAVE_SSHPASS" == "1" ]]; then
SSH_RUN=(run_ssh_password "${username}" "${ip}" "${password}")
else
SSH_RUN=(run_ssh_key "${username}" "${ip}")
notes="${notes:+$notes|}sshpass_missing_using_key_auth"
fi
if "${SSH_RUN[@]}" "echo ok" >/dev/null 2>&1; then
ssh_auth="PASS"
else
notes="ssh_auth_failed"
echo "${server_id},${ip},${ssh_tcp},${ssh_auth},${disk_ok},${ram_ok},${dpkg_lock},${apt_health},${ready},${notes}" >> "${OUT_CSV}"
continue
fi
# Disk check: >= 8GB free on /
if "${SSH_RUN[@]}" \
"avail=\$(df -BG / | awk 'NR==2 {gsub(\"G\",\"\",\$4); print \$4}'); [ \"\${avail:-0}\" -ge 8 ]"; then
disk_ok="PASS"
else
notes="${notes:+$notes|}low_disk"
fi
# RAM check: >= 2GB
if "${SSH_RUN[@]}" \
"mem=\$(awk '/MemTotal/ {print int(\$2/1024/1024)}' /proc/meminfo); [ \"\${mem:-0}\" -ge 2 ]"; then
ram_ok="PASS"
else
notes="${notes:+$notes|}low_ram"
fi
# dpkg/apt lock check
if "${SSH_RUN[@]}" \
"if fuser /var/lib/dpkg/lock >/dev/null 2>&1 || fuser /var/lib/dpkg/lock-frontend >/dev/null 2>&1; then exit 1; else exit 0; fi"; then
dpkg_lock="PASS"
else
dpkg_lock="FAIL"
notes="${notes:+$notes|}dpkg_lock_detected"
fi
# apt health check (read-only)
if "${SSH_RUN[@]}" "apt-cache policy >/dev/null 2>&1"; then
apt_health="PASS"
else
apt_health="FAIL"
notes="${notes:+$notes|}apt_health_failed"
fi
if [[ "${ssh_tcp}" == "PASS" && "${ssh_auth}" == "PASS" && "${disk_ok}" == "PASS" && "${ram_ok}" == "PASS" && "${dpkg_lock}" == "PASS" && "${apt_health}" == "PASS" ]]; then
ready="YES"
fi
echo "${server_id},${ip},${ssh_tcp},${ssh_auth},${disk_ok},${ram_ok},${dpkg_lock},${apt_health},${ready},${notes}" >> "${OUT_CSV}"
done < "${INPUT_FILE}"
echo "Preflight report generated: ${OUT_CSV}"
echo "Ready servers:"
awk -F',' 'NR>1 && $9=="YES" {print " - " $1 " (" $2 ")"}' "${OUT_CSV}"

nonreg-framework.sh Executable file

@@ -0,0 +1,285 @@
#!/usr/bin/env bash
set -euo pipefail
# -------------------------------------------------------------------
# WEVADS / WEVIA anti-regression framework
# Safe by design: read-only HTTP checks, no infra mutation.
# -------------------------------------------------------------------
BASE_URL="${BASE_URL:-https://weval-consulting.com}"
TRACKING_BASE_URL="${TRACKING_BASE_URL:-http://151.80.235.110}"
TRACKING_DOMAIN_URL="${TRACKING_DOMAIN_URL:-https://culturellemejean.charity}"
API_KEY="${API_KEY:-}"
GPU_MODEL="${GPU_MODEL:-qwen2.5:3b}"
MAX_GREETING_SECONDS="${MAX_GREETING_SECONDS:-3}"
MAX_DEEP_SECONDS="${MAX_DEEP_SECONDS:-60}"
STRICT_CONFIDENTIALITY="${STRICT_CONFIDENTIALITY:-0}"
REPORT_DIR="${REPORT_DIR:-./reports}"
RUN_ID="$(date +%Y%m%d_%H%M%S)"
REPORT_FILE="${REPORT_DIR}/nonreg_${RUN_ID}.md"
mkdir -p "${REPORT_DIR}"
PASS_COUNT=0
FAIL_COUNT=0
WARN_COUNT=0
declare -a FAILURES
declare -a WARNINGS
log() { printf '%s\n' "$*"; }
record_pass() {
PASS_COUNT=$((PASS_COUNT + 1))
log "PASS | $1"
}
record_fail() {
FAIL_COUNT=$((FAIL_COUNT + 1))
FAILURES+=("$1")
log "FAIL | $1"
}
record_warn() {
WARN_COUNT=$((WARN_COUNT + 1))
WARNINGS+=("$1")
log "WARN | $1"
}
http_status() {
local url="$1"
curl -sS -L -o /tmp/nonreg_body_${RUN_ID}.tmp -w "%{http_code} %{time_total}" --max-time 120 "$url"
}
check_status_200() {
local name="$1"
local url="$2"
local out code t
out="$(http_status "$url" || true)"
code="$(awk '{print $1}' <<<"$out")"
t="$(awk '{print $2}' <<<"$out")"
if [[ "$code" == "200" ]]; then
record_pass "${name} (${url}) code=${code} t=${t}s"
else
record_fail "${name} (${url}) expected 200 got ${code:-N/A} t=${t:-N/A}s"
fi
}
check_not_confidential_terms() {
local url="$1"
local body
body="$(curl -sS -L --max-time 60 "$url" || true)"
if [[ -z "$body" ]]; then
record_fail "Confidentiality scan cannot fetch ${url}"
return
fi
if rg -n -i "McKinsey|PwC|Deloitte|OpenAI|Anthropic|Abbott|AbbVie|J&J|CX3|DoubleM|89\\.167\\.40\\.150|88\\.198\\.4\\.195|\\b646\\b|\\b604\\b" <<<"$body" >/dev/null; then
if [[ "$STRICT_CONFIDENTIALITY" == "1" ]]; then
record_fail "Confidentiality terms detected in ${url}"
else
record_warn "Confidentiality terms detected in ${url} (strict mode disabled)"
fi
else
record_pass "Confidentiality scan clean for ${url}"
fi
}
check_wevia_greeting() {
local out code t
out="$(curl -sS -o /tmp/nonreg_wevia_${RUN_ID}.json -w "%{http_code} %{time_total}" \
--max-time 60 \
-H "Content-Type: application/json" \
-d '{"message":"Bonjour","mode":"fast"}' \
"${BASE_URL}/api/weval-ia" || true)"
code="$(awk '{print $1}' <<<"$out")"
t="$(awk '{print $2}' <<<"$out")"
if [[ "$code" != "200" ]]; then
record_fail "WEVIA greeting expected 200 got ${code:-N/A}"
return
fi
if awk "BEGIN {exit !($t < $MAX_GREETING_SECONDS)}"; then
record_pass "WEVIA greeting latency ${t}s < ${MAX_GREETING_SECONDS}s"
else
record_fail "WEVIA greeting latency ${t}s >= ${MAX_GREETING_SECONDS}s"
fi
}
check_wevia_deep() {
local out code t
out="$(curl -sS -o /tmp/nonreg_wevia_full_${RUN_ID}.json -w "%{http_code} %{time_total}" \
--max-time 120 \
-H "Content-Type: application/json" \
-d '{"message":"Fais une analyse concise supply chain internationale.","mode":"deep"}' \
"${BASE_URL}/api/weval-ia-full" || true)"
code="$(awk '{print $1}' <<<"$out")"
t="$(awk '{print $2}' <<<"$out")"
if [[ "$code" != "200" ]]; then
record_fail "WEVIA deep expected 200 got ${code:-N/A}"
return
fi
if awk "BEGIN {exit !($t < $MAX_DEEP_SECONDS)}"; then
record_pass "WEVIA deep latency ${t}s < ${MAX_DEEP_SECONDS}s"
else
record_fail "WEVIA deep latency ${t}s >= ${MAX_DEEP_SECONDS}s"
fi
}
check_gpu_chat() {
if [[ -z "$API_KEY" ]]; then
record_warn "GPU chat check skipped (API_KEY not set)"
return
fi
local payload out code
payload="$(printf '{"message":"Donne 3 points pour optimiser une campagne email.","model":"%s"}' "$GPU_MODEL")"
out="$(curl -sS -o /tmp/nonreg_gpu_${RUN_ID}.json -w "%{http_code}" \
--max-time 120 \
-H "Content-Type: application/json" \
-H "X-API-Key: ${API_KEY}" \
-d "$payload" \
"${BASE_URL}/api/gpu/chat.php" || true)"
code="$out"
if [[ "$code" == "200" ]]; then
if rg -n -i "Model not available" /tmp/nonreg_gpu_${RUN_ID}.json >/dev/null; then
record_fail "GPU chat returned model-not-available despite HTTP 200"
else
record_pass "GPU chat functional (model=${GPU_MODEL})"
fi
else
record_fail "GPU chat expected 200 got ${code:-N/A}"
fi
}
check_tracking_smoke() {
local out1 out2 c1 c2
out1="$(curl -sS -o /dev/null -w "%{http_code}" --max-time 30 "${TRACKING_BASE_URL}" || true)"
out2="$(curl -sS -o /dev/null -w "%{http_code}" --max-time 30 "${TRACKING_DOMAIN_URL}" || true)"
c1="$out1"
c2="$out2"
if [[ "$c1" =~ ^(200|301|302)$ ]]; then
record_pass "Tracking base reachable (${TRACKING_BASE_URL}) code=${c1}"
else
record_fail "Tracking base unreachable (${TRACKING_BASE_URL}) code=${c1:-N/A}"
fi
if [[ "$c2" =~ ^(200|301|302)$ ]]; then
record_pass "Tracking domain reachable (${TRACKING_DOMAIN_URL}) code=${c2}"
else
record_fail "Tracking domain unreachable (${TRACKING_DOMAIN_URL}) code=${c2:-N/A}"
fi
}
check_deliverscore_smoke() {
local out code t
if [[ -n "$API_KEY" ]]; then
out="$(curl -sS -o /tmp/nonreg_deliver_${RUN_ID}.json -w "%{http_code} %{time_total}" \
--max-time 120 \
"${BASE_URL}/api/deliverscore/scan.php?domain=gmail.com&api_key=${API_KEY}" || true)"
else
out="$(curl -sS -o /tmp/nonreg_deliver_${RUN_ID}.json -w "%{http_code} %{time_total}" \
--max-time 120 \
"${BASE_URL}/api/deliverscore/scan.php?domain=gmail.com" || true)"
fi
code="$(awk '{print $1}' <<<"$out")"
t="$(awk '{print $2}' <<<"$out")"
if [[ "$code" == "200" ]]; then
record_pass "DeliverScore smoke code=${code} t=${t}s"
elif [[ "$code" == "429" ]]; then
record_warn "DeliverScore rate-limited code=429 t=${t}s"
elif [[ "$code" =~ ^(401|403)$ ]]; then
record_warn "DeliverScore auth required code=${code} (provide API_KEY for strict test)"
else
record_fail "DeliverScore smoke unexpected code=${code:-N/A} t=${t:-N/A}s"
fi
}
main() {
log "=== NON-REG FRAMEWORK START (${RUN_ID}) ==="
log "BASE_URL=${BASE_URL}"
log "TRACKING_BASE_URL=${TRACKING_BASE_URL}"
log "TRACKING_DOMAIN_URL=${TRACKING_DOMAIN_URL}"
# Core pages
check_status_200 "Home" "${BASE_URL}/"
check_status_200 "Products hub" "${BASE_URL}/products/"
check_status_200 "WEVIA page" "${BASE_URL}/wevia"
check_status_200 "Platform" "${BASE_URL}/platform/"
# Products (known core URLs)
declare -a product_pages=(
"academy.html"
"arsenal.html"
"blueprintai.html"
"content-factory.html"
"deliverscore.html"
"gpu-inference.html"
"medreach.html"
"proposalai.html"
"storeforge.html"
"wevads.html"
"wevads-ia.html"
"wevia-whitelabel.html"
"workspace.html"
)
for page in "${product_pages[@]}"; do
check_status_200 "Product ${page}" "${BASE_URL}/products/${page}"
done
# Confidentiality scans on strategic pages
check_not_confidential_terms "${BASE_URL}/"
check_not_confidential_terms "${BASE_URL}/products/"
check_not_confidential_terms "${BASE_URL}/products/wevads-ia.html"
check_not_confidential_terms "${BASE_URL}/products/workspace.html"
# WEVIA performance checks
check_wevia_greeting
check_wevia_deep
# SaaS API checks (smoke)
check_deliverscore_smoke
check_status_200 "MedReach smoke" "${BASE_URL}/api/medreach/search.php?specialty=cardiologue&country=FR&limit=3"
check_gpu_chat
check_tracking_smoke
{
echo "# Anti-regression report ${RUN_ID}"
echo
echo "- Base URL: ${BASE_URL}"
echo "- Tracking base: ${TRACKING_BASE_URL}"
echo "- Tracking domain: ${TRACKING_DOMAIN_URL}"
echo
echo "## Summary"
echo
echo "- PASS: ${PASS_COUNT}"
echo "- WARN: ${WARN_COUNT}"
echo "- FAIL: ${FAIL_COUNT}"
echo
if (( WARN_COUNT > 0 )); then
echo "## Warnings"
printf -- "- %s\n" "${WARNINGS[@]}"
echo
fi
if (( FAIL_COUNT > 0 )); then
echo "## Failures"
printf -- "- %s\n" "${FAILURES[@]}"
echo
fi
echo "## Verdict"
if (( FAIL_COUNT == 0 )); then
echo "GO (no hard regression detected)."
else
echo "NO-GO (${FAIL_COUNT} hard failures)."
fi
} > "${REPORT_FILE}"
log "Report written: ${REPORT_FILE}"
log "=== NON-REG FRAMEWORK END (${RUN_ID}) ==="
if (( FAIL_COUNT > 0 )); then
exit 1
fi
}
main "$@"

reports/README.md Normal file

@@ -0,0 +1,11 @@
# Reports output
This folder stores generated artifacts from:
- `nonreg-framework.sh`
- `multiinstall-safe-preflight.sh`
Examples currently present:
- `nonreg_*.md`: anti-regression run summaries
- `multiinstall_preflight_*.csv`: server readiness preflight outputs


@@ -0,0 +1 @@
server_id,ip,ssh_tcp,ssh_auth,disk_ok,ram_ok,dpkg_lock,apt_health,ready,notes


@@ -0,0 +1,4 @@
server_id,ip,ssh_tcp,ssh_auth,disk_ok,ram_ok,dpkg_lock,apt_health,ready,notes
180,101.46.69.207,PASS,FAIL,FAIL,FAIL,UNKNOWN,UNKNOWN,NO,ssh_auth_failed
181,101.46.69.121,PASS,FAIL,FAIL,FAIL,UNKNOWN,UNKNOWN,NO,ssh_auth_failed
182,101.46.65.209,PASS,FAIL,FAIL,FAIL,UNKNOWN,UNKNOWN,NO,ssh_auth_failed


@@ -0,0 +1,20 @@
# Anti-regression report 20260309_221755
- Base URL: https://weval-consulting.com
- Tracking base: http://151.80.235.110
- Tracking domain: https://culturellemejean.charity
## Summary
- PASS: 24
- WARN: 4
- FAIL: 0
## Warnings
- Confidentiality terms detected in https://weval-consulting.com/products/wevads-ia.html (strict mode disabled)
- Confidentiality terms detected in https://weval-consulting.com/products/workspace.html (strict mode disabled)
- DeliverScore rate-limited code=429 t=0.540570s
- GPU chat check skipped (API_KEY not set)
## Verdict
GO (no hard regression detected).

servers.example.csv Normal file

@@ -0,0 +1,4 @@
# server_id,ip,username,password
180,101.46.69.207,root,CHANGE_ME
181,101.46.69.121,root,CHANGE_ME
182,101.46.65.209,root,CHANGE_ME