Test: load-test/dual-project.js
Duration: 35 minutes (5m ramp + 25m sustain + 5m ramp-down)
VUs: 30 total — 15 project1 (todos) + 15 project2 (JiiaCat inventory)
Scenarios: 6 (p1_casual/active/power + p2_casual/active/power)
What This Test Was For
All previous tests targeted only project1. The cluster’s purpose is serving two isolated Supabase stacks simultaneously. This test answered three questions:
- Does running both stacks under concurrent load cause cross-stack interference?
- Does the more complex project2 schema (JOIN query + INSERT/UPDATE triggers for alerts) degrade project2 response times relative to project1?
- Does the VPS RAM hold under 17 services all under concurrent load?
k6 Result: Threshold Breach (exit 99)
Ran to full completion. The threshold breach is the same geography artefact as all
previous tests: p(95)<500ms reads from Grafana’s US load zone to Frankfurt (~130ms RTT
baseline). No 5xx errors, no service restarts.
VPS Memory: Both Stacks Under Concurrent Load
| Service | Start | End | Delta | Limit | % at end |
|---|---|---|---|---|---|
| project1_kong | 223 MB | 228 MB | +5 MB | 512 MB | 44.6% |
| project2_kong | 223 MB | 224 MB | +1 MB | 512 MB | 43.8% |
| project1_realtime | 165 MB | 165 MB | 0 MB | 512 MB | 32.3% |
| project2_realtime | 55 MB | 55 MB | 0 MB | 512 MB | 10.8% |
| project1_db | 75 MB | 78 MB | +3 MB | 1024 MB | 7.6% |
| project2_db | 49 MB | 50 MB | +1 MB | 1024 MB | 4.9% |
| project1_rest | 16 MB | 16 MB | 0 MB | 256 MB | 6.5% |
| project2_rest | 20 MB | 20 MB | 0 MB | 256 MB | 7.9% |
| project1_meta | 70 MB | 70 MB | 0 MB | 256 MB | 27.4% |
| project2_meta | 37 MB | 36 MB | -1 MB | 256 MB | 14.3% |
| project1_auth | 11 MB | 11 MB | 0 MB | 256 MB | 4.6% |
| project2_auth | 13 MB | 14 MB | +1 MB | 256 MB | 5.6% |
VPS host RAM: 2206–2224 MB used of 3819 MB. Flat throughout — no pressure.
Key Findings
1. No cross-stack interference
Both stacks ran simultaneously for 35 minutes without affecting each other. project1 services showed identical CPU and memory patterns to when project2 was idle. The overlay network isolation (project1_internal / project2_internal) held completely.
2. Kong memory growth rate scales with load
| Test | VUs per Kong | Kong growth over test | Rate |
|---|---|---|---|
| Soak (project1 only) | 30 | +23 MB over 60 min | ~24 MB/hr |
| Dual-project (project1) | 15 | +5 MB over 35 min | ~8.5 MB/hr |
| Dual-project (project2) | 15 | +1 MB over 35 min | ~1.7 MB/hr |
The growth is proportional to request rate, not time. project2’s heavier query pattern meant fewer total requests per VU (joins + triggers take longer → less throughput), hence the lower growth.
3. project2 PostgREST uses slightly more memory
project2_rest settled at 20 MB vs project1_rest at 16 MB. The JiiaCat schema has 21 tables vs project1’s 1 table — PostgREST caches schema introspection on startup. This is a one-time baseline difference, not a growth pattern.
4. project2 DB is lighter
project2_db sat at 49–50 MB vs project1_db at 75–80 MB. project1 has accumulated data from multiple test runs; project2’s DB is fresher. Both well within the 1 GB limit.
5. Realtime BEAM VMs: completely stable and isolated
Both _realtime services stayed flat throughout. project2_realtime has a smaller
baseline (55 MB vs project1’s 165 MB) because it has fewer CDC subscriptions. Neither
showed any growth.
6. Total VPS footprint under dual concurrent load
Both stacks active + 15 VUs each: ~2.2 GB / 3.8 GB (58% of RAM)
Both stacks idle: ~2.1 GB / 3.8 GB (55% of RAM)
The difference between “idle” and “both under load” is ~100 MB spread across all services. The cluster has ~1.6 GB of headroom even with both projects active.
Service Health Post-Test
All 17 services at 1/1 replicas. Zero restarts across both stacks.
project1_* 8/8 services 1/1
project2_* 8/8 services 1/1
traefik 1/1
Summary
| Question | Answer |
|---|---|
| Cross-stack interference? | None detected — stacks fully isolated |
| project2 degraded by complex schema? | No — triggers and joins had negligible VPS impact |
| VPS RAM under dual load? | 58% used — ~1.6 GB headroom remaining |
| Any memory leaks? | None — all services flat |
| Any OOMs or restarts? | None |
| Can the CX22 serve both projects simultaneously? | Yes, comfortably |
| Would a 3rd project fit? | Likely yes — est. ~2.5 GB used with 3 projects (65% of RAM) |
Cluster verdict: production-ready for 2 concurrent projects. Memory headroom suggests a third project is feasible — the next test to run would be a triple-stack simulation to confirm.
Comparison: project1 vs project2 resource profile
| Metric | project1 (todos) | project2 (JiiaCat) |
|---|---|---|
| DB memory | 75–80 MB | 49–50 MB |
| Rest (PostgREST) memory | 16 MB | 20 MB |
| Realtime baseline | 165 MB | 55 MB |
| Kong memory | 223–228 MB | 223–224 MB |
| Meta memory | 70 MB | 36–37 MB |
| Write cost | Low (simple INSERT) | Higher (INSERT + alert trigger) |
| Read cost | Low (single table) | Higher (JOIN across 3 tables) |
project2 is leaner on most services because it’s a newer deployment with less accumulated data and fewer CDC replication connections in Realtime. Over time, as more data accumulates, its profile will converge toward project1’s.