Test: load-test/dual-project.js Duration: 35 minutes (5m ramp + 25m sustain + 5m ramp-down) VUs: 30 total — 15 project1 (todos) + 15 project2 (JiiaCat inventory) Scenarios: 6 (p1_casual/active/power + p2_casual/active/power)


What This Test Was For

All previous tests targeted only project1. The cluster’s purpose is serving two isolated Supabase stacks simultaneously. This test answered three questions:

  1. Does running both stacks under concurrent load cause cross-stack interference?
  2. Does the more complex project2 schema (JOIN query + INSERT/UPDATE triggers for alerts) degrade project2 response times relative to project1?
  3. Does the VPS RAM hold under 17 services all under concurrent load?

k6 Result: Threshold Breach (exit 99)

Ran to full completion. The threshold breach is the same geography artefact as all previous tests: p(95)<500ms reads from Grafana’s US load zone to Frankfurt (~130ms RTT baseline). No 5xx errors, no service restarts.


VPS Memory: Both Stacks Under Concurrent Load

ServiceStartEndDeltaLimit% at end
project1_kong223 MB228 MB+5 MB512 MB44.6%
project2_kong223 MB224 MB+1 MB512 MB43.8%
project1_realtime165 MB165 MB0 MB512 MB32.3%
project2_realtime55 MB55 MB0 MB512 MB10.8%
project1_db75 MB78 MB+3 MB1024 MB7.6%
project2_db49 MB50 MB+1 MB1024 MB4.9%
project1_rest16 MB16 MB0 MB256 MB6.5%
project2_rest20 MB20 MB0 MB256 MB7.9%
project1_meta70 MB70 MB0 MB256 MB27.4%
project2_meta37 MB36 MB-1 MB256 MB14.3%
project1_auth11 MB11 MB0 MB256 MB4.6%
project2_auth13 MB14 MB+1 MB256 MB5.6%

VPS host RAM: 2206–2224 MB used of 3819 MB. Flat throughout — no pressure.


Key Findings

1. No cross-stack interference

Both stacks ran simultaneously for 35 minutes without affecting each other. project1 services showed identical CPU and memory patterns to when project2 was idle. The overlay network isolation (project1_internal / project2_internal) held completely.

2. Kong memory growth rate scales with load

TestVUs per KongKong growth over testRate
Soak (project1 only)30+23 MB over 60 min~24 MB/hr
Dual-project (project1)15+5 MB over 35 min~8.5 MB/hr
Dual-project (project2)15+1 MB over 35 min~1.7 MB/hr

The growth is proportional to request rate, not time. project2’s heavier query pattern meant fewer total requests per VU (joins + triggers take longer → less throughput), hence the lower growth.

3. project2 PostgREST uses slightly more memory

project2_rest settled at 20 MB vs project1_rest at 16 MB. The JiiaCat schema has 21 tables vs project1’s 1 table — PostgREST caches schema introspection on startup. This is a one-time baseline difference, not a growth pattern.

4. project2 DB is lighter

project2_db sat at 49–50 MB vs project1_db at 75–80 MB. project1 has accumulated data from multiple test runs; project2’s DB is fresher. Both well within the 1 GB limit.

5. Realtime BEAM VMs: completely stable and isolated

Both _realtime services stayed flat throughout. project2_realtime has a smaller baseline (55 MB vs project1’s 165 MB) because it has fewer CDC subscriptions. Neither showed any growth.

6. Total VPS footprint under dual concurrent load

Both stacks active + 15 VUs each:  ~2.2 GB / 3.8 GB (58% of RAM)
Both stacks idle:                  ~2.1 GB / 3.8 GB (55% of RAM)

The difference between “idle” and “both under load” is ~100 MB spread across all services. The cluster has ~1.6 GB of headroom even with both projects active.


Service Health Post-Test

All 17 services at 1/1 replicas. Zero restarts across both stacks.

project1_*   8/8 services  1/1
project2_*   8/8 services  1/1
traefik               1/1

Summary

QuestionAnswer
Cross-stack interference?None detected — stacks fully isolated
project2 degraded by complex schema?No — triggers and joins had negligible VPS impact
VPS RAM under dual load?58% used — ~1.6 GB headroom remaining
Any memory leaks?None — all services flat
Any OOMs or restarts?None
Can the CX22 serve both projects simultaneously?Yes, comfortably
Would a 3rd project fit?Likely yes — est. ~2.5 GB used with 3 projects (65% of RAM)

Cluster verdict: production-ready for 2 concurrent projects. Memory headroom suggests a third project is feasible — the next test to run would be a triple-stack simulation to confirm.


Comparison: project1 vs project2 resource profile

Metricproject1 (todos)project2 (JiiaCat)
DB memory75–80 MB49–50 MB
Rest (PostgREST) memory16 MB20 MB
Realtime baseline165 MB55 MB
Kong memory223–228 MB223–224 MB
Meta memory70 MB36–37 MB
Write costLow (simple INSERT)Higher (INSERT + alert trigger)
Read costLow (single table)Higher (JOIN across 3 tables)

project2 is leaner on most services because it’s a newer deployment with less accumulated data and fewer CDC replication connections in Realtime. Over time, as more data accumulates, its profile will converge toward project1’s.