
Benchmarks

Benchmark results for Supabase Realtime.


This guide explores what you can expect from Realtime's Postgres Changes, Broadcast, and Presence performance. A set of load tests demonstrates its scaling capabilities.

Methodology

The benchmarks were conducted using k6, an open-source load testing tool, against a Realtime Cluster deployed on AWS. The cluster configurations used 2-6 nodes, tested in both single-region and multi-region setups, all connected to a single Supabase project. The load generators (k6 servers) were also deployed on AWS to minimize network latency impact on the results. Tests were executed with the full load from the start without warm-up runs.

The metrics collected include message throughput, latency percentiles, CPU and memory utilization, and connection success rates. It's worth noting that performance in production environments may vary based on factors such as network conditions, hardware specifications, and specific usage patterns.

Workloads

The proposed workloads are designed to demonstrate Supabase Realtime's throughput and scalability capabilities. These benchmarks focus on core functionality and common usage patterns.

The benchmarking results include the following workloads:

  1. Broadcast Performance
  2. Payload Size Impact on Broadcast
  3. Large-Scale Broadcasting
  4. Authentication and New Connection Rate
  5. Database Events

Results

Realtime broadcast performance

This workload evaluates the system's capacity to handle multiple concurrent WebSocket connections and message broadcasting. Each virtual user (VU) in the test:

  • Establishes and maintains a WebSocket connection
  • Joins two distinct channels:
    • An echo channel (1 user per channel) for direct message reflection
    • A broadcast channel (6 users per channel) for group communication
  • Generates traffic by sending 2 messages per second to each joined channel for 10 minutes
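
The load itself was generated with k6, but the per-user pattern above maps directly onto the supabase-js Broadcast API. Below is a minimal sketch of that pattern; the project URL, key, channel names, and event name are placeholders rather than the values used in the benchmark:

```ts
import { createClient } from '@supabase/supabase-js'

// Placeholder project URL and anon key.
const supabase = createClient('https://your-project.supabase.co', 'public-anon-key')

// Echo channel: one user per channel, so each message is reflected straight back.
const echo = supabase
  .channel('echo:user-1')
  .on('broadcast', { event: 'test' }, (payload) => console.log('echo', payload))

// Broadcast channel: shared by a small group (6 users per channel in this test).
const room = supabase
  .channel('broadcast:room-42')
  .on('broadcast', { event: 'test' }, (payload) => console.log('room', payload))

echo.subscribe()
room.subscribe((status) => {
  if (status !== 'SUBSCRIBED') return
  // Send 2 messages per second to each joined channel, as in the workload.
  setInterval(() => {
    const payload = { sentAt: Date.now() }
    echo.send({ type: 'broadcast', event: 'test', payload })
    room.send({ type: 'broadcast', event: 'test', payload })
  }, 500)
})
```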

Broadcast Performance

| Metric | Value |
| --- | --- |
| Concurrent Users | 32,000 |
| Total Channel Joins | 64,000 |
| Message Throughput | 224,000 msgs/sec |
| Median Latency | 6 ms |
| Latency (p95) | 28 ms |
| Latency (p99) | 213 ms |
| Data Received | 47.2 MB/s (29.6 GB total) |
| Data Sent | 15.2 MB/s (9.6 GB total) |
| New Connection Rate | 320 conn/sec |
| Channel Join Rate | 640 joins/sec |

Payload size impact

This workload tests the system's performance with different message payload sizes to understand how data volume affects throughput and latency. Each virtual user (VU) follows the same connection pattern as the broadcast test, but with varying message sizes:

  • Establishes and maintains a WebSocket connection
  • Joins two distinct channels:
    • An echo channel (1 user per channel) for direct message reflection
    • A broadcast channel (6 users per channel) for group communication
  • Sends messages with payloads of 1KB, 10KB, and 50KB
  • Generates traffic by sending 2 messages per second to each joined channel for 5 minutes
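
As a minimal illustration of how a fixed-size payload might be generated and sent, the snippet below reuses the `room` channel from the sketch above; the helper and sizes are illustrative, not the exact payloads used in the benchmark:

```ts
// Build a broadcast payload of roughly `sizeInKb` kilobytes.
function makePayload(sizeInKb: number) {
  return { data: 'x'.repeat(sizeInKb * 1024), sentAt: Date.now() }
}

// Send a 1KB, 10KB, or 50KB message on an already-subscribed channel.
room.send({ type: 'broadcast', event: 'test', payload: makePayload(10) })
```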

1KB payload

1KB Payload Broadcast Performance

10KB payload

10KB Payload Broadcast Performance

50KB payload

50KB Payload Broadcast Performance

| Metric | 1KB Payload | 10KB Payload | 50KB Payload | 50KB Payload (Reduced Load) |
| --- | --- | --- | --- | --- |
| Concurrent Users | 4,000 | 4,000 | 4,000 | 2,000 |
| Message Throughput | 28,000 msgs/sec | 28,000 msgs/sec | 28,000 msgs/sec | 14,000 msgs/sec |
| Median Latency | 13 ms | 16 ms | 27 ms | 19 ms |
| Latency (p95) | 36 ms | 42 ms | 81 ms | 39 ms |
| Latency (p99) | 85 ms | 93 ms | 146 ms | 82 ms |
| Data Received | 31.2 MB/s (10.4 GB) | 268 MB/s (72 GB) | 1284 MB/s (348 GB) | 644 MB/s (176 GB) |
| Data Sent | 9.2 MB/s (3.1 GB) | 76 MB/s (20.8 GB) | 384 MB/s (104 GB) | 192 MB/s (52 GB) |

Note: The final column shows results with reduced load (2,000 users) for the 50KB payload test, demonstrating how the system performs with larger payloads under different concurrency levels.

Large-scale broadcasting

This workload demonstrates Realtime's capability to handle high-scale scenarios with a large number of concurrent users and broadcast channels. The test simulates a scenario where each user participates in group communications with periodic message broadcasts:

  • Each virtual user (VU):
    • Establishes and maintains a WebSocket connection (30-120 minutes)
    • Joins 2 broadcast channels
    • Sends 1 message per minute to each joined channel
    • Each message is broadcasted to 100 other users

Large Broadcast Performance

| Metric | Value |
| --- | --- |
| Concurrent Users | 250,000 |
| Total Channel Joins | 500,000 |
| Users per Channel | 100 |
| Message Throughput | >800,000 msgs/sec |
| Median Latency | 58 ms |
| Latency (p95) | 279 ms |
| Latency (p99) | 508 ms |
| Data Received | 68 MB/s (600 GB) |
| Data Sent | 0.64 MB/s (5.7 GB) |

Realtime Auth

This workload demonstrates Realtime's capability to handle a high rate of new connections and channel joins per second with channel authorization through Row Level Security (RLS) enabled. The test simulates a scenario where large volumes of users connect to Realtime and participate in auth-protected communications:

  • Each virtual user (VU):
    • Establishes and maintains a WebSocket connection (2.5 minutes)
    • Joins 2 broadcast channels
    • Sends 1 message per minute to each joined channel
    • Each message is broadcasted to 100 other users
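
As a rough sketch of what each virtual user does, the snippet below signs in, passes the user's JWT to Realtime, and joins a private channel whose access is governed by RLS policies on `realtime.messages`; the credentials, topic, and event names are placeholders:

```ts
import { createClient } from '@supabase/supabase-js'

// Placeholder project URL, anon key, and user credentials.
const supabase = createClient('https://your-project.supabase.co', 'public-anon-key')

// Sign in so the connection carries the user's JWT, then pass it to Realtime.
await supabase.auth.signInWithPassword({ email: 'user@example.com', password: 'secret' })
await supabase.realtime.setAuth()

// A private channel: joining succeeds only if the RLS policies on
// realtime.messages allow this user to read (and write) the topic.
const channel = supabase
  .channel('room:42', { config: { private: true } })
  .on('broadcast', { event: 'message' }, (payload) => console.log(payload))

channel.subscribe((status) => {
  if (status === 'SUBSCRIBED') {
    channel.send({ type: 'broadcast', event: 'message', payload: { body: 'hello' } })
  }
})
```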

Broadcast Auth Performance

| Metric | Value |
| --- | --- |
| Concurrent Users | 50,000 |
| Total Channel Joins | 100,000 |
| Users per Channel | 100 |
| Message Throughput | >150,000 msgs/sec |
| New Connection Rate | 500 conn/sec |
| Channel Join Rate | 1,000 joins/sec |
| Median Latency | 19 ms |
| Latency (p95) | 49 ms |
| Latency (p99) | 96 ms |

Realtime's Postgres Changes

Realtime systems usually require forethought because of their scaling dynamics. For the Postgres Changes feature, every change event must be checked to see if the subscribed user has access. For instance, if you have 100 users subscribed to a table where you make a single insert, it will then trigger 100 "reads": one for each user.

This can create a database bottleneck that limits message throughput. If your database cannot authorize the changes rapidly enough, changes are delayed until you receive a timeout.

Database changes are processed on a single thread to maintain the change order. That means compute upgrades don't have a large effect on the performance of Postgres change subscriptions. You can estimate the expected maximum throughput for your database below.

If you are using Postgres Changes at scale, you should consider using a separate "public" table without RLS and filters. Alternatively, you can use Realtime server-side only and then re-stream the changes to your clients using a Realtime Broadcast.
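
As a sketch of that second approach, a single server-side subscriber can listen to Postgres Changes and fan the events out over Broadcast, so the authorization work is done once rather than per subscriber. The project URL, key, table, and channel names below are placeholders:

```ts
import { createClient } from '@supabase/supabase-js'

// Server-side client; placeholder project URL and service-role key.
const supabase = createClient('https://your-project.supabase.co', 'service-role-key')

// Channel used to fan the changes back out to clients over Broadcast.
const outbound = supabase.channel('messages:stream')
outbound.subscribe()

// Subscribe to database changes once, server-side, so the per-subscriber
// authorization work is not repeated for every connected client.
supabase
  .channel('db-changes')
  .on(
    'postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'messages' },
    (payload) => {
      outbound.send({ type: 'broadcast', event: 'new-message', payload: payload.new })
    }
  )
  .subscribe()
```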

Set your expected parameters to estimate the maximum throughput for your instance. With the default parameters, the current maximum possible throughput is:

| Total DB changes /sec | Max messages per client /sec | Max total messages /sec | Latency p95 |
| --- | --- | --- | --- |
| 64 | 64 | 32,000 | 238 ms |

Don't forget to run your own benchmarks to make sure that the performance is acceptable for your use case.

Supabase continues to make improvements to Realtime's Postgres Changes. If you are uncertain about the performance of your use case, reach out using the Support Form. The support team can advise on the best solution for each use case.