The Hund Blog
Tailing Database Events to Users
We recently released a status widget which allows our status page customers to display their operational status on their website (ours is in the footer below). While competing services offer something similar, there's one big difference between their widgets and ours: live status updates.
This widget lists ongoing issues, upcoming issues, and displays the current operational status. There are a few ways to make this data available to users, but we chose server-sent events (SSE). SSE is a good fit here for a few reasons: it's designed for pushing data to clients (i.e. it's one-sided), it has far less overhead than legacy polling methods, and it's simple to implement.
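On the wire, SSE is just a plain-text stream of `data:` lines, each frame terminated by a blank line, which is part of why it's so simple to implement. A rough sketch of a frame formatter (the field names come from the EventSource spec; the helper itself is illustrative, not from our codebase):

```ruby
require "json"

# Format a payload as a single server-sent event frame.
# Each frame ends with a blank line; the optional "event:" and "id:"
# fields let clients dispatch by event type and resume after reconnects.
def sse_frame(data, event: nil, id: nil)
  frame = +""
  frame << "event: #{event}\n" if event
  frame << "id: #{id}\n" if id
  frame << "data: #{JSON.generate(data)}\n\n"
  frame
end

sse_frame({ status: "operational" }, event: "status")
# => "event: status\ndata: {\"status\":\"operational\"}\n\n"
```

A browser-side `EventSource` parses these frames automatically and fires a listener per event.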
As soon as an event hits our database, the event is streamed to all open connections (potentially tens of thousands of active users for a single customer). This article talks about how we've optimized our application to handle such cases.
Real-Time Events
The initial temptation for some developers might be to poll the database and send new events to clients as they're found. Database polling might work passably under minimal load, but things go downhill quickly with even a few more clients.
A few open-source databases have functionality suited to building real-time streaming like this: PostgreSQL's NOTIFY, Firebird's POST_EVENT, and MongoDB's tailable cursors. Because we use MongoDB at Hund, tailable cursors enable us to send clients events instantaneously (i.e. tail -f style).
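With the official Ruby driver, tailing a capped collection looks roughly like this (the collection and the `broadcast` helper are illustrative, not our actual schema or code):

```ruby
require "mongo"

client = Mongo::Client.new(["localhost:27017"], database: "status")

# Tailable cursors only work on capped collections. :tailable_await
# blocks waiting for new documents instead of returning when it reaches
# the end of the collection, much like `tail -f` on a file.
events = client[:events] # assumed to be a capped collection

events.find({}, cursor_type: :tailable_await).each do |doc|
  # Push each new event to connected clients as it arrives.
  # `broadcast` is a hypothetical fan-out helper, not a driver method.
  broadcast(doc)
end
```

Because the cursor blocks on the server, there's no polling loop in the application: the iteration only wakes when a new document is inserted.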
Application Development
Using SSE was an absolute breeze with Ruby on Rails. There's an older, but lesser-known feature available to controllers: ActionController::Live. Including this module allows any controller action to stream data to clients, flushing data to them as it's written.
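A minimal streaming action might look like the following sketch. ActionController::Live::SSE and ClientDisconnected are provided by Rails; the controller name and `event_stream` source are illustrative stand-ins:

```ruby
class LiveController < ApplicationController
  include ActionController::Live

  def events
    response.headers["Content-Type"] = "text/event-stream"
    sse = ActionController::Live::SSE.new(response.stream, event: "update")

    # `event_stream` stands in for whatever yields new events here,
    # e.g. iteration over a tailable cursor on the events collection.
    event_stream.each do |event|
      sse.write(event)
    end
  rescue ActionController::Live::ClientDisconnected
    # The client closed the connection; nothing left to do.
  ensure
    sse&.close
  end
end
```

Closing the stream in `ensure` matters: without it, a dropped client can leave the response (and its thread) dangling.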
Stream Serving
During development, you must use a web server like Puma instead of WEBrick, since WEBrick buffers all output. If you use WEBrick and call your stream endpoint, that request and all future requests will hang.
In production, it's important to pay attention to system configurations like file descriptor limits which might come back to bite you with larger client loads. We use Passenger and nginx, so supporting SSE connections was simple:
location /live {
send_timeout 320s;
client_body_timeout 320s;
keepalive_timeout 320s;
passenger_force_max_concurrent_requests_per_process 0;
}
Passenger Enterprise customers will want to look at the passenger_concurrency_model and passenger_thread_count options.
The reason the above timeouts are set to 320 seconds rather than a higher or unlimited value is that we send ping events to detect whether a client has closed its connection (i.e. to check that we can still write). These ping events are sent every minute, and because the timeouts above are greater than that interval, a healthy connection stays open indefinitely while a dead one is reaped promptly.
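The ping itself can be a sketch as simple as this: interleave keep-alive writes with real events so a closed connection is detected within a minute. Here `sse` is an ActionController::Live::SSE writer as above, and `event_queue` is a hypothetical queue of new events (`Queue#pop` accepts `timeout:` as of Ruby 3.2):

```ruby
loop do
  # Wait up to 60 seconds for a real event; nil means the wait timed out.
  event = event_queue.pop(timeout: 60)
  if event
    sse.write(event)
  else
    # Lines starting with a colon are comments in the SSE format and are
    # ignored by EventSource clients, making them a cheap keep-alive probe.
    response.stream.write(": ping\n\n")
  end
rescue ActionController::Live::ClientDisconnected, IOError
  # The write failed because the client went away; stop streaming.
  break
end
```

The write is the detection mechanism: a client that has silently disconnected only surfaces as an exception when we next try to write to it.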
Documentation
Documentation for consuming our events is available on our knowledge base.