Real-time communication with WebSockets & Kafka

Solving the WebSockets load balancer problem using the Pub/Sub pattern

# kafka # pub/sub # services

Introduction

In a classic Client/Server scenario where a browser application (hereafter, the frontend) needs to monitor and display the progress of operations that a server application (hereafter, the backend) is performing, the most intuitive solution is a direct real-time connection via WebSockets, allowing the backend to push its updates straight to the frontend as messages on the socket.

However, this solution quickly becomes complicated once the backend scales horizontally and replicates its instances: a load balancer now routes requests to different backend instances, so there is no longer any guarantee that the WebSocket connection opened by the frontend is with the instance actually handling its request.

In this blog post, I want to discuss a solution to this problem that we implemented using Kafka.


The problem with WebSockets

WebSockets seem like the best choice for our solution. The WebSocket API is a technology that makes it possible to open a two-way, interactive communication session between the user's browser and a server.

It enables the browser to send messages to the server and receive responses without having to poll the server.

A WebSocket can reduce latency because data travels over a connection that is already established: no additional packets are needed to set up a new connection for each message.

But scaling WebSockets isn't as simple as increasing the number of server instances. The architecture has to be designed for it, and the persistent nature of WebSocket connections can easily lead to communication problems.

For instance: can we be sure that the backend instance to which our frontend is connected via WebSocket is the instance actually executing the requested task?

We can’t!

(Unless we are lucky, and if we have a lot of instances, we are very very lucky)

We don’t know which backend instance is actually processing our frontend’s request, because the load balancer picks an instance according to its own policy.

To deal with this problem we could have used sticky sessions, but then we could no longer count on true load balancing.

About Sticky Sessions

Sticky sessions are persistent sessions that a load balancer creates between a client and a specific server for the duration of the session.

This means that every request from a client in a sticky session will be sent to the same server/node that received the initial request.
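Sticky routing is often implemented by hashing a session identifier to pick an instance. The following is a minimal sketch of that idea (the instance names and `route_sticky` function are hypothetical, not part of any real load balancer API):

```python
import hashlib

# Hypothetical pool of backend instances behind the load balancer.
INSTANCES = ["backend-0", "backend-1", "backend-2"]

def route_sticky(session_id: str) -> str:
    """Pin every request of a session to the same instance by hashing its id."""
    digest = int(hashlib.sha256(session_id.encode()).hexdigest(), 16)
    return INSTANCES[digest % len(INSTANCES)]

# Every request from the same session lands on the same instance,
# regardless of how loaded that instance currently is.
assert route_sticky("session-42") == route_sticky("session-42")
```

This is exactly the trade-off mentioned above: the session-to-instance mapping is fixed, so if a few sessions are much heavier than the rest, their pinned instances get overloaded while the load balancer can do nothing about it.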

So we decided to implement the Pub/Sub design pattern using Kafka.

Kafka to the rescue

Apache Kafka is a distributed data store optimized for ingesting and processing streaming data in real time. It can handle a constant influx of records generated by thousands of sources, processing them sequentially and incrementally.

Basically, Kafka combines messaging, storage, and stream processing, providing three main functions:

  • Publish and subscribe to streams of records
  • Effectively store streams of records in the exact order they were generated
  • Process streams of records in real time
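The three functions above can be sketched with an in-memory stand-in for a Kafka topic. A real deployment would of course use a Kafka client library and a broker; the `Topic` class here is only a simplified illustration of publish/subscribe over an ordered log:

```python
from typing import Callable, List

class Topic:
    """Toy stand-in for a Kafka topic: an append-only log plus subscribers."""
    def __init__(self) -> None:
        self.log: List[str] = []          # records stored in arrival order
        self.subscribers: List[Callable[[str], None]] = []

    def subscribe(self, handler: Callable[[str], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, record: str) -> None:
        self.log.append(record)           # store the record in order
        for handler in self.subscribers:  # fan out to every subscriber
            handler(record)

progress = Topic()
received: List[str] = []
progress.subscribe(received.append)
progress.publish("task-1: 50%")
progress.publish("task-1: 100%")
assert received == ["task-1: 50%", "task-1: 100%"]  # delivered in order
```

Note the key property we rely on later: every subscriber receives every record, in the order it was published.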

We extended the backend with the capability to send and receive messages on a specific Kafka topic, and added a bit of logic to publish progress messages from the function performing the requested task.

So the backend instance executing the task publishes short messages with the information the frontend needs on that topic. Those messages are received by every backend instance, reformatted, and forwarded to the WebSocket destination of each connected client.

Now every backend instance receives and re-sends the information the frontend needs, ensuring that the frontend that opened the WebSocket connection in the first place is served the information it was waiting for.
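The whole flow can be sketched as follows: every backend instance consumes the progress topic, and only the instance holding the client's WebSocket actually delivers the message. The instance and message names below are illustrative, not our production code, and a plain list stands in for the real socket:

```python
import json

class Backend:
    """Toy backend instance: consumes the shared topic, forwards to local sockets."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.sockets = {}  # client_id -> delivered messages (stands in for a WebSocket)

    def on_kafka_message(self, raw: str) -> None:
        msg = json.loads(raw)
        client = msg["client_id"]
        if client in self.sockets:           # only the instance holding the socket delivers
            self.sockets[client].append(msg["progress"])

# Two instances behind a load balancer; the client's WebSocket is open on instance B...
a, b = Backend("A"), Backend("B")
b.sockets["client-7"] = []

# ...but instance A performed the task and publishes its progress to the topic.
progress_update = json.dumps({"client_id": "client-7", "progress": "75%"})
for instance in (a, b):                      # Kafka fans the record out to both
    instance.on_kafka_message(progress_update)

assert b.sockets["client-7"] == ["75%"]      # the client still gets the update
```

Because every instance sees every message, it no longer matters which instance the load balancer picked for the task or for the WebSocket connection.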

Conclusion

Thanks to Kafka, we implemented a Pub/Sub design pattern that lets the real-time functionality of our backend application scale effectively.

In a very similar way, this pattern can be built on RabbitMQ, Redis, and other technologies, or on the managed Pub/Sub offerings of the major cloud providers.