IO Strategies and event loop integration

The WebSocket++ core implements a standalone WebSocket state machine. It does not include an event loop, but it requires one to manage input and output in a useful manner. WebSocket++ abstracts the interface between the core and input/output via its transport policy. WebSocket++ 0.3.x and later ships with two transport policies that allow a number of different I/O strategies to be used in the end application. Additionally, a non-functional stub transport is included to demonstrate the minimum API necessary for writing a custom transport policy.

iostream transport

The iostream transport does not implement an event loop. Input bytes (from the network or some other source) must be fed in manually via the connection's >> operator or its read_some() method. Output bytes are written to the ostream registered for that connection, or result in the write_handler being called with a buffer to be written. Your application can bring its own event loop and network infrastructure, manually create its own sockets and WebSocket++ connections, and feed bytes received on your socket into the associated WebSocket++ connection for processing. Anything written to the associated ostream/write_handler can then be written back to the socket you are managing.

The iostream_server example, as well as many of the unit tests, demonstrates this transport in practice. In general this transport is designed to be used in cases where you already have both an event loop and socket infrastructure set up (perhaps to service HTTP requests, or raw sockets for a proprietary protocol) and just want to add WebSocket processing logic as an option. Alternatively, it can be used for unit testing where the network is stubbed out entirely, or for unix-like utilities where the "network" is delivered via files, standard input/output, pipes, domain sockets, etc.
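The flow described above can be sketched roughly as follows, modeled on the iostream_server example. Here stdin stands in for your own network source and a string stream stands in for your output path; handler registration and error handling are omitted for brevity.

```cpp
#include <websocketpp/config/core.hpp>
#include <websocketpp/server.hpp>

#include <iostream>
#include <sstream>

typedef websocketpp::server<websocketpp::config::core> server;

int main() {
    server s;
    std::ostringstream out; // stand-in for your socket's output path

    server::connection_ptr con = s.get_connection();
    con->register_ostream(&out); // everything WebSocket++ writes lands here
    con->start();

    // Feed bytes received from "the network" (stdin here) into the
    // connection for processing by the WebSocket state machine.
    char buf[512];
    while (std::cin.read(buf, sizeof(buf)), std::cin.gcount() > 0) {
        con->read_some(buf, static_cast<size_t>(std::cin.gcount()));
    }

    // Whatever accumulated in 'out' would be written back to the
    // socket that your application manages.
}
```

In a real application the loop body would be driven by your own event loop's read-ready notifications rather than a blocking read.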

asio transport

If you don't want to provide your own networking infrastructure, the Boost Asio transport is available as an alternative. The Asio transport handles all aspects of networking (DNS lookups, creating TCP sockets, connecting, listening, etc.). It supports IPv4 and IPv6 out of the box, and supports both plain and TLS-secured sockets (when the appropriate libraries are available). For efficient non-blocking I/O that can service thousands of connections at once, the Asio transport uses Boost Asio's io_service event loop.
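A minimal Asio-transport server looks like the following sketch (modeled on the library's echo_server example; the port number is arbitrary):

```cpp
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

typedef websocketpp::server<websocketpp::config::asio> server;

// Echo each received message back to its sender.
void on_message(server* s, websocketpp::connection_hdl hdl,
                server::message_ptr msg) {
    s->send(hdl, msg->get_payload(), msg->get_opcode());
}

int main() {
    server echo;
    echo.init_asio(); // create an internally managed io_service
    echo.set_message_handler(
        websocketpp::lib::bind(&on_message, &echo,
                               websocketpp::lib::placeholders::_1,
                               websocketpp::lib::placeholders::_2));
    echo.listen(9002);
    echo.start_accept();
    echo.run(); // blocks until the endpoint is stopped
}
```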

If you are building a standalone system, this can function as the event loop for your application. You are free to post your own events to the io_service loop that WebSocket++ uses, or to tell WebSocket++ to post its events to an io_service that your application maintains. Keep in mind that if you do this, you should keep your handlers short and non-blocking to maintain network I/O responsiveness.
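Sharing a single loop between WebSocket++ and your own work might look like this sketch, which passes an externally owned io_service to init_asio():

```cpp
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

#include <boost/asio/io_service.hpp>

typedef websocketpp::server<websocketpp::config::asio> server;

int main() {
    boost::asio::io_service ios; // event loop owned by your application

    server s;
    s.init_asio(&ios); // WebSocket++ posts its events to 'ios'
    s.listen(9002);
    s.start_accept();

    // Your application can post its own work to the same loop.
    // Keep such handlers short and non-blocking.
    ios.post([] { /* brief application task */ });

    ios.run(); // one loop services both WebSocket++ and your events
}
```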

If you are already using a non-Asio event loop (for example, the event loops provided by libraries like SDL, Qt, etc.), there are a few options:


1. You can run both event loops in separate threads and synchronize communication between them. Please refer to the Thread Safety article of this manual for more information about how to safely use WebSocket++ in a multithreaded program. A multi-threaded system will provide the best responsiveness for your WebSocket connections. Threading, however, may introduce unnecessary complexity to some systems, and it introduces non-determinism in when messages are handled relative to the primary application event loop. For a simpler implementation with more control, the WebSocket++ event loop can be advanced manually via polling.

2. An alternative to threading is to manually advance the io_service event loop that WebSocket++ uses at a specific point in your application's event loop. WebSocket++ provides convenience methods that expose key methods of the underlying io_service. Alternatively, you can create and manage your own io_service object, giving you full access to all of its methods, and register it with WebSocket++.

endpoint::run will run the event loop as a blocking call. For a server, it will not return until the endpoint is told to stop listening for new connections and all existing connections have ended, or until endpoint::stop is called to forcibly stop the endpoint. For clients, it will run until all queued connections have completed. After endpoint::run returns, whether because it was stopped or because it ran out of work, endpoint::reset must be called before the endpoint event loop can be started again. run is the method you would use for a multi-threaded or standalone server.

endpoint::poll and endpoint::poll_one can be used to advance the event loop manually in a non-blocking way. poll will execute all handlers that are ready to run and return when all outstanding handlers are blocked. poll_one will run at most one ready handler and then return. Caution: with sufficient connection activity, poll may never run out of ready work, so be careful when using it inside another event loop. A loop with a fixed number of endpoint::poll_one calls can be used to ensure that you don't poll forever.
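The bounded-polling pattern described above might look like this sketch (the cap of 100 handlers per frame is an arbitrary illustration; poll_one, like its io_service counterpart, returns the number of handlers it ran):

```cpp
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

#include <cstddef>

typedef websocketpp::server<websocketpp::config::asio> server;

// Called once per iteration of your application's own event loop.
void service_websockets(server& s) {
    // Run at most 100 ready handlers, then hand control back so the
    // rest of the application loop stays responsive.
    for (std::size_t i = 0; i < 100; ++i) {
        if (s.poll_one() == 0) {
            break; // no handlers were ready; nothing left this frame
        }
    }
}
```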

All of these functions wrap functions of the same name in boost::asio::io_service. See the Boost Asio documentation for more details about how they work.

Stopping the Asio transport event loop

The endpoint::run() method (or the io_service::run method of the underlying io_service) will block the thread that called it until there is no more work to do. What "no more work" means depends on whether your endpoint is a client or a server and on which options have been set. If you run multiple endpoints on the same external io_service, run will not return until the exit conditions are satisfied for *all* endpoints. If you manually add additional tasks to your io_service loop, they will also affect when it exits.

Clients start in regular mode, which causes run to exit as soon as all connections are done. They can optionally be put into perpetual mode with endpoint::start_perpetual(), in which case their respective run() call will not exit until perpetual mode is ended with endpoint::stop_perpetual() or the endpoint is forcibly stopped with endpoint::stop().

Regular mode is useful if you want to run a single connection to completion and then exit, or reset and relaunch. Perpetual mode is useful if you want to run the client loop in a background thread and be able to make connections on demand without worrying about brief periods where there are no connections.
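The perpetual-mode background-thread pattern might be sketched as follows (connection creation is elided; in a real client you would build connections with get_connection()/connect() between the start and stop calls):

```cpp
#include <websocketpp/config/asio_no_tls_client.hpp>
#include <websocketpp/client.hpp>

#include <thread>

typedef websocketpp::client<websocketpp::config::asio_client> client;

int main() {
    client c;
    c.init_asio();
    c.start_perpetual(); // keep run() alive even with zero connections

    // Run the event loop in a background thread.
    std::thread t([&c] { c.run(); });

    // ... create connections on demand from the main thread ...

    c.stop_perpetual(); // allow run() to exit once all connections finish
    t.join();
}
```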

Servers start by accepting connections via endpoint::start_accept(). They accept a new connection, hand it off to its own processing strand, and then immediately wait for and accept the next connection. As such, they will run indefinitely even if there are no active connections. You may call endpoint::stop_listening() to stop this listen/accept loop. Stopping listening prevents new connections from being accepted, but does not stop existing connections from being processed. You (or the remote endpoint) will need to close each outstanding connection manually. Once all connections are closed, the endpoint's run loop will exit.
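A graceful shutdown along these lines might look like the following sketch. It assumes a hypothetical helper and that the application has been tracking open connection handles itself (for example, in its open and close handlers), since the library does not hand you such a list:

```cpp
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

#include <vector>

typedef websocketpp::server<websocketpp::config::asio> server;

// Hypothetical shutdown helper; 'open' is the application's own list
// of currently open connection handles.
void shutdown(server& s, std::vector<websocketpp::connection_hdl>& open) {
    s.stop_listening(); // stop the listen/accept loop; no new connections

    for (websocketpp::connection_hdl hdl : open) {
        // Ask each remote peer to close. Once every connection has
        // closed, the endpoint's run() call will return on its own.
        s.close(hdl, websocketpp::close::status::going_away,
                "server shutdown");
    }
}
```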
