Hi,

On Thu, Nov 17, 2011 at 17:09, Jeremy Johnson <[email protected]> wrote:
> Hi guys,
>
> This is my first project with Jetty, so please pardon any newb-ness.
>
> I’m developing a server that will receive a live data stream from a socket,
> translate said data into a pretty JSON format, and then push the JSON out to
> a sizable number of clients connected to a WebSocketServlet.
>
> I’ll have one or more threads accepting the data from the non-Jetty TCP
> server, then pushing the data to the WebSocket clients via calls to Jetty’s
> WebSocketConnection.sendMessage(). I’m guessing I should consider
> creating my own thread pool to handle the multiple sendMessage() calls to
> the clients, since it doesn’t appear that sendMessage() is non-blocking, and
> thus a misbehaving or poorly performing call to sendMessage() could affect
> the QoS of the outgoing streams to the other clients. Is this correct? Or does
> Jetty use a thread pool under the hood when sendMessage() is called?
Your use case is a perfect fit for CometD (http://cometd.org). The CometD project solves the threading problem for you, and also provides a number of facilities that you would otherwise have to build from scratch. Lastly, it provides fallbacks to other proven Comet techniques in case WebSocket is not available for any reason (e.g. blocked by proxies).

As for your question: Jetty does not do any thread pooling behind sendMessage(), so a slow client will block all the others (but, as I said, CometD solves this problem for you).

Simon

--
http://cometd.org
http://intalio.com
http://bordet.blogspot.com
----
Finally, no matter how good the architecture and design are, to deliver bug-free software with optimal performance and reliability, the implementation technique must be flawless.   Victoria Livschitz

_______________________________________________
jetty-users mailing list
[email protected]
https://dev.eclipse.org/mailman/listinfo/jetty-users
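[Editorial note: for readers who want to stay on the raw Jetty API rather than adopt CometD, the thread-pool approach Jeremy describes could be sketched roughly as below. The `Client` interface is a hypothetical stand-in for Jetty's `WebSocket.Connection` (whose `sendMessage()` blocks, as Simon confirms); names and pool sizing are assumptions, not Jetty code.]

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Broadcaster {
    // Hypothetical stand-in for Jetty's blocking WebSocket.Connection.
    interface Client {
        void sendMessage(String json) throws Exception;
    }

    private final ExecutorService pool;

    Broadcaster(int threads) {
        // Bounded pool: a slow client ties up at most one worker thread,
        // instead of stalling the thread that reads from the TCP feed.
        this.pool = Executors.newFixedThreadPool(threads);
    }

    // Fan one JSON message out to every connected client.
    void broadcast(String json, List<Client> clients) {
        for (Client c : clients) {
            pool.submit(() -> {
                try {
                    c.sendMessage(json);
                } catch (Exception e) {
                    // Drop/close the misbehaving client here rather than
                    // let its failure affect the other streams.
                }
            });
        }
    }

    void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note this only isolates slow clients from each other up to the pool size; if many clients stall at once, queued sends still back up, which is one of the problems CometD's message queues handle for you.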
