This is the same architecture employed by more than one investment bank for their in-house solutions.

It's a pretty solid pattern.

Ben

On Sun, Apr 16, 2017 at 6:03 AM, Michael Barker <[email protected]> wrote:
> With Web-based traffic (long poll/HTTP streaming rather than web sockets),
> we maintain separate buffers for each message type (market data, trade
> market data, execution reports), as each message type has different rules
> around how events can be coalesced and/or throttled (e.g. market data can
> be, execution reports can't).
>
> For FIX we have separate servers for market data and order processing, so in
> effect we have separate buffers for each event type, but because market data
> behaves quite a bit differently to order flow having separate servers allows
> the implementation to differ where needs be.
>
> Mike.
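The per-message-type rule Mike describes - market data for a symbol may be coalesced down to the latest value, while execution reports must be delivered one-for-one - could be sketched roughly like this. This is illustrative Java only, not LMAX's code; the class and method names are made up, and a real implementation would use a concurrent structure rather than plain collections:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Sketch: separate buffers per message type with different coalescing rules.
public class PerTypeBuffer {
    // Market data: keyed by symbol, a newer tick silently replaces the older one.
    private final Map<String, Double> latestPriceBySymbol = new HashMap<>();
    // Execution reports: never coalesced, strictly FIFO.
    private final Queue<String> executionReports = new ArrayDeque<>();

    public void onMarketData(String symbol, double price) {
        latestPriceBySymbol.put(symbol, price); // coalesce: overwrite the stale tick
    }

    public void onExecutionReport(String report) {
        executionReports.add(report);           // must not be dropped or merged
    }

    public int pendingMarketDataEvents() { return latestPriceBySymbol.size(); }

    public int pendingExecutionReports() { return executionReports.size(); }

    public static void main(String[] args) {
        PerTypeBuffer buf = new PerTypeBuffer();
        buf.onMarketData("EURUSD", 1.0860);
        buf.onMarketData("EURUSD", 1.0861); // coalesces with the tick above
        buf.onExecutionReport("fill #1");
        buf.onExecutionReport("fill #2");
        System.out.println(buf.pendingMarketDataEvents());  // 1
        System.out.println(buf.pendingExecutionReports());  // 2
    }
}
```

A slow consumer then sees at most one pending market-data event per symbol, but every execution report it is owed.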
>
> On 16 April 2017 at 01:41, Vero K. <[email protected]> wrote:
>>
>> just to add here - I mean I want to use multiple disruptors (or a coalescing
>> ring buffer + disruptor) per user, because we can merge some fast-ticking
>> data, while some slow data (trade info) we can't merge. Do you think it will
>> work?
>>
>> On Saturday, April 15, 2017 at 12:54:15 PM UTC+3, Vero K. wrote:
>>>
>>>
>>> Thanks, quite a useful answer. If we have around 700 clients, do we need
>>> to create around 700 disruptors? We also stream different types of data (3
>>> types); would it be a good idea to create 700 * 3 disruptors?
>>>
>>>
>>>
>>>
>>> On Saturday, April 15, 2017 at 1:07:03 AM UTC+3, mikeb01 wrote:
>>>>
>>>> We've found that as our exchange volumes have increased, the only
>>>> protocol capable of handling a full un-throttled feed is ITCH (multicast
>>>> over UDP). For all of our other stream-based TCP feeds (FIX, HTTP) we are
>>>> moving toward rate throttling and coalescing events based on the symbol in
>>>> all cases - we already do it for the majority of our connections. We
>>>> maintain a buffer per connection (Disruptor or coalescing ring buffer,
>>>> depending on the implementation) so that the rate at which a remote
>>>> connection consumes does not have any impact on the other connections.
>>>> With FIX we also maintain some code such that if we detect a ring buffer
>>>> becoming too full (e.g. >50%), then we proactively tear down that
>>>> connection, under the assumption that the client's connection is not fast
>>>> enough to handle the full feed, or that it has disconnected and we never
>>>> received a FIN packet. If you have non-blocking I/O available, then you
>>>> can be a little bit smarter about the implementation (unfortunately not
>>>> an option with the standard web socket APIs).
>>>>
>>>> Mike.
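The slow-consumer policy Mike describes - tear a connection down once its ring buffer is more than half full - could be sketched as below. This is a made-up, self-contained example, not LMAX's code; with the actual Disruptor library you would compare `RingBuffer.remainingCapacity()` against `RingBuffer.getBufferSize()` instead of passing the numbers in by hand:

```java
// Sketch: decide whether a connection's consumer has fallen too far behind.
public class SlowConsumerPolicy {
    // ">50% full" threshold, taken directly from the post above.
    static final double TEAR_DOWN_THRESHOLD = 0.50;

    // bufferSize: total slots in the ring; remainingCapacity: free slots.
    static boolean shouldTearDown(long bufferSize, long remainingCapacity) {
        long used = bufferSize - remainingCapacity;
        return (double) used / bufferSize > TEAR_DOWN_THRESHOLD;
    }

    public static void main(String[] args) {
        // 1024-slot buffer, 400 slots occupied: under 50%, keep the connection.
        System.out.println(shouldTearDown(1024, 1024 - 400));  // false
        // 1024-slot buffer, 700 slots occupied: over 50%, tear it down.
        System.out.println(shouldTearDown(1024, 1024 - 700));  // true
    }
}
```

The rationale in the post applies either way: a buffer that stays more than half full means the consumer cannot keep up, or the peer is gone without a FIN, so dropping the connection protects the other connections' buffers.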
>>>>>
>>>>>
>>>>> -
>>>>> Studying for the Turing test
>>>>>
>>>>> -
>>>>> You Received this message because you are Subscribed to the Google
>>>>> Groups "mechanical-sympathy" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to mechanical-sympathy + [email protected] .
>>>>> For more options, visit https://groups.google.com/d/ optout .
>>>>
>>>>
>
>

