[This message was posted by Majkara Majka of me <[email protected]> to the "FAST Protocol" discussion forum at http://fixprotocol.org/discuss/46. You can reply to it on-line at http://fixprotocol.org/discuss/read/aa3461de - PLEASE DO NOT REPLY BY MAIL.]
Fair comment Franck, and I feel obliged to respond. The solution is very simple, but it requires a look at the problems first.

First, you cannot design against network limitations and waste MTUs on so many ends. Second, you cannot send enveloped data continuously, and so badly that you lose the context for recovery within any decent period of time. Third, the utterly serial nature of the FAST design is nothing short of a beginner's mistake. Far from reducing bandwidth and latency, FAST introduces both across the board and pushes us back 20 years in terms of processing rates. No exchange has anything less than an arcane recovery protocol, and no customer end gets a satisfactory quality of service from this design. Now that 'preamble' is in fashion, they will repeat the same tragedy for customers and their own networks, but with more IP ports 'cleared up'. Moreover, each data vendor introduces its own syntactic and semantic quirks, which make any stable progress or real standard nearly impossible. Lastly, something that was meant to be dynamic, adaptable and standard is proving very unstable, years after 'design'.

Bandwidth is much less of a problem than the FAST designers continuously tout. FAST is not that good at reducing it when compared against many other, well-tested approaches (not a few-message 'proofs of concept'). As for latency, good luck. I have yet to see anyone claim they can sustain the 20GB of data a single exchange spits out in FAST - which in effect means a single box can sustain no more than 2GB a day with a guaranteed response time. Great job! Great job reducing bandwidth and latency with inefficient and unportable decoding, dictionary resets, and the kind of bit-fiddling abstraction the entire industry (apart from finance) managed to clear out. And a terrific job not providing industry-standard schemas, or the data types that are so obvious yet arrived so late in the spec.
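To make the 'utterly serial' complaint concrete: FAST fields use stop-bit encoding, where each byte carries 7 payload bits and the high bit (0x80) marks the final byte of a field. A field's length is unknown until its terminating byte is read, so field N cannot be located without decoding fields 1 through N-1 first. A minimal sketch of that decoder (my own illustration in Python, not a conformant implementation; the value 942755 is the spec's own worked example, encoding to 0x39 0x45 0xA3):

```python
def decode_stop_bit_uint(data: bytes, offset: int):
    """Decode one FAST stop-bit encoded unsigned integer.

    Each byte contributes 7 payload bits; the high bit (0x80) marks
    the final byte of the field. The field boundary is data-dependent,
    which is what forces strictly serial decoding of the stream.
    """
    value = 0
    while True:
        byte = data[offset]
        offset += 1
        value = (value << 7) | (byte & 0x7F)
        if byte & 0x80:        # stop bit set: this byte ends the field
            return value, offset

# Two fields back to back: 942755 (0x39 0x45 0xA3) followed by 3 (0x83).
stream = bytes([0x39, 0x45, 0xA3, 0x83])
first, off = decode_stop_bit_uint(stream, 0)    # -> 942755, offset 3
second, off = decode_stop_bit_uint(stream, off)  # -> 3, offset 4
```

There is no length prefix and no alignment, so you cannot skip ahead, vectorise, or parallelise across fields - every byte must pass through this loop in order.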
And finally, PMAP and the template feature are two opposing concepts, in case anyone failed to pick that up. This might sound like a flame, but it is the harsh reality, as you can see in your budget, in the redundancy and duplication issues, and in the inefficiency and complexity of implementation. No business responds well to a waste of resources and time on a trivial data-distribution problem. Distribution should put reliability, and sound prevention of circumventing the standard, first and foremost. But that is not how the politics of it works: the exchange acts as an embassy and sells a network bandwidth monopoly rather than access to a trading service. So does the consortium, as far as I am concerned, and the patchy Service Packs approach demonstrates it better than anything. It is just another CDS-like house of cards that sells consultancy and cuts everyone but the clearing members out of the usual 'flashy' timing. Optimising for bandwidth but not latency, for selective customers, in order to sell bandwidth is not a choice; it is there by design, from the exchanges and clearing members. The lobby is strong, but technically it is not above anything or anyone.

Asking for an alternative at this stage is futile. The structural problems need fixing FASTer, as they are currently SLOW (Seriously Latent for Objective Wonderworld). That only means one thing: yet another industry push for another cycle, plenty of new versions and hacks, templates and bit twiddling. It must look important and elitist, so a very simple problem remains unsolved while more complexity is introduced.
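For readers who have not met the PMAP/template tension above: a template statically declares a message's fields and their operators, while the presence map (PMAP) then says, per message, which of those fields actually appear on the wire versus being reconstructed from dictionary state - one mechanism pulling toward static structure, the other toward per-message variability. A toy model of that interaction (names, operators and the dictionary shape here are illustrative only, not the wire format of any real feed):

```python
# Toy template: field name plus operator. "constant" fields never hit
# the wire and consume no PMAP bit; "copy" fields consume one PMAP bit
# that decides wire-value vs. previous-value.
TEMPLATE = [
    ("MsgType",  "constant"),
    ("Price",    "copy"),
    ("Quantity", "copy"),
]

def decode_message(pmap_bits, wire_fields, dictionary):
    """Walk the template; for each non-constant field, consume the next
    PMAP bit to decide whether to read a fresh value from the wire or
    reuse the dictionary entry left by the previous message."""
    out = {}
    bits = iter(pmap_bits)
    wire = iter(wire_fields)
    for name, operator in TEMPLATE:
        if operator == "constant":
            out[name] = dictionary[name]      # fixed by the template
        elif next(bits):                      # PMAP bit set: value on wire
            dictionary[name] = next(wire)
            out[name] = dictionary[name]
        else:                                 # PMAP bit clear: copy previous
            out[name] = dictionary[name]
    return out

state = {"MsgType": "D", "Price": 101.5, "Quantity": 200}
msg = decode_message([True, False], [102.0], state)
# Price came from the wire; Quantity was copied from the dictionary.
```

Note what this costs: the decoder must carry mutable dictionary state per template, and losing one message (or hitting a dictionary reset) invalidates every copied field after it - which is exactly where the recovery-protocol pain comes from.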
