[This message was posted by Majkara Majka of me <[email protected]> to the "FAST Protocol" discussion forum at http://fixprotocol.org/discuss/46. You can reply to it on-line at http://fixprotocol.org/discuss/read/1f606f44 - PLEASE DO NOT REPLY BY MAIL.]
Some bold statements there (and, in typical "enlightened by FAST or FIX" fashion, completely missing what should be the primary concern). First of all, the theoretical maximum for trivial-work, trivial-size I/O messaging, barely above a no-op on word-sized data, is circa 14 million messages per second per core on the latest CPUs and architectures. What you incur with the best FAST implementations out there is a 15-fold reduction for trivial messages, which puts you below 1 million per second per core for sure. Bring in environmental factors and you will watch it drop much further. Unless you are dreaming that a message is a single field, or that real messages cannot be complex enough to slow you down by another factor of 100, then sure, you are 'right': no-op-like trivialities decode at that speed. But it is not a constant by any means, so I do not see how anyone can publish a complexity or latency observation on anything but trivial data (and market it via yet another 'fair-practice' firm like Intel).

You should simply benchmark yourselves against a mixed stream of Eurodollar and Crude contracts and tell us the rate on a volatile-day sample of about 4 GB. Best of all, randomise it across a wide range of days (oh, I forgot, FAST is as serial as a Kellogg's cornflake). That would give us the latency and the bandwidth reduction, which we can then compare against an alternative that beats the FAST VMs and other implementations by a factor of 100 on both: apples versus apples. Once again, this has been done, so whenever you are ready. And for heaven's sake make the sample public, so we can start something other than fanboyish, uncritical and subjective demonstrations of solutions to CEO problems. Claiming 7M+ or even 5M+ messages per second is cloudy reasoning at best for any sufficiently complex CME stream.

For the record, if you want the lowest latency, take everything out of the FAST encoding and be done with it, which rather makes the point that FAST is pointless. So where is the problem again? Do not target bits, tags or hacks of bad designs; target the domain, because that is where the problem is (simple, pre-graduate-level mistakes that FIX and FAST keep repeating). FAST did improve the typing somewhat, but it fails miserably at basics such as enumerations and decimals, while complicating everything to oblivion and making serial->serial->serial->serial processing the new 2010+ trend. Same old from the same old people: five to ten years later you still do not satisfy the basic semantics, mechanics or models of anything but equities (and even those give you integration headaches on a 'standard'). That is a long time to keep getting it wrong, and it is not all. Extensions keep appearing, which tells you a lot, while trivial exchange processing still goes unsatisfied and we drown in volumes of PDFs and hacky, misunderstood, misapplied XML, templates, tags and bit jokes. With Java-consultant influences at work, it was a foregone conclusion that the result would be bloated.
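To make the throughput arithmetic concrete: 14 million trivial messages per second per core, divided by a 15-fold decoding penalty, is roughly 930 thousand per second, before real templates, recovery or the network enter the picture. And the benchmark I am asking for needs nothing fancier than the sketch below; the FastDecoder interface, loadDecoderUnderTest and the capture path are placeholders for whatever implementation is being marketed, not any real product's API.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class DecodeBenchmark {

    // Placeholder for whichever decoder is making the throughput claim; not a real API.
    interface FastDecoder {
        // Decodes one message starting at 'offset' and returns the offset of the next.
        int decode(byte[] buffer, int offset);
    }

    public static void main(String[] args) throws IOException {
        // A captured volatile-day sample (mixed Eurodollar and Crude). Note that a
        // byte[] tops out around 2 GB, so a 4 GB capture has to be mapped or fed in slices.
        byte[] capture = Files.readAllBytes(Paths.get(args[0]));
        FastDecoder decoder = loadDecoderUnderTest();

        long messages = 0;
        long start = System.nanoTime();
        int offset = 0;
        while (offset < capture.length) {
            offset = decoder.decode(capture, offset);
            messages++;
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%,d msgs in %.2f s = %,.0f msgs/s%n",
                messages, seconds, messages / seconds);
    }

    private static FastDecoder loadDecoderUnderTest() {
        // Supplied by whoever is publishing the numbers.
        throw new UnsupportedOperationException("plug in the decoder under test");
    }
}

Run that over a published capture and we can stop arguing about slides.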
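And on the cornflake point, for anyone who has not looked inside the encoding: every FAST field is stop-bit encoded (seven payload bits per byte, a set high bit ends the field), and the copy/delta/increment operators make each value depend on previously decoded state. You cannot locate field N+1, never mind message N+1, without fully scanning field N. A minimal unsigned-integer decode shows why; this is simplified, a real decoder also handles the presence map, signed values and nulls.

public final class StopBitDecode {

    // Stop-bit decode of one unsigned integer, as FAST encodes its fields:
    // seven payload bits per byte; a set high bit marks the last byte of the field.
    // The length of a field is only known once it has been scanned, so a FAST
    // stream cannot be skipped through or carved up and decoded in parallel.
    static long decodeUInt(byte[] buf, int[] cursor) {
        long value = 0;
        int i = cursor[0];
        int b;
        do {
            b = buf[i++] & 0xFF;
            value = (value << 7) | (b & 0x7F);
        } while ((b & 0x80) == 0);   // keep reading until the stop bit appears
        cursor[0] = i;               // the next field starts only where this one ended
        return value;
    }

    public static void main(String[] args) {
        // Two consecutive fields: 300 (0x02 0xAC) followed by 5 (0x85).
        byte[] buf = { 0x02, (byte) 0xAC, (byte) 0x85 };
        int[] cursor = { 0 };
        System.out.println(decodeUInt(buf, cursor));   // 300
        System.out.println(decodeUInt(buf, cursor));   // 5
    }
}

Every later field, and every copy/delta dictionary entry, hangs off what came before it. That is the serial chain.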
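And on decimals: a price is a mantissa and a base-10 exponent, which is in fact how FAST's own Decimal type travels (two integers), yet the wider ecosystem keeps reasoning about prices in binary doubles, where even trivial tick arithmetic goes wrong. A small illustration, with a made-up price:

import java.math.BigDecimal;

public final class DecimalVsDouble {
    public static void main(String[] args) {
        // Binary floating point cannot represent most decimal ticks exactly.
        double px = 0.0;
        for (int i = 0; i < 10; i++) {
            px += 0.1;                               // ten 0.1 ticks
        }
        System.out.println(px == 1.0);               // false: it accumulates to 0.9999999999999999

        // Scaled decimal: mantissa plus base-10 exponent, exact by construction.
        long mantissa = 941525;                      // an illustrative price, 94.1525
        int exponent = -4;
        BigDecimal price = BigDecimal.valueOf(mantissa, -exponent);
        System.out.println(price);                   // 94.1525, exactly
    }
}

None of this is exotic; it just requires the standard and its implementations to stop pretending a double is a price.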
Do we even need to get started on things like the following?

1. The use of XML comments, ad-hoc formatting inside comments, and unstructured, messy data to denote specific versions, types and transport detail.
2. Undergraduate-level XML for the template-definition design (you know, an invention completely at odds with what it should be deducing, dealing in fields rather than types and fighting the InfoSet ideas).
3. Avoiding the use of decimals, something that plagues the minds of people who apparently believe the stock marketplace is the only one (OMXs and eSpeeds and other nonsense), not to mention the floating-point 'scientists' fighting the digital space.
4. Duplication everywhere between the FIX field target and the template name target, even though every field already has an ID, because someone really smart invented a tag/value mapping and called it FIXammer (see the P.S. at the end).
5. The Service Pack abominations and some seriously depleted designs that still do not satisfy the basic object models of many trading venues (again the fault of equity-centric directors and consultants since the consortium's inception, and of their technical incompetence).
6. FIX itself being a challenge to integrate with anything or anyone after so many years of hacking in new tags, with no semantic mapping and nothing better than hacker-style versioning (just look at the childish schemas they produce in their releases).
7. FIXing and FASTing the PDFs: machine readability and rules, i.e. the vocabulary, the formats and the choice of some pretty dumb words for concepts that have been known in computing for ages (but that is the COBOL-is-alive workforce behind it, obviously).
8. Sealing the base standard and not allowing any deviation from it; if there is one, it is no longer a result of consortium work and must not be endorsed in any form or carry any of the logos or trademarks involved.
9. The Transport Independence idea, of the 4th-of-July kind, put into the context of FAST's and FIX's flaky design. Think about it: an oxymoron.
10. Mixing up concepts between the bit level and the meta level.
11. And so on: we could go on for another hundred points and a couple of months of clear hacking instances.

Whatever you do, you should seriously consider the damage you have caused and will keep causing before taking on board yet another 'brilliant' idea. You have to stop creating hacky tech for the sake of tech, and mega cost for the sake of cost. If you can get past that, you will progress; otherwise the whole industry is going down faster than the banks did, and your taxes and several new generations will pay for it, again. And instead of investing in a not-invented-here, broken type system in bit-space (an oxymoron), someone should seriously consider putting a halt to this charade. Yes, many 'designers' and Java-bloat firms had their time and made a business out of it, but is that the end of it? All those 'solutions' just to get a valid price or submit a market order! You need a truck, a dozen libraries and a VM to kick it off? And that price will not last long on CME, for example, which does not even manage snail-level recovery response times. Nor do you solve the bandwidth problem, as your exchange and telco requirements and costs make plain. It has all clearly led to this incompetent result and to a refusal to stop the same mistakes rippling on to infinity. Has anyone ever wondered why (apart from incompetence and flaky, hacked-together domain knowledge)? Why, as in critically, and without the subjective urge to defend one's own company or business, or a delusion of elitism or of a technically sound solution?

You're Welcome, Le Sake Of Mankind
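P.S. On point 4, so nobody thinks the duplication complaint is rhetorical: FIX already keys every field with an integer tag on the wire (tag=value pairs delimited by SOH), and the FAST template then declares the very same field again, by name and by id. The message below is made up and abbreviated, purely for illustration.

public final class TagValueVsTemplate {

    private static final char SOH = '\u0001';        // printed as '|' below for readability

    public static void main(String[] args) {
        // FIX tag/value wire form: every field is already keyed by an integer tag.
        // 35=MsgType (X = incremental refresh), 55=Symbol, 270=MDEntryPx, 271=MDEntrySize.
        String fix = "35=X|55=CLZ0|270=94.15|271=25|".replace('|', SOH);
        System.out.println(fix.replace(SOH, '|'));

        // The FAST template then re-declares the same field by name and by id,
        // e.g. <decimal name="MDEntryPx" id="270"> ... </decimal>,
        // which is exactly the duplication point 4 complains about.
    }
}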
