[This message was posted by Rolf Andersson of Pantor Engineering <[email protected]> to the "FAST Protocol" discussion forum at http://fixprotocol.org/discuss/46. You can reply to it on-line at http://fixprotocol.org/discuss/read/a95183b2 - PLEASE DO NOT REPLY BY MAIL.]
> [1] I believe I can summarize Majkara's recent emails in four words
> without loss of content: "I don't like FAST."

I actually re-read his post to make sure I didn't miss any interesting, intriguing or inspiring piece hidden behind the impertinence, innuendo, insults and incorrect statements. I came up with the following questions that I don't really have a good answer for. That doesn't mean any of these questions is necessarily relevant, but I find them "interesting" in a way:

- What is the "exact" processing overhead of FAST field encoding and transfer encoding, respectively? As those of you who have been involved for some time know, we didn't really focus on speed at the outset, but rather on decent encoding compactness and ease of use. The numbers quoted so far have been the total for both field encoding and transfer encoding, because that is what matters in real life. When I read his post saying "14 million/s", "15 fold reduction", "environment factors" and "factor of 100", it strikes me as pure conjecture; the numbers are not even close to what we saw in our own testing. But given different workloads, what are the _actual_ numbers? We know from a number of implementers that they have achieved very good performance, and as people gain more experience implementing FAST I expect to see even higher throughput. The evolution on the hardware side also seems to work in our favour; I have personal experience of the speed improvements Intel processors have delivered over the last five years. There is an interesting discussion going on over at LinkedIn, started by Malcolm Spence of Object Computing, Inc., who has shared some interesting data points from recent performance testing done at OCI. I have made a note to revisit the tests we did a few years ago, and I'll let you know if and when I have had time to do some new testing. (For a concrete picture of what the two encoding layers are, see the sketch in the P.S. below.)

- To what extent, and in which ways, does the serial nature of FAST create problems for implementers and users?

- Are there ways that we can improve the situation?

- Is it possible to improve without creating compatibility problems?

Our hypothesis is that we should not change the FAST Protocol without very good reason. People have been very clear that they prefer a stable protocol and better docs. I'm working with the core mdowg group to improve the documentation, presentation and training materials, both to shorten the learning curve for newcomers and to help existing implementers reap the full benefit of FAST. We welcome any comments and suggestions on what to include in this work.

That said, I think posts like Brendan's above regarding tid and pmap placement are important. They force us to revisit previous discussions and to re-examine earlier results and views. I welcome any post that challenges the current state of things in a constructive manner, and I will do my best to account for what we did earlier. I want this to be a forum where people feel encouraged to ask questions.

Other than that, I can only wish Mr Majka the best of luck and hope he finds some peace of mind. I will not hold my breath, but I would not be surprised if he could contribute, if he stopped being nasty and kept his eyes on the ball. The rest of us are here for a reason.

Best,
Rolf
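P.S. For newcomers wondering what the first question above is actually about: FAST has two layers, a field encoding (operators such as copy and delta, driven by the presence map, or pmap) and a transfer encoding (the stop-bit byte serialization underneath it). Below is a minimal, illustrative sketch of decoding a stop-bit unsigned integer, which is the heart of the transfer-encoding layer. The class and method names are mine, not taken from any particular implementation, and the sketch is only meant to make the question concrete, not to suggest how a production decoder should be written.

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class StopBitSketch {
        // FAST transfer encoding packs 7 data bits per byte; the high bit
        // of each byte is the "stop bit" that marks the last byte of a field.
        static long readUInt(InputStream in) throws IOException {
            long value = 0;
            int b;
            do {
                b = in.read();
                if (b < 0) throw new IOException("unexpected end of stream");
                value = (value << 7) | (b & 0x7F);
            } while ((b & 0x80) == 0);
            return value;
        }

        public static void main(String[] args) throws IOException {
            // The value 942755 is carried as the three bytes 0x39 0x45 0xA3.
            byte[] encoded = { 0x39, 0x45, (byte) 0xA3 };
            System.out.println(readUInt(new ByteArrayInputStream(encoded))); // prints 942755
        }
    }

The field encoding sits on top of this: operators like copy and delta make each decoded value depend on the previously decoded value for that field, and the pmap tells you which fields are actually present in a given message. That dependency chain is what the question about the serial nature of FAST refers to.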
