I think it works both ways. Is it better to offer the ultimate default out-of-the-box performance in a narrow set of use cases but suck at everything else (which is what we appear to have now), or to deliver great out-of-the-box performance across the wider set and possibly merely 'good' performance in the narrower cases? I would tend to favour the latter.
I think that if we care enough about pleasing people who want the halo performance numbers in the more focused use cases, then we should clearly document the suggested tuning options and give some expected results to show exactly what is possible and the difference tuning can make on a given setup, and then go the other way and make the defaults sensible for the more general use cases. It seems a bit backwards to me that we are making life easiest out-of-the-box for the people who are actually likely to be willing to tune for maximum performance, instead of the ones who aren't overly interested in such things and are satisfied so long as performance is good enough to meet their needs, i.e. doesn't completely suck for what they are doing. Those are the people who might not be interested in turning dials to tweak things, or in looking for answers before simply giving up on Qpid and moving on to something else, so those are the people I think we should be aiming to help most with our out-of-the-box experience.

The tests I did were far from conclusive of all use cases, but some reached up to 5 times the throughput I got from them last week, and none of them were slower.

Robbie

On 8 November 2011 19:19, Rajith Attapattu <[email protected]> wrote:
> TCP_NODELAY makes a considerable improvement in synchronous cases
> (sync pub, sync ack etc) and small tx cases, and we generally recommend
> that as a tuning option to our users/customers.
>
> The reason for making TCP_NODELAY false by default is based on the
> assumption that in most cases people will want to do high message
> volumes out-of-the-box to compare Qpid for performance. So it kind of
> makes sense to optimize for that case.
> However I've never tried to gauge the impact it has on high volume
> cases. Perhaps we should just try it out and see what the impact is.
> If the impact is negligible then we should make it enabled by default.
> If not then I'd be less inclined to change the default.
>
> I'd also be interested to know what others think on this.
>
> Regards,
>
> Rajith
>
> On Tue, Nov 8, 2011 at 1:00 PM, Robbie Gemmell <[email protected]> wrote:
>> Hi all,
>>
>> I was looking into some sluggish consumer creation performance with
>> the Java client for a posting on the user list, and eventually
>> narrowed the issue down to TCP_NODELAY being set false by default,
>> leading to ExecutionSyncs taking an extraordinary amount of time to
>> complete. This made every consumer creation take 40-80ms on average,
>> depending on whether the queue already existed or not (which
>> influences the operations and number of syncs performed). Enabling
>> TCP_NODELAY gets this process down to 2-3ms.
>>
>> We have previously noted poorer performance with the Java broker using
>> 0-10 than it managed historically with its 0-9 support (which used a
>> different IO layer on the client until recently), especially when
>> using transient messages and transactions. Running some noddy tests
>> suggests TCP_NODELAY (which is enabled on the broker by default) to
>> be the root cause of those issues too, because it also caused
>> performance to increase somewhere between mildly noticeable and
>> mouth-open levels depending on what you are doing.
>>
>> I understand there may be certain situations where this is slightly
>> slower and that it could lead to higher bandwidth usage in some cases,
>> but the effect of enabling it seems generally far too positive not to
>> do so by default (which is what some other messaging products seem to
>> do). Is there a strong argument not to?
>> Does anyone know off hand what the other clients do?
>>
>> I plan to change the default for the Java client to true in a day or
>> two unless talked out of it.
>>
>> Robbie

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project: http://qpid.apache.org
Use/Interact: mailto:[email protected]
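
As a rough illustration of the mechanism behind the numbers in the quoted mails: the stall comes from Nagle's algorithm interacting with delayed ACKs whenever a small request is written in more than one piece before blocking on the reply, which is roughly what a synchronous command over a framed protocol looks like on the wire. Below is a minimal, self-contained sketch using a plain java.net.Socket rather than the Qpid client itself (the class and constant names are purely illustrative); it times small synchronous round trips with and without TCP_NODELAY, and the size of any stall it shows will depend on the platform's delayed-ACK timer.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Toy benchmark, not Qpid code: each "request" is two small writes followed by a
// blocking read of the reply, the write-write-read pattern where Nagle's algorithm
// and delayed ACKs interact badly for synchronous operations.
public class TcpNoDelayDemo
{
    private static final int ROUND_TRIPS = 50;

    public static void main(String[] args) throws Exception
    {
        final ServerSocket server = new ServerSocket(0); // loopback responder on an ephemeral port

        Thread responder = new Thread(() ->
        {
            while (!server.isClosed())
            {
                try (Socket s = server.accept())
                {
                    s.setTcpNoDelay(true);
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    while (true)
                    {
                        // Wait for a complete two-byte "request" before replying,
                        // much as a broker replies only once it has a whole frame.
                        int first = in.read();
                        int second = (first == -1) ? -1 : in.read();
                        if (first == -1 || second == -1)
                        {
                            break; // client went away, accept the next connection
                        }
                        out.write('R');
                        out.flush();
                    }
                }
                catch (Exception e)
                {
                    return;
                }
            }
        });
        responder.setDaemon(true);
        responder.start();

        runClient(server.getLocalPort(), false);
        runClient(server.getLocalPort(), true);
    }

    private static void runClient(int port, boolean noDelay) throws Exception
    {
        try (Socket socket = new Socket("127.0.0.1", port))
        {
            socket.setTcpNoDelay(noDelay);
            OutputStream out = socket.getOutputStream();
            InputStream in = socket.getInputStream();

            long start = System.nanoTime();
            for (int i = 0; i < ROUND_TRIPS; i++)
            {
                out.write('H'); // "header"
                out.flush();
                out.write('B'); // "body" - held back by Nagle while the header is unacknowledged
                out.flush();
                if (in.read() == -1) // block until the reply arrives
                {
                    throw new IllegalStateException("responder closed the connection");
                }
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("TCP_NODELAY=" + noDelay + ": " + ROUND_TRIPS + " sync round trips in "
                    + elapsedMs + "ms (" + (elapsedMs / (double) ROUND_TRIPS) + "ms each)");
        }
    }
}

As the quoted mails note, the Java client already exposes TCP_NODELAY as a tuning option, so trying the comparison on a real setup shouldn't need code changes; check the client documentation for the exact option name rather than taking anything from this sketch.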
