On Jan 28, 2014, at 8:52 AM, Yaron Goland <[email protected]> wrote:
> I did read it and I didn't agree with it.
Ilya Grigorik works on performance on the Chrome team at Google, so I'm
inclined to trust him on statements about practical aspects of HTTP. (I worked
on Chrome for a year+ but not on HTTP-level stuff.)
>> * A single slow response blocks all requests behind it.
> The same is true of bulk get.
No, because a bulk_get response doesn't have to return the documents in the
same order they're requested. It can fetch them all in parallel if it wants,
and send them out in the order they're ready.
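To illustrate the point (this is just a sketch, not the actual Sync Gateway implementation; the names `fetch_doc` and `bulk_get` are hypothetical): the server can fetch the requested documents concurrently and stream each one back as soon as it's ready, so one slow document delays only itself.

```python
# Hypothetical sketch of an out-of-order _bulk_get handler. A slow
# fetch delays only its own document, not the requests behind it --
# unlike pipelined GETs, where responses must come back in order.
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_doc(doc_id):
    # Stand-in for a real (possibly slow) database read.
    return {"_id": doc_id, "body": "..."}

def bulk_get(doc_ids):
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(fetch_doc, d) for d in doc_ids]
        for fut in as_completed(futures):
            # Yield each document in completion order, not request order.
            yield fut.result()
```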
I really don't want to get into an argument about pipelining. I'll just point
to the browser-compatibility section of the Wikipedia article, which shows that
many browsers don't support it because of issues like head-of-line blocking and
buggy gateways:
http://en.wikipedia.org/wiki/HTTP_pipelining#Implementation_in_web_browsers
> Your first argument is that the overhead of GET is so bad that even in the
> face of pipelining the performance will still be significantly worse than a
> bulk request. Well you said you already implemented bulk requests. So um...
> why not publish some numbers and the code you used to generate it?
I implemented _bulk_get in the Couchbase Sync Gateway, not in CouchDB (I don't
work on CouchDB). I doubt the code would be of interest to people here. :)
Before I take the time to set up and run tests and publish numbers, I'd like to
know whether that effort would make a difference to people considering whether
to implement this API call.
—Jens