On 9 February 2014 19:28, Noah Kantrowitz <n...@coderanger.net> wrote:
>
> On Feb 8, 2014, at 6:25 PM, Robert Collins <robe...@robertcollins.net> wrote:
>

>> 5/s sounds really low - if the RPCs take less than 200ms to answer
>> (and I sure hope they do), a single-threaded mirroring client (with
>> low latency to PyPI's servers // pipelined requests) can easily hit it.
>> Most folk I know writing API servers aim for response times in the
>> single-digit to low tens of milliseconds... What is the 95th
>> percentile for PyPI to answer these problematic APIs?
>>
>
> If you are making lots of sequential requests, you should be putting a sleep 
> in there. "As fast as possible" isn't a design goal; good service for 
> all clients is.

As fast as possible (on the server side) and good service for all
clients are very tightly correlated (some would say the relationship is
in fact causal).

On the client side, I totally support limiting concurrency, but I've
yet to see a convincing explanation for rate-limiting
already-serialised requests that doesn't boil down to 'assume the
server is badly written'. Note - I'm not assuming - or implying - that
about PyPI.
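To illustrate the distinction: limiting concurrency bounds how many
requests are in flight at once, whereas rate limiting inserts fixed
delays between requests regardless of how fast the server answers. A
minimal sketch of the former, with a hypothetical fetch() standing in
for the actual RPC to the server (names and the limit of 4 are
illustrative, not anything PyPI specifies):

```python
import threading
import time

MAX_IN_FLIGHT = 4  # illustrative bound, not a PyPI-mandated value
_slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)
results = []
results_lock = threading.Lock()

def fetch(item):
    # Stand-in for a network call; a real mirroring client would issue
    # an HTTP request to the index here.
    time.sleep(0.01)
    return f"payload-for-{item}"

def worker(item):
    # The semaphore caps in-flight requests at MAX_IN_FLIGHT; no fixed
    # sleep is needed - throughput self-adjusts to server response time.
    with _slots:
        payload = fetch(item)
    with results_lock:
        results.append(payload)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # all 10 items fetched, never more than 4 at once
```

With this shape, a fast server is served quickly and a slow server is
automatically backed off, since new requests only start as old ones
complete.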

-Rob

-- 
Robert Collins <rbtcoll...@hp.com>
Distinguished Technologist
HP Converged Cloud
_______________________________________________
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig
