Hi Stas!

On 05/02/15 09:30, Stanislav Malyshev wrote:
> Hi!
> 
>> think raphf is far more of practical use. Why should HTTP, or even more
>> HTTPS or HTTP2, be any different than another service, especially when
> 
> Which "another service"?

Databases (see my pecl/pq example in the RFC), key/value stores, message
queues, whatever you can think of.
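
For illustration, a minimal sketch of that analogy with pecl/pq (this
assumes pq\Connection's PERSISTENT flag and uses a placeholder DSN; it
is not the exact example from the RFC):

    // pecl/pq uses raphf as well, so a connection opened with the
    // PERSISTENT flag goes back into the pool on destruction and can
    // be picked up again later instead of re-connecting.
    $conn = new pq\Connection("host=localhost dbname=test",
        pq\Connection::PERSISTENT);
    var_dump($conn->exec("SELECT 1")->fetchRow());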

> 
>> HTTP APIs are so common nowadays.
> 
> HTTP APIs are common, but almost none of them ever require persistent
> connections. The nature of HTTP is a stateless protocol oriented for
> short-lived connections (yes, I know there are exceptions, but they are
> rare and mostly abuse HTTP rather than use it for intended purposes).

All of that is true, but it's also true that HTTP/1.1 has keep-alive,
which we should take advantage of when the server supports it. Nothing
more is happening here.
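
For illustration, a minimal sketch of what the persistent handle buys
you with the existing API (the same calls as in the benchmark quoted
below; the second constructor argument is the name of the persistent
handle that raphf manages):

    // Each iteration creates a new client, but both name the same
    // persistent handle ("google"): when the first client is
    // destroyed, its curl handle, including the established
    // keep-alive connection, returns to raphf's pool, and the next
    // client reuses it instead of opening a new connection.
    for ($i = 0; $i < 2; ++$i) {
        (new http\Client("curl", "google"))
            ->enqueue(new http\Client\Request("GET", "http://www.google.at/"))
            ->send();
    }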

>> With default of raphf.persistent_handle.limit=-1 (unlimited):
>> mike@smugmug:~$ time php -r 'for ($i=0;$i<20;++$i) {(new
>> http\Client("curl","google"))->enqueue(new http\Client\Request("GET",
>> "http://www.google.at/"))->send();}'
> 
> I'm not sure why you need persistence here - it's all happening within
> one request - or why would you make 20 connections to the same service?

To demonstrate to you how it would work out over multiple requests.

> If some service is used for multiple requests, it should implement
> either batching or HTTP keepalive should be used, but simulating it
> through keeping HTTP connection open when it is supposed to be closed by
> the protocol sounds wrong. If you want to keep HTTP connection, why not
> just have the client keep it?

Why do you think the connection should automatically be closed?
That's not the default case since HTTP/1.1; connections stay open
unless the server is explicitly configured to close each connection
after serving.
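
For what it's worth, you can observe that on the response itself; a
minimal sketch, assuming pecl_http's http\Client::getResponse() and
http\Message::getHeader() and using google.at as the example server:

    $client = new http\Client("curl", "google");
    $client->enqueue(new http\Client\Request("GET", "http://www.google.at/"))
           ->send();
    // An HTTP/1.1 response that does not carry "Connection: close"
    // leaves the connection open for further requests.
    $response = $client->getResponse();
    var_dump($response->getHttpVersion(), $response->getHeader("Connection"));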

> 
>> 0.03s user 0.01s system 2% cpu 1.530 total
>>
>>
>> With raphf effectively disabled:
>> mike@smugmug:~$ time php -d raphf.persistent_handle.limit=0 -r 'for
>> ($i=0;$i<20;++$i) {(new http\Client("curl","google"))->enqueue(new
>> http\Client\Request("GET", "http://www.google.at/"))->send();}'
>>
>> 0.04s user 0.01s system 1% cpu 2.790 total
> 
> So, the difference is microscopic even here. But with proper HTTP
> handling - like batch requests or keepalive - it would be even less.

Microscopic?! That's about 45% less wall time (1.530s vs. 2.790s)!
Could you please elaborate? :)


-- 
Regards,
Mike

