On Feb 5, 2015 3:17 PM, "Michael Wallner" <m...@php.net> wrote:
>
> Hi Stas!
>
> On 05/02/15 00:43, Stanislav Malyshev wrote:
> > Hi!
> >
> >> Points explicitly marked for discussion in the RFC itself:
> >>
> >> * pecl/propro
> >>   Proxies for properties representing state in internal C structs
> >>   https://wiki.php.net/rfc/pecl_http#peclpropro
> >>
> >> * pecl/raphf
> >>   (Persistent) handle management within objects instead of resources
> >>   https://wiki.php.net/rfc/pecl_http#peclraphf
> >>   Also, take special note of the INI setting:
> >>   https://wiki.php.net/rfc/pecl_http#raphf_ini_setting
> >
> > I'm still not sure why we need these two, to be frank. E.g., for the
> > former I can kind of get it, though I don't see any use case that really
> > requires going to such lengths; for the latter I'm not even sure what
> > the case for that is - i.e., why exactly would one need persistent HTTP
> > connections surviving the request?
>
> Uh, for me it's actually the reverse :) While propro is nice to have, I
> think raphf is of far more practical use. Why should HTTP, or even more so
> HTTPS or HTTP/2, be treated any differently than any other service,
> especially when HTTP APIs are so common nowadays?
>
> Compare the timings for accessing Google 20 times sequentially.
>
> With the default of raphf.persistent_handle.limit=-1 (unlimited):
>
> mike@smugmug:~$ time php -r 'for ($i=0;$i<20;++$i) {(new
> http\Client("curl","google"))->enqueue(new http\Client\Request("GET",
> "http://www.google.at/"))->send();}'
>
> 0.03s user 0.01s system 2% cpu 1.530 total
>
> With raphf effectively disabled:
>
> mike@smugmug:~$ time php -d raphf.persistent_handle.limit=0 -r 'for
> ($i=0;$i<20;++$i) {(new http\Client("curl","google"))->enqueue(new
> http\Client\Request("GET", "http://www.google.at/"))->send();}'
>
> 0.04s user 0.01s system 1% cpu 2.790 total
While I like the idea, I would not take these numbers at face value. Many factors could affect them, and I am not sure the persistent handle is actually what saves the time. Do you have any profiling data showing the delta?
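For readers unfamiliar with why persistent handles matter here, the mechanism raphf relies on is plain connection reuse: a fresh handle pays the TCP handshake (and, for HTTPS, the TLS handshake) on every request, while a persistent handle pays it once. This is a minimal illustration of that principle in Python against a local test server, not pecl/http itself (the server, port, and variable names are all invented for the example):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enable keep-alive so a connection can be reused

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the test server quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

bodies = []

# Without reuse: a brand-new TCP connection (and handshake) per request,
# analogous to raphf.persistent_handle.limit=0.
for _ in range(20):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/")
    bodies.append(conn.getresponse().read())
    conn.close()

# With reuse: one connection serves all 20 requests, analogous to a
# persistent handle. With HTTPS or HTTP/2 the saved handshakes cost
# even more, so the gap widens.
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(20):
    conn.request("GET", "/")
    bodies.append(conn.getresponse().read())
conn.close()
server.shutdown()

print(len(bodies))  # → 40
```

Whether Mike's 1.2s delta comes from this alone is exactly the open question: over plain HTTP the per-request handshake is cheap, so profiling data distinguishing connection setup from DNS lookups and other per-request costs would settle it.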