Re: [pacman-dev] Dropping SIGPIPE workaround from dload.c

2020-04-20 Thread Dave Reisner
On Mon, Apr 20, 2020, 18:04 Anatol Pomozov wrote:

> Hi folks
>
> I am looking at the lib/libalpm/dload.c download codepath and found
> that it has the following snippet:
>
>    /* Ignore any SIGPIPE signals. With libcurl, these shouldn't be
>     * happening, but better safe than sorry. Store the old signal
>     * handler first. */
>

Yup, this is fine to remove. Curl would actually signal DNS timeouts with
SIGALRM, not SIGPIPE. This is leftover crap from the libfetch days.

dR

>    mask_signal(SIGPIPE, SIG_IGN, &orig_sig_pipe);
>    dload_interrupted = 0;
>    mask_signal(SIGINT, &inthandler, &orig_sig_int);
>
> My understanding is that this code tries to handle DNS timeouts that
> are signaled as SIGPIPE, and installs this sighandler to avoid
> killing the whole app.
>
> But my experiment (on my dev branch that removes the code snippet
> above) shows that curl handles DNS timeouts correctly:
>
>  sudo pacman -U http://blackhole.webpagetest.org/foo.zst
> :: Retrieving packages...
>  foo.zst failed to download
>  foo.zst.sig failed to download
> error: failed retrieving file 'foo.zst' from blackhole.webpagetest.org
> : Connection timed out after 1 milliseconds
> error: failed retrieving file 'foo.zst.sig' from
> blackhole.webpagetest.org : Connection timed out after 1
> milliseconds
> warning: failed to retrieve some files
>
> There is also a test that checks for SIGPIPE handling
> (test/pacman/tests/scriptlet-signal-reset.py) and it passes as well:
>
> 157/332 scriptlet-signal-reset.py   OK  0.11832118034362793 s
>
> Which makes me believe the SIGPIPE handler trick is not needed and
> mCURL handles timeouts correctly.
>
> I've been using a branch without this handler for two months and have
> never seen any signal-related issues with my parallel-download pacman.
>
> I am thinking of *not* carrying this workaround over to the new
> multiplexed function and want to check that you are OK with that.
>


[pacman-dev] Dropping SIGPIPE workaround from dload.c

2020-04-20 Thread Anatol Pomozov
Hi folks

I am looking at the lib/libalpm/dload.c download codepath and found
that it has the following snippet:

   /* Ignore any SIGPIPE signals. With libcurl, these shouldn't be
    * happening, but better safe than sorry. Store the old signal
    * handler first. */
   mask_signal(SIGPIPE, SIG_IGN, &orig_sig_pipe);
   dload_interrupted = 0;
   mask_signal(SIGINT, &inthandler, &orig_sig_int);

My understanding is that this code tries to handle DNS timeouts that
are signaled as SIGPIPE, and installs this sighandler to avoid
killing the whole app.

But my experiment (on my dev branch that removes the code snippet
above) shows that curl handles DNS timeouts correctly:

 sudo pacman -U http://blackhole.webpagetest.org/foo.zst
:: Retrieving packages...
 foo.zst failed to download
 foo.zst.sig failed to download
error: failed retrieving file 'foo.zst' from blackhole.webpagetest.org
: Connection timed out after 1 milliseconds
error: failed retrieving file 'foo.zst.sig' from
blackhole.webpagetest.org : Connection timed out after 1
milliseconds
warning: failed to retrieve some files

There is also a test that checks for SIGPIPE handling
(test/pacman/tests/scriptlet-signal-reset.py) and it passes as well:

157/332 scriptlet-signal-reset.py   OK  0.11832118034362793 s

Which makes me believe the SIGPIPE handler trick is not needed and
mCURL handles timeouts correctly.

I've been using a branch without this handler for two months and have
never seen any signal-related issues with my parallel-download pacman.

I am thinking of *not* carrying this workaround over to the new
multiplexed function and want to check that you are OK with that.