Re: [perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever

2018-03-07 Thread Lloyd Fournier via RT
When I filed this ticket I kinda expected that somehow Rakudo or libuv
would handle this for me under the hood. But what Timo and Brandon say
makes sense. The process is still running when you slurp-rest. slurp-rest
needs EOF before it stops blocking, and it will never get it, because the
writing process keeps itself alive until it can finish writing to ERR. It
will never finish, because it still needs to write the 8193rd byte.

Consider:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|, :out, :err); say $proc.out.get'
win

Using .get instead of slurp-rest works fine. This suggested to me that
waiting for the process to finish before calling .slurp-rest would work,
and it did:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|, :out, :err); $proc.exitcode; say $proc.out.slurp-rest'
win

But for some reason, just sleeping didn't:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|, :out, :err); sleep 1; say $proc.out.slurp-rest'   # hangs forever

I'd say this is closable. The solution is to wait for the process to exit
before reading, or to use Proc::Async.
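
For completeness, here is a minimal Proc::Async sketch (the payload
one-liner is borrowed from the examples above; the stderr tap exists
purely to keep that pipe drained, so neither side can block):

my $proc = Proc::Async.new: $*EXECUTABLE, '-e',
    q|$*OUT.print("win\n"); $*ERR.print("8" x 8193);|;
my $out = '';
$proc.stdout.tap: { $out ~= $_ };   # accumulate stdout as it arrives
$proc.stderr.tap: -> $ { };         # discard stderr, but keep consuming it
await $proc.start;                  # kept once the child has exited
print $out;                         # «win»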

Thanks!

On Thu, Mar 8, 2018 at 2:51 PM Brandon Allbery via RT <
perl6-bugs-follo...@perl.org> wrote:

> And in the cases where it "works", the buffer is larger, which risks
> consuming all available memory in the worst case if someone tries to
> "make it work" with an expanding buffer. The fundamental deadlock
> between processes blocked on I/O is not solved by buffering. Something
> needs to actually consume data instead of blocking, to break the deadlock.
>
> Perl 5 and Python both call this the open3 problem.
>
> On Wed, Mar 7, 2018 at 6:42 PM, Timo Paulssen  wrote:
>
> > This is a well-known problem in IPC. If you don't do it async, you risk
> > the buffer you're not currently reading from filling up completely. Now
> > your client program is trying to write to stderr, but can't because it's
> > full. Your parent program is hoping to read from the child's stdout,
> > but nothing is arriving, and it never reads from stderr, so it's a
> > deadlock.
> >
> > Wouldn't call this a rakudo bug.
> >
> >
> > On 07/03/18 23:04, Christian Bartolomaeus via RT wrote:
> > > On Fri, 10 Feb 2017 23:48:54 -0800, barto...@gmx.de wrote:
> > >> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> > >> given the relationship of the OSes).
> > > Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on my machine:
> > >
> > > $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|, :out, :err); say $proc.out.slurp'   ## hangs
> > > ^C
> > > $ perl6 --version
> > > This is Rakudo Star version 2017.10 built on MoarVM version 2017.10
> > > implementing Perl 6.c.
> > > $ uname -a
> > > Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux
> >
>
>
>
> --
> brandon s allbery kf8nh                            sine nomine associates
> allber...@gmail.com                                ballb...@sinenomine.net
> unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net
>
>


Re: [perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever

2018-03-07 Thread Brandon Allbery via RT
And in the cases where it "works", the buffer is larger, which risks
consuming all available memory in the worst case if someone tries to
"make it work" with an expanding buffer. The fundamental deadlock
between processes blocked on I/O is not solved by buffering. Something
needs to actually consume data instead of blocking, to break the deadlock.

Perl 5 and Python both call this the open3 problem.
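
One way to "actually consume" while staying with the synchronous run
interface is to drain the other stream on a separate thread. A minimal
sketch, assuming the run/slurp-rest API used elsewhere in this thread:

my $proc = run $*EXECUTABLE, '-e', q|$*ERR.print("8" x 8193)|, :out, :err;
my $drained = start $proc.err.slurp-rest;  # consume stderr concurrently
say $proc.out.slurp-rest;                  # stdout reaches EOF once the child exits
say (await $drained).chars;                # 8193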

On Wed, Mar 7, 2018 at 6:42 PM, Timo Paulssen  wrote:

> This is a well-known problem in IPC. If you don't do it async, you risk
> the buffer you're not currently reading from filling up completely. Now
> your client program is trying to write to stderr, but can't because it's
> full. Your parent program is hoping to read from the child's stdout, but
> nothing is arriving, and it never reads from stderr, so it's a deadlock.
>
> Wouldn't call this a rakudo bug.
>
>
> On 07/03/18 23:04, Christian Bartolomaeus via RT wrote:
> > On Fri, 10 Feb 2017 23:48:54 -0800, barto...@gmx.de wrote:
> >> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> >> given the relationship of the OSes).
> > Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on my machine:
> >
> > $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|, :out, :err); say $proc.out.slurp'   ## hangs
> > ^C
> > $ perl6 --version
> > This is Rakudo Star version 2017.10 built on MoarVM version 2017.10
> > implementing Perl 6.c.
> > $ uname -a
> > Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux
>



-- 
brandon s allbery kf8nh                            sine nomine associates
allber...@gmail.com                                ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


Re: [perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever

2018-03-07 Thread Timo Paulssen via RT
This is a well-known problem in IPC. If you don't do it async, you risk
the buffer you're not currently reading from filling up completely. Now
your client program is trying to write to stderr, but can't because it's
full. Your parent program is hoping to read from the child's stdout, but
nothing is arriving, and it never reads from stderr, so it's a deadlock.

Wouldn't call this a rakudo bug.
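
For context, the 8192-byte figure in the ticket title marks where the
behaviour flips; a hypothetical pair of one-liners illustrating the
boundary (derived from the reproduction quoted below):

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q|$*ERR.print("8" x 8192)|, :out, :err); say $proc.out.slurp-rest'   # fits the pipe buffer; child exits, EOF arrives
$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q|$*ERR.print("8" x 8193)|, :out, :err); say $proc.out.slurp-rest'   # one byte over; both sides block forever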


On 07/03/18 23:04, Christian Bartolomaeus via RT wrote:
> On Fri, 10 Feb 2017 23:48:54 -0800, barto...@gmx.de wrote:
>> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
>> given the relationship of the OSes).
> Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on my machine:
>
> $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|, :out, :err); say $proc.out.slurp'   ## hangs
> ^C
> $ perl6 --version
> This is Rakudo Star version 2017.10 built on MoarVM version 2017.10
> implementing Perl 6.c.
> $ uname -a
> Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux


[perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever

2018-03-07 Thread Christian Bartolomaeus via RT
On Fri, 10 Feb 2017 23:48:54 -0800, barto...@gmx.de wrote:
> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> given the relationship of the OSes).

Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on my 
machine:

$ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|, :out, :err); say $proc.out.slurp'   ## hangs
^C
$ perl6 --version
This is Rakudo Star version 2017.10 built on MoarVM version 2017.10
implementing Perl 6.c.
$ uname -a
Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux


[perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever

2018-03-07 Thread Christian Bartolomaeus via RT
On Fri, 10 Feb 2017 23:48:54 -0800, barto...@gmx.de wrote:
> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> given the relationship of the OSes).

This still hangs on MoarVM, but works on JVM (I didn't check the behaviour on 
JVM last year):

$ ./perl6-j -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 8193);|, :out, :err); say $proc.out.slurp-rest; say "alive"'

alive
$ ./perl6-j --version
This is Rakudo version 2018.02.1-124-g8d954027f built on JVM
implementing Perl 6.c.


[perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever

2017-02-10 Thread Christian Bartolomaeus via RT
FWIW that hangs on FreeBSD as well (maybe not too much a surprise, given the 
relationship of the OSes).


[perl #127682] [OSX] writing more than 8192 bytes to IO::Handle causes it to hang forever

2016-03-09 Thread via RT
# New Ticket Created by  Lloyd Fournier 
# Please include the string:  [perl #127682]
# in the subject line of all future correspondence about this issue. 
# <https://rt.perl.org/Ticket/Display.html?id=127682>


perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 8193);|, :out, :err);
   say $proc.out.slurp-rest'   # hangs forever

If you swap $*ERR with $*OUT and $proc.out with $proc.err, the same thing
happens. I dunno whether the problem is on the reading side or the writing
side.
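
Spelled out, the swapped variant (same hang, per the above):

perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("8" x 8193);|, :out, :err);
   say $proc.err.slurp-rest'   # hangs forever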

I made RT #127681 (which is the same thing and can be closed) today. But
now that I have golfed it down to this, I felt it deserved its own ticket.