When I filed this ticket I kinda expected that somehow rakudo or libuv would handle this for me under the hood. But what Timo and Brandon say makes sense. The process is still running when you slurp-rest. slurp-rest needs EOF before it stops blocking, and it will never get it, because the child process keeps itself alive until it can finish writing to $*ERR. But it can never finish, because it is still blocked trying to write the 8193rd byte.
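For illustration (this is not from the thread): you can watch that "8193rd byte" limit directly by filling a pipe with non-blocking writes until the kernel buffer is full. A minimal Python sketch, assuming a POSIX system (the exact capacity varies; Linux typically reports 65536):

```python
import os

# Create a pipe and make the write end non-blocking, so a full
# kernel buffer raises BlockingIOError instead of hanging us --
# the same condition that blocks the child in the ticket.
r, w = os.pipe()
os.set_blocking(w, False)

written = 0
try:
    while True:
        # Keep stuffing bytes in; nobody is reading the other end.
        written += os.write(w, b"8" * 4096)
except BlockingIOError:
    # The pipe buffer is full; a blocking writer would now hang
    # here forever, waiting for a reader that never comes.
    pass

print(f"pipe buffer filled after {written} bytes")
os.close(r)
os.close(w)
```

Once that buffer is full, the child's write blocks, the parent's slurp-rest never sees EOF, and the two processes deadlock.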
Consider:

    $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|,:out,:err); say $proc.out.get'
    win

Using .get instead of slurp-rest works fine. This suggested to me that waiting for the process to finish before calling .slurp-rest would work. And it did:

    $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|,:out,:err); $proc.exitcode; say $proc.out.slurp-rest'
    win

But for some reason, just sleeping didn't:

    $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*OUT.print("win\n"); $*ERR.print("8" x 8193);|,:out,:err); sleep 1; say $proc.out.slurp-rest'   # hangs forever

I'd say this is closable. The solution is to wait for the process to exit before reading, or to use Proc::Async. Thanks!

On Thu, Mar 8, 2018 at 2:51 PM Brandon Allbery via RT <perl6-bugs-follo...@perl.org> wrote:

> And in the cases where it "works", the buffer is larger. Which runs the
> risk of consuming all available memory in the worst case, if someone tries
> to "make it work" with an expanding buffer. The fundamental deadlock
> between processes blocked on I/O is not solved by buffering. Something
> needs to actually consume data instead of blocking, to break the deadlock.
>
> Perl 5 and Python both call this the open3 problem.
>
> On Wed, Mar 7, 2018 at 6:42 PM, Timo Paulssen <t...@wakelift.de> wrote:
>
> > This is a well-known problem in IPC. If you don't do it async, you risk
> > the buffer you're not currently reading from filling up completely. Now
> > your client program is trying to write to stderr, but can't because it's
> > full. Your parent program is hoping to read from stdout, but nothing is
> > arriving, and it never reads from stderr, so it's a deadlock.
> >
> > Wouldn't call this a rakudo bug.
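Since Brandon mentions that Perl 5 and Python call this the open3 problem: for comparison (not from the thread), Python's subprocess module hits the same deadlock if you drain one pipe at a time, and its communicate() method is the stock fix, reading stdout and stderr concurrently in the way Proc::Async would. A sketch with the same shape as the Perl 6 one-liner above:

```python
import subprocess
import sys

# Child writes a little to stdout and far more than one pipe
# buffer's worth to stderr -- the same shape as the ticket.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write('win\\n'); sys.stderr.write('8' * 200000)"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

# communicate() consumes both streams concurrently until EOF,
# so neither pipe can fill up and block the child.
out, err = proc.communicate()
print(out.strip())   # win
print(len(err))      # 200000
```

Reading proc.stdout to EOF first and proc.stderr afterwards would reproduce the hang once stderr exceeds the pipe buffer, exactly as with slurp-rest.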
> >
> > On 07/03/18 23:04, Christian Bartolomaeus via RT wrote:
> > > On Fri, 10 Feb 2017 23:48:54 -0800, barto...@gmx.de wrote:
> > >> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> > >> given the relationship of the OSes).
> > > Hmm, looks like it hangs on Linux too -- with more than 224000 bytes on
> > > my machine:
> > >
> > > $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x 224001);|,:out,:err); say $proc.out.slurp' ## hangs
> > > ^C
> > > $ perl6 --version
> > > This is Rakudo Star version 2017.10 built on MoarVM version 2017.10
> > > implementing Perl 6.c.
> > > $ uname -a
> > > Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux
>
> --
> brandon s allbery kf8nh                          sine nomine associates
> allber...@gmail.com                             ballb...@sinenomine.net
> unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net