And in the cases where it "works", it's only because the buffer happens to be
larger. That approach risks consuming all available memory in the worst case,
if someone tries to "make it work" with an ever-expanding buffer. The
fundamental deadlock between processes blocked on I/O is not solved by
buffering; something has to actually consume the data instead of blocking, to
break the deadlock.

Perl 5 and Python both call this the open3 problem.
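For what it's worth, the async approach in Perl 6 looks roughly like this,
using Proc::Async so that both streams are drained as the child produces
output (a sketch, not a tested fix for this ticket; the byte count is just an
arbitrary size well past a typical pipe buffer):

    my $proc = Proc::Async.new($*EXECUTABLE, '-e',
        '$*ERR.print("8" x 500_000)');
    my $captured-err = '';
    # Tapping both supplies means neither pipe buffer can fill up
    # while the other one is being drained.
    $proc.stdout.tap(-> $chunk { });                          # drain and discard stdout
    $proc.stderr.tap(-> $chunk { $captured-err ~= $chunk });  # collect stderr
    await $proc.start;
    say $captured-err.chars;

Whether run()'s synchronous :out/:err interface should do something like this
under the hood is a separate design question, but the consuming has to happen
somewhere.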

On Wed, Mar 7, 2018 at 6:42 PM, Timo Paulssen <t...@wakelift.de> wrote:

> This is a well-known problem in IPC. If you don't do it async, you risk
> the buffer of the stream you're not currently reading from filling up
> completely. Now the child program is trying to write to stderr, but can't
> because the pipe is full. The parent program is waiting to read from the
> child's stdout, but nothing is arriving, and it never reads from stderr,
> so it's a deadlock.
>
> Wouldn't call this a rakudo bug.
>
>
> On 07/03/18 23:04, Christian Bartolomaeus via RT wrote:
> > On Fri, 10 Feb 2017 23:48:54 -0800, barto...@gmx.de wrote:
> >> FWIW that hangs on FreeBSD as well (maybe not too much a surprise,
> >> given the relationship of the OSes).
> > Hmm, looks like it hangs on Linux too -- with more than 224000 bytes
> > on my machine:
> >
> > $ perl6 -e 'my $proc = run($*EXECUTABLE, "-e", q| $*ERR.print("8" x
> > 224001);|,:out,:err); say $proc.out.slurp'   ## hangs
> > ^C
> > $ perl6 --version
> > This is Rakudo Star version 2017.10 built on MoarVM version 2017.10
> > implementing Perl 6.c.
> > $ uname -a
> > Linux p6 3.2.0-4-amd64 #1 SMP Debian 3.2.96-2 x86_64 GNU/Linux
>



-- 
brandon s allbery kf8nh                               sine nomine associates
allber...@gmail.com                                  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
