Re: Process Pipes

2018-10-11 Thread Steven Schveighoffer via Digitalmars-d-learn

On 10/10/18 4:07 AM, Gorker wrote:

On Wednesday, 10 October 2018 at 08:02:29 UTC, Kagamin wrote:
The stderr pipe buffer is full (it's about 8 KB or so), and gcc blocks until 
you read from it.


Thank you for your kind reply.

How can I just try to read from stdout (without blocking), and then try to 
read from stderr (without blocking)?

I mean, how do I check both pipes for data?


Probably you could use OS mechanisms like poll or select. Given your 
usage of gcc, I'm assuming you are on some non-Windows system? Those 
tools should be available.
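
For example, a minimal Posix-only sketch of that idea (not code from the 
thread or from std.process; the checkPipes helper is hypothetical, and it 
assumes `pipes` is the ProcessPipes returned by pipeProcess(args, 
Redirect.all)) might look like this:

---
import std.process : ProcessPipes;
import core.sys.posix.poll : poll, pollfd, POLLIN;

// Report which of the two pipes currently has data, waiting at most
// timeoutMs milliseconds for either one to become readable.
void checkPipes(ref ProcessPipes pipes, out bool outReady, out bool errReady,
                int timeoutMs = 100)
{
    pollfd[2] fds;
    fds[0].fd = pipes.stdout.fileno;  fds[0].events = POLLIN;
    fds[1].fd = pipes.stderr.fileno;  fds[1].events = POLLIN;

    if (poll(fds.ptr, 2, timeoutMs) > 0)
    {
        outReady = (fds[0].revents & POLLIN) != 0;
        errReady = (fds[1].revents & POLLIN) != 0;
    }
}
---

Note the caveat below, though: this only tells you about the file 
descriptor, not about data already sitting in the FILE * buffer.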


Of course, it's hard to write a cross-platform way to do this, so I don't 
think std.process has a way to check whether a pipe has data 
asynchronously. Things are complicated by the fact that it wraps the pipes 
in FILE * constructs, which means the file descriptor may not show that 
data is available when it has already been read into the FILE * buffer.


I think the most reasonable way is to use multiple threads to process 
the output. It may seem heavy-handed, but I don't know of a better way 
given how std.process works.
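
For instance, a rough sketch of the two-thread approach (the gcc command 
line is taken from your original post; everything else is just placeholder 
code, not something out of std.process):

---
import std.process : pipeProcess, wait, Redirect;
import std.parallelism : task;
import std.stdio : writeln;

void main()
{
    auto pipes = pipeProcess(["gcc", "-c", "-Iinclude", "-o", "foo.cpp.o",
                              "src/foo.cpp"], Redirect.all);

    // Drain stderr on its own thread so gcc can never block on a full pipe.
    auto errTask = task({
        string[] lines;
        foreach (line; pipes.stderr.byLineCopy) lines ~= line;
        return lines;
    });
    errTask.executeInNewThread();

    // Meanwhile, read stdout on this thread as it arrives.
    foreach (line; pipes.stdout.byLineCopy) writeln(line);

    auto diagnostics = errTask.yieldForce;   // join and collect stderr lines
    auto status = wait(pipes.pid);
    writeln("exit status: ", status, ", stderr lines: ", diagnostics.length);
}
---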



As an alternative, how can I raise the 8 KB limit?


You would have to use the OS mechanisms to do this (e.g. 
https://stackoverflow.com/questions/5218741/set-pipe-buffer-size). I 
strongly suggest not taking this route; I'm not sure how portable it is to 
raise the buffer size after the pipe is in use. Plus, how much do you 
raise the limit? What if you compile an even worse C program and get 
100k of errors? ;)
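
For reference, the mechanism from that Stack Overflow link is Linux-only 
(F_SETPIPE_SZ doesn't exist on macOS, which is what you're on); a rough 
sketch, with the constant hard-coded because I'm not certain druntime 
declares it, and with a hypothetical helper name:

---
import std.process : ProcessPipes;
import core.sys.posix.fcntl : fcntl;

// Linux-specific fcntl command (F_LINUX_SPECIFIC_BASE + 7).
enum F_SETPIPE_SZ = 1024 + 7;

// Ask the kernel to grow the stderr pipe buffer; returns the size actually
// granted, or -1 on failure. Linux only, and not recommended here.
int growStderrPipe(ref ProcessPipes pipes, int bytes = 1 << 20)
{
    return fcntl(pipes.stderr.fileno, F_SETPIPE_SZ, bytes);
}
---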


-Steve


Re: Process Pipes

2018-10-10 Thread Colin via Digitalmars-d-learn

On Tuesday, 9 October 2018 at 09:15:09 UTC, Gorker wrote:

Hi all,

I'm on macOS 10.11.6 with dmd 2.081.2, and I have a problem with 
std.process.


---
gork ():foo gorker$ gcc -c -Iinclude -o foo.cpp.o src/foo.cpp
In file included from src/foo.cpp:2:
include/foo/foo.hpp:22:10: warning: scoped enumerations are a C++11 extension [-Wc++11-extensions]
enum class foo_event_type_t
     ^


56 warnings and 9 errors generated.
---
No output (zero bytes) on stdout.

If I use std.process to collect both streams with:
---
auto pipes = pipeProcess(args, Redirect.all, null, Config.none, workDir);
foreach (c; pipes.stdout.byChunk(100)) writeln(cast(string) c);
// <<< it halts here: stdout is empty, but not at EOF

foreach (c; pipes.stderr.byChunk(100)) writeln(cast(string) c);
---
Everything is fine if I don't redirect stderr to a pipe.

Suggestions?


If you want to use async I/O, you'll have to fork and set O_NONBLOCK on 
the output file descriptors.


I wrote this years ago (not sure if it still compiles tbh, but it 
should)


https://github.com/grogancolin/dexpect

Specifically this block should help you: 
https://github.com/grogancolin/dexpect/blob/master/source/dexpect.d#L343-L352
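
For anyone who doesn't want to follow the link, here is a minimal sketch 
of that general idea (a non-blocking descriptor plus a direct read; this 
is not the dexpect code itself, and tryRead is a hypothetical helper):

---
import core.sys.posix.fcntl : fcntl, F_GETFL, F_SETFL, O_NONBLOCK;
import core.sys.posix.unistd : read;
import core.stdc.errno : errno, EAGAIN;

// Returns bytes read (> 0) or 0 on EOF; on a negative return, wouldBlock
// tells you whether the pipe simply had no data available yet.
ptrdiff_t tryRead(int fd, ubyte[] buf, out bool wouldBlock)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
    auto n = read(fd, buf.ptr, buf.length);
    wouldBlock = (n < 0 && errno == EAGAIN);
    return n;
}
---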


Re: Process Pipes

2018-10-10 Thread Danny Arends via Digitalmars-d-learn

On Wednesday, 10 October 2018 at 09:16:43 UTC, Gorker wrote:

On Wednesday, 10 October 2018 at 08:31:36 UTC, Kagamin wrote:
Maybe read them with parallelism? 
http://dpldocs.info/experimental-docs/std.parallelism.parallel.2.html


Thanks, but I'd rather avoid having to use threads just for this.

Any other suggestions?


This might be a way to do it under Linux / Posix systems (it 
reads byte by byte), adapted from my web server which uses it to 
read the output from php/perl/cgi scripts:


https://github.com/DannyArends/DaNode/blob/master/danode/process.d

It uses fcntl to make sure the pipe is non-blocking, so you can read one 
byte from the stdout pipe, then one byte from the stderr pipe:


// Imports needed by the snippets below. NORMAL and DEBUG are verbosity
// levels; the values here are placeholders (the real project defines them
// elsewhere).
import std.stdio : File, writeln, writefln;
import std.process : Pipe, pipe, spawnShell;
import core.stdc.stdio : EOF, fgetc;
import core.sys.posix.stdio : fileno;
import core.sys.posix.fcntl : fcntl, F_SETFL, O_NONBLOCK;

enum { NORMAL = 0, DEBUG = 1 }

// Put the underlying file descriptor into non-blocking mode.
bool nonblocking(ref File file) {
  version(Posix) {
    return(fcntl(fileno(file.getFP()), F_SETFL, O_NONBLOCK) != -1);
  } else {
    return(false);
  }
}

// Read a single byte from the pipe's read end; returns EOF when no data
// is available, the pipe is closed, or an error occurs.
int readpipe(ref Pipe pipe, int verbose = NORMAL){
  File fp = pipe.readEnd;
  try{
    if(fp.isOpen()){
      if(!nonblocking(fp) && verbose >= DEBUG)
        writeln("[WARN]   unable to create nonblocking pipe for command");
      return(fgetc(fp.getFP()));
    }
  }catch(Exception e){
    writefln("[WARN]   Exception during readpipe command: %s", e.msg);
    fp.close();
  }
  return(EOF);
}


// Usage (simplified; `inputfile`, `command`, `outbuffer` and `errbuffer`
// are placeholders, e.g. strings and Appender!(char[]) buffers):
auto pStdIn  = File(inputfile, "r");
auto pStdOut = pipe();
auto pStdErr = pipe();
auto cpid = spawnShell(command, pStdIn, pStdOut.writeEnd, pStdErr.writeEnd, null);

while(true){
  int ch = readpipe(pStdOut);
  if(ch != EOF) outbuffer.put(cast(char)ch);
  ch = readpipe(pStdErr);
  if(ch != EOF) errbuffer.put(cast(char)ch);
  // ... break out once the child has exited and both pipes return EOF
}

Hope this works for your use case.


Re: Process Pipes

2018-10-10 Thread Gorker via Digitalmars-d-learn

On Wednesday, 10 October 2018 at 08:31:36 UTC, Kagamin wrote:
Maybe read them with parallelism? 
http://dpldocs.info/experimental-docs/std.parallelism.parallel.2.html


Thanks, but I'd rather avoid having to use threads just for this.

Any other suggestions?



Re: Process Pipes

2018-10-10 Thread Kagamin via Digitalmars-d-learn
Maybe read them with parallelism? 
http://dpldocs.info/experimental-docs/std.parallelism.parallel.2.html


Re: Process Pipes

2018-10-10 Thread Gorker via Digitalmars-d-learn

On Wednesday, 10 October 2018 at 08:02:29 UTC, Kagamin wrote:
The stderr pipe buffer is full (it's about 8 KB or so), and gcc blocks 
until you read from it.


Thank you for your kind reply.

How can I just try to read from stdout (without blocking), and then try 
to read from stderr (without blocking)?

I mean, how do I check both pipes for data?

As an alternative, how can I raise the 8 KB limit?




Re: Process Pipes

2018-10-10 Thread Kagamin via Digitalmars-d-learn
The stderr pipe buffer is full (it's about 8 KB or so), and gcc blocks 
until you read from it.