On Thu, 1 Nov 2001, Cory Petkovsek wrote:
>I'm working on creating a centralized logging facility for my NT servers.
>Currently my linux servers email me highlights of their logs using logcheck.
>
>
>Through this process, I discovered some interesting redirection usage that I
>hadn't seen before:
>"eldump -? 2>&1 | more"
>
>I figure this means take standard error(2) and merge it with standard
>out(1). Right?
Right. More precisely, this closes the existing standard error, and
duplicates the standard output file descriptor into the standard error file
descriptor, so that anything written to either one goes to wherever
standard output was originally going.
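You can watch the merge happen with a stand-in command that writes one line
to each stream (the sh -c one-liner below is just for illustration):

sh -c 'echo to-stdout; echo to-stderr >&2' 2>&1 | more

Both lines travel down the pipe to "more"; without the 2>&1, the to-stderr
line would bypass the pipe and appear directly on the terminal.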
>Here are some experiments I ran:
>$ prog 1>d 1>j
>
>stdout went to j
>d is empty
Makes sense. You've redirected stdout to "d", and then immediately
redirected it to "j". Since the shell does the redirections before
starting the program, all the program's output goes to "j", leaving "d"
empty. (By the way, you can abbreviate "1>" to just ">".)
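A quick way to see this, with "echo" standing in for "prog":

echo hello 1>d 1>j
cat d    # prints nothing: "d" was created (and truncated) but never written to
cat j    # hello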
>$ prog 1>d 2>&1 1>j
>
>stdout went to j
>stderr went to d
Here we see that redirections are performed in the order written. This is
how it breaks down:
after 1>d, stdout goes to "d", stderr goes to console
after 2>&1, stdout goes to "d", stderr goes to "d"
after 1>j, stdout goes to "j", stderr goes to "d"
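To see it in action, here is a stand-in command that writes one line to
stdout and one to stderr:

sh -c 'echo to-stdout; echo to-stderr >&2' 1>d 2>&1 1>j
cat d    # to-stderr
cat j    # to-stdout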
>How many streams are there? Can I do something like
>$ prog 1>&4 4>&3 1>j 3>&5 5>&2
Normally there are three standard ones: 0 is standard input, 1 is standard
output, and 2 is standard error. The shell will also let you open
higher-numbered descriptors (at least 3 through 9), but those have no
predefined meaning, and most programs won't do anything with them.
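For instance, you can open descriptor 3 yourself and write to it from a
child command (extra.log is just a made-up name):

sh -c 'echo via-fd-3 >&3' 3>extra.log
cat extra.log    # via-fd-3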
>Can I fork a stream? Maybe like:
>$ prog 1>&3>&4 3>file1 4>file2
No. You can, however, achieve a similar effect with "tee", for example,
prog | tee file1 > file2
The tee command will let you send output to many files at once, for
example,
prog | tee file1 file2 file3 > file4
>What other tricks can I do with streams? Can I take stdout and route it in
>to stdin, thus making a superconductor inside my shell?
>$ prog 1>&3 <&3
This says "send stdout to file descriptor #3" (which will fail since #3
isn't opened yet), and then "get stdin from descriptor #3" (which will also
fail, for the same reason).
Before you can duplicate a file descriptor with the #>&# syntax, you must
first open the file. The shell automatically opens descriptors 0, 1, and 2
for you; the rest must be opened explicitly, with (for example) the
#>filename syntax.
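So a variation like this does work, because the 3>somefile comes first and
opens descriptor 3 before the duplication ("prog" and "somefile" are just
placeholders):

prog 3>somefile 1>&3    # stdout ends up in somefile, via descriptor 3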
>or maybe just
>$ prog <&1
>hmm, I just did:
>$ cat <&1
>but it wasn't very interesting. Same as 'cat'
This says "get stdin from stdout". But it doesn't have the desired
effect--normally stdin and stdout are both hooked to /dev/tty??, and after
this redirect they still are.
I don't think there is a way, using only shell redirects, to hook up a
program's standard input to its own standard output. It can be done from
INSIDE a program (see the pipe(), socketpair(), and dup2() system calls),
but I can't think of a practical reason to do so.
>What about pipes?
>$ prog 1|'perl -pe \'s....\'' 2|wc -l....
No, pipes only redirect standard output. But if you want to redirect
standard error down a pipe, you can do it like this:
firstprog 2>&1 | secondprog
(Of course this also sends standard output down the same pipe.)
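For instance, with the same kind of stand-in command as before:

sh -c 'echo to-stdout; echo to-stderr >&2' 2>&1 | wc -l    # counts both lines: 2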
- Neil Parker, [EMAIL PROTECTED]