Aha! I have found the cause of the problem: on Solaris 10
( SunOS xxxxx 5.10 Generic_137111-07 sun4u sparc SUNW,SPARC-Enterprise Solaris ),
setting the ksh "pipefail" option causes the "jobs" builtin to fail:

$ set -o pipefail
$ ls
...
$ jobs
[1] +  Running                 ls
$ set +o pipefail
$ jobs
$

Also, when "nohup" is run in a ksh shell with 'pipefail' set,
it is often unable to redirect the output of its jobs to nohup.out .

Perhaps the "jobs" built-in should be disabled when "pipefail" is in effect on
Solaris?
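For anyone who wants to try this elsewhere, here is a minimal reproduction
sketch. It assumes only that a "ksh" binary is on PATH (and skips quietly
otherwise); the comments describe the behaviour reported above, not
guaranteed output:

```shell
#!/bin/sh
# Reproduction sketch for the reported pipefail/jobs interaction.
# Assumes "ksh" is on PATH; skips quietly if it is not.
if command -v ksh >/dev/null 2>&1; then
ksh <<'EOF'
sleep 2 &
set -o pipefail
print "with pipefail:"
jobs            # on an affected build this reportedly prints nothing
set +o pipefail
print "without pipefail:"
jobs            # expected to list the background sleep
wait
EOF
else
echo "SKIP: no ksh found"
fi
```

Comparing the two "jobs" listings in the output should show whether the
build in use is affected.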

Regards,
Jason Vas Dias <[EMAIL PROTECTED]>


On Saturday 15 November 2008 18:16:28 Roland Mainz wrote:
> Jason Vas Dias wrote:
> > 
> > I've managed to get both the binary solaris release of ast-ksh
> > (2008-02-??) and today's latest release built from CVS (2008-11-04)
> > into a state where the "jobs" alias consistently returns garbage, and
> > I'm trying to figure out how it gets into this state - any suggestions ?
> [snip]
> > Is there some setting that could account for such behaviour ?
> > 
> > Any ideas anyone ?
> 
> Not ad hoc... but today I had an ast-ksh.2008-11-04 session where "jobs"
> returned a process which did not exist:
> -- snip --
> $ jobs
> [1] +  Running                 sync
> $ jobs -l
> [1] + 1117       Running                 sync
> $ ps -ef | fgrep 1117
>  test001  1260   684   0 22:12:50 pts/1       0:00 fgrep 1117
> $ kill -0 %1 ; print $?
> kill: %1: no such job
> -- snip --
> 
> Something weird is going on... ;-(
> 
> ----
> 
> Bye,
> Roland
> 
> -- 
>   __ .  . __
>  (o.\ \/ /.o) [EMAIL PROTECTED]
>   \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
>   /O /==\ O\  TEL +49 641 3992797
>  (;O/ \/ \O;)
> 
> Re: [ast-developers] jobs bug ?
> From: Jason Vas Dias <[EMAIL PROTECTED]>, Friday 17:01:39
>   Another twist:
>
> $ jobs
> $ (unset PATH; jobs) # produces bogus output after nohup jobs launched!
> [65] +  Running                 /usr/bin/jobs
> ...
> [1]    Running                 env
> $ /usr/bin/jobs      # No bogus output
> $
>
> Fix:
>
> Put
>
>     alias jobs=/usr/bin/jobs
>
> in one's .kshrc
>
> Any better fixes ?
>
> Thanks & Regards,
> Jason  
>
> On Fri, 2008-11-14 at 16:37 -0500, Jason Vas Dias wrote:
> > I've managed to get both the binary solaris release of ast-ksh
> > (2008-02-??) and today's latest release built from CVS (2008-11-04)
> > into a state where the "jobs" alias consistently returns garbage, and
> > I'm trying to figure out how it gets into this state - any suggestions ?
> >
> > Here is the problem :
> >
> > Every command I run gets added to the jobs list - even commands that
> > fail to run at all, let alone run in the background:
> >
> > $ env
> > ...
> > $ josb
> > /home/cmob/bin/sol10.sun4/ksh: josb: not found [No such file or
> > directory]
> > $ jobs
> > [3] +  Running                 josb
> > [2] -  Running                 env
> > [1]    Running                 env
> > $ jobs
> > [4] +  Running                 jobs
> > [3] -  Running                 josb
> > [2]    Running                 env
> > [1]    Running                 env
> > $ job
> > /home/cmob/bin/sol10.sun4/ksh: job: not found [No such file or
> > directory]
> > $ jobs            
> > [6] +  Running                 job
> > [5] -  Running                 jobs
> > [4]    Running                 jobs
> > [3]    Running                 josb
> > [2]    Running                 env
> > [1]    Running                 env
> > $ jobs
> > [7] +  Running                 jobs
> > [6] -  Running                 job
> > [5]    Running                 jobs
> > [4]    Running                 jobs
> > [3]    Running                 josb
> > [2]    Running                 env
> > [1]    Running                 env
> > $ jobs
> > [8] +  Running                 jobs
> > [7] -  Running                 jobs
> > [6]    Running                 job
> > [5]    Running                 jobs
> > [4]    Running                 jobs
> > [3]    Running                 josb
> > [2]    Running                 env
> > [1]    Running                 env
> >
> >
> > It even does this after I move my startup files out of the way:
> >    $ mv ~/.kshrc .not-kshrc
> >    $ mv ~/.login .not-login
> >    $ mv ~/.profile .not-profile
> >
> > But I'm sure it did not do this before I started playing around with
> > these files. Several large 'nohup' jobs are running - the shell does
> > not get into this state until I initiate them.
> >
> > Once in this state, it does not get out of it until my nohup jobs are
> > complete.
> >
> > Is there some setting that could account for such behaviour ?
> >
> > Any ideas anyone ?
> >
> > Any help on this would be much appreciated .
> >
> > Thanks & Regards,
> >
> > Jason <[EMAIL PROTECTED]>
> >
> >



_______________________________________________
ast-developers mailing list
[email protected]
https://mailman.research.att.com/mailman/listinfo/ast-developers
