Re: Potential Bash Script Vulnerability

2024-04-07 Thread Jon Seymour
You do realise that if you allow an untrusted script to run as root, having
it modify itself is the least of your concerns. There are *so* many ways an
untrusted script can cause a problem that do not require your
self-modifying script and for which your proposed mitigation will do
nothing. What's the point in protecting against the 0.01% case if you
have done nothing to protect yourself against system
administrators executing untrusted scripts as root?

On Sun, 7 Apr 2024 at 14:18,  wrote:

> Hello everyone!
>
> I've attached a minimal script which shows the issue, and my recommended
> solution.
>
> Affected for sure:
> System1: 64 bit Ubuntu 22.04.4 LTS - Bash: 5.1.16(1)-release - Hardware:
> HP Pavilion 14-ec0013nq (Ryzen 5 5500u, 32GB RAM, Radeon graphics, NVMe
> SSD.)
> System2: 64 bit Ubuntu 20.10 (No longer supported.) - Bash:
> 5.0.17(1)-release - Hardware: DIY (AMD A10-5800k, 32GB RAM, Radeon
> graphics, several SATA drives)
> and probably a lot more...
>
> Not sure whether or not this is a known issue; truth be told I discovered
> it years ago (back around 2016) as I was learning bash scripting, and
> accidentally appended a command to the running script, which got
> executed immediately after the script. Back then I didn't find it
> important to report since I considered myself a noob. I figured someone
> more experienced would probably find and fix it, or there must be a
> reason for it. I forgot about it. Now, watching a video about the clever
> use of shell in the XZ stuff, I remembered, tested it again and found it
> still unpatched. :S So now I'm reporting it and hope it helps!
>
> Read the code, test it, fix it. More explanation in the comments.
>
> Since it's very old I'd recommend a silent fix before announcement,
> especially since I also found a potentially easy fix.
>
> Kind regards
> Tibor
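[Since the reporter's attachment is not included above, here is a minimal, hypothetical stand-in for it. The behaviour being reported is that bash reads a script file incrementally, so bytes appended beyond the current read offset can be parsed and executed as part of the same run. The file name is an assumption for the demo.]

```shell
# Hypothetical sketch of the reported behaviour: a script that appends a
# command to itself while running. On affected versions, the appended
# echo is executed after the original last line.
cat > /tmp/selfmod.sh <<'EOF'
#!/bin/bash
echo "original last line"
echo 'echo "appended while running"' >> "$0"
EOF
bash /tmp/selfmod.sh
```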


Re: Should nested case statements within command substitutions work on in bash 3.2.x?

2015-03-22 Thread Jon Seymour
Thanks for reply and the workaround.

jon.

On Sun, Mar 22, 2015 at 4:49 PM, Chris F.A. Johnson
ch...@cfajohnson.com wrote:
 On Sun, 22 Mar 2015, Jon Seymour wrote:

 I was surprised that this didn't work with the OSX version of bash 3.2:

 /bin/bash -c 'echo $(case yes in yes) echo yes; ;; no) echo no; ;;
 esac)'

 /bin/bash: -c: line 0: syntax error near unexpected token `;;'
 /bin/bash: -c: line 0: `echo $(case yes in yes) echo yes; ;; no)
 echo no; ;; esac)'

 It does work with bash 4.x.

 Is this a known issue with 3.2 or is it particular to the OSX
 implementation (which in my case is 3.2.53(1))?


   Balance the parentheses:


 echo $(case yes in (yes) echo yes; ;; (no) echo no; ;; esac)

 --
 Chris F.A. Johnson, http://cfajohnson.com
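[The workaround above as a runnable sketch: writing each pattern with a leading parenthesis keeps the parentheses inside $(...) balanced, which is what the older command-substitution scanner in bash 3.2 requires.]

```shell
# Case patterns written with a leading '(' so the $(...) scanner in
# bash 3.2 sees balanced parentheses; later versions accept both forms.
result=$(case yes in (yes) echo yes ;; (no) echo no ;; esac)
echo "$result"
# → yes
```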



Should nested case statements within command substitutions work on in bash 3.2.x?

2015-03-21 Thread Jon Seymour
I was surprised that this didn't work with the OSX version of bash 3.2:

 /bin/bash -c 'echo $(case yes in yes) echo yes; ;; no) echo no; ;; esac)'

/bin/bash: -c: line 0: syntax error near unexpected token `;;'
/bin/bash: -c: line 0: `echo $(case yes in yes) echo yes; ;; no)
echo no; ;; esac)'

It does work with bash 4.x.

Is this a known issue with 3.2 or is it particular to the OSX
implementation (which in my case is 3.2.53(1))?

jon.



Re: Bash-4.3 Official Patch 27

2014-09-28 Thread Jon Seymour
To clarify, I am not sure that the presence of a variable called
/tmp/exploit=me represents a huge vulnerability for at(1), since
anyone who can arrange for this to happen can probably mutate the
user's environment in any way they choose, but I did want to point out
that atrun will attempt to execute '/tmp/exploit=me' as a /bin/sh
command, and should there be an executable file at that path, then an
unexpected execution may result.

I note that my OSX environment currently contains this variable
injected by Chrome:

COM_GOOGLE_CHROME_FRAMEWORK_SERVICE_PROCESS/USERS/JONSEYMOUR/LIBRARY/APPLICATION_SUPPORT/GOOGLE/CHROME_SOCKET=/tmp/launch-5VzA1C/ServiceProcessSocket

and attempts to invoke 'at' result in unexpected attempts to execute a
file called:

COM_GOOGLE_CHROME_FRAMEWORK_SERVICE_PROCESS/USERS/JONSEYMOUR/LIBRARY/APPLICATION_SUPPORT/GOOGLE/CHROME_SOCKET=/tmp/launch-5VzA1C/ServiceProcessSocket

when 'atrun' runs. Of course, to exploit this, the attacker would have
to be able to create a file of that name on the filesystem and enable
'atrun' (which is apparently disabled by default on OSX).
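[The injection itself is easy to reproduce: env(1) does not require the name to be a valid shell identifier. A small sketch, reusing the path from the thread:]

```shell
# env(1) happily places a non-identifier "name" in the environment,
# which is what the at(1)/atrun scenario above relies on.
env '/tmp/exploit=me' printenv | grep '^/tmp/exploit'
# → /tmp/exploit=me
```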



On Mon, Sep 29, 2014 at 2:10 AM,  becker...@gmail.com wrote:
 On Sunday, September 28, 2014 4:38:24 PM UTC+1, beck...@gmail.com wrote:
 ..
 If I use the Arch Linux [testing] bash-4.3.027-1, which uses this patch, 
 then I have a patch against the at(1) source which converts exported 
 functions into something that sh can parse and allows exported functions to 
 be used in the environment that calls at.

 ...

 Jon Seymour asked me if my at patch would fix the following vulnerability 
 (presumably in at(1))

 echo pwd | env /tmp/exploit=me at tomorrow

 which I presume relies on acceptance of /tmp/exploit=me as a possible 
 command. I'm not sure it does since the current at code writes the variable 
 name out unconditionally (ie no inspection of characters etc etc). I could 
 probably raise an error for bad variable names, but I'm not sure I understand 
 what characters are now illegal or what the lexical definition of bash/sh 
 variable names is now. So I would appreciate advice on that.



Re: Bash-4.3 Official Patch 27

2014-09-28 Thread Jon Seymour
Correction: "variable called /tmp/exploit=me" => "a variable called
/tmp/exploit with a value me"

On Mon, Sep 29, 2014 at 2:26 AM, Jon Seymour jon.seym...@gmail.com wrote:
 To clarify, I am not sure that the presence of a variable called
 /tmp/exploit=me represents a huge vulnerability for at(1), since
 anyone who can arrange for this to happen can probably mutate the
 user's environment in any way they choose, but I did want to point out
 that atrun will attempt to execute '/tmp/exploit=me' as a /bin/sh
 command, and should there be an executable file at that path, then an
 unexpected execution may result.

 I note that my OSX environment currently contains this variable
 injected by Chrome:

 COM_GOOGLE_CHROME_FRAMEWORK_SERVICE_PROCESS/USERS/JONSEYMOUR/LIBRARY/APPLICATION_SUPPORT/GOOGLE/CHROME_SOCKET=/tmp/launch-5VzA1C/ServiceProcessSocket

 and attempts to invoke 'at' result in unexpected attempts to execute a
 file called:

 COM_GOOGLE_CHROME_FRAMEWORK_SERVICE_PROCESS/USERS/JONSEYMOUR/LIBRARY/APPLICATION_SUPPORT/GOOGLE/CHROME_SOCKET=/tmp/launch-5VzA1C/ServiceProcessSocket

 when 'atrun' runs. Of course, to exploit this, the attacker would have
 to be able to create a file of that name on the filesystem and enable
 'atrun' (which is apparently disabled by default on OSX).



 On Mon, Sep 29, 2014 at 2:10 AM,  becker...@gmail.com wrote:
 On Sunday, September 28, 2014 4:38:24 PM UTC+1, beck...@gmail.com wrote:
 ..
 If I use the Arch Linux [testing] bash-4.3.027-1, which uses this patch, 
 then I have a patch against the at(1) source which converts exported 
 functions into something that sh can parse and allows exported functions to 
 be used in the environment that calls at.

 ...

 Jon Seymour asked me if my at patch would fix the following vulnerability 
 (presumably in at(1))

 echo pwd | env /tmp/exploit=me at tomorrow

 which I presume relies on acceptance of /tmp/exploit=me as a possible 
 command. I'm not sure it does since the current at code writes the variable 
 name out unconditionally (ie no inspection of characters etc etc). I could 
 probably raise an error for bad variable names, but I'm not sure I 
 understand what characters are now illegal or what the lexical definition of 
 bash/sh variable names is now. So I would appreciate advice on that.
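[On the lexical question raised above: POSIX defines a shell "name" as a letter or underscore followed by letters, digits, or underscores. A hedged sketch of the reject-filter being discussed — the function name and structure are mine, not from the at(1) patch:]

```shell
# is_valid_name is a hypothetical helper: accept only POSIX names
# ([A-Za-z_][A-Za-z0-9_]*) and reject entries like '/tmp/exploit'.
is_valid_name() {
    case $1 in
        ''|[!A-Za-z_]*|*[!A-Za-z0-9_]*) return 1 ;;   # empty, bad first char, or bad later char
        *) return 0 ;;
    esac
}
is_valid_name PATH         && echo "PATH: ok"
is_valid_name /tmp/exploit || echo "/tmp/exploit: rejected"
```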



Re: handling of test == by BASH's POSIX mode

2012-05-28 Thread Jon Seymour
On Tue, May 29, 2012 at 1:08 AM, Eric Blake ebl...@redhat.com wrote:
 On 05/27/2012 07:09 AM, Jon Seymour wrote:

 I understand that the behaviour is unspecified by POSIX - I didn't
 know that before, but I know that now - thanks.

 That said, from the point of view of promoting interoperable scripts,
 my view is that it (in an ideal world**) would be better if bash chose
 to fail, while executing in POSIX mode, in this case.

 Sounds like you want debian's 'posh', which is even stricter than
 'dash', and excellent for pointing out use of extensions by failing
 loudly at any encountered extension, including == under test.

Thanks, I'll check it out.

jon.



handling of test == by BASH's POSIX mode

2012-05-27 Thread Jon Seymour
Is there a reason why bash doesn't treat == as an illegal test
operator when running in POSIX mode?

This is problematic because use of test == in scripts that should be
POSIX isn't getting caught when I run them under bash's POSIX mode.
The scripts then fail when run under dash which seems to be stricter
about this.

I have reconfigured my system's default /bin/sh back to /bin/dash to
ensure better POSIX compliance, but it would be nice if I didn't have
to do that.

I am running Ubuntu's distribution of bash, per:

jseymour@ubuntu:~/tracked/git$ uname -a
Linux ubuntu 2.6.32-41-generic #88-Ubuntu SMP Thu Mar 29 13:10:32 UTC
2012 x86_64 GNU/Linux
jseymour@ubuntu:~/tracked/git$ bash --version
GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

jon.
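[For reference, the portable spelling of the comparison uses a single '='; a quick sketch of the difference being reported:]

```shell
# POSIX test defines '=' for string equality; '==' is an extension that
# bash and coreutils test accept but dash rejects at run time.
[ "x" = "x" ] && echo "single = : portable"
# [ "x" == "x" ]   # works in bash, fails under dash: "unexpected operator"
```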



Re: handling of test == by BASH's POSIX mode

2012-05-27 Thread Jon Seymour
On 27/05/2012, at 17:39, Geir Hauge geir.ha...@gmail.com wrote:

 2012/5/27 Jon Seymour jon.seym...@gmail.com:
 Is there a reason why bash doesn't treat == as an illegal test
 operator when running in POSIX mode?
 
 POSIX does not say == is not allowed.
 
 POSIX tells you what the shell should at least be able to do. A POSIX
 compliant shell can have whatever other features it likes, as long as
 the POSIX features are covered.
 

I guess the question is better phrased thus: what use case is usefully served 
by having bash's POSIX mode support a superset of the test operators supported 
by other compliant POSIX shells?  As it stands, I can't use bash's POSIX mode to 
verify the validity or otherwise of a POSIX script, because bash won't report 
these kinds of errors - even when running in POSIX mode.

There is an --enable-strict-posix (?) configuration option. Will this do what I 
expect?

 
 This is problematic because use of test == in scripts that should be
 POSIX isn't getting caught when I run them under bash's POSIX mode.
 The scripts then fail when run under dash which seems to be stricter
 about this.
 
 Don't use non-POSIX features in a POSIX script, and you'll be fine.
 http://www.opengroup.org/onlinepubs/9699919799/utilities/contents.html
 

Which is exactly the point. Practically speaking, when I write scripts I 
expect an interpreter that claims to be running in POSIX mode to give me some 
help to flag usage of non-POSIX idioms. Yes, I can second-guess the interpreter 
by reading the spec, but is this really the most efficient way to catch these 
kinds of errors?

Jon.


Re: handling of test == by BASH's POSIX mode

2012-05-27 Thread Jon Seymour
On Sun, May 27, 2012 at 9:24 PM, Dan Douglas orm...@gmail.com wrote:
 On Sunday, May 27, 2012 08:45:46 PM Jon Seymour wrote:
 On 27/05/2012, at 17:39, Geir Hauge geir.ha...@gmail.com wrote:

 I guess the question is better phrased thus: what use case is usefully
 served by having bash's POSIX mode support a superset of test operators than
 other compliant POSIX shells?  As it stands, I can't use bash's POSIX mode to
 verify the validity or otherwise of a POSIX script because bash won't report
 these kinds of errors - even when running in POSIX mode.


 There are no shells in existence that can do what you want. All major shells
 claiming to be POSIX compatible include some superset that can't be disabled.
 The only shell I have installed not supporting == in [ is dash, and there are
 so many scripts in the wild using == with [ it would be a miracle if your
 system didn't break because of it. Even the coreutils /usr/bin/[ supports ==.


I take your point that this isn't really a bash problem, but a POSIX
problem - POSIX hasn't provided a way to validate whether a script
only uses features that are required to be supported by POSIX
compliant interpreters. An example of this is the failure to prohibit
support for additional test operators which ultimately results in the
creation of interoperability problems between bash and dash - surely
not a good thing.

However, I think my question still remains unanswered - even if bash
is technically compliant with POSIX, what use case is usefully served
by having bash support a superset of the POSIX test operators while
executing in POSIX mode?

Yes, doing so allows bash to execute an ill-defined superset of valid
POSIX scripts, but why is that useful? Surely a POSIX compliant mode
should only be required to execute strictly compliant POSIX scripts,
and if we want to execute a superset, then we should explicitly
specify which superset we want to use, not hope that the POSIX
compliant interpreter installed on a machine happens to support that
superset.

Your point about scripts failing when running under dash is a good one.
This is exactly my problem: I replaced /bin/sh -> dash with /bin/sh ->
bash because a 3rd party product installation script failed when dash
was the POSIX shell. Which is good - I fixed my installation
problem. Better would have been if the original developer had never
released the non-POSIX script in the first place, something that might
not have happened if the bash POSIX implementation was more
conservative. The fact that I didn't switch back to dash after I
installed the product eventually caused me to inject a non-POSIX test
== operator into some scripts I was working on for the git project -
again, something that would not have happened if bash's POSIX mode was
more conservative.

 Performing that kind of checking, rigorously, in a shell, would be impossible
 to do statically anyway. Any such lint tool is limited to lexical analysis
 which makes it not very useful for testing unless your script is perfectly
 free of side-effects. And who writes side-effect free shell scripts?


 How would the shell check for the correctness of:

 $(rm -rf somepath; echo '[') x == x ]

I wasn't claiming that static checking would be viable. In fact, the
impossibility of static checking is precisely why it would be useful
to have real POSIX compliant interpreters that were as conservative
as possible in the syntax and commands they accepted, at least in their
so-called POSIX mode. Currently, for example, if I want to test git sh
scripts, I need to ensure that /bin/sh is pointing at dash, because if
I use bash, errors of this kind can slip through. There really isn't a
more effective way for me to address this problem than by
abandoning bash as my POSIX shell and using dash instead, since its
POSIX mode does seem to be more conservative.

Given your example:

jseymour@ubuntu:~/tracked/git$ bash --posix -c '$(rm -rf /tmp/foo;
echo [) x == x ] && echo yes'
yes

jseymour@ubuntu:~/tracked/git$ dash -c '$(rm -rf /tmp/foo; echo [)
 x == x ] && echo yes'
[: 1: x: unexpected operator

bash --posix doesn't give me any clue that == isn't required to be
supported by POSIX interpreters, but at least dash does.

jon.



Re: handling of test == by BASH's POSIX mode

2012-05-27 Thread Jon Seymour
On Sun, May 27, 2012 at 9:31 PM, Andreas Schwab sch...@linux-m68k.org wrote:
 Jon Seymour jon.seym...@gmail.com writes:

 As it stands, I can't use bash's POSIX mode to verify the validity or
 otherwise of a POSIX script because bash won't report these kinds of
 errors - even when running in POSIX mode.

 You can't do that anyway: POSIX mode does not disable proper extensions
 to POSIX, only those that conflict with POSIX.  Specifically, in the
 case of the test utility, POSIX makes the behaviour unspecified when a
 three argument invocation does not match the POSIX-defined binary
 operators.

I understand that the behaviour is unspecified by POSIX - I didn't
know that before, but I know that now - thanks.

That said, from the point of view of promoting interoperable scripts,
my view is that it (in an ideal world**) would be better if bash chose
to fail, while executing in POSIX mode, in this case.

Yes, it would be annoying to the script writer who has to replace ==
with =, but at least the script eco-system as a whole would benefit
from a more conservative understanding of what it means to be a valid
POSIX script. As it stands now, it is well nigh impossible to avoid
undefined behaviour when the interpreter chooses to gracefully gloss
over the fact that the script is utilising it.

** I guess I can except that current bash behaviour is, on balance,
the correct pragmatic decision since there would no doubt be
widespread carnage in the scripting universe if bash was suddenly to
become pickier about how it supports POSIX mode. Is there a case, I
wonder, for enabling more conservative interpretation of test
operators via a shell option of some kind?


 There is an --enable-strict-posix (?) configuration option. Will this do
 what I expect?

 That just switches the default for POSIX mode.

Thanks.

jon.



Re: handling of test == by BASH's POSIX mode

2012-05-27 Thread Jon Seymour
On Sun, May 27, 2012 at 11:09 PM, Jon Seymour jon.seym...@gmail.com wrote:
 On Sun, May 27, 2012 at 9:31 PM, Andreas Schwab sch...@linux-m68k.org wrote:
 Jon Seymour jon.seymour@gm
 ** I guess I can except that current bash behaviour is, on balance,

except - accept



Re: handling of test == by BASH's POSIX mode

2012-05-27 Thread Jon Seymour
On Mon, May 28, 2012 at 2:08 AM, Dan Douglas orm...@gmail.com wrote:
 ... Bash
 just modifies conflicting features to the minimal extent necessary to bring it
 into compliance, which seems to be the path of least resistance.


Sure. I understand that this is a reasonable philosophy given that
aiming for complete avoidance of unspecified behaviour in the POSIX
spec would probably lead to a completely unusable shell.

That said, it is a shame from the point of view of interoperable scripts.

Perhaps I should just accept that dash, being more minimal, does a
better job of being a conformance-testing shell?


 This would be a big job, I think, and not quite at the top of my wish-list.
 Right now you can increase the number of things that fail by explicitly
 disabling non-POSIX built-ins using the Bash enable builtin.


Thanks for that tip.


 Dash is useful for testing. The Bash answer is [[, which CAN do a lot of
 special error handling due to it being a compound command. I wrote a bit
 about this here:

 http://mywiki.wooledge.org/BashFAQ/031/#Theory

 In reality, [[ is one of the very most portable non-POSIX features available.
 Most people shouldn't have to worry about avoiding it.


Thanks, I'll have a read.

jon.
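[The [[ alternative mentioned above, sketched briefly: because [[ is a compound command — shell syntax rather than an ordinary builtin — its operators are recognised at parse time instead of arriving as runtime arguments.]

```shell
# [[ is bash syntax, not an ordinary command, so '==' is a parsed
# operator here and the right-hand side is treated as a glob pattern.
bash -c 'if [[ yes == y* ]]; then echo "pattern matched"; fi'
# → pattern matched
```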



Re: bug: return doesn't accept negative numbers

2011-08-11 Thread Jon Seymour
On Mon, Aug 8, 2011 at 8:42 AM, Bob Proulx b...@proulx.com wrote:

 People sometimes read the POSIX standard today and think it is a
 design document.  Let me correct that misunderstanding.  It is not.
 POSIX is an operating system non-proliferation treaty.

Love it!

jon.



Re: equivalent of Linux readlink -f in pure bash?

2011-08-09 Thread Jon Seymour
On Tue, Aug 9, 2011 at 7:29 PM, Bernd Eggink mono...@sudrala.de wrote:
 On 09.08.2011 03:44, Jon Seymour wrote:

 Has anyone ever come across an equivalent to Linux's readlink -f that
 is implemented purely in bash?

 You can find my version here:

        http://sudrala.de/en_d/shell-getlink.html

 As it contains some corrections from Greg Wooledge, it should handle even
 pathological situations. ;)

 Bernd


Thanks for that. ${link##*-> } is a neater way to extract the link.

It does seem that a link created like so: ln -sf 'a -> b' c is going to
create problems for both your script and mine [ not that I actually
care about such a perverse case :-) ]

jon.



equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
Has anyone ever come across an equivalent to Linux's readlink -f that
is implemented purely in bash?

(I need readlink's function on AIX where it doesn't seem to be available).

jon.



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 12:49 PM, Bob Proulx b...@proulx.com wrote:
 Jon Seymour wrote:
 Has anyone ever come across an equivalent to Linux's readlink -f that
 is implemented purely in bash?

 (I need readlink's function on AIX where it doesn't seem to be available).

 Try this:

  ls -l /path/to/some/link | awk '{print$NF}'

 Sure it doesn't handle whitespace in filenames but what classic AIX
 Unix symlink would have whitespace in it?  :-)


readlink -f will fully resolve links in the path itself (rather than
link at the end of the path), which was the behaviour I needed.

It seems cd -P does most of what I need for directories and so
handling things other than directories is a small tweak on that.

Anyway, thanks for that!

jon.
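[The cd -P approach mentioned above, as a small sketch; the paths under /tmp are assumptions for the demo:]

```shell
# cd -P resolves symlinks in the directory path, so pwd then reports
# the physical location. Demo paths are hypothetical.
mkdir -p /tmp/rl_demo/real
ln -sfn /tmp/rl_demo/real /tmp/rl_demo/link
(cd -P /tmp/rl_demo/link && pwd)
# → /tmp/rl_demo/real (on systems where /tmp itself is not a symlink)
```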



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 1:36 PM, Bob Proulx b...@proulx.com wrote:
 Jon Seymour wrote:
 readlink -f will fully resolve links in the path itself (rather than
 link at the end of the path), which was the behaviour I needed.

 Ah, yes, well, as you could tell that was just a partial solution
 anyway.

 It seems cd -P does most of what I need for directories and so
 handling things other than directories is a small tweak on that.

 You might try cd'ing there and then using pwd -P to get the canonical
 directory name.  I am thinking something like this:

  #!/bin/sh
  p=$1
  dir=$(dirname $p)
  base=$(basename $p)
  physdir=$(cd $dir; pwd -P)
  realpath=$(cd $dir; ls -l $base | awk '{print$NF}')
  echo $physdir/$realpath | sed 's|//*|/|g'
  exit 0

 Again, another very quick and partial solution.  But perhaps something
 good enough just the same.

 realpath=$(cd $dir; ls -l $base | awk '{print$NF}')

I always use sed for this purpose, so:

   $(cd $dir; ls -l $base | sed 's/.*-> //')

But, with pathological linking structures, this isn't quite enough -
particularly if the target of the link itself contains paths, some of
which may contain links :-)

jon.



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 2:14 PM, Bob Proulx b...@proulx.com wrote:
 Jon Seymour wrote:
 I always use sed for this purpose, so:

    $(cd $dir; ls -l $base | sed 's/.*-> //')

 But, with pathological linking structures, this isn't quite enough -
 particularly if the target of the link itself contains paths, some of
 which may contain links :-)

 Agreed!  Symlinks with arbitrary data, such as holding small shopping
 lists in the target value, are so much fun.  I am more concerned that
 arbitrary data such as '->' might exist in there more so than
 whitespace.  That is why I usually don't use a pattern expression.
 But I agree it is another way to go.  But it is easier to say
 whitespace is bad in filenames than to say whitespace is bad and oh
 yes you can't have '->' in there either.  :-)


Ok, I think this does it...

readlink_f()
{
        local path="$1"
        test -z "$path" && echo "usage: readlink_f path" 1>&2 && exit 1;

        local dir

        if test -L "$path"
        then
                local link=$(ls -l "$path" | sed "s/.*-> //")
                if test "$link" = "${link#/}"
                then
                        # relative link
                        dir="$(dirname "$path")"
                        readlink_f "${dir%/}/$link"
                else
                        # absolute link
                        readlink_f "$link"
                fi
        elif test -d "$path"
        then
                (cd "$path"; pwd -P) # normalize it
        else
                dir="$(cd "$(dirname "$path")"; pwd -P)"
                base="$(basename "$path")"
                echo "${dir%/}/${base}"
        fi
}



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 2:36 PM, Jon Seymour jon.seym...@gmail.com wrote:
 On Tue, Aug 9, 2011 at 2:14 PM, Bob Proulx b...@proulx.com wrote:
 Jon Seymour wrote:
 I always use sed for this purpose, so:

    $(cd $dir; ls -l $base | sed 's/.*-> //')

 But, with pathological linking structures, this isn't quite enough -
 particularly if the target of the link itself contains paths, some of
 which may contain links :-)

 Agreed!  Symlinks with arbitrary data, such as holding small shopping
 lists in the target value, are so much fun.  I am more concerned that
 arbitrary data such as '->' might exist in there more so than
 whitespace.  That is why I usually don't use a pattern expression.
 But I agree it is another way to go.  But it is easier to say
 whitespace is bad in filenames than to say whitespace is bad and oh
 yes you can't have '->' in there either.  :-)


 Ok, I think this does it...

 readlink_f()
 {
 ...
 }

And I make no claims whatsoever about whether this is vulnerable to
infinite recursion!

jon.



Re: equivalent of Linux readlink -f in pure bash?

2011-08-08 Thread Jon Seymour
On Tue, Aug 9, 2011 at 2:51 PM, Bob Proulx b...@proulx.com wrote:
 Jon Seymour wrote:
 readlink_f()
 {
         local path="$1"
         test -z "$path" && echo "usage: readlink_f path" 1>&2 && exit 1;

 An extra ';' there that doesn't hurt but isn't needed.

         local dir

         if test -L "$path"
         then
                 local link=$(ls -l "$path" | sed "s/.*-> //")

 I would be inclined to also look for a space before the '->' too.
 Because it just is slightly more paranoid.

                local link=$(ls -l "$path" | sed "s/.* -> //")

                 if test "$link" = "${link#/}"
                 then
                         # relative link
                         dir="$(dirname "$path")"

 As an aside you don't need to quote assignments.  They exist inside
 the shell and no word splitting will occur.  It is okay to assign
 without quotes here and I think it reads slightly better without.

                        dir=$(dirname "$path")

                         readlink_f "${dir%/}/$link"
                 else
                         # absolute link
                         readlink_f "$link"
                 fi
         elif test -d "$path"
         then
                 (cd "$path"; pwd -P) # normalize it
         else
                 dir="$(cd "$(dirname "$path")"; pwd -P)"
                 base="$(basename "$path")"

 Same comment here about over-quoting.  If nothing else it means that
 syntax highlighting is different.

                dir=$(cd "$(dirname "$path")"; pwd -P)
                base=$(basename "$path")

                 echo "${dir%/}/${base}"
         fi
 }

 And of course those are just suggestions and nothing more.  Feel free
 to ignore.

Tips appreciated, thanks.

jon.


Can someone explain this?

2011-02-11 Thread Jon Seymour
Can someone explain why this is happening?

#expected
$ bash -c 'cd /tmp; pwd'
/tmp

#expected
$ bash -c 'pwd; cd /tmp; pwd'
/home/jseymour
/tmp

#expected
$ ssh localhost bash -c 'pwd; cd /tmp; pwd'
/home/jseymour
/tmp

#unexpected
$ ssh localhost bash -c 'cd /tmp; pwd'
/home/jseymour

My expectation is that the last command should print:

/tmp

But, instead, the cd command seems to be completely ignored when bash
is run under ssh. I have reproduced this with bash 4.1.5 on Linux and
bash 3.0.0 on AIX.

jon.



Re: Can someone explain this?

2011-02-11 Thread Jon Seymour
Correction - a _leading_ cd command and only a leading cd command,
seems to be completely ignored in the case I described.

Why is this?

jon.

-- Forwarded message --
From: Jon Seymour jon.seym...@gmail.com
Date: Sat, Feb 12, 2011 at 2:18 PM
Subject: Can someone explain this?
To: bug-bash@gnu.org


Can someone explain why this is happening?

#expected
$ bash -c 'cd /tmp; pwd'
/tmp

#expected
$ bash -c 'pwd; cd /tmp; pwd'
/home/jseymour
/tmp

#expected
$ ssh localhost bash -c 'pwd; cd /tmp; pwd'
/home/jseymour
/tmp

#unexpected
$ ssh localhost bash -c 'cd /tmp; pwd'
/home/jseymour

My expectation is that the last command should print:

/tmp

But, instead, the cd command seems to be completely ignored when bash
is run under ssh. I have reproduced this with bash 4.1.5 on Linux and
bash 3.0.0 on AIX.

jon.



Re: Can someone explain this?

2011-02-11 Thread Jon Seymour
Ok, so it relates to how ssh interprets its command argument:

So:

bash -c 'cd /tmp ; pwd'

My expectation was that it would invoke bash with the arguments:

'-c'
'cd /tmp; pwd'

But bash is actually invoked with:

'-c'
'cd'
'/tmp'

and then pwd is invoked, presumably in the same shell that invoked bash.

This can be seen with this:

jseymour@ubuntu:~$ ssh localhost bash -c 'echo\ \$\$\ \$PPID' ';' echo '$$'
11553 11552
11552

bash is invoked with:
'-c'
'echo $$ $PPID'

then:

echo $$ runs in the parent shell

jon.
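[The joining described above can be imitated locally without ssh: ssh concatenates its command arguments with spaces into one string and hands that single string to the remote login shell.]

```shell
# What the remote shell receives from: ssh host bash -c 'cd /tmp; pwd'
# is the single string below. 'cd' alone becomes bash's -c script, and
# '; pwd' is executed by the outer shell instead.
cd /tmp
remote_cmd="bash -c cd /tmp; pwd"
sh -c "$remote_cmd"
# pwd runs in sh here, printing sh's own working directory
```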


On Sat, Feb 12, 2011 at 2:44 PM, Dennis Williamson
dennistwilliam...@gmail.com wrote:
 On Fri, Feb 11, 2011 at 9:21 PM, Jon Seymour jon.seym...@gmail.com wrote:
 Correction - a _leading_ cd command and only a leading cd command,
 seems to be completely ignored in the case I described.

 Why is this?

 jon.

 -- Forwarded message --
 From: Jon Seymour jon.seym...@gmail.com
 Date: Sat, Feb 12, 2011 at 2:18 PM
 Subject: Can someone explain this?
 To: bug-bash@gnu.org


 Can someone explain why this is happening?

 #expected
 $ bash -c 'cd /tmp; pwd'
 /tmp

 #expected
 $ bash -c 'pwd; cd /tmp; pwd'
 /home/jseymour
 /tmp

 #expected
 $ ssh localhost bash -c 'pwd; cd /tmp; pwd'
 /home/jseymour
 /tmp

 #unexpected
 $ ssh localhost bash -c 'cd /tmp; pwd'
 /home/jseymour

 My expectation is that the last command should print:

 /tmp

 But, instead, the cd command seems to be completely ignored when bash
 is run under ssh. I have reproduced this with bash 4.1.5 on Linux and
 bash 3.0.0 on AIX.

 jon.



 It's not particular to Bash. I can reproduce it in several other shells.




Re: Can someone explain this?

2011-02-11 Thread Jon Seymour
On Sat, Feb 12, 2011 at 4:54 PM, Bob Proulx b...@proulx.com wrote:
 I am a big fan of piping the script to the remote shell.

  $ echo 'cd /tmp && pwd' | ssh example.com bash
  /tmp

 This has two advantages.  One is that you can pick your shell on the
 remote host.  Otherwise it runs as whatever is configured for that
 user in the password file.  If the remote host has csh configured then
 this overrides it and provides a known shell on the remote end.  Two
 is that since this is stdin it avoids having two different shells
 parse the command line.  Quoting is then much simplified.


Makes sense...thanks.

jon.



Re: multi-line commands in the history get split when bash is quit

2011-02-05 Thread Jon Seymour
Here's the format I see in my history.

#1296950184
for i in 1 2
do
echo $i
done
#1296950194
exit

HISTTIMEFORMAT is:

HISTTIMEFORMAT='[%m.%d.%y] %T   '


bash -version is:

GNU bash, version 3.2.25(1)-release (i686-redhat-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.

jon.


On Sun, Feb 6, 2011 at 11:51 AM, Jon Seymour jon.seym...@gmail.com wrote:
 In the version I was using a line that began with # and perhaps a timestamp 
 separated each entry of the history in a way that in principle preserved 
 information about the entry boundary even though this information is not used 
 by bash on the subsequent start.

 jon.

 On 06/02/2011, at 11:24, Michael Witten mfwit...@gmail.com wrote:

 On Sat, Feb 5, 2011 at 18:02, Jon Seymour jon.seym...@gmail.com wrote:
 The version I tried on Linux 3.2.25 does have a .bash_history
 format that could support it, but it still behaved the same way.

 How do you mean?

 I'm running bash version 4.1.9(2)-release on GNU/Linux, and the
 resulting history file doesn't seem like it's storing anything more
 than lines of text naively dumped from the multi-line example.




Re: multi-line commands in the history get split when bash is quit

2011-02-05 Thread Jon Seymour
On Sun, Feb 6, 2011 at 1:07 PM, Michael Witten mfwit...@gmail.com wrote:
 On Sat, Feb 5, 2011 at 20:02, Michael Witten mfwit...@gmail.com wrote:
 So, if you run `history', you'll not only get the commands in the
 history list, but you'll also get the time at which the commands
 were last run (formatted according to $HISTTIMEFORMAT).

 In other words, it's not helpful in this case.

 Of course, I suppose bash could be taught to build multi-line commands
 from command lines that share the same timestamp, but given the nature
 of how that information is recorded, it seems like it may not be a
 very robust solution.


You don't have to do that - the timestamp is encoded in a comment
line between entries. See the example below. One could simply assume
all lines between two lines beginning with # are part of the one
entry.

#1296950290
pwd
#1296950293
bash -version
#1296950327
for i in 1 2
do
echo $i
done
#1296950337

jon.
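A minimal sketch of the grouping rule Jon describes, assuming a HISTTIMEFORMAT-style history file where "#<epoch>" comment lines delimit entries (a temp file stands in for ~/.bash_history):

```shell
# Split a timestamped history file into entries: every "#<digits>" line
# is a delimiter; everything between two delimiters is one entry.
hist=$(mktemp)
printf '%s\n' '#1296950327' 'for i in 1 2' 'do' 'echo $i' 'done' \
              '#1296950337' 'exit' > "$hist"
entries=$(awk '
  /^#[0-9]+$/ { if (e != "") print e; e = ""; next }  # timestamp: flush entry
  { e = (e == "" ? $0 : e "; " $0) }                  # join the entry lines
  END { if (e != "") print e }
' "$hist")
printf '%s\n' "$entries"
rm -f "$hist"
```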



Re: Encoding multiple filenames in a single variable

2010-08-30 Thread Jon Seymour
Chris, Andrej and Greg,

Thanks for your helpful replies.

You are quite correct on pointing out that the solution does depend on
how it is to be used

To provide more context:

I am working on an extension to git, and need to store a list of shell
files that can be used to extend the capabilities of the command I am
writing. Most of the time, a variable of the form:

GIT_EXTRA_CONDITION_LIBS="libA.sh libB.sh" would work, but technically
speaking, I do need to support spaces in the path (if nothing else,
git's test suite cunningly runs within a directory that contains space
in the name :-).

So, I would like the convenience of using spaces to delimit entries in
the variable since I don't want people to have to define NL variables
in order to extend the variable. On the otherhand, if they do want to
use an inconvenient filename with spaces, there has to be a way to do
it.

In the end, what I have done is make use of git's rev-parse --sq-quote
feature to quote filenames that can contain spaces. That way, if you
really want spaces in the filenames, you can have it, but if you
don't, then you get the convenience of space as a separator.

So, for example:

GIT_EXTRA_CONDITION_LIBS="libA.sh 'lib B.sh' libC.sh"

I am lucky in that I can assume the existence of git rev-parse on my
path and I am prepared to write the decoding glue in my script.

Anyway, thank you all for your input.

jon.

On Mon, Aug 30, 2010 at 11:07 PM, Greg Wooledge wool...@eeg.ccf.org wrote:
 On Sun, Aug 29, 2010 at 04:07:23AM -0400, Chris F.A. Johnson wrote:
 On Sun, 29 Aug 2010, Jon Seymour wrote:

 This isn't strictly a bash question, and I'd prefer a POSIX-only
 solution if possible

 Suppose I need to encode a list of filenames in a variable

 POSIX shells won't have arrays (they're allowed to, but they're also
 allowed NOT to, which means you can't count on them being present), but
 you can enlist the positional parameters for use as an array under
 certain conditions.

  set ./*myglob*
  process "$@"

 Of course you don't get bash's full set of capabilities.  You can't
 use range notation to get a slice (as some people call it) of an array
 (for the rest of us, that means from element M to element N), and
 you can't set or unset individual elements.  Nor can you populate it
 using a loop reading from find -print0 or similar as you can with a
 bash array.

 But if you just need to populate them from a glob, this should suffice.

    Either separate them with newlines, or (non-POSIX) use an array.

 Filenames can also contain newlines, unfortunately.

 A third choice would be to store the list of filenames in a file, rather
 than in a variable.  The advantage of this is that you can store NUL
 bytes in a file (unlike in a variable).  The disadvantage is a need for
 platform-specific utilities to create safe temporary files, or for some
 sort of application-level strategy to earmark a safe place to create them
 using primitive-but-portable means.  And of course you'd need some way
 to read them from the file.

 It all boils down to exactly what you need to do with the filenames once
 you have them.
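A minimal sketch of the positional-parameter technique Greg describes; process() here is a hypothetical stand-in for whatever consumes the list:

```shell
# Use the positional parameters as a portable "array" of filenames,
# including names that contain spaces.
process() { printf '<%s>\n' "$@"; }   # hypothetical consumer

set -- "a file.txt" "another file.txt"   # e.g. from a glob: set -- ./*myglob*
count=$#
listing=$(process "$@")
printf '%s\n' "$listing"
```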





Re: Encoding multiple filenames in a single variable

2010-08-30 Thread Jon Seymour
On Mon, Aug 30, 2010 at 11:33 PM, Greg Wooledge wool...@eeg.ccf.org wrote:
 On Mon, Aug 30, 2010 at 11:25:00PM +1000, Jon Seymour wrote:
 I am working on an extension to git, and need to store a list of shell
 files that can be used to extend the capabilities of the command I am
 writing. Most of the time, a variable of the form:

 GIT_EXTRA_CONDITION_LIBS="libA.sh libB.sh" would work, but technically
 speaking, I do need to support spaces in the path (if nothing else,
 git's test suite cunningly runs within a directory that contains space
 in the name :-).

 If this is an environment variable, then you're screwed.  Environment
 variables can't be arrays, and if they could, they surely wouldn't be
 portable.

 In the end, what I have done is make use of git's rev-parse --sq-quote
 feature to quote filenames that can contain spaces. That way, if you
 really want spaces in the filenames, you can have it, but if you
 don't, then you get the convenience of space as a separator.


 So, for example:

      GIT_EXTRA_CONDITION_LIBS="libA.sh 'lib B.sh' libC.sh"

 What does it do with filenames that contain apostrophes?  How do you
 read the filenames back out of this format?  The only ways I can
 think of off the top of my head to parse that kind of input in a
 script are eval and xargs, and those should send you screaming

 There really is no good way to put multiple filenames into a single
 environment variable.  Your best bet is to put them in a file and
 make the environment variable point to that file.  The contents of
 the file would have to be NUL-delimited or newline-delimited.  I'm
 pretty sure that you'll end up going with newline delimiters and just
 saying if your filenames have newlines in them, you lose, so don't
 do that.  Which is not the worst approach in the world.


All good points. I think I'll change tack slightly. git has a
configuration mechanism that can serve to store lists that users
might want to configure, and I can add an --include option where
required to allow wrapper commands to add their own libraries,
thus eliminating the requirement to inherit such lists from
the environment.

Thanks again.

jon.



What motivates HISTCONTROL=ignorespace ?

2010-02-06 Thread Jon Seymour
I too was unaware of the HISTCONTROL option, but now that I know what
it does, I am intrigued by the rationale for HISTCONTROL=ignorespace.
In other words, what motivated the inclusion of handling for this
option specifically?

Is it to allow users who may have reason to type sensitive commands a
trivial means to suppress such commands from their command history?
Or, is it motivated by some other use case where suppressing commands
with a leading space is useful? What is that use case?

Regards,

jon.




Options for IPC between bash processes under cygwin

2009-12-04 Thread Jon Seymour
I'd like to dispatch commands from one light-weight bash process to a
longer running bash process which takes longer to initialize [ I have
a _big_ library of bash functions ].

On Linux or any reasonable OS, I could do this remote dispatch easily
with named pipes, but these don't exist on cygwin.

I'd be interested to know if there are any good solutions to this
problem already in existence.

jon.




Re: Options for IPC between bash processes under cygwin

2009-12-04 Thread Jon Seymour
Oh, cool. Thanks for correcting me!

jon.

On Sat, Dec 5, 2009 at 11:54 AM, Eric Blake e...@byu.net wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 According to Jon Seymour on 12/4/2009 4:00 PM:
 On Linux or any reasonable OS, I could do this remote dispatch easily
 with named pipes, but these don't exist on cygwin.

 That's where you're wrong.  Named pipes DO exist on cygwin, although there
 are still some bugs being hammered out when trying to have multiple
 simultaneous writers to a named pipe.  Also, whereas you can do pipe on
 Linux, that won't work on cygwin (where named pipes must be read-only or
 write-only).

 - --
 Don't work too hard, make some time for fun as well!

 Eric Blake             e...@byu.net
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.9 (Cygwin)
 Comment: Public key at home.comcast.net/~ericblake/eblake.gpg
 Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

 iEYEARECAAYFAksZr0AACgkQ84KuGfSFAYBnbgCeMS9E6jFwocHi4WbOBKk+OTmH
 G/0AoNZjXu/8oytboSqzbX2VIW9i1UIb
 =PKzk
 -END PGP SIGNATURE-
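A minimal sketch of the dispatch pattern discussed in this thread: a light-weight client shell sends a command through a named pipe to a long-running worker shell (file names here are illustrative).

```shell
# Worker/client over a FIFO.  The worker blocks reading the pipe; the
# client writes one command line and closes its end.
fifo=$(mktemp -u)
mkfifo "$fifo"

# Worker: read commands line by line and evaluate them.
( while IFS= read -r cmd; do eval "$cmd"; done < "$fifo" ) > worker.out &

echo 'echo hello from worker' > "$fifo"   # client side
wait
result=$(cat worker.out)
rm -f "$fifo" worker.out
```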





Re: Is this exploitable?

2009-05-11 Thread Jon Seymour
Yes, I realised that I should have at least used // after I posted,
not that that would have been sufficient. Thanks for the solution.

jon.

On Mon, May 11, 2009 at 10:20 PM, Greg Wooledge wool...@eeg.ccf.org wrote:
 On Mon, May 11, 2009 at 10:35:18AM +1000, Jon Seymour wrote:
 I am trying to parse untrusted strings and represent in a form that
 would be safe to execute.

 printf %q

 cmd="echo"
 for a in "$@"
 do
     cmd="$cmd '${a/\'/''}'"
 done
 echo "$cmd"
 eval "$cmd"

 http://mywiki.wooledge.org/BashFAQ/050 - I'm trying to put a command in
 a variable, but the complex cases always fail!

 Your escaping is wrong in any event.  You don't escape an apostrophe
 by putting another apostrophe in front of it.  I.e., this is NOT valid
 bash syntax:

  echo 'can''t'

 This is:

  echo 'can'\''t'

 Also, your parameter expansion is only handling the FIRST apostrophe
 in each argument.  That's surely not enough.

 As I said earlier: printf %q

 Is my code safe, or can someone maliciously choose arguments to
 as-echo.sh that could cause it (as-echo.sh) to do something other than
 write to stdout?

 imadev:~$ ./as-echo.sh ls can't';date'
  'ls' 'can''t';date''
 cant not found
 Mon May 11 08:19:33 EDT 2009
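A minimal sketch of the printf %q fix Greg recommends: quote each untrusted argument before building the command string, so the later eval cannot run injected commands.

```shell
# Safe variant of as-echo.sh: printf %q shell-quotes each argument.
as_echo() {
  local cmd=echo a
  for a in "$@"; do
    cmd="$cmd $(printf '%q' "$a")"
  done
  eval "$cmd"
}

# The injection attempt from the transcript above comes back verbatim;
# date is never executed.
out=$(as_echo "ls" "can't';date'")
printf '%s\n' "$out"
```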





Is this exploitable?

2009-05-10 Thread Jon Seymour
I am trying to parse untrusted strings and represent in a form that
would be safe to execute.

So, assuming as-echo.sh is defined as below, for example:

cmd="echo"
for a in "$@"
do
    cmd="$cmd '${a/\'/''}'"
done
echo "$cmd"
eval "$cmd"

Then:

as-echo.sh 'a' '$(foobar)' 'c'

would produce:

    echo 'a' '$(foobar)' 'c'
    a $(foobar) c

Is my code safe, or can someone maliciously choose arguments to
as-echo.sh that could cause it (as-echo.sh) to do something other than
write to stdout?

Can anyone point me to best practice for this kind of protection in bash?

jon.




Re: bash-4.0 regression: negative return values

2009-02-22 Thread Jon Seymour
On Mon, Feb 23, 2009 at 4:03 PM, Mike Frysinger vap...@gentoo.org wrote:
 previous versions of bash would happily accept negative values ( treated as a
 signed integer and masked with like 0xff), but it seems some changes related
 to option parsing has broken that

 $ f(){ return -1; }; f
 -bash: return: -1: invalid option
 return: usage: return [n]

 POSIX states that the return value is an unsigned decimal integer:
 http://www.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_24_01

 but bash does not say that in the bash(1) man page ...
 -mike


For the record, GNU bash, version 3.2.39(20)-release (i686-pc-cygwin)
behaves this way:

 f() { return -1; }; f; echo $?
255

Which burnt me the other day. I think I prefer the way bash 4.0
behaves, because I would rather know that negative values are
illegal than have them silently coerced to the corresponding
unsigned integers. The bash 3.2 behaviour played havoc with a binary
search algorithm that I wrote, until I realised that -1 had been
coerced to 255.

jon.
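Since return truncates to the 0-255 range (and bash 4 rejects negatives outright), one common workaround for cases like the binary search above is to print the result and reserve the exit status for success/failure only. A minimal sketch with a hypothetical find_index helper:

```shell
# Print -1 for "not found" instead of trying to return it; the exit
# status just signals found (0) vs not found (1).
find_index() {
  local needle=$1 i=0 x
  shift
  for x in "$@"; do
    if [ "$x" = "$needle" ]; then echo "$i"; return 0; fi
    i=$((i + 1))
  done
  echo -1
  return 1
}

idx=$(find_index c a b c)
miss=$(find_index z a b c)
```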




Re: No tilde expansion right after a quotation

2009-02-15 Thread Jon Seymour
There may be other ways to do this, but:

 CPATH=${CPATH}${CPATH:+:}$(echo ~usr1/blah/blah)

should work.

jon.


On Mon, Feb 16, 2009 at 9:02 AM, Angel Tsankov fn42...@fmi.uni-sofia.bg wrote:
 Chet Ramey wrote:
 Angel Tsankov wrote:
 Hi,

  Using bash 3.2.48(1)-release, echo ""~root prints ~root instead of
 /root. Is this the expected behaviour?

 Yes.  The tilde is not the first character in the word.  Portions of
 words to be tilde-expanded can't be quoted at all, either.

 I see.  I came to this example from a real-world problem and, in case
 someone can help, here it is.  I'd like to append a path to CPATH
 (separating it from the current contents of CPATH with a colon only if CPATH
 is set and is not empty).  The path-to-append points under usr1's home
 directory.  Also this should work in contexts such as CPATH=... some
  command.  I tried CPATH="${CPATH}${CPATH:+:}"~usr1/blah/blah.  (I quote
 expansions just to be on the safe side, though I think home directories may
 not contain spaces.)  Of course, this did not work for the reason pointed
 above.  However, removing the quotes still does not make the
 tilde-expression to expand.  How can I achieve my goal?

 Regards,
 Angel Tsankov










Re: No tilde expansion right after a quotation

2009-02-15 Thread Jon Seymour
On Mon, Feb 16, 2009 at 10:22 AM, Paul Jarc p...@po.cwru.edu wrote:
 Jon Seymour jon.seym...@gmail.com wrote:
 If the builtin echo fails it will be because the bash interpreter has
 suffered a catastrophic failure of some kind [ e.g. run out of memory
 ]. Once that has happened, all bets are off anyway.

 Probably true, but command substitution forks a separate process, so
 that can fail for reasons external to the bash process.

 Here's another possibility:
 CPATH=${CPATH:+$CPATH:}${#+~usr1/blah/blah}


Paul,

Out of interest, how does one derive that outcome from the documented
behaviour of bash? That is, which expansion rules are being invoked?

jon.


 paul





Re: No tilde expansion right after a quotation

2009-02-15 Thread Jon Seymour
On Mon, Feb 16, 2009 at 12:11 PM, Paul Jarc p...@po.cwru.edu wrote:
 Jon Seymour jon.seym...@gmail.com wrote:
 The manual specifies a rule for ${parameter:+word}, but not
 ${parameter+word}.

 It's there, but easy to miss:
   In each of the cases below, word is subject to tilde expansion, parame-
   ter  expansion,  command  substitution, and arithmetic expansion.  When
   not performing substring expansion, bash tests for a parameter that  is
   unset  or null; omitting the colon results in a test only for a parame-
   ter that is unset.


Ah, thank you. I suspect I might have seen it if it had read:

   unset  or null; omitting the colon (:) results in a test only for a 
 parameter

since I don't think my word-scanning firmware is capable of doing the
colon -> : transform automatically :-)

jon.
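A minimal sketch of the rule quoted above: the word in ${parameter+word} undergoes tilde expansion, and since $# is always set, ${#+word} always yields the expanded word. HOME is pinned here so the result is predictable.

```shell
# Tilde expansion inside ${parameter+word}: the word is tilde-expanded,
# so this works even where a bare leading ~ would not.
HOME=/home/usr1
p=${#+~/blah/blah}
echo "$p"
```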




Re: Option -n not working reliably and poorly documented

2009-02-11 Thread Jon Seymour
Not sure this is correct. The ] is parsed by the shell but only if it is
surrounded by whitespace. This is why the -n option reports an error,  
since -n suppresses command execution.


I suspect the behaviour is required by posix or at least historical  
precedent.


jon.

On 12/02/2009, at 7:04, p...@po.cwru.edu (Paul Jarc) wrote:


Ronny Standtke ronny.stand...@fhnw.ch wrote:
The -n option not seem to work. Example with a little stupid  
nonsense

script:
---
ro...@ronny-desktop:/tmp$ cat test.sh
#!/bin/sh
if [ $blah == test]


This sort of error can't be caught by -n, because it's part of a
specific command, not the shell grammar.  Checking for ] is done when
the [ command is executed.  Since -n disables execution of all
commands, [ won't have a chance to check for a matching ].

Another strange thing: The man page of bash does only implicitly  
mention
the -n option in the description of the -D option: This  
implies the

-n option; no commands will be executed.


It's documented under the set builtin.


paul
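A minimal sketch contrasting the two error classes Paul describes: bash -n catches shell-grammar errors but not bad arguments to the [ builtin, since -n never executes commands.

```shell
# bash -n only checks the grammar; [ never runs, so its missing-]
# error goes unreported.
script=$(mktemp)

echo 'if [ "$blah" == test]; then echo hi; fi' > "$script"
bash -n "$script" && builtin_err=missed   # missing space before ]: -n is silent

echo 'if then fi' > "$script"
bash -n "$script" 2>/dev/null || grammar_err=caught   # grammar error: caught
rm -f "$script"
```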







Re: Option -n not working reliably and poorly documented

2009-02-11 Thread Jon Seymour
On Thu, Feb 12, 2009 at 8:02 AM, Paul Jarc p...@po.cwru.edu wrote:
 Jon Seymour jon.seym...@gmail.com wrote:
 Not sure this is correct. The ] is parsed by the shell

 It's parsed by the [ command.  That happens to be a builtin command,
 so yes, it is done by the shell, but it is not part of the grammar of
 the shell language.

 This is why the -n option reports an error, since -n suppresses
 command execution.

 -n *doesn't* report an error, because it only checks that the script
 satisfies the shell grammar.  It doesn't verify that the arguments of
 builtin commands are meaningful.


I stand corrected.

jon.




Re: Aliases in subbshell does not work as expected

2009-01-11 Thread Jon Seymour
G'day,

This is working as documented.

The relevant part of the manual is, I think:

Bash always reads at  least  one  complete  line  of  input  before
executing  any  of  the  commands  on  that  line.  Aliases are
expanded when a command is read, not when it is  executed.

If aaa is not already defined, the actual behaviour is:

$ alias aaa='echo aaa'; ( alias aaa='echo bbb'; aaa ; )
-bash: aaa: command not found

Which is consistent with the manual page. If aaa is already defined, then

$ alias aaa='echo aaa'; ( alias aaa='echo bbb'; aaa ; )
aaa

which is what you observed.

Use unalias aaa and then you will get:

$ alias aaa='echo aaa'; ( alias aaa='echo bbb'; aaa ; )
-bash: aaa: command not found

jon.

On 11/01/2009, at 23:18, Коренберг Марк
socketp...@gmail.com wrote:

 Configuration Information [Automatically generated, do not change]:
 Machine: i486
 OS: linux-gnu
 Compiler: gcc
 Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i486' -
 DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i486-pc-linux-gnu' -
  DCONF_VENDOR='pc' -DLOCALED
  uname output: Linux mmarkk-desktop 2.6.27-11-generic #1 SMP Thu Jan 8 08:38:33 UTC 2009 i686 GNU/Linux
 Machine Type: i486-pc-linux-gnu

 Bash Version: 3.2
 Patch Level: 39
 Release Status: release

 Description:
   See Repeat-by section.

 Repeat-By:
   alias aaa='echo aaa'; ( alias aaa='echo bbb'; aaa ; )
   Will print 'aaa' instead of 'bbb' as I expect.
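A minimal self-contained sketch of the read-time expansion Jon explains above: an alias (re)defined on the same line as its use is not seen, because the whole line was expanded when it was read. Non-interactive shells need expand_aliases switched on.

```shell
# Write a small script where the alias is redefined and used on one
# line; the use expands with the alias in effect when the line was READ.
script=$(mktemp)
cat > "$script" <<'EOF'
shopt -s expand_aliases
alias aaa='echo first'
alias aaa='echo second'; aaa
EOF
out=$(bash "$script")
rm -f "$script"
printf '%s\n' "$out"
```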




Re: why does bash not execute .bashrc with ssh -t ?

2008-10-15 Thread Jon Seymour
Chet,

Thanks for that info.

Due to the circumstances, recompiling bash isn't really an option for me, so
I decided to deal with it by having ssh invoke a script that could guarantee
~/.bashrc was sourced.

Regards,

jon seymour.

On Wed, Oct 15, 2008 at 1:24 PM, Chet Ramey [EMAIL PROTECTED] wrote:

 Jon Seymour wrote:

  Bash attempts to determine when it is being run by the remote shell
  daemon, usually rshd.  If bash determines it is being run by rshd, it
  reads  and  executes
  commands  from  ~/.bashrc,  if that file exists and is readable.  It
  will not do this if invoked as sh.  The --norc option may be used to
 inhibit
  this behavior, and
  the --rcfile option may be used to force another file to be read, but
  rshd does not generally invoke the shell with those options or allow them
 to
  be specified.
 
  However, when I use the ssh -t option, it would seem that allocation of a
  pseudo-tty is causing bash to assume that it is not being invoked by a
  remote shell daemon.

 Correct.  One of the criteria bash uses to decide whether it's being
 invoked by rshd or sshd is that its stdin is a socket.  Allocating a
 pseudo-tty makes that false.

 You can force bash to source .bashrc when it finds the various ssh
 variables in its startup environment by defining SSH_SOURCE_BASHRC
 in config-top.h and rebuilding bash.  That will cause .bashrc to be
 sourced more times than it should, but it will catch the cases you
 are interested in.

 Chet

 
  Is there any way I can have an ssh pseudo-tty and get bash to execute
  ~/.bashrc?
 
  jon seymour.


 --
 ``The lyf so short, the craft so long to lerne.'' - Chaucer

 Chet Ramey, ITS, CWRU[EMAIL PROTECTED]
  http://cnswww.cns.cwru.edu/~chet/
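A minimal sketch of the workaround Jon settled on: have ssh invoke a command line that sources the rc file itself before running the real command. Simulated locally with a temp rc file standing in for ~/.bashrc; over ssh the `bash -c` call would run on the remote host.

```shell
# Guarantee the rc file is sourced regardless of how bash decides it
# was invoked: source it explicitly in the command we send.
rcfile=$(mktemp)
echo 'FROM_RC=yes' > "$rcfile"

# Remote form would be:  ssh -qt host "bash -c 'source ~/.bashrc; which rsync'"
got=$(bash -c "source '$rcfile'; echo \$FROM_RC")
rm -f "$rcfile"
```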



why does bash not execute .bashrc with ssh -t ?

2008-10-14 Thread Jon Seymour
I am trying to work out why .bashrc is not executing when I invoke ssh with
the -t option and _does_ execute when I invoke ssh without the -t option.

ssh -qt remote-host "which rsync"  # indicates ~/.bashrc has not executed on
remote host
ssh -q remote-host "which rsync"   # indicates ~/.bashrc has executed on
remote host

ssh -qt remote-host tty # reports /dev/pts/1
ssh -q remote-host tty # reports not a tty

ssh -qt remote-host echo '$-'  # reports hBc
ssh -q remote-host echo '$-'  # reports hBc

ssh -q remote-host ps -o pid,ppid,args -u xjsrs
  PID  PPID COMMAND
8704  8702 sshd: [EMAIL PROTECTED]
 8705  8704 ps -o pid,ppid,args -u xjsrs

ssh -qt remote-host ps -o pid,ppid,args -u xjsrs
  PID  PPID COMMAND
8733  8731 sshd: [EMAIL PROTECTED]/1
 8734  8733 ps -o pid,ppid,args -u xjsrs

According to echo '$-' neither shell is interactive. Yet, the one that is
started without a pseudo-terminal does have .bashrc executed.

I added debug statements to .bash_profile and it is not getting executed in
either case.

There is no /etc/sshrc file and I don't have a ~/.ssh/rc file.

The bash version is:

GNU bash, version 3.00.15(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2004 Free Software Foundation, Inc.

Upon reading the manual, the rule that bash seems to be using to decide that
.bashrc should be executed if -t is not specified, is this one:

Bash attempts to determine when it is being run by the remote shell
daemon, usually rshd.  If bash determines it is being run by rshd, it
reads  and  executes
commands  from  ~/.bashrc,  if that file exists and is readable.  It
will not do this if invoked as sh.  The --norc option may be used to inhibit
this behavior, and
the --rcfile option may be used to force another file to be read, but
rshd does not generally invoke the shell with those options or allow them to
be specified.

However, when I use the ssh -t option, it would seem that allocation of a
pseudo-tty is causing bash to assume that it is not being invoked by a
remote shell daemon.

Is there any way I can have an ssh pseudo-tty and get bash to execute
~/.bashrc?

jon seymour.


Re: Differentiating false from fatal

2008-09-10 Thread Jon Seymour
On Wed, Sep 10, 2008 at 9:32 AM, Dan Stromberg [EMAIL PROTECTED] wrote:


 Say you have a shell script, and you need it to be bulletproof.

 As you write it, you throw in error checking all over the place.

 But say you have a function that needs to return a boolean result in some
 way - call the function bool_foo for the sake of discussion.  Further
 assume that bool_foo will sometimes fail to return a result, and it's
 defined with:

 function bool_foo
 (
 xyz
 )

 ...and not function bool_foo
 {
 xyz
 }

 ...so that bool_foo's variables don't mess with those of other functions,
 but also making it so it cannot just exit 1 to terminate the program
 directly.


It seems to me that the mechanism already exists - use the curly brace form
of the function definition and make a habit of using local variable
definitions within the implementation of your bool_foo's. That way bool_foo
can exit if it needs to and return a boolean otherwise.

If bool_foo delegates to other functions whose use of global variables is
unspecified and potentially undesirable, then guard the call to the those
functions with enclosing subshells as required - that's a choice you make in
bool_foo.

I agree a formal exception mechanism would be nice, but I have found that
use of exit and the subshell feature does allow most exception handling
patterns to be emulated reasonably well.

jon seymour.
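A minimal sketch of the pattern Jon describes, with a hypothetical bool_foo: a curly-brace function using local variables can exit the whole script on fatal errors, while a caller-side subshell guard turns that exit into a catchable status.

```shell
# bool_foo: returns 0/1 as a boolean, exits 2 on a fatal error.
bool_foo() {
  local f=$1
  [ -n "$f" ] || { echo 'fatal: no argument' >&2; exit 2; }
  [ -s "$f" ]            # the boolean: file exists and is non-empty
}

# Guarding the call in a subshell turns the fatal exit into a status.
( bool_foo ) 2>/dev/null; fatal_status=$?

tmp=$(mktemp); echo data > "$tmp"
bool_foo "$tmp"; true_status=$?
: > "$tmp"
bool_foo "$tmp"; false_status=$?
rm -f "$tmp"
```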


Module systems for bash?

2008-08-24 Thread Jon Seymour
Forgive me if there is a more appropriate forum for discussing this
topic, but it wasn't obvious from a cursory glance at the web pages
that there was one.

I was wondering if anyone has ever developed a module system for bash,
similar to the system that Perl has.

I have been using a rudimentary, home-grown module system myself
and I find it to be quite an effective way of developing, maintaining
and distributing useful libraries of bash functions within the
confines of a project team.  The next logical step would be to make the
system truly scalable so that it was practical to import modules
developed independently by third parties.

I think bash is a particularly useful language to develop libraries of
utility functions. In my view, it is superior to perl because it is a
shell language - any constructs used in the shell can be used in the
script library and vice versa without any impedance mismatch.

The way I achieve this is to expose each function in the script
library as an alias in the shell. The alias invokes the wrapper script
that loads the script library and then dispatches a call to the
function as the same name as the alias. In this way, all the functions
of the script library are directly accessible from the command line
allowing them to be composed at will in useful ways.

This leads to a very functionally rich shell and minimal redundancy
since function implementations can themselves re-use other functions
in the library.
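A minimal sketch of the alias-dispatch mechanism described above, using a hypothetical one-function library file; in the real system each library function would get a matching alias.

```shell
# A tiny "library" with one function, and a wrapper that loads the
# library and dispatches to the named function.
lib=$(mktemp)
echo 'greet() { echo "hello, $1"; }' > "$lib"

dispatch() { source "$lib"; "$@"; }   # wrapper: load library, then call

# Interactively one would then define:  alias greet='dispatch greet'
greeting=$(dispatch greet world)
rm -f "$lib"
```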

My initial implementation has been a little too successful in that I
have thousands of small functions in my library and load times are
starting to become noticeable because I haven't paid enough attention
to the scalability aspects of the module system. The next generation
will fix this, but before I spend too much effort re-inventing the
wheel, I'd be interested in learning of other efforts in this area.

jon seymour.




export arrays does not work. not documented as not working

2008-08-14 Thread Jon Seymour
Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: cygwin
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash.exe' -DCONF_HOSTTYPE='i686'
-DCONF_OSTYPE='cygwin' -DCONF_MACHTYPE='i686-pc-cygwin'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKA\
GE='bash' -DSHELL -DHAVE_CONFIG_H -DRECYCLES_PIDS   -I.
-I/home/eblake/bash-3.2.39-19/src/bash-3.2
-I/home/eblake/bash-3.2.39-19/src/bash-3.2/include
-I/home/eblake/bash-3.2.39-1\
9/src/bash-3.2/lib   -O2 -pipe
uname output: CYGWIN_NT-5.1 nnorson3ws189 1.5.25(0.156/4/2) 2008-04-17
12:11 i686 Cygwin
Machine Type: i686-pc-cygwin

Bash Version: 3.2
Patch Level: 39
Release Status: release

Description:
Bash does not export arrays. Documentation does not list this
as a restriction.

Repeat-By:

 $ A=(1 2 3)
 $ B='foobar'
 $ export A
 $ export B
 $ typeset -p A
 declare -ax A='([0]=1 [1]=2 [2]=3)'
 $ typeset -p B
 declare -x B=foobar
 $ bash
 $ typeset -p A
 bash: typeset: A: not found  # unexpected
 $ typeset -p B
 declare -x B=foobar

Fix:
   Unknown
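A common workaround (not a fix for the limitation itself) is to serialize the array into an exportable scalar with declare -p and rebuild it in the child shell; a minimal sketch:

```shell
# Arrays can't be exported, but their declare -p representation can be.
A=(1 2 3)
export A_DEF="$(declare -p A)"

# The child rebuilds the array by evaluating the serialized definition.
val=$(bash -c 'eval "$A_DEF"; echo "${A[2]}"')
echo "$val"
```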




Re: export arrays does not work. not documented as not working

2008-08-14 Thread Jon Seymour
Apologies, I see that is true.

jon.

On Fri, Aug 15, 2008 at 3:36 PM, Pierre Gaston [EMAIL PROTECTED] wrote:
 It's listed in the BUGS section of my man page (last line of the page):
 Array variables may not (yet) be exported.