Re: unfinished command executed when TTY is closed

2014-12-17 Thread Steve Simmons
Advance apologies if I'm misunderstanding, but the described bug looks like 
reasonable behavior to me.

When a ssh connection drops, which side notices and when depends on the I/O 
being done on either side, the state of any keepalive settings, and the 
timeouts involved. If there's a long time between keepalives and the session is 
not generating I/O on the server side, I've seen sequences like this:

From host A, ssh via wireless to host B.
Start command on host B which does not generate network I/O.
Abrupt (unplanned) wireless connection lost on host A.

In such a case, host B does not yet know the connection has dropped because 
there is no network activity going on between host A and host B. So it will 
happily complete the job running on B, then disconnect when the shell on B 
attempts to send out the next command prompt and it fails. Host A may have 
noticed the drop well before this, as the user may have attempted type-ahead 
after the drop or because the ethX interface is lost completely when wireless 
is lost.

Steve


On Dec 17, 2014, at 10:54 AM, Chet Ramey chet.ra...@case.edu wrote:

 On 12/17/14, 8:34 AM, Jiri Kukacka wrote:
 
 I understand that this is due to handling EOF from closed TTY as \n, thus
 executing the command, and this is standard behavior of readline, but I
 think the problem is quite serious, so I have to fix it, and I hope that
 you would like to have this fixed as well.
 
 So, my current suggested fix is attached below (created for Bash 4.2),
 thanks for any comments to it.
 
 I would be interested in knowing what happens when the existing code is
 executed.  In bash-4.2/readline-6.2, rl_getc returns READERR if read(2)
 returns -1/errno!=EINTR and readline returns immediately without adding
 a newline.  It may be that bash executes a command that does not end in
 a newline, but in this case it doesn't seem like the code you modified
 should be executed.
 
 
 -- 
 ``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
 Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
 




Re: unfinished command executed when TTY is closed

2014-12-17 Thread Steve Simmons

On Dec 17, 2014, at 3:23 PM, Greg Wooledge wool...@eeg.ccf.org wrote:

 On Wed, Dec 17, 2014 at 03:16:53PM -0500, Steve Simmons wrote:
 Advance apologies if I'm misunderstanding, but the described bug looks like 
 reasonable behavior to me.
 
 It would be more reasonable for bash (or ssh, I'm not sure at what level
 this handling should occur) to discard the partially typed line.  Not
 to execute it.

You're right - I'd missed the fact that the user hadn't completed typing the 
command when the session dropped. My error.




Re: Problem with if [ -d in bash 4.3.30

2014-12-09 Thread Steve Simmons

On Dec 9, 2014, at 9:47 AM, Stephane Chazelas stephane.chaze...@gmail.com 
wrote:

 It's a bit confusing that ${VAR:-} should be treated
 differently from ${VAR:=}. Was there a rationale for changing
 the behaviour other than strict POSIX conformance? AFAICT, ksh
 and mksh behave differently (from bash and from each other), so
 I can't say the change helps much with portability here.

One tests and sets the variable involved, one tests but does not set the 
variable involved. It's nice to have both behaviors available, I use both.
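A quick illustration of that difference (variable name is arbitrary; behavior per the bash manual's Shell Parameter Expansion rules):

```shell
# ${VAR:-word} expands to word when VAR is unset or empty, leaving VAR alone.
# ${VAR:=word} expands to word AND assigns it to VAR in the same case.
unset VAR
echo "${VAR:-fallback}"        # prints: fallback
echo "after :- VAR=<$VAR>"     # prints: after :- VAR=<>
echo "${VAR:=fallback}"        # prints: fallback
echo "after := VAR=<$VAR>"     # prints: after := VAR=<fallback>
```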


Re: Bash updating for preventing from shellshock

2014-12-02 Thread Steve Simmons

On Dec 2, 2014, at 4:24 AM, bijay pant bijaypa...@gmail.com wrote:

 From: root
 
 Configuration Information [Automatically generated, do not change]:
 Machine: x86_64
 OS: linux-gnu
 Compiler: gcc
 Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
 -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-redhat-linux-gnu' 
 -DCONF_VENDOR='redhat' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' 
 -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib  -D_GNU_SOURCE 
 -DRECYCLES_PIDS  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
 -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fwrapv
 uname output: Linux localhost 2.6.32-71.el6.x86_64 #1 SMP Wed Sep 1 01:33:01 
 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
 Machine Type: x86_64-redhat-linux-gnu
 
 Bash Version: 4.1
 Patch Level: 2
 Release Status: release

I assume you're asking if you're vulnerable. Yes, you are. Upgrade to patch 
level 17. Source at ftp://ftp.gnu.org/gnu/bash. 

Steve


Re: to add .bash/ along with .bashrc as the default init dir.

2014-11-24 Thread Steve Simmons

On Nov 23, 2014, at 2:08 PM, Chet Ramey chet.ra...@case.edu wrote:

 On 11/23/14 5:54 AM, Xie Yuheng wrote:
 we should add .bash/ along with .bashrc as the default init dir.
 this will make things more flexible, and will not break any existing code.
 to be default is important, people who write simple makefiles can use
 this, only when it is the default.
 
 Is this of general enough use to add?  There are Linux distributions
 that have added shell code similar to what Piotr posted to add this
 functionality.  I don't think there's enough reason to make the change.

I hate the number of dotfiles that exist in $HOME - not bash's (well, bash's 
too), but every freaking application that does it. I hate not knowing what app 
installed the dotfiles. I hate that when I go to see what dotfiles exist for 
$APP, I see all the dotfiles for all the apps without knowing which ones belong 
with which.

My preference would be that the search order for any bash dotfile be 
~/.bash/file first, then ~/.file. It's backwards-compatible, and for those of 
us who have $HOME in a single filesystem across many machines, a symbolic or 
hard link from ~/.file -> ~/.bash/file covers the rest.
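A sketch of that lookup order in shell (find_dotfile is an invented helper for illustration, not an existing or proposed bash builtin):

```shell
# Prefer ~/.bash/<name>; fall back to the traditional ~/.<name>.
find_dotfile() {
    local name=$1 candidate
    for candidate in "$HOME/.bash/$name" "$HOME/.$name"; do
        if [ -f "$candidate" ]; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1
}
```

Hosts that only have the traditional dotfiles keep working unchanged; a cleaned-up $HOME gets its files served from ~/.bash/.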







Re: Shellshock-vulnerable version still most obvious on ftp.gnu.org

2014-11-06 Thread Steve Simmons
On Nov 6, 2014, at 10:14 AM, Ian Jackson ijack...@chiark.greenend.org.uk 
wrote:

 Chet Ramey writes (Re: Shellshock-vulnerable version still most obvious on 
 ftp.gnu.org):
 On 11/6/14, 7:47 AM, Ian Jackson wrote:
 But in the current environment it's looking rather quaint.  We could
 probably provide a full tarball for each patch release.
 
 That is supposed to be one of the advantages of using git.  You can always
 get a tarball of the latest release with all patches applied using
 
 http://git.savannah.gnu.org/cgit/bash.git/snapshot/bash-master.tar.gz
 
 Right.  That's great.  But that's not the official primary
 distribution channel for bash, as I understand it.
 
 Thanks,
 Ian.

Don't get me wrong, I love git and it's my mechanism of choice for updates. But 
that requires folks to be pretty up-to-date themselves on how to do stuff. As 
we were doing the shellshock updates here, I found it a helluva lot easier to 
deal with legacy system owners who couldn't do much more than cut and paste of
  gunzip bash-N.M.P.tgz
  tar xpf bash-N.M.P.tar ; cd bash-N.M.P
  ./configure && make && make install
They've never run patch, and in some cases don't even have a patch command. 
Luckily those folks have legacy admins like me.

For them I built up-to-date tarballs of all the bash-N.M.P versions. Not only 
was it a big win for them, it also turned out to be useful for me when trying 
to install onto hosts that didn't have git or reasonably recent autoconf chains.

There are a lot of systems out there with custom device drivers for ten- and 
twenty-year-old equipment that are monitoring satellites nobody ever thought 
would stay up this long, or controlling custom-built devices that need to run 
for another 5 years to finish their longitudinal surveys. We're lucky that most 
of them at least have a cc and make that works, and we for damned sure don't 
have the money to go rebuild them in place with up-to-the-minute tool chains. 
Making those folks happy and secure makes my life happier and more secure.

In short, current tarballs are a win, both for the relatively naive admin and 
for the old guys. I'm fer it.

Steve


Re: Issue with Bash-4.3 Official Patch 27

2014-10-15 Thread Steve Simmons
On Oct 15, 2014, at 9:38 AM, lorenz.bucher@rohde-schwarz.com wrote:

 Hello,
 in reference to 
 http://lists.gnu.org/archive/html/bug-bash/2014-09/msg00278.html variables 
 with suffix %% can't be set/exported.
 This causes problems restoring environments that were saved by external 
 programs like printenv (see example below).
 
 I saw this issue on Ubuntu 12.04 with
 bash version GNU bash, version 4.2.25(1)-release (x86_64-pc-linux-gnu)
 
 examples:
 
 $ foo() { echo bar; }
 $ export -f foo
 $ echo "export BASH_FUNC_foo%%='$(printenv BASH_FUNC_foo%%)'" | tee ./env
 export BASH_FUNC_foo%%='() {  echo bar
 }'
 $ source ./env
 bash: export: `BASH_FUNC_foo%%=() {  echo bar
 }': not a valid identifier
 
 
 $ export BASH_FUNC_foo%%='() {  echo bar; }'
 bash: export: `BASH_FUNC_foo%%=() {  echo bar; }': not a valid 
 identifier

Given the changes made for shellshock, I doubt the above will ever work again. 
Try

   env | sed 's/^BASH_FUNC_\([^%]*\)%%=/\1/'

or

    echo "export BASH_FUNC_foo%%='$(printenv BASH_FUNC_foo%%)'" | sed 's/^export BASH_FUNC_\([^%]*\)%%=/export \1=/' | tee ./env

to strip function descriptors.
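A deterministic sketch of what that rewrite does to one mangled environment line (the exact BASH_FUNC_name%%= encoding varies across post-shellshock bash versions, so treat the sample line as an assumption):

```shell
# One line as it might appear in "env" output for an exported function foo,
# rewritten back to an ordinary name= prefix.
line='BASH_FUNC_foo%%=() {  echo bar'
printf '%s\n' "$line" | sed 's/^BASH_FUNC_\([^%]*\)%%=/\1=/'
# prints: foo=() {  echo bar
```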




Re: Cannot build bash-4.2 with Patch 53

2014-10-09 Thread Steve Simmons

On Oct 9, 2014, at 9:34 PM, TODD TRIMMER todd.trim...@gmail.com wrote:

 If I compile from bash-4.2 from source, cumulatively applying patches through 
 52, things work fine. If I start from scratch and apply through 53, it errors 
 out:
 
 gcc -L.. . .
 ./builtins/libbuiltins.a(evalstring.o): In function `parse_and_execute':
 /home/ttrimmer/depot/ext/bash/patch/src/builtins/evalstring.c:274: undefined 
 reference to `parser_remaining_input'
 collect2: ld returned 1 exit status
 make: *** [bash] Error 1
 
 
 I can see parser_remaining_input patched in parse.y and 
 builtins/evalstring.c. However, it will not compile.

Sounds like y.tab.[ch] never got (re)built from parse.y. Try renaming them to 
-old and give the 'make' command again. If you don't have yacc or bison, 
that'll fail. Get and install bison and try again.


Re: Issues with exported functions

2014-09-27 Thread Steve Simmons

On Sep 27, 2014, at 2:19 AM, Eric Blake ebl...@redhat.com wrote:

 The prefix is nice for quick identification, but what is ESSENTIAL is
 something that puts shell functions in a namespace that is untouchable
 by normal shell variables (the () suffix in Florian's patch).  If all
 you do is add a prefix, but still leave the environment containing what
 can still collide with a shell variable name, you are still vulnerable.

Repeated for truth.



Re: Bash security issue

2014-09-27 Thread Steve Simmons

On Sep 27, 2014, at 6:51 PM, Eric Blake ebl...@redhat.com wrote:

 On 09/27/2014 04:21 PM, Chet Ramey wrote:
 
 2) build a 'real' /bin/sh without those compiled in. This begs the 
 definition of 'real', but IMHO if it's not in POSIX, it shouldn't be in 
 'real' /bin/sh
 
 This is dash's niche.
 
 If you want a truly minimalist shell that will loudly complain at
 attempts to use extensions, use 'posh' instead of 'dash'.
 
 But Chet's point remains - there's no need to dumb down bash to serve as
 a minimalist shell, because that's a maintenance burden, and there are
 already other projects that have decided to take on that role.

Noted. Thanks.

Steve


Re: Issues with exported functions

2014-09-25 Thread Steve Simmons

On Sep 25, 2014, at 2:47 PM, lolilolicon loliloli...@gmail.com wrote:

 On Fri, Sep 26, 2014 at 2:28 AM, Ángel González an...@16bits.net wrote:
 [...]
 On the other hand, this approach would be much more interesting if bash
 delayed parsing of exported functions until they are used (ie. check
 
 This is what function autoload is for in zsh. It's indeed a better
 approach. It was also suggested by Dan Douglas in this thread (FPATH
 mechanism).

Autoload has plusses and minuses. Shell startup is initially faster because 
there's a lot less processing, but each not-yet-loaded function requires 
traversal of another search path. And if you have multiple levels of shell 
(who, me do :sh in vi?) any newly spawned shells in the same session don't get 
the previously loaded functions. Unless, of course, autoloaded functions could 
be exported. But I'm not gonna think about that right now, got a lot of work to 
do.
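For the record, a minimal bash approximation of that zsh/ksh FPATH idea might look like this (lazyload and FUNCDIR are invented names for the sketch; this is not an actual bash or zsh feature):

```shell
# Each function lives in its own file, $FUNCDIR/<name>. lazyload installs a
# stub that, on first call, replaces itself with the real definition and
# then runs it with the original arguments.
FUNCDIR=${FUNCDIR:-$HOME/.bash/functions}

lazyload() {
    local fn=$1
    eval "$fn() { unset -f $fn; . \"\$FUNCDIR/$fn\"; $fn \"\$@\"; }"
}
```

After `lazyload greet`, the first call to greet sources $FUNCDIR/greet and runs it; later calls hit the real function directly, with no extra path traversal.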




Re: CERT/NIST reveal level 10 bash alert today, 24 September 2014

2014-09-25 Thread Steve Simmons

On Sep 25, 2014, at 5:42 PM, Alexandre FERRIEUX - SOFT/LAN 
alexandre.ferri...@orange.com wrote:

 On 25/09/2014 22:51, Eric Blake wrote:
 On 09/25/2014 08:48 AM, Alexandre Ferrieux wrote:
 Is the response (workarounds and patch) being discussed elsewhere ?
 
 Thanks. Like thousands of people I guess, I had never imagined before today 
 that a bash bug could exist, so I'm new to this list, and did not realize 
 its archive was lagging a bit. Sorry about re-asking.
 
 
 (2) Workaround
 
 Privileged mode skips the import of functions from the environment, hence
 #! /bin/bash -p is a quick fix.
 I assume that 99.9% of uses would be unaffected by the other side-effects
 of -p.
 Am I missing something ?
 Yes.  Among others, system(3) and popen(3) call /bin/sh, if /bin/sh is
 bash, there is no way for you to pass -p into that child.
 Argh indeed.
 
 Out of curiosity, may I ask what purpose 'export -f' serves ? In 20+ years of 
 unix (admittedly sticking to /bin/sh for lack of a compelling need of 
 anything else), I have never felt the need to share function across a 
 fork/exec (across a fork, of course, in subshells; but not a fork/exec). So 
 what is that use-case that motivated that tricky feature ?

Check the archive; there was just a long discussion on this and it should be in 
there by now. Look for the subject 'Issues with exported functions'.


Re: Issues with exported functions

2014-09-24 Thread Steve Simmons

On Sep 24, 2014, at 4:06 PM, lolilolicon loliloli...@gmail.com wrote:

 On Thu, Sep 25, 2014 at 3:53 AM, Greg Wooledge wool...@eeg.ccf.org wrote:
 
 So, if Chet removes the feature, it would probably break something that
 someone cares about.  Maybe there could be a compile-time option to
 disable it.  Maybe there already is -- I didn't look.

Many bash completion libraries also rely on function exports.

 I don't expect more than a dozen who rely on this... but bash
 programmers can be quite the perverts, so...

A significant number of us have actually read the manual and rely on the 
ability of bash to export functions. I have literally hundreds of exported 
functions in my environment. Some are defined in a setup file '~/.bash_once', 
some get built on the fly by that file. As you might guess, ~/.bash_once is 
invoked only once per session by my .bashrc / .bash_login with something like:

if [[ 0 == ${SET_ONCE:=0} ]] ; then
    if [[ -f ~/.bash_once ]] ; then
        . ~/.bash_once
    else
        echo "No ~/.bash_once file for '~/.bashrc' to invoke." >&2
    fi
fi

.bash_once defines SET_ONCE and loads literally hundreds of environment 
variables and exports many shell functions that would otherwise have to be 
defined in .bashrc and processed on every freaking run. .bash_once is about 50 
times larger than .bashrc and .bash_login. Fast. Very fast. But without 
exportable functions, it wouldn't work at all.

As an exercise for the student, consider the utility of this simplified excerpt:

for SYSTEM in \
    {foo,bar,baz}.dec.school.edu \
    {alligator,snake-skin,lizard}.reptiles.work.com \
    {misery,serenity,frailty}.films.home.org \
; do  # Strip off domain, use dash-less name as function name
    export FNAME=${SYSTEM%%.*}
    export FNAME=${FNAME//-/}
    eval "$FNAME() { ssh $SYSTEM \"\$@\"; }; export -f $FNAME"
    unset SYSTEM FNAME
done

Hint - source those lines, then give the command 'builtin type snakeskin'.

It's probably too much overhead for every bash invocation, but if you only do 
it once per session, it's damned useful. 

Consider this one vote against removing function exports.

Steve




Re: in error messages, do not output raw non-printable characters to the terminal

2014-09-10 Thread Steve Simmons

On Sep 10, 2014, at 4:58 AM, Vincent Lefevre vinc...@vinc17.net wrote:

 In error messages, raw non-printable characters from arguments should
 not be output without transformation, at least if this is on a terminal.
 If stderr has been redirected, this is more a matter of choice.
 
 An example: type cd /^M where ^M is a CR character (e.g. obtained by
 typing ^V ^M). One gets on the terminal:
 
 : No such file or directory
 
 which is not very informative...

One of many arguments why command errors should be handled, and
why interpreted data in messages should be surrounded by quotes (not
quoted, but surrounded by quotes). As an example, this is much more
effective for ugly values of $LOG like control-M or ' dir name':

    # Empty the log directory
    if ! cd "$LOG" ; then
        echo "Directory '$LOG' doesn't exist or is not accessible, halting." >&2
        exit 1
    fi
    # take actions here...
    rm *

is a helluva lot more sensible than

    cd $LOG
    # take actions here...
    rm *

 IMHO, in this case, bash should do like zsh, which replaces the CR
 character by the character sequence ^M.

This doesn't seem like a good idea. At our site it leads our zsh users
to send us complaints that they don't have a file with the two-character
name ^M.

Beyond that, there are several drawbacks. I'd hate to see the built-in
echo diverge from system /bin/echo (or diverge further, as the case may
be). It would also break this current behavior:

    CLEAR_SCREEN=$(tput clear)
    echo "$CLEAR_SCREEN"

By comparison, checking returns from commands like cd and surrounding
echoed/printed parameters by quotes will work for pretty much all legacy
bash/ksh/sh shells, maybe zsh as well. 

I believe that perl has some sort of quote function that takes a string
with non-printable chars and converts them to ^M, \033, etc. That, used
judiciously, strikes me as more sensible. Of course, you'd still have to
use and quote it properly. For $LOG values like ' dir name' or control-M
or not defined, consider the ways this can go wrong:

    if ! cd $LOG ; then
        echo "Directory $(quoteme $LOG) does not exist or not accessible."
        exit 1
    fi
    rm *

vs this:

    if ! cd "$LOG" ; then
        printf "Directory '%s' does not exist.\n" "$(quoteme "$LOG")"
        exit 1
    fi
    rm *
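As an aside, bash's printf builtin already offers something close to that hypothetical quoteme: the %q format, which re-quotes a string so non-printable characters become visible escapes (quoteme above remains hypothetical; %q is real):

```shell
# %q renders a CR-containing name in a visible, reusable form.
LOG=$(printf 'bad\rname')      # a directory name containing a raw CR
printf 'Directory %q does not exist.\n' "$LOG"
# prints: Directory $'bad\rname' does not exist.
```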

Steve



Re: SEGFAULT if bash script make source for itself

2014-08-28 Thread Steve Simmons
On Aug 28, 2014, at 12:37 PM, Chris Down ch...@chrisdown.name wrote:

 I really don't understand -- why is this unexpected? It's exactly what I'd 
 expect to happen if you try to do something like that. It should not be 
 disallowed to source yourself, that prevents people from doing things when 
 *sensibly* sourcing their own script.

Agree. It's perfectly valid for a sourced script to check for an error 
condition, fix the error, then re-source itself. Or for scripts to re-source 
each other, each time with new parameters. Recursion is a win.
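A tiny illustration of that benign case (everything here is invented for the sketch: the guard variable, the temporary file, the "error" being fixed):

```shell
# A script that detects a bad setting, repairs it, and re-sources itself once.
script=$(mktemp)
cat > "$script" <<'EOF'
if [ "${CONFIG_OK:-no}" != yes ]; then
    CONFIG_OK=yes              # "fix" the error condition
    . "$0"                     # re-source ourselves, now repaired
    return 0 2>/dev/null || exit 0
fi
echo "config ok"
EOF
bash "$script"
# prints: config ok
```

The guard variable is what keeps the recursion finite; without it, this would recurse until bash hits a limit.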



Re: Feature request - ganged file test switches

2014-08-13 Thread Steve Simmons

On Aug 12, 2014, at 4:36 PM, Chet Ramey chet.ra...@case.edu wrote:

 On 8/9/14, 7:07 AM, Steve Simmons wrote:
 
 It would be nice to have ganged file test switches. As an example, to test 
 that a directory exists and is properly accessible one could do
 
  if [[ -d foo ]] && [[ -r foo ]] && [[ -x foo ]] ; then . . .
 
 but
 
  if [[ -drx foo ]] ; then . . .
 
 is a lot easier.
 
 Sure, but it's only syntactic sugar.

Knew that going in :-). Other discussion points out how limited it is; I'm 
perfectly happy pulling back. My thoughts on how to do this more flexibly boil 
down to the capabilities gnu find has w/r/t file types and modes. Unfortunately 
we have a few systems which lack gnu find and are vendor supported appliances 
(eyeroll) and we're unable to add new software beyond simple scripts. Which 
also means that any new bash feature would probably be unavailable for years, 
so it's not like this is a big loss.

If others have no interest in this syntactic sugar I see little point in adding 
it; a broader and more flexible solution is just to use gnu find where it's 
available.


Re: Feature request - ganged file test switches

2014-08-13 Thread Steve Simmons
On Aug 13, 2014, at 2:31 PM, Ken Irving ken.irv...@alaska.edu wrote:

 I like the idea, but switch negation would need to be supported, and
 I don't think that's been covered sufficiently.  Using ! as a switch
 modifier might be possible, and I like it, but would then also apply to
 single filetest switches, e.g., -!e foo would be the same as ! -e foo.
 Maybe that's possible, but it seems a fairly major addition to the syntax.

Agree on all.


 I'm a little confused about the 'before' example:
 
   if [[ -d foo ]] && [[ -r foo ]] && [[ -x foo ]] ; then . . .
 
 I thought that && could be used reliably within the [[ ]] construct,
 including short-circuiting the tests, so this could be:
 
   if [[ -d foo && -r foo && -x foo ]] ; then . . .
 
 I don't see how the bundled switches could be ambiguous, so I must be 
 missing something.

Both forms work, which I didn't expect. Learn something new every day; thanks.
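For completeness, both spellings from the exchange can be tried directly:

```shell
# Two equivalent spellings of the same conjunction of file tests;
# mktemp -d gives a directory that is readable and searchable by its owner.
d=$(mktemp -d)
[[ -d $d ]] && [[ -r $d ]] && [[ -x $d ]] && echo separate   # prints: separate
[[ -d $d && -r $d && -x $d ]] && echo combined               # prints: combined
```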




Feature request - ganged file test switches

2014-08-09 Thread Steve Simmons
Advance apologies if this has already been discussed and rejected.

It would be nice to have ganged file test switches. As an example, to test that 
a directory exists and is properly accessible one could do

  if [[ -d foo ]] && [[ -r foo ]] && [[ -x foo ]] ; then . . .

but

  if [[ -drx foo ]] ; then . . .

is a lot easier.

Best,

Steve


Re: Feature request - ganged file test switches

2014-08-09 Thread Steve Simmons
On Aug 9, 2014, at 11:16 AM, Andreas Schwab sch...@linux-m68k.org wrote:

 Steve Simmons s...@umich.edu writes:
 
 Advance apologies if this has already been discussed and rejected.
 
 It would be nice to have ganged file test switches. As an example, to test 
 that a directory exists and is properly accessible one could do
 
  if [[ -d foo ]] && [[ -r foo ]] && [[ -x foo ]] ; then . . .
 
 but
 
  if [[ -drx foo ]] ; then . . .
 
 is a lot easier.
 
 But it is ambiguous.  Does it mean disjunction or conjunction?

Good point. I'd intended conjunction. And then of course, there's the negation 
issue. Something like 
  [[ -dw!x foo ]]
for a writable but non-executable directory is terse and quick to write, but 
that way probably lies madness. Nope, I'm sticking to it being equivalent to 
the larger expression above. As a possible alternative syntax with more 
flexibility, maybe
  [[ -d -a ( -r -o ! -x ) foo ]]
which is true for a directory that's either readable or not executable. What 
I'm looking for is a way to do a lot of file tests out of a single stat() call 
with a clear, simple, and terse syntax. I'm tired of writing shell functions 
like

    is_writable_dir() {
        [[ -d $1 ]] && [[ -w $1 ]]
        return $?
    }
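Until something like the proposal exists, the ganged test can be emulated generically; a sketch (ftest is an invented name, and it only covers conjunction of single-letter tests, without the negation syntax discussed above):

```shell
# ftest FLAGS PATH -- succeed only if every single-letter test in FLAGS
# passes for PATH; e.g. "ftest drx /tmp" behaves like the proposed [[ -drx /tmp ]].
ftest() {
    local flags=$1 path=$2 i c
    for (( i = 0; i < ${#flags}; i++ )); do
        c=${flags:i:1}
        test -"$c" "$path" || return 1
    done
    return 0
}
```

Usage: `ftest drx "$HOME" && echo accessible`. It does one test(1)-style check per letter rather than one stat() for all of them, so it trades the efficiency goal for portability.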