Re: Integer Overflow in braces

2015-08-18 Thread John McKown
On Mon, Aug 17, 2015 at 6:15 PM, Eric Blake ebl...@redhat.com wrote:
[snip]


 Fix your script to not do stupid things, like trying an insanely-large
 brace expansion, or trying an 'eval' (or similar) on untrusted user
 input. But don't call it a bash security hole that bash allows you to
 write stupid scripts.


Good point. And, not meaning to be nasty, the security hole would be in
the head of the person who allowed such a programmer to write mission
critical code.

I will assume that the OP was actually in a learning mode while doing
unusual things which he knew better than to do, just to see what happens.
Of course, reporting it as a bug wasn't really the right thing to do.

Reminds me of a bug(?) in an online system which, when triggered, would
cause the system to update the user's login password with an untypeable
character. One clever programmer used this bug to punish people who ran
his program without authorization.




 --
 Eric Blake   eblake redhat com   +1-919-301-3266
 Libvirt virtualization library http://libvirt.org




-- 

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! 
John McKown


Re: Integer Overflow in braces

2015-08-18 Thread Dan Douglas
On Monday, August 17, 2015 04:15:50 PM Eric Blake wrote:
 On 08/17/2015 09:58 AM, Pasha K wrote:
  Hey Greg,
  
  I wasn't particularly trying to actually generate that large amount of
  strings in memory, I was purposely trying to overflow the integer variable
  nelem, hoping to get Code Execution. This could potentially be a security
  risk as shell shock was just more of a denial of service rather than
  straight up code execution. However, just because I wasn't able to gain
  control of the registers doesn't mean someone else with more skill can't.
 
 This is not a security risk.
 
 Shell shock was a security hole because the shell could be coerced into
 executing user-supplied code WITHOUT a way for a script to intervene.
 
 Any poorly-written shell script can do stupid things, including crashing
 bash because it overflows the heap by trying to allocate memory for such
 a stupidly large expansion.  But unless the problem can be triggered
 without a script (the way shell shock executed user code before even
 starting to parse a script), then you can't exploit the problem to gain
 any more access to the system than you already have by being able to run
 a script in the first place.
 
 Fix your script to not do stupid things, like trying an insanely-large
 brace expansion, or trying an 'eval' (or similar) on untrusted user
 input. But don't call it a bash security hole that bash allows you to
 write stupid scripts.
 
 

IMHO the issue of whether the integer is allowed to overflow is separate from 
the question of whether the resulting expansion is too big. Code that does 
an `eval blah{0..$n}` is reasonably common and not necessarily stupid. 
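For illustration (my sketch, not from the thread; the names blah and n are placeholders), a script can keep the eval-plus-brace-expansion idiom while rejecting hostile values of n:

```shell
#!/bin/bash
# Validate n before it reaches brace expansion: it must be all digits
# and small enough that the expansion stays reasonable.
n=${1:-5}
case $n in
  ''|*[!0-9]*) echo "n must be a non-negative integer" >&2; exit 1 ;;
esac
(( n <= 10000 )) || { echo "n too large: $n" >&2; exit 1; }
# Brace expansion happens before variable expansion, so eval is needed
# to expand a range whose upper bound is computed at run time.
eval "echo blah{0..$n}"
```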
-- 
Dan Douglas



Re: Integer Overflow in braces

2015-08-18 Thread Eric Blake
On 08/17/2015 09:58 AM, Pasha K wrote:
 Hey Greg,
 
 I wasn't particularly trying to actually generate that large amount of
 strings in memory, I was purposely trying to overflow the integer variable
 nelem, hoping to get Code Execution. This could potentially be a security
 risk as shell shock was just more of a denial of service rather than
 straight up code execution. However, just because I wasn't able to gain
 control of the registers doesn't mean someone else with more skill can't.

This is not a security risk.

Shell shock was a security hole because the shell could be coerced into
executing user-supplied code WITHOUT a way for a script to intervene.

Any poorly-written shell script can do stupid things, including crashing
bash because it overflows the heap by trying to allocate memory for such
a stupidly large expansion.  But unless the problem can be triggered
without a script (the way shell shock executed user code before even
starting to parse a script), then you can't exploit the problem to gain
any more access to the system than you already have by being able to run
a script in the first place.

Fix your script to not do stupid things, like trying an insanely-large
brace expansion, or trying an 'eval' (or similar) on untrusted user
input. But don't call it a bash security hole that bash allows you to
write stupid scripts.

-- 
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library http://libvirt.org





Re: Integer Overflow in braces

2015-08-18 Thread Greg Wooledge
On Tue, Aug 18, 2015 at 07:54:48AM -0500, Dan Douglas wrote:
 IMHO the issue of whether the integer is allowed to overflow is separate from 
 the question of whether the resulting expansion is too big. Code that does 
 an `eval blah{0..$n}` is reasonably common and not necessarily stupid. 

Yes, that's fine.  But I don't actually understand what kind of overflow
Pasha K was actually trying to test for.  He/she mentioned nelem, which
only appears in two places in the bash source code: once in indexed
arrays, and once in associative arrays.  But there were no arrays in
the script being executed.

{0..} should produce an error because it runs out of
memory.  So I would expect to see a malloc failure, or something similar.
If Pasha is saying that an integer overflow occurs before the malloc
failure, then that may or may not be interesting to Chet.  If it crashes
bash, then it's not interesting to me, because the inevitable malloc
failure would have crashed it if the overflow didn't.  It only becomes
interesting to me if the integer overflow causes some weird behavior to
happen BEFORE bash crashes.
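A side note of my own: the signed 64-bit wraparound under discussion is easy to observe in bash's own arithmetic evaluation, which uses intmax_t on modern systems:

```shell
#!/bin/bash
# Incrementing the maximum signed 64-bit value wraps around silently;
# bash arithmetic does not trap on overflow.
max=9223372036854775807   # 2^63 - 1
echo $(( max + 1 ))       # prints: -9223372036854775808
```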



Bash-4.3 Official Patch 41

2015-08-18 Thread Chet Ramey
 BASH PATCH REPORT
 =================

Bash-Release:   4.3
Patch-ID:   bash43-041

Bug-Reported-by:Hanno Böck ha...@hboeck.de
Bug-Reference-ID:   20150623131106.6f111da9@pc1, 
20150707004640.0e61d2f9@pc1
Bug-Reference-URL:  
http://lists.gnu.org/archive/html/bug-bash/2015-06/msg00089.html,

http://lists.gnu.org/archive/html/bug-bash/2015-07/msg00018.html

Bug-Description:

There are several out-of-bounds read errors that occur when completing command
lines where assignment statements appear before the command name.  The first
two appear only when programmable completion is enabled; the last one only
happens when listing possible completions.

Patch (apply with `patch -p0'):

*** ../bash-4.3.40/bashline.c   2014-12-29 14:39:43.0 -0500
--- bashline.c  2015-08-12 10:21:58.0 -0400
***
*** 1469,1476 
--- 1469,1489 
os = start;
n = 0;
+   was_assignment = 0;
s = find_cmd_start (os);
e = find_cmd_end (end);
do
{
+ /* Don't read past the end of rl_line_buffer */
+ if (s > rl_end)
+   {
+ s1 = s = e1;
+ break;
+   }
+ /* Or past point if point is within an assignment statement */
+ else if (was_assignment && s > rl_point)
+   {
+ s1 = s = e1;
+ break;
+   }
  /* Skip over assignment statements preceding a command name.  If we
 don't find a command name at all, we can perform command name
*** ../bash-4.3.40/lib/readline/complete.c  2013-10-14 09:27:10.0 
-0400
--- lib/readline/complete.c 2015-07-31 09:34:39.0 -0400
***
*** 690,693 
--- 690,695 
if (temp == 0 || *temp == '\0')
  return (pathname);
+   else if (temp[1] == 0 && temp == pathname)
+ return (pathname);
/* If the basename is NULL, we might have a pathname like '/usr/src/'.
   Look for a previous slash and, if one is found, return the portion
*** ../bash-4.3/patchlevel.h2012-12-29 10:47:57.0 -0500
--- patchlevel.h2014-03-20 20:01:28.0 -0400
***
*** 26,30 
 looks for to find the patch level (for the sccs version string). */
  
! #define PATCHLEVEL 40
  
  #endif /* _PATCHLEVEL_H_ */
--- 26,30 
 looks for to find the patch level (for the sccs version string). */
  
! #define PATCHLEVEL 41
  
  #endif /* _PATCHLEVEL_H_ */

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: quoted compound array assignment deprecated

2015-08-18 Thread isabella parakiss
On 8/18/15, Chet Ramey chet.ra...@case.edu wrote:
 On 8/17/15 4:19 AM, isabella parakiss wrote:
 Quoting is necessary in a few cases:

 $ var=foo; declare -A arr$var=([x]=y)
 bash: warning: arrfoo=([x]=y): quoted compound array assignment
 deprecated
 $ var=foo; declare -A arr$var=([x]=y)
 bash: syntax error near unexpected token `('
 $ var=foo; declare -A arr$var=([x]=y)
 bash: syntax error near unexpected token `('

 I don't think this should be the default behaviour...

 This is exactly the case for which the warning is intended.  If you want
 to construct variable names on the fly, use `eval' or don't mix
 declarations of constructed variable names with compound assignment.

 You can read the extensive discussion starting at
 http://lists.gnu.org/archive/html/bug-bash/2014-12/msg00028.html.

 http://lists.gnu.org/archive/html/bug-bash/2014-12/msg00115.html is the
 newest proposal.

 Chet
 --
 ``The lyf so short, the craft so long to lerne.'' - Chaucer
``Ars longa, vita brevis'' - Hippocrates
 Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/


Sorry for being both pedantic and late for that discussion but what's the
point of this warning?  From my understanding, the code is still valid, so
it doesn't stop a possible attacker and it only annoys regular users.

Using eval requires an extra level of escaping on everything else, I'd
rather use declare 2>/dev/null to suppress the warning than eval...
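For comparison, a minimal sketch of the eval route and its extra quoting layer (my example; the names are illustrative):

```shell
#!/bin/bash
var=foo
# eval needs a second layer of quoting so that the compound assignment
# arrives at the parser intact after the first round of expansion.
eval "declare -A arr$var=([x]=y)"
echo "${arrfoo[x]}"   # prints: y
```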

Idea: display the warnings in -n mode, like ksh.
This way bash wouldn't produce unexpected results on existing scripts, it
wouldn't even require a new compatibility level and shopt.
What do you think about it?


---
xoxo iza



Re: Parameter Expansion: Case Modification: ${var~} not documented

2015-08-18 Thread Dan Douglas
On Tuesday, August 18, 2015 9:54:55 AM CDT Isaac Good wrote:
 Would you mind sharing the rationale behind having it undocumented?

Since I like guessing: the syntax for parameter expansion operators is 
currently non-extensible, so the namespace of terse operators is in limited 
supply. New syntax should be extensible to suit future needs while keeping the 
language minimal. This is new syntax that adds one function that will be 
rarely used. I can think of better ways to use that operator.

The operators in use currently are already a disaster. We *really* could use a 
solution for the circumfix operators `!var[@]` and `!var[*]` that collide with 
the prefix `!` operator, and for reasons unknown don't interoperate with any 
of the other expansions such as array slicing / subscripting. I wouldn't want 
to add new (pointless) syntax before the fundamental problems are addressed.
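The collision described above can be seen directly (my example):

```shell
#!/bin/bash
arr=(x y z)
# Prefix ! followed by [@] or [*] lists the array's keys...
echo "${!arr[@]}"    # prints: 0 1 2
# ...while prefix ! on anything else is indirect expansion.
ref=arr[1]
echo "${!ref}"       # prints: y
```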

-- 
Dan Douglas



Re: quoted compound array assignment deprecated

2015-08-18 Thread Dan Douglas
Sorry I meant to reply to that thread but ran out of time. I think Stephane's 
eventual proposal was pretty close to what I had in mind but expressed badly. 
I'm not sure why it was eventually decided to deprecate the current system 
entirely but I'm not opposed to the idea - especially if it provides no 
functionality for which there aren't easy workarounds.

The only thing I'm actively abusing this for at the moment in scripts I 
actually use is as a way of encoding 2D arrays. It's very much a read-only 
datastructure too.

~ $ ( key1=foo key2=bar; declare -A a=([foo]='([bar]=baz)') b=${a[$key1]}
typeset -p a b; echo ${b[$key2]} )
declare -A a='([foo]=([bar]=baz) )'
declare -A b='([bar]=baz )'
baz

Any change will likely break this property but I think wrapping it in eval 
gives the same result.
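The eval-wrapped equivalent alluded to above might look like this (a sketch that assumes the stored string is trusted, since eval will execute whatever it contains):

```shell
#!/bin/bash
key1=foo key2=bar
declare -A a=([foo]='([bar]=baz)')
# Re-parse the stored string as a compound assignment via eval.
eval "declare -A b=${a[$key1]}"
echo "${b[$key2]}"   # prints: baz
```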

-- 
Dan Douglas



Re: -e does not take effects in subshell

2015-08-18 Thread Andreas Schwab
Linda Walsh b...@tlinx.org writes:

 Ex: rmx -fr (alias to rm --one-file-system -fr, since rm lacks the
 -x switch like 'find, cp, mv, et al.) no longer works to clean
 out a directory & stay on *one* file system.

 Now rm will delete things on any number of file systems, as long
 as they correspond to a cmdline argument.

That's the only sensible way to implement it.  Which, incidentally,
works exactly like find -xdev.

Now please explain what this has anything to do with POSIX.

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
And now for something completely different.



Re: quoted compound array assignment deprecated

2015-08-18 Thread Chet Ramey
On 8/18/15 1:52 PM, isabella parakiss wrote:

 Sorry for being both pedantic and late for that discussion but what's the
 point of this warning?  From my understanding, the code is still valid, so
 it doesn't stop a possible attacker and it only annoys regular users.

It's meant as an indication that this form of assignment will not be
treated as a compound array assignment in the future.  The idea is that
you give users plenty of warning and plenty of opportunity to change
their scripts without impacting function or breaking existing scripts
on a minor version upgrade.

There are legitimate security concerns with having declare treat an
expanded variable as specifying a compound array assignment.  Stephane
did a nice job of going through them, and the discussion is illuminating.

 Using eval requires an extra level of escaping on everything else, I'd
 rather use declare 2>/dev/null to suppress the warning than eval...

Your choice, of course.

 Idea: display the warnings in -n mode, like ksh.
 This way bash wouldn't produce unexpected results on existing scripts, it
 wouldn't even require a new compatibility level and shopt.
 What do you think about it?

Very few people run bash -n.  It's a nice idea, but it wouldn't have the
reach I'm looking for.
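For reference, bash -n parses a script without executing it, so a warning emitted there would only surface during an explicit syntax check (my example):

```shell
#!/bin/bash
# Write a trivial script, then syntax-check it without running it.
tmp=$(mktemp)
printf 'echo hello\n' > "$tmp"
if bash -n "$tmp"; then
  echo "syntax OK"
fi
rm -f "$tmp"
```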

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: -e does not take effects in subshell

2015-08-18 Thread Greg Wooledge
On Tue, Aug 18, 2015 at 01:49:53PM -0700, Linda Walsh wrote:
 Ex: rmx -fr (alias to rm --one-file-system -fr, since rm lacks the
 -x switch like 'find, cp, mv, et al.) no longer works to clean
 out a directory & stay on *one* file system.

When did POSIX or any historical Unix rm have a --one-file-system option?
You say no longer works as if it had EVER worked in the past.

And yes, the standard way to do this (the only way with traditional
tools) would use find ... -xdev ... -exec rm {} +
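Spelled out (my sketch; the path is hypothetical and -mindepth is a GNU/BSD extension), one way to keep the deletion on a single filesystem:

```shell
#!/bin/bash
dir=/srv/scratch   # hypothetical path to clean out
# Delete files on the same filesystem only; find never crosses mount
# points because of -xdev, and plain rm refuses directories.
find "$dir" -xdev -type f -exec rm -f {} +
# Remove the now-empty directories, deepest first, still same filesystem.
# Directories that still contain a mount point stay non-empty and survive.
find "$dir" -xdev -depth -mindepth 1 -type d -exec rmdir {} + 2>/dev/null
```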



Re: -e does not take effects in subshell

2015-08-18 Thread Linda Walsh



Andreas Schwab wrote:

Linda Walsh b...@tlinx.org writes:


Ex: rmx -fr (alias to rm --one-file-system -fr, since rm lacks the
-x switch like 'find, cp, mv, et al.) no longer works to clean
out a directory & stay on *one* file system.

Now rm will delete things on any number of file systems, as long
as they correspond to a cmdline argument.


That's the only sensible way to implement it.  Which, incidentally,
works exactly like find -xdev.

---
with 'find', you can specify -xdev with a starting path of '.'.

With 'rm', the functionality to remove '/', '.' and '..' was prohibited
by POSIX, though the coreutils version still allows the more dangerous
removal of '/' via --[no-]preserve-root.


But the more useful "rm -fr ." (or the variant "rm -fr dir/.", so you
know you are removing the contents of dir, no matter where or what
dir is) doesn't work with find -- if 'dir' is a symlink
to /tmp/dir/.., find won't remove anything.  Any other solution
from POSIX added complication.  I was told by a BSD fanatic that 'rm'

was changed because after the SysV companies left POSIX (as most of
them had disappeared), BSD'ers gained a majority and could redirect
the standard as they pleased.  Disallowing students playing around with
rm -fr {/,dir/,}{.,..} apparently was a big thing @Berkeley.  Being
able to force the removal of such options from everyone's rm was
a huge win, they considered (this is from a discussion w/one fanatic,
but boy, was it memorable).  Any option or ENV (POSIX_CORRECTLY)
setting to re-allow the feature has been continuously shot down by
'rm' maintainers (even though they keep their own alias-able switches
to allow removal of '/').


Now please explain what this has anything to do with POSIX.



 It apparently was the POSIX 2008 standard that prohibited the historical
behavior (on linux -- removed dir contents, and failed on the current dir
because it made no sense -- but did so *quietly* and after following
the depth-first design).



Re: remaking bash, trying static, glibc refuses static...?

2015-08-18 Thread Mike Frysinger
On 18 Aug 2015 13:34, Linda Walsh wrote:
 Then can you give any technical reason why a static
 lib that uses no network services (i.e. running
 on a mini-root ) couldn't be made available for
 the various calls that currently claim dynamic library
 support is necessary.

(1) http://www.akkadia.org/drepper/no_static_linking.html
(2) it's using the nss system which lets people drop modules into the system
at anytime and change the overall lookups to use that.  statically linking a
specific subset would block that ability.  which means people using providers
like ldap would be stuck with static binaries that don't work.
https://www.gnu.org/software/libc/manual/html_node/Name-Service-Switch.html

i'm not going to debate the relevance of such a system nowadays as i don't
care.  purely pointing out that it's not a political issue (nor have you
provided any references to back up your specious claim).

 Seems simple enough to provide such a widely asked for 
 feature -- even if it has to be less functional/flexible
 than the dynamic version (i.e. Gnu would have done the best
 they could under the circumstances). 

it's already been provided.  build glibc w/--enable-static-nss.
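A sketch of what that build might look like (my example; the version and paths are hypothetical, and glibc requires building in a separate directory):

```shell
# Hypothetical glibc version and install prefix; adjust to taste.
tar xf glibc-2.22.tar.xz
mkdir glibc-build && cd glibc-build
../glibc-2.22/configure --prefix=/opt/glibc-static --enable-static-nss
make && make install
```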
-mike




Re: quoted compound array assignment deprecated

2015-08-18 Thread Mike Frysinger
On 18 Aug 2015 10:51, Chet Ramey wrote:
 On 8/17/15 4:19 AM, isabella parakiss wrote:
  Quoting is necessary in a few cases:
  
  $ var=foo; declare -A arr$var=([x]=y)
  bash: warning: arrfoo=([x]=y): quoted compound array assignment deprecated
  $ var=foo; declare -A arr$var=([x]=y)
  bash: syntax error near unexpected token `('
  $ var=foo; declare -A arr$var=([x]=y)
  bash: syntax error near unexpected token `('
  
  I don't think this should be the default behaviour...
 
 This is exactly the case for which the warning is intended.  If you want
 to construct variable names on the fly, use `eval' or don't mix
 declarations of constructed variable names with compound assignment.
 
 You can read the extensive discussion starting at
 http://lists.gnu.org/archive/html/bug-bash/2014-12/msg00028.html.
 
 http://lists.gnu.org/archive/html/bug-bash/2014-12/msg00115.html is the
 newest proposal.

just to double check, the warning from this code is expected ?

$ bash-4.3 -c 'declare -a foo=(a b c); export foo; declare -p foo'
declare -ax foo='([0]=a [1]=b [2]=c)'
$ bash-4.4 -c "declare -a foo='(a b c)'"
bash-4.4: warning: foo=(a b c): quoted compound array assignment deprecated

we see this in Gentoo because we save/restore build envs via bash.  so all
builds done w/bash-4.3 and older use the quoted syntax, so updating with
bash-4.4 in the system triggers these warnings.  we can adjust our tooling
to handle it, but would be nice if older bash didn't do it either.  maybe
send out a 4.3-p43 ? ;)
-mike




Re: remaking bash, trying static, glibc refuses static...?

2015-08-18 Thread Linda Walsh



Mike Frysinger wrote:

On 18 Aug 2015 13:34, Linda Walsh wrote:

Then can you give any technical reason why a static
lib that uses no network services (i.e. running
on a mini-root ) couldn't be made available for
the various calls that currently claim dynamic library
support is necessary.


(1) http://www.akkadia.org/drepper/no_static_linking.html

---
	I've seen this -- much of it is not applicable to a
miniroot recovery situation.  However, one of the things
he lists as a benefit is 'Security'.  With a statically linked binary
you have no hope of waking up to a non-working system because new
shared libraries have changed all the behaviors on your system.  Just
ask how many MS users have this problem.

He mentions security fixes -- If my system is in single user, it's not
likely that I need them.  

I *like[d]* the old-school method of putting static binaries in /bin 
and using /usr/[s]bin alternatives after boot -- and after /usr 
is mounted.  But now, with the benefits of shared libraries for 
/bin/mount and /sbin/xfs_restore being in /usr/lib64, when the system 
boots it can't mount anything.  Lovely -- shared binaries -- hate them
all the more with making mount use shared libs only located in /usr.  
Brilliant.  Oh, yes, I can copy them to /lib64 -- but if they wanted to
do a root and /usr (bin+lib) merge, why not put them in /[s]bin & /lib[64]
and put the compatibility symlinks in the /usr dirs pointing at their
corresponding root dirs.  But with dynamic linking, they are putting binaries
and libraries in /usr/ while leaving symlinks in the /bin & /lib dirs.


Yup... dynamic linking -- a beautiful concept being used in all the wrong
ways.



(2) it's using the nss system which lets people drop modules into the system
at anytime and change the overall lookups to use that.  statically linking a
specific subset would block that ability.

---
The linux kernel is a perfect example of a statically linked program that
can dynamically load plugins to provide authorization data from external
sources.  Static doesn't mean you can't support 3rd party plugins/libs --
like LDAP.


Tools like keyboard monitors, and background auditing would no longer work
without LD_PRELOAD/PROFILE/AUDIT.  Gee, now that's a shame.  Most
of my binaries I build shared -- I don't really care, but having a set
of core binaries on a rescue partition makes sense.  





which means people using providers

like ldap would be stuck with static binaries that don't work.


	Wrong -- the linux kernel is statically linked.  It can use 
3rd party security plugins for user auth & privs.



https://www.gnu.org/software/libc/manual/html_node/Name-Service-Switch.html


It's a description of how they did it to be opaque to users -- so
developers, admins, hackers, crackers or law enforcement can easily put in
new shivs in the next dynamic-lib update.  Lovely.  It has happened with
MS, you can't tell me it can't happen w/linux.



it's already been provided.  build glibc w/--enable-static-nss.

---
Funny, my distro must have forgotten that option...

I wonder if glibc is as easy to build as the kernel?




Re: remaking bash, trying static, glibc refuses static...?

2015-08-18 Thread Linda Walsh



Mike Frysinger wrote:

it is not political, nor is it related to bash at all
-mike


Then can you give any technical reason why a static
lib that uses no network services (i.e. running
on a mini-root ) couldn't be made available for
the various calls that currently claim dynamic library
support is necessary.

I know it is not just 'bash'.  Googling for the subject
shows it's a problem for many projects, so I find it
very odd that such a static lib couldn't be provided.

If an upstream DB provider (NSS, say), refuses to provide
a static lib, then the static lib Gnu provided would exclude
them, stating the reason why.

Seems simple enough to provide such a widely asked for
feature -- even if it has to be less functional/flexible
than the dynamic version (i.e. Gnu would have done the best
they could under the circumstances).


But the bash option for static even lists the reason for
such -- but with no way to actually use the option.  *sigh*.





Re: Parameter Expansion: Case Modification: ${var~} not documented

2015-08-18 Thread Chet Ramey
On 8/18/15 1:43 PM, Dan Douglas wrote:
 On Tuesday, August 18, 2015 9:54:55 AM CDT Isaac Good wrote:
 Would you mind sharing the rationale behind having it undocumented?
 
 Since I like guessing: the syntax for parameter expansion operators is 
 currently non-extensible, so the namespace of terse operators is in limited 
 supply. New syntax should be extensible to suit future needs while keeping 
 the 
 language minimal. This is new syntax that adds one function that will be 
 rarely used. I can think of better ways to use that operator.

This is true, and I would prefer to not set the ~ `operator' in stone until
I'm more satisfied with how it works.

Bash-4.4 has the ${param@operator} family of expansions (inspired by a
similar feature in mksh) as the extensible syntax you're asking for.
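A small example of that family (mine; it requires bash 4.4 or later):

```shell
#!/bin/bash
var='hello world'
echo "${var@Q}"   # prints: 'hello world'  (value quoted for reuse as input)
echo "${var@A}"   # prints: var='hello world'  (an assignment recreating var)
```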


 The operators in use currently are already a disaster. We *really* could use 
 a 
 solution for the circumfix operators `!var[@]` and `!var[*]` that collide 
 with 
 the prefix `!` operator, and for reasons unknown don't interoperate with any 
 of the other expansions such as array slicing / subscripting. I wouldn't want 
 to add new (pointless) syntax before the fundamental problems are addressed.

I assume you mean the difference between ${!param[@]/followed/bysomething}
and ${!param[@]}.  The latter I picked up from ksh93 as is.  The former is
the generalization of indirect evaluation I sort of picked up from ksh93,
because I didn't think you needed namerefs to have indirect evaluation.
The first example doesn't work as well as I hoped it would because I
chose to use the `parameter' as used in other expansions (param[@]) as the
unit of indirection instead of just the name (param).

Chet



-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: -e does not take effects in subshell

2015-08-18 Thread Linda Walsh



Eric Blake wrote:

Like it or not, it is the historical behavior standardized by POSIX.


This is not true.  POSIX no longer documents historical behavior,
but now dictates new, historically-incompatible behaviors for a
variety of features in a variety of products (not just BASH).

As such, since the original mission statement of POSIX was to be
*descriptive* of what was (so a compatible standard could be provided),
and that is NOT what the new POSIX (post 2001-2003) has as a mission
statement, I assert the new POSIX is simply a new organization that
got the rights to use the name but uses it as a club to force 
products to their new, dumbed-down and maladaptive behaviors.


Ex: rmx -fr (alias to rm --one-file-system -fr, since rm lacks the
-x switch like 'find, cp, mv, et al.) no longer works to clean
out a directory & stay on *one* file system.

Now rm will delete things on any number of file systems, as long
as they correspond to a cmdline argument.  Many people said to use
rm -xfr * to delete contents... but each object in 'rm' can be
on a different file system.  Worse rm -xfr **.

The workaround -- to use non-POSIX options of 'find' (or have find
call 'rm' for each qualified object).


Please don't spread the lies that the *current* POSIX specs only
reflect historical behavior because it is not true.



It
is NOT intuitive, and our advice is DON'T USE set -e - IT WON'T DO WHAT
YOU WANT.  We can't change the behavior, because it would break scripts
that rely on the POSIX-specified behavior.

===
	I used the old behavior for over 10 years in various 
SH-compat shells, and it *WAS* useful.  POSIX changed it to be unuseful.








No support for spaces in for loop

2015-08-18 Thread Yan Pashkovsky

Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' 
-DSHELL -DHAVE_CONFIG_H   -I.  -I../. -I.././include -I.././lib 
-D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector --param=ssp-buffer-size=4 
-Wformat -Werror=format-security -Wall
uname output: Linux mint-desktop 3.16.0-43-generic #58~14.04.1-Ubuntu 
SMP Mon Jun 22 10:21:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.3
Patch Level: 11
Release Status: release

Description:
for file in * doesn't correctly parse files containing spaces 
in their filenames. For example, the file "an image of duck.jpg" will be 
interpreted as 4 files: an, image, of, duck.jpg


Repeat-By:




Re: -e does not take effects in subshell

2015-08-18 Thread Linda Walsh



Greg Wooledge wrote:

On Tue, Aug 18, 2015 at 01:49:53PM -0700, Linda Walsh wrote:

Ex: rmx -fr (alias to rm --one-file-system -fr, since rm lacks the
-x switch like 'find, cp, mv, et al.) no longer works to clean
out a directory & stay on *one* file system.


When did POSIX or any historical Unix rm have a --one-file-system option?
You say no longer works as if it had EVER worked in the past.

---
	Historically, linux had it going back to early 2000's
(linux being a *nix platform) -- but historically, it wasn't
so easy to have features like 'bind/rbind', snapshots, multiple
virtual machines that need their own root (or chroot), etc.  If
you go back far enough symlinks weren't even around.

I'm only talking about POSIX ~2001 or before.  After that
it started changing.  So it depends on how historical you are talking.
POSIX cmd language started with POSIX.2 in 1992, before that it
was purely a programming API.  It started including the cmd's as
a way of providing portable shell scripts.  Not as a way of
restricting users.

POSIX changed the 'rm' algorithm to no longer do simple
depth-first removal (now it's 2-pass: a depth-first permissions check,
then depth-first removal).  But that's not the behavior of the historical
'rm'.

Various one-file-system cp -x, find -x, du -x were added after
it became common to allow more complicated mount structures.

I remember an early version of cygwin-coreutils-rm on Win7 that
didn't recognize symlinks or mountpoints (linkd/junctions) wandering
up out of the C:\recycle bin over to a documents folder on another
computer...

Daily-backups do come in handy.

And yes, the standard way to do this (the only way with traditional
tools) would use find ... -xdev ... -exec rm {} +

---
Which won't reliably work if your starting path is pathname/.,
but would with an rm -frx (or rmx -fr path/.).





Re: No support for spaces in for loop

2015-08-18 Thread Chris F.A. Johnson

On Wed, 19 Aug 2015, Yan Pashkovsky wrote:


Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' -DSHELL 
-DHAVE_CONFIG_H   -I.  -I../. -I.././include -I.././lib -D_FORTIFY_SOURCE=2 
-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat 
-Werror=format-security -Wall
uname output: Linux mint-desktop 3.16.0-43-generic #58~14.04.1-Ubuntu SMP Mon 
Jun 22 10:21:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.3
Patch Level: 11
Release Status: release

Description:
   for file in * doesn't correctly parse files containing spaces in 
their filenames. For example, the file "an image of duck.jpg" will be interpreted 
as 4 files: an, image, of, duck.jpg


  Yes, it does.

  Your problem is (probably, since you didn't include an example) that
  you omitted quotes around its expansion, e.g.:

printf '%s\n' $file

  That should be:

printf '%s\n' "$file"

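A minimal demonstration of the distinction (my example; the filename is hypothetical):

```shell
#!/bin/bash
# The glob in 'for file in *' yields whole filenames; splitting only
# occurs if the later expansion of $file is left unquoted.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch "an image of duck.jpg"
for file in *; do
  printf 'unquoted: %s words\n' "$(set -- $file; echo $#)"   # prints: unquoted: 4 words
  printf 'quoted:   %s\n' "$file"   # prints: quoted:   an image of duck.jpg
done
cd / && rm -rf "$dir"
```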

Repeat-By:





--
Chris F.A. Johnson, http://cfajohnson.com



Re: Parameter Expansion: Case Modification: ${var~} not documented

2015-08-18 Thread Greg Wooledge
On Tue, Aug 18, 2015 at 09:22:07AM -0500, Dan Douglas wrote:
 The `~` is obviously inspired by the vim 
 movement to toggle caps.

~ is standard vi, not a vim extension.



Bash-4.3 Official Patch 40

2015-08-18 Thread Chet Ramey
 BASH PATCH REPORT
 =

Bash-Release:   4.3
Patch-ID:   bash43-040

Bug-Reported-by:    Jean Delvare jdelv...@suse.de
Bug-Reference-ID:   20150609180231.5f463695@endymion.delvare
Bug-Reference-URL:  
http://lists.gnu.org/archive/html/bug-bash/2015-06/msg00033.html

Bug-Description:

There is a memory leak that occurs when bash expands an array reference on
the rhs of an assignment statement.

Patch (apply with `patch -p0'):

*** ../bash-4.3-patched/subst.c 2014-10-01 12:57:47.0 -0400
--- subst.c 2015-06-22 09:16:53.0 -0400
***
*** 5783,5787 
if (pflags & PF_ASSIGNRHS)
  {
!   temp = array_variable_name (name, &tt, (int *)0);
if (ALL_ELEMENT_SUB (tt[0]) && tt[1] == ']')
temp = array_value (name, quoted|Q_DOUBLE_QUOTES, 0, &atype, &ind);
--- 5783,5787 
if (pflags & PF_ASSIGNRHS)
  {
!   var = array_variable_part (name, &tt, (int *)0);
if (ALL_ELEMENT_SUB (tt[0]) && tt[1] == ']')
temp = array_value (name, quoted|Q_DOUBLE_QUOTES, 0, &atype, &ind);
*** ../bash-4.3/patchlevel.h2012-12-29 10:47:57.0 -0500
--- patchlevel.h2014-03-20 20:01:28.0 -0400
***
*** 26,30 
 looks for to find the patch level (for the sccs version string). */
  
! #define PATCHLEVEL 39
  
  #endif /* _PATCHLEVEL_H_ */
--- 26,30 
 looks for to find the patch level (for the sccs version string). */
  
! #define PATCHLEVEL 40
  
  #endif /* _PATCHLEVEL_H_ */

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: Integer Overflow in braces

2015-08-18 Thread Chet Ramey
On 8/18/15 9:12 AM, Dan Douglas wrote:

 Actually I think I spoke too soon. There's already some considerable logic in 
 braces.c to check for overflow (e.g. around braces.c:390 shortly after 
 declaration of the int). Looks like there were some changes in this code last 
 year to beef it up a bit. (see commit 
 67440bc5959a639359bf1dd7d655915bf6e9e7f1). I suspect this is probably fixed 
 in 
 devel.

Well, `fixed' is a tricky thing.  There is code in bash-4.4 to use malloc
instead of xmalloc -- which just aborts on failure -- but there is only so
much you can do to protect someone from himself.

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: quoted compound array assignment deprecated

2015-08-18 Thread Chet Ramey
On 8/17/15 4:19 AM, isabella parakiss wrote:
 Quoting is necessary in a few cases:
 
 $ var=foo; declare -A "arr$var=([x]=y)"
 bash: warning: arrfoo=([x]=y): quoted compound array assignment deprecated
 $ var=foo; declare -A arr$var=([x]=y)
 bash: syntax error near unexpected token `('
 $ var=foo; declare -A arr"$var"=([x]=y)
 bash: syntax error near unexpected token `('
 
 I don't think this should be the default behaviour...

This is exactly the case for which the warning is intended.  If you want
to construct variable names on the fly, use `eval' or don't mix
declarations of constructed variable names with compound assignment.
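A sketch of that separation, with an illustrative variable name (only the assignment text goes through eval):

```shell
var=foo
declare -A "arr$var"          # declare the constructed name first
eval 'arr'"$var"'=([x]=y)'    # then assign; eval sees: arrfoo=([x]=y)
echo "${arrfoo[x]}"           # prints: y
```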

You can read the extensive discussion starting at
http://lists.gnu.org/archive/html/bug-bash/2014-12/msg00028.html.

http://lists.gnu.org/archive/html/bug-bash/2014-12/msg00115.html is the
newest proposal.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Parameter Expansion: Case Modification: ${var~} not documented

2015-08-18 Thread Isaac Good
Configuration Information [Automatically generated, do not change]:
Machine: i686
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='i686'
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='i686-pc-linux-gnu'
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash'
-DSHELL -DHAVE_CONFIG_H -I. -I. -I./include -I./lib
-D_FORTIFY_SOURCE=2 -march=i686 -mtune=generic -O2 -pipe
-fstack-protector-strong --param=ssp-buffer-size=4
-DDEFAULT_PATH_VALUE='/usr/local/sbin:/usr/local/bin:/usr/bin'
-DSTANDARD_UTILS_PATH='/usr/bin' -DSYS_BASHRC='/etc/bash.bashrc'
-DSYS_BASH_LOGOUT='/etc/bash.bash_logout'
uname output: Linux netbook 3.16.1-1-ARCH #1 SMP PREEMPT Thu Aug 14
07:48:39 CEST 2014 i686 GNU/Linux
Machine Type: i686-pc-linux-gnu

Bash Version: 4.3
Patch Level: 39
Release Status: release

Description:
The man page fails to document the ${var~} and ${var~~} case-inversion
expansions.
It covers upper and lower case, i.e. ${var^} and ${var,}, but not invert.

Fix:
More documentation.
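For anyone comparing, the documented and undocumented forms side by side (behavior as observed on bash 4.x; the toggle forms are the undocumented ones this report is about):

```shell
var='Foo Bar'
echo "${var^^}"   # FOO BAR   documented: uppercase every character
echo "${var,,}"   # foo bar   documented: lowercase every character
echo "${var~}"    # foo Bar   undocumented: toggle the first character
echo "${var~~}"   # fOO bAR   undocumented: toggle every character
```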


Re: Parameter Expansion: Case Modification: ${var~} not documented

2015-08-18 Thread Chet Ramey
On 8/18/15 9:50 AM, Dan Douglas wrote:

 Description:
 The man page fails to document the ${var~} and ${var~~} case inversion
 expansion.
 It got the upper and lower, ie ${var^} and ${var,} but not invert.

 Fix:
 More documentation.
 
 I'm pretty sure that's intentional. The corresponding `declare -c` has never 
 been documented either.

Correct; it's undocumented.


-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: Integer Overflow in braces

2015-08-18 Thread Dan Douglas
On Tuesday, August 18, 2015 09:04:33 AM Greg Wooledge wrote:
 On Tue, Aug 18, 2015 at 07:54:48AM -0500, Dan Douglas wrote:
  IMHO the issue of whether the integer is allowed to overflow is separate 
from 
  the question of whether the resulting expansion is too big. Code that 
does 
  an `eval blah{0..$n}` is reasonably common and not necessarily stupid. 
 
 Yes, that's fine.  But I don't actually understand what kind of overflow
 Pasha K was actually trying to test for.  He/she mentioned nelem, which
 only appears in two places in the bash source code: once in indexed
 arrays, and once in associative arrays.  But there were no arrays in
 the script being executed.
 
 {0..} should produce an error because it runs out of
 memory.  So I would expect to see a malloc failure, or something similar.
 If Pasha is saying that an integer overflow occurs before the malloc
 failure, then that may or may not be interesting to Chet.  If it crashes
 bash, then it's not interesting to me, because the inevitable malloc
 failure would have crashed it if the overflow didn't.  It only becomes
 interesting to me if the integer overflow causes some weird behavior to
 happen BEFORE bash crashes.
 

Actually I think I spoke too soon. There's already some considerable logic in 
braces.c to check for overflow (e.g. around braces.c:390 shortly after 
declaration of the int). Looks like there were some changes in this code last 
year to beef it up a bit. (see commit 
67440bc5959a639359bf1dd7d655915bf6e9e7f1). I suspect this is probably fixed in 
devel.
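For context, the reason such code reaches for eval at all: brace expansion runs before parameter expansion, so a variable bound never expands inside {0..$n} directly. A small sketch (the blah prefix is just illustrative):

```shell
n=3
printf '%s\n' blah{0..$n}          # prints blah{0..3}: braces saw $n, not 3
eval "printf '%s\n' blah{0..$n}"   # prints blah0 through blah3, one per line
```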
-- 
Dan Douglas



Re: Parameter Expansion: Case Modification: ${var~} not documented

2015-08-18 Thread Dan Douglas
On Tuesday, August 18, 2015 08:50:51 AM Dan Douglas wrote:
 I'm pretty sure that's intentional. The corresponding `declare -c` has never 
 been documented either.
 

Hrm, it doesn't correspond actually. declare -c just capitalizes the first 
letter of the string.

Another thing about the ${var~} expansions: I wonder why this isn't just built 
into the substitution expansion. The `~` is obviously inspired by the vim 
movement to toggle caps. Given `foobarbaz`, vim can also do `:s/foo\zs\(bar\)
\zebaz/\U\1/` and yield `fooBARbaz`. This is much more powerful, though it 
requires bash to start supporting backrefs in substitutions.

There's also this ksh feature I've never found a use for:

$ ksh -c 'x=foobarbaz; typeset -M toupper x; echo $x'
FOOBARBAZ

I don't know, the only purpose is to replace `typeset -l/-u` and allow for 
other towctrans operations.

-- 
Dan Douglas



Bash-4.3 Official Patch 42

2015-08-18 Thread Chet Ramey
 BASH PATCH REPORT
 =

Bash-Release:   4.3
Patch-ID:   bash43-042

Bug-Reported-by:    Nathan Neulinger nn...@neulinger.org
Bug-Reference-ID:   558efdf2.7060...@neulinger.org
Bug-Reference-URL:  
http://lists.gnu.org/archive/html/bug-bash/2015-06/msg00096.html

Bug-Description:

There is a problem when parsing command substitutions containing `case'
commands within pipelines that causes the parser to not correctly identify
the end of the command substitution.

Patch (apply with `patch -p0'):

*** ../bash-4.3-patched/parse.y 2015-05-18 19:27:05.0 -0400
--- parse.y 2015-06-29 10:59:27.0 -0400
***
*** 3709,3712 
--- 3709,3714 
  tflags |= LEX_INWORD;
  lex_wlen = 0;
+ if (tflags & LEX_RESWDOK)
+   lex_rwlen = 0;
}
}
*** ../bash-4.3-patched/parse.y 2015-05-18 19:27:05.0 -0400
--- y.tab.c 2015-06-29 10:59:27.0 -0400
***
*** 6021,6024 
--- 6021,6026 
  tflags |= LEX_INWORD;
  lex_wlen = 0;
+ if (tflags & LEX_RESWDOK)
+   lex_rwlen = 0;
}
}
*** ../bash-4.3/patchlevel.h2012-12-29 10:47:57.0 -0500
--- patchlevel.h2014-03-20 20:01:28.0 -0400
***
*** 26,30 
 looks for to find the patch level (for the sccs version string). */
  
! #define PATCHLEVEL 41
  
  #endif /* _PATCHLEVEL_H_ */
--- 26,30 
 looks for to find the patch level (for the sccs version string). */
  
! #define PATCHLEVEL 42
  
  #endif /* _PATCHLEVEL_H_ */

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: bash-4.3_p39 Segfaults in array_flush at array.c:111 after incorrect conversion from indexed to associative array

2015-08-18 Thread Chet Ramey
On 8/15/15 9:02 PM, Sergey Tselikh wrote:

 Description:
 An incorrect conversion from indexed to associative array in bash script leads
 bash interpreter to segfault (bash still gives a useful error report in this 
 situation,
 which is good).
 
 As seen in the output of GDB, bash terminates in array_flush function:
 
 Core was generated by `../untars/bash-43-39/bash-4.3/root/bin/bash -x repro'.
 Program terminated with signal SIGSEGV, Segmentation fault.
 #0  0x00470879 in array_flush (a=0x19de728) at array.c:111
 111 for (r = element_forw(a->head); r != a->head; ) {

Thanks for the report.  The problem was incomplete error propagation.  It
will be fixed for the next release of bash.

I've attached a patch for folks to experiment with; your line numbers will
vary wildly.


Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
*** /fs2/chet/bash/bash-20150813/subst.c	2015-08-13 11:32:54.0 -0400
--- subst.c	2015-08-18 10:13:59.0 -0400
***
*** 10139,10143 
  	  opts[opti] = '\0';
  	  if (opti > 0)
! 	make_internal_declare (tlist->word->word, opts);
  
  	  t = do_word_assignment (tlist->word, 0);
--- 10139,10150 
  	  opts[opti] = '\0';
  	  if (opti > 0)
! 	{
! 	  t = make_internal_declare (tlist->word->word, opts);
! 	  if (t != EXECUTION_SUCCESS)
! 		{
! 		  last_command_exit_value = t;
! 		  exp_jump_to_top_level (DISCARD);
! 		}
! 	}
  
  	  t = do_word_assignment (tlist->word, 0);


Re: Parameter Expansion: Case Modification: ${var~} not documented

2015-08-18 Thread Isaac Good
Would you mind sharing the rationale behind having it undocumented?

On Tue, Aug 18, 2015 at 7:38 AM, Chet Ramey chet.ra...@case.edu wrote:

 On 8/18/15 9:50 AM, Dan Douglas wrote:

  Description:
  The man page fails to document the ${var~} and ${var~~} case inversion
  expansion.
  It got the upper and lower, ie ${var^} and ${var,} but not invert.
 
  Fix:
  More documentation.
 
  I'm pretty sure that's intentional. The corresponding `declare -c` has
 never
  been documented either.

 Correct; it's undocumented.


 --
 ``The lyf so short, the craft so long to lerne.'' - Chaucer
  ``Ars longa, vita brevis'' - Hippocrates
 Chet Ramey, ITS, CWRU    c...@case.edu
 http://cnswww.cns.cwru.edu/~chet/



Re: why is 'direxpand' converting relative paths to absolute?

2015-08-18 Thread Linda Walsh



Clark Wang wrote:



I had the same problem months ago. See Chet's answer: 
http://lists.gnu.org/archive/html/bug-bash/2014-03/msg00069.html

===
Yep, though I'm not sure about the reasoning behind making the default
dir-expansion behavior convert all relative paths to absolute.

Thanks for the explanation though..

linda



Re: quoted compound array assignment deprecated

2015-08-18 Thread Stephane Chazelas
2015-08-17 10:19:00 +0200, isabella parakiss:
 Quoting is necessary in a few cases:
 
 $ var=foo; declare -A "arr$var=([x]=y)"
 bash: warning: arrfoo=([x]=y): quoted compound array assignment deprecated
 $ var=foo; declare -A arr$var=([x]=y)
 bash: syntax error near unexpected token `('
 $ var=foo; declare -A arr"$var"=([x]=y)
 bash: syntax error near unexpected token `('
 
 I don't think this should be the default behaviour...
[...]

This typically requires two levels of evaluation. The syntax of
declare is now more consistent with that of bare assignments
and there are fewer cases where declare ends up evaluating
code that it's not meant to.

Here, I'd do:

declare -A "arr$var"

eval "arr$var"'=([x]=y)'


By using eval (which has the reputation of being dangerous),
you're making it clear that there is that second level of
evaluation that one should be careful around (and which is there
as well with declare but less obvious).

The way you're expecting declare to work is just a disguised
eval, it's not any safer than eval. To me, variable declaration
should be separate from evaluating code. Ideally, I'd rather
declare didn't do assignments either (note that it was ksh
breaking export by allowing assignments and causing confusions
between simple commands and assignments that was not there in
the Bourne shell).

-- 
Stephane