Bug: Bash is not memory safe

2024-05-01 Thread Joshua Brown
(Just making sure this went through.)
$ bash --version | head -n1
GNU bash, version 5.2.15(1)-release (x86_64-pc-linux-gnu)

$ valgrind --leak-check=full \
 --track-origins=yes \
 --verbose \
 --log-file=valgrind-out-bash.txt \
 /bin/bash



==2762== Memcheck, a memory error detector
==2762== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==2762== Using Valgrind-3.19.0-8d3c8034b8-20220411 and LibVEX; rerun with -h for copyright info
==2762== Command: /bin/bash
==2762== Parent PID: 2251
==2762== 
--2762-- 
--2762-- Valgrind options:
--2762--    --leak-check=full
--2762--    --track-origins=yes
--2762--    --verbose
--2762--    --log-file=valgrind-out-bash.txt
--2762-- Contents of /proc/version:
--2762--   Linux version 6.1.0-20-amd64 (debian-ker...@lists.debian.org) (gcc-12 (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11)
--2762-- 
--2762-- Arch and hwcaps: AMD64, LittleEndian, amd64-cx16-lzcnt-rdtscp-sse3-ssse3-avx-avx2-bmi-f16c-rdrand-rdseed
--2762-- Page sizes: currently 4096, max supported 4096
--2762-- Valgrind library directory: /usr/libexec/valgrind
--2762-- Reading syms from /bin/bash
--2762--    object doesn't have a symbol table
--2762-- Reading syms from /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
--2762--   Considering /usr/lib/debug/.build-id/80/8ccde0c755608358b0915a6615c19401fe6bf6.debug ..
--2762--   .. build-id is valid
--2762-- Reading syms from /usr/libexec/valgrind/memcheck-amd64-linux
--2762--   Considering /usr/lib/debug/.build-id/82/26c2aa6b808ebd5a6fafb694a7fb3287f33590.debug ..
--2762--   .. build-id is valid
--2762--    object doesn't have a dynamic symbol table
--2762-- Scheduler: using generic scheduler lock implementation.
--2762-- Reading suppressions file: /usr/libexec/valgrind/default.supp
==2762== embedded gdbserver: reading from /tmp/vgdb-pipe-from-vgdb-to-2762-by-user-on-???
==2762== embedded gdbserver: writing to   /tmp/vgdb-pipe-to-vgdb-from-2762-by-user-on-???
==2762== embedded gdbserver: shared mem   /tmp/vgdb-pipe-shared-mem-vgdb-2762-by-user-on-???
==2762== 
==2762== TO CONTROL THIS PROCESS USING vgdb (which you probably
==2762== don't want to do, unless you know exactly what you're doing,
==2762== or are doing some strange experiment):
==2762==   /usr/bin/vgdb --pid=2762 ...command...
==2762== 
==2762== TO DEBUG THIS PROCESS USING GDB: start GDB like this
==2762==   /path/to/gdb /bin/bash
==2762== and then give GDB the following command
==2762==   target remote | /usr/bin/vgdb --pid=2762
==2762== --pid is optional if only one valgrind process is running
==2762== 
--2762-- REDIR: 0x40238e0 (ld-linux-x86-64.so.2:strlen) redirected to 0x580bb0e2 (vgPlain_amd64_linux_REDIR_FOR_strlen)
--2762-- REDIR: 0x40220c0 (ld-linux-x86-64.so.2:index) redirected to 0x580bb0fc (vgPlain_amd64_linux_REDIR_FOR_index)
--2762-- Reading syms from /usr/libexec/valgrind/vgpreload_core-amd64-linux.so
--2762--   Considering /usr/lib/debug/.build-id/ad/f1388be4d8781737b0c83fe111a5a9c6e930aa.debug ..
--2762--   .. build-id is valid
--2762-- Reading syms from /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so
--2762--   Considering /usr/lib/debug/.build-id/d8/ec66cffcb23a75c3f15940674d6028709121f8.debug ..
--2762--   .. build-id is valid
==2762== WARNING: new redirection conflicts with existing -- ignoring it
--2762--     old: 0x040238e0 (strlen              ) R-> (.0) 0x580bb0e2 vgPlain_amd64_linux_REDIR_FOR_strlen
--2762--     new: 0x040238e0 (strlen              ) R-> (2007.0) 0x048468a0 strlen
--2762-- REDIR: 0x40222e0 (ld-linux-x86-64.so.2:strcmp) redirected to 0x4847780 (strcmp)
--2762-- REDIR: 0x4021550 (ld-linux-x86-64.so.2:mempcpy) redirected to 0x484b1a0 (mempcpy)
--2762-- Reading syms from /usr/lib/x86_64-linux-gnu/libgtk3-nocsd.so.0
--2762--    object doesn't have a symbol table
--2762-- Reading syms from /lib/x86_64-linux-gnu/libtinfo.so.6.4
--2762--    object doesn't have a symbol table
--2762-- Reading syms from /lib/x86_64-linux-gnu/libc.so.6
--2762--   Considering /usr/lib/debug/.build-id/ee/3145ecaaff87a133daea77fbc3eecd458fa0d1.debug ..
--2762--   .. build-id is valid
==2762== WARNING: new redirection conflicts with existing -- ignoring it
--2762--     old: 0x0493b540 (memalign            ) R-> (1011.0) 0x04845bc0 memalign
--2762--     new: 0x0493b540 (memalign            ) R-> (1017.0) 0x04845b90 aligned_alloc
==2762== WARNING: new redirection conflicts with existing -- ignoring it
--2762--     old: 0x0493b540 (memalign            ) R-> (1011.0) 0x04845bc0 memalign
--2762--     new: 0x0493b540 (memalign            ) R-> (1017.0) 0x04845b60 aligned_alloc
--2762-- Reading syms from /lib/x86_64-linux-gnu/libdl.so.2
--2762--   Considering /usr/lib/debug/.build-id/53/eaa845e9ca621f159b0622daae7387cdea1e97.debug ..
--2762--   .. build-id is valid
--2762-- Reading syms from /lib/x86_64-linux-gnu/libpthread.so.0
--2762--   Considering /usr/lib/debug/.build-id/26/820458adaf5d95718fb502d170fe374ae3ee70.debug ..
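(A note for anyone repeating this experiment: valgrind can turn any detected errors into a distinctive exit status, which makes runs like the one above scriptable instead of requiring the log file to be read by hand. This is only a sketch, assuming valgrind is installed; the guard and variable name are illustrative.)

```shell
#!/bin/bash
# --error-exitcode makes valgrind exit with the given code if it reported
# any errors, so the result can be checked without parsing the log file.
if command -v valgrind >/dev/null 2>&1; then
    valgrind --leak-check=full --error-exitcode=99 \
        --log-file=valgrind-out-bash.txt /bin/bash -c 'exit 0'
    vg_status=$?
    echo "valgrind exit status: $vg_status"
else
    vg_status=skipped
    echo "valgrind not installed; skipping"
fi
```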

Re: Possible bug Bash v5.22.6/5.1.16

2024-02-17 Thread Chet Ramey

On 2/17/24 6:40 AM, John Larew wrote:

This is a portion of a script that appears to be problematic. Each of these 
attempts appears to be valid; none of them works.
The issue is apparent with bash in both termux v0.118.0/5.22.6 and Ubuntu 
v22.04.3 LTS/5.1.16  (see attached).


The clue is in the error message (and its subsequent variants):

fg: current: no such job

which means that there are no background jobs in the execution environment
where `fg' is executed.

The background job in constructs like

( sleep 15s; set -m; fg %+; exit) &
( sleep 15s; set -m; fg %%; exit) &

is in the parent shell's execution environment; the subshell has no
background jobs of its own.

In constructs like

( sleep 15s; set -m; fg $$; exit) &
( sleep 15s; set -m; fg $!; exit) &

`fg' doesn't take PID arguments. (The reason you get the `current' in the
error message is that $! expands to nothing if there haven't been any
background jobs created, and `fg' without arguments defaults to the current
job.)
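The explanation above can be checked directly: the subshell's own job table starts empty, so `fg' fails there even with job control enabled. A minimal sketch (the variable names are just for illustration):

```shell
#!/bin/bash
# The background job below belongs to the parent shell's job table.
sleep 5 &
bg_pid=$!

# The subshell has no background jobs of its own, so even with job
# control enabled, `fg' finds nothing to resume and fails with
# "fg: current: no such job".
err_msg=$( ( set -m; fg %+ ) 2>&1 )
status=$?

echo "fg in subshell: status=$status, msg=$err_msg"
kill "$bg_pid" 2>/dev/null
```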

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



OpenPGP_signature.asc
Description: OpenPGP digital signature


Re: [Bug][Bash Posix] Inquiry about Contributing to the Project

2024-01-17 Thread Chet Ramey

On 1/16/24 7:11 PM, Emre Ulusoy wrote:

Dear Bash Maintainers,

I hope this message finds you well. I am writing to inquire about the 
possibility of contributing to your project.

Recently, I discovered a potential bug in the 'bash --posix' terminal and I 
believe I have a fix that could resolve this issue. Before proceeding, I wanted 
to confirm if this is an open-source project where external contributions via 
pull requests are welcomed.


If you believe you have found a bug in bash, you are welcome to report it
right here.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/




Re: [Bug][Bash Posix] Inquiry about Contributing to the Project

2024-01-16 Thread Lawrence Velázquez
On Tue, Jan 16, 2024, at 7:11 PM, Emre Ulusoy wrote:
> Recently, I discovered a potential bug in the 'bash --posix' terminal 
> and I believe I have a fix that could resolve this issue. Before 
> proceeding, I wanted to confirm if this is an open-source project where 
> external contributions via pull requests are welcomed.

Yes, contributions are welcome, although this project does not use
pull requests.

https://git.savannah.gnu.org/cgit/bash.git/tree/README?h=bash-5.2#n51

-- 
vq



[Bug][Bash Posix] Inquiry about Contributing to the Project

2024-01-16 Thread Emre Ulusoy
Dear Bash Maintainers,

I hope this message finds you well. I am writing to inquire about the 
possibility of contributing to your project.

Recently, I discovered a potential bug in the 'bash --posix' terminal and I 
believe I have a fix that could resolve this issue. Before proceeding, I wanted 
to confirm if this is an open-source project where external contributions via 
pull requests are welcomed.

I am eager to contribute to the improvement of this project and would 
appreciate any guidance you could provide on how to proceed.

Thank you for your time and consideration. I look forward to your response.

Best regards,
Emre ULUSOY
EPITA Student


Re: [bug-bash] Bash-5.2 Patch 22

2024-01-16 Thread Chet Ramey

On 1/16/24 10:09 AM, Dr. Werner Fink wrote:

On 2024/01/16 09:27:19 -0500, Chet Ramey wrote:

On 1/16/24 4:00 AM, Dr. Werner Fink wrote:


what is with the readline82-008, readline82-009, and readline82-010
patches?


What about them?


Should those also be part of the bash52 patches?


Bash-5.2 doesn't have prototypes either, so patch 8 doesn't really matter.
That will all get cleaned up in the next release. Patch 9 doesn't apply
since bash uses the directory rewrite hook and directory completion hook.
Patch 10 will be part of the next set of bash patches.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/




Re: [bug-bash] Bash-5.2 Patch 22

2024-01-16 Thread Dr. Werner Fink
On 2024/01/16 09:27:19 -0500, Chet Ramey wrote:
> On 1/16/24 4:00 AM, Dr. Werner Fink wrote:
> 
> > what is with the readline82-008, readline82-009, and readline82-010
> > patches?
> 
> What about them?

Should those also be part of the bash52 patches?

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr


signature.asc
Description: PGP signature


Re: [bug-bash] Bash-5.2 Patch 22

2024-01-16 Thread Chet Ramey

On 1/16/24 4:00 AM, Dr. Werner Fink wrote:


what is with the readline82-008, readline82-009, and readline82-010
patches?


What about them?

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/




Re: [bug-bash] Bash-5.2 Patch 22

2024-01-16 Thread Dr. Werner Fink
On 2024/01/14 13:34:06 -0500, Chet Ramey wrote:
>BASH PATCH REPORT
>=
> 
> Bash-Release: 5.2
> Patch-ID: bash52-022
> 
> Bug-Reported-by:  srobert...@peratonlabs.com
> Bug-Reference-ID:
> Bug-Reference-URL:
> https://lists.gnu.org/archive/html/bug-bash/2022-09/msg00049.html
> 
> Bug-Description:
> 
> It's possible for readline to try to zero out a line that's not null-
> terminated, leading to a memory fault.
> 
> Patch (apply with `patch -p0'):
> 
> *** ../bash-5.2-patched/lib/readline/display.c  2022-04-05 10:47:31.0 -0400
> --- lib/readline/display.c  2022-12-13 13:11:22.0 -0500
> ***
> *** 2684,2692 
>   
> if (visible_line)
> ! {
> !   temp = visible_line;
> !   while (*temp)
> ! *temp++ = '\0';
> ! }
> rl_on_new_line ();
> forced_display++;
> --- 2735,2740 
>   
> if (visible_line)
> ! memset (visible_line, 0, line_size);
> ! 
> rl_on_new_line ();
> forced_display++;
> 
> *** ../bash-5.2/patchlevel.h  2020-06-22 14:51:03.0 -0400
> --- patchlevel.h  2020-10-01 11:01:28.0 -0400
> ***
> *** 26,30 
>  looks for to find the patch level (for the sccs version string). */
>   
> ! #define PATCHLEVEL 21
>   
>   #endif /* _PATCHLEVEL_H_ */
> --- 26,30 
>  looks for to find the patch level (for the sccs version string). */
>   
> ! #define PATCHLEVEL 22
>   
>   #endif /* _PATCHLEVEL_H_ */
> 

Hi,

what is with the readline82-008, readline82-009, and readline82-010
patches?

Werner

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr


signature.asc
Description: PGP signature


Re: [bug-bash] segfault in for(()) loop

2023-07-25 Thread Chet Ramey

On 7/25/23 5:31 AM, Dr. Werner Fink wrote:


Thanks for the report. This was fixed several months ago.


OK ... last official patch for 5.2 is still bash52-015 :)


https://lists.gnu.org/archive/html/help-bash/2023-07/msg00078.html

But the patch is simple enough to attach.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
*** ../bash-20230222/execute_cmd.c  Thu Feb 23 14:15:05 2023
--- execute_cmd.c   Mon Feb 27 17:53:08 2023
***
*** 2995,2999 
if (l->next)
  free (expr);
!   new = make_word_list (make_word (temp), (WORD_LIST *)NULL);
free (temp);
  
--- 2995,2999 
if (l->next)
  free (expr);
!   new = make_word_list (make_word (temp ? temp : ""), (WORD_LIST *)NULL);
free (temp);
  

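The one-line repro from the thread can be exercised safely by running it in a child bash, so a segfault (on an unpatched bash-5.2.15) cannot take down the invoking shell. This sketch is not part of the original report:

```shell
#!/bin/bash
# On a patched bash the empty expansion of $z no longer produces a NULL
# word, and the for(( )) command completes normally.
repro='z=""; for ((; $z ;)); do break; done; echo survived'
output=$(bash -c "$repro" 2>&1)
echo "$output"
```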

Re: [bug-bash] segfault in for(()) loop

2023-07-25 Thread Dr. Werner Fink
On 2023/07/24 13:16:23 -0400, Chet Ramey wrote:
> On 7/24/23 11:58 AM, vc--- via Bug reports for the GNU Bourne Again SHell
> wrote:
> 
> > Bash Version: 5.2
> > Patch Level: 15
> > Release Status: release
> > 
> > Description:
> > Segmentation fault in 'for ((...))' loop
> > 
> > Repeat-By:
> > z='';for((;$z;));do echo;done
> > without spaces in ;$z;
> > in bash 5.1.4 -- works ok, in bash 5.2.15 -- segmentation fault
> 
> Thanks for the report. This was fixed several months ago.

OK ... last official patch for 5.2 is still bash52-015 :)

Werner

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr


signature.asc
Description: PGP signature


Re: Supporting structured data (was: Re: bug-bash Digest, Vol 238, Issue 2)

2022-09-12 Thread Martin D Kealey
On Wed, 7 Sept 2022 at 18:13, Yair Lenga  wrote:

> Thanks for providing feedback and expanding with new ideas.
>
> I believe the summary is:
>
> ${a.key1.key2} - Static fields
> ${a.key1.$key2} - Mixed dynamic/static,  simple substitution.
> ${a.key1.{complex.$key2}} - For complex keys that may contain anything
> ${a.key1[expr].key2} - expr is evaluated in numeric context
>
> Did I get it right ?
>

Yes, that's exactly what I had in mind.

I would initially limit the "simple" keys to strings that are valid
identifiers, or a single variable expansion without {}; we can see later
whether that can be relaxed.

That means that ${var.$key} and var.$key=value would be valid, but
${var.${key}} and var.${key}=value would not. The reason is to simplify the
lexer, which has to recognize assignments early in the parsing process, and
then to make expansions use the same form as assignments. Requiring the
'.{' pair when recursion *is* necessary makes it less likely to break
existing code.


Re: Supporting structured data (was: Re: bug-bash Digest, Vol 238, Issue 2)

2022-09-07 Thread Yair Lenga
Another comment:

While it’s important to use “natural” access, I believe it is OK to have a 
command to set values inside the h-value. It does not have to be supported as 
part of …=…, which has a lot of history, rules, interaction with env vars, etc. 
I think something like:

hset var.foo.bar=value
hset var.{complex.$x}=value

are OK. It does not have to be hset - I just borrowed that from redis. :-) Having a 
separate command can simplify the implementation - less risk of breaking existing code.

Yair

Sent from my iPad

> On Sep 7, 2022, at 3:19 AM, Martin D Kealey  wrote:
> 
> So may I suggest a compromise syntax: take the ${var.field} notation from 
> Javascript, and the {var.$key} as above, but also extend it to allow 
> ${var.{WORD}} (which mimics the pattern of allowing both $var and ${var}) 
> with the WORD treated as if double-quoted. Then we can write 
> ${var.{complex.$key/$expansion}} and var.{complex.$key/$expansion}=value, 
> which are much more reasonable propositions for parsing and reading



Re: Supporting structured data (was: Re: bug-bash Digest, Vol 238, Issue 2)

2022-09-07 Thread Yair Lenga
Thanks for providing feedback and expanding with new ideas.

I believe the summary is:

${a.key1.key2} - Static fields
${a.key1.$key2} - Mixed dynamic/static,  simple substitution.
${a.key1.{complex.$key2}} - For complex keys that may contain anything
${a.key1[expr].key2} - expr is evaluated in numeric context

Did I get it right ?

Yair

On Wed, Sep 7, 2022 at 3:19 AM Martin D Kealey 
wrote:

> Some things do indeed come down to personal preference, where there are no
> right answers. Then Chet or his successor gets to pick.
>
> Keep in mind that most or all of my suggestions are gated on not being in
> backwards-compatibility mode, and that compat mode itself would be
> lexically scoped. With that in mind, I consider that we're free to *stop*
> requiring existing antipatterns that are harmful to comprehension or
> stability.
>
> I would choose to make parsing numeric expressions happen at the same time
> as parsing whole statements, not as a secondary parser that's always
> deferred until runtime. This would improve unit testing and debugging,
> starting with bash -n being able to complain about syntax errors in
> expressions. (Yes that precludes $(( x $op y )) unless you're in compat
> mode.)
>
> On Mon, 5 Sept 2022 at 19:55, Yair Lenga  wrote:
>
>> Personally, I think adopting Javascript/Python like approach (${a.b.c} )
>> is preferred over using Perl approach ( ${a{b}{c}} ), or sticking with the
>> existing bash approach. The main reason is that it is better to have a
>> syntax similar/compatible with current/future directions, and not the past.
>>
>
> By having var['key'] and var.key as synonyms, Javascript already sets the
> precedent of allowing multiple ways to do the same thing.
>
> PPS: I'm under no illusions that it will take a LOT of work to move Bash
> this far. But we'll never get there if we keep taking steps in the opposite
> direction, piling on ever more stuff that has to be accounted for in
> "compat" mode.
>


Supporting structured data (was: Re: bug-bash Digest, Vol 238, Issue 2)

2022-09-07 Thread Martin D Kealey
Some things do indeed come down to personal preference, where there are no
right answers. Then Chet or his successor gets to pick.

Keep in mind that most or all of my suggestions are gated on not being in
backwards-compatibility mode, and that compat mode itself would be
lexically scoped. With that in mind, I consider that we're free to *stop*
requiring existing antipatterns that are harmful to comprehension or
stability.

I would choose to make parsing numeric expressions happen at the same time
as parsing whole statements, not as a secondary parser that's always
deferred until runtime. This would improve unit testing and debugging,
starting with bash -n being able to complain about syntax errors in
expressions. (Yes that precludes $(( x $op y )) unless you're in compat
mode.)

On Mon, 5 Sept 2022 at 19:55, Yair Lenga  wrote:

> Personally, I think adopting Javascript/Python like approach (${a.b.c} )
> is preferred over using Perl approach ( ${a{b}{c}} ), or sticking with the
> existing bash approach. The main reason is that it is better to have a
> syntax similar/compatible with current/future directions, and not the past.
>

By having var['key'] and var.key as synonyms, Javascript already sets the
precedent of allowing multiple ways to do the same thing.

But if you look closely, there's a difference in Javascript between
var['key'] and var[key], which cannot be replicated at a syntactic level in
Bash. Instead we have to rely on a run-time lookup of  'var' to determine
whether it's an associative map or a normal array.

That leads to bugs involving delayed and randomized surprises, whereas an
unexpected syntax goes "bang" right away, when the coder is looking. (It
might surprise them, but it won't surprise their customers.)

I believe that "helping people to avoid writing bugs" trumps "matching the
syntax suggested by common practice in other languages", and so I conclude
that it's preferable, from a code resilience point of view, to have a
syntactic difference between a numeric expression used for indexing and a
string used as a key in a map lookup.
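The run-time distinction described above is easy to demonstrate with today's bash: the same subscript text is an arithmetic expression for an indexed array but a literal string for an associative one. A small sketch (the array names are just for illustration):

```shell
#!/bin/bash
key=2
declare -a indexed=(zero one two)
declare -A assoc=([key]=literal-key [2]=numeric-key)

# For indexed arrays the subscript is evaluated arithmetically, so the
# bare name `key' is looked up as a variable and yields index 2.
idx_val=${indexed[key]}

# For associative arrays the subscript is the literal string "key".
map_val=${assoc[key]}

echo "indexed[key] -> $idx_val"   # two
echo "assoc[key]   -> $map_val"   # literal-key
```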

IMHO ${var[$key]} is incompatible with *good* "future directions";
${var[stuff]} should be reserved for numeric indexing and slicing (once
backward compat is turned off).

Bottom line - IMHO - no point in joining a losing camp (Perl), or having a
> "bash" camp.
>

If we need to institute a "bash camp" to improve resiliency, I wouldn't
lose sleep over it.

I'm not particularly wedded to the Perl var{key} syntax, and indeed for
"fields" I would prefer to avoid it, as the javascript syntax *is* nicer
and more widely understood.

But I think that *only* having "var.key" (and reserving "var[expression]"
for numeric indexing, as above) would lead to weirdness in other ways.
Consider if we allow ${var.KEY} where KEY is a shell word (minus unquoted
'.').

I agree, it's quite nice to write ${var.$var_holding_key} or even
${var.${var_holding_key:-default_key}}.

But then when you want to deal with more complex keys, we get things like
${var.""} for an empty key, or ${var."$key.$with/$multiple:$parts"}.

Those might look obvious enough to anyone who's used the shell long enough,
but let's be honest, shell quotes are already hard for newcomers to
understand, and nested quoting is a nightmare.

But it's a horrendous mess when you want to assign:
var."some/complex/key"=$newvalue. IMO that over-extends the syntax for
assignment, making the rest of the parser intolerably complicated.

So may I suggest a compromise syntax: take the ${var.field} notation from
Javascript, and the {var.$key} as above, but also extend it to allow
${var.{WORD}} (which mimics the pattern of allowing both $var and ${var})
with the WORD treated as if double-quoted. Then we can write
${var.{complex.$key/$expansion}} and var.{complex.$key/$expansion}=value,
which are much more reasonable propositions for parsing and reading.

That leaves [] for strictly numeric indexing and slicing, so we don't have
to resort to run-time lookup to figure out whether it should have been
parsed as a numeric range expression (after we've already done so).

And it leaves space in the syntax of dot-things to add operators we've
haven't considered yet; perhaps operators to make globbing and
word-splitting opt-in rather than opt-out?

-Martin

PS: I mention using var[expression] for *slicing*; I want to be able to
write var[start..end] or var[start:count] and be sure that ${var[x]} and
${var[x..x]} and ${var[x:1]} all give the same thing, save for perhaps
giving an empty list if the element is unset.

And unlike ${var[@]:x:1}, which gives an unwelcome surprise if ${var[x]} is
unset. (This is one of the antipatterns inherited from ksh that we should
avoid; I would even argue for disabling it when not in compat mode.)
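The surprise described in that PS can be sketched with current bash (assuming the usual sparse-array slicing behavior): the direct lookup sees the hole, while the @-slice silently returns the next set element.

```shell
#!/bin/bash
arr=(a b c)
unset 'arr[1]'

direct=${arr[1]-unset}      # direct lookup: the element really is gone
slice=(${arr[@]:1:1})       # the slice starts at the next set index instead

echo "arr[1]      -> $direct"
echo "arr[@]:1:1  -> ${slice[0]}"
```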

PPS: I'm under no illusions that it will take a LOT of work to move Bash
this far. But we'll never get there if we keep taking steps in the opposite
direction, piling on ever more stuff that has to be accounted for in
"compat" mode.

Re: bug-bash Digest, Vol 238, Issue 2

2022-09-05 Thread Yair Lenga
Martin brings up several good points, and I think it's worth figuring out
the direction of the implementation. Bash currently does not have good
syntax for h-values, so a new one is needed. It does not make sense to
invent a completely new one, as there are already a few accepted syntaxes -
Python, JavaScript, Perl, and Java, to name a few. Ideally, bash will decide
on a direction and then apply bash-style changes.

Wanted to emphasize - those are my own preferences. There is NO right
answer here.

Personally, I think adopting a JavaScript/Python-like approach (${a.b.c}) is
preferable to the Perl approach (${a{b}{c}}) or to sticking with the
existing bash approach. The main reason is that it is better to have a
syntax similar/compatible with current and future directions, not the past.
JavaScript/Python knowledge is much more common nowadays than Perl or Bash
arrays (regular and associative). Using '.' is also in line with many
scripting/compiled languages - C/C++/Java/Groovy/Lua all support '.' for
selecting fields from a structure - so developers with that background will
feel comfortable with this grammar.

I believe supporting bracket notation can be a plus. One tricky issue: is
the content inside the brackets (${a[index_expression]}) evaluated or not?
Martin highlighted some backward-compatibility issues; my vote goes to
evaluating the expressions.

Bottom line - IMHO - no point in joining a losing camp (Perl), or having a
"bash" camp.

That brings up the second question, of bash adaptations. My own opinion is that
it would be great to support multiple approaches:
* a.$b.$c - bash-style substitution; should allow ANY substitution,
including command, arithmetic, ...
* a[expr1][expr2] - would be a nice alternative, where expr1 and expr2 are
evaluated/interpolated.
* a.$b[$expr] - mixed usage?

As far as supporting '.' in the key, I do not see this as a major issue.
For me, the main goal is to support light-weight structure-like values. Not
deep hash tables. Bash will not be the preferred solution for complex
processing of data structure, even if it will be able to support '.' in the
key.

Needless to say - those are my own preferences. There is NO right answer
here.

Yair

On Mon, Sep 5, 2022 at 4:15 AM Martin D Kealey 
wrote:

> Rather than var[i1.i2.i3] I suggest a C-like var[i1][i2][i3] as that
> avoids ambiguity for associative arrays whose keys might include ".", and
> makes it simpler to add floating point arithmetic later.
>
> I would like to allow space in the syntax to (eventually) distinguish
> between an object with a fairly fixed set of fields and a map with an
> arbitrary set of keys. Any C string - including the empty string - should
> be a valid key, but a field name should have the same structure as a
> variable name.
>
> Moreover I'm not so keen on ${var.$key}; I would rather switch the
> preferred syntax for associative arrays (maps) to a Perl-like ${var{key}}
> so that it's clear from the syntax that arithmetic evaluation should not
> occur.
>
> Then we can write ${var[index_expression].field{$key}:-$default}.
>
> Retaining var[key] for associative arrays would be one of the backwards
> compatibility options that's only available for old-style (single-level)
> lookups.
>
> These might seem like frivolous amendments, but they deeply affect future
> features; I can only highlight a few things here.
>
> Taken together they enable expression parsing to be a continuation of the
> rest of the parser, rather than a separate subsystem that has to be
> switched into and out of, and so bash -n will be able to tell you about
> syntax errors in your numeric expressions.
>
> Then there won't be separate code paths for "parse and evaluate" and
> "skip" when handling conditionals; instead there will just be "parse",
> with "evaluate" as a separate (bypassable) step. That improves reliability
> and simplifies maintenance. And caching of the parse tree could improve
> performance, if that matters.
>
> Backwards compatibility mode would attempt to parse expressions but also
> keep the literal text, so that when it later turns out that the variable is
> an assoc array, it can use that rather than the expression tree. This would
> of course suppress reporting expression syntax errors using bash -n.
>
> -Martin
>
> On Mon, 5 Sep 2022, 05:49 Yair Lenga,  wrote:
>
>> Putting aside the effort to implement, it might be important to think on
>> how the h-data structure will be used by users. For me, the common use
>> case
>> will be to implement a simple, small "record" like structure to make it
>> easier to write readable code. Bash will never be able to compete with
>> Python/Node for large scale jobs, or for performance critical services,
>> etc. However, there are many devops/cloud tasks where bash + cloud CLI
>> (aws/google/azure) could be a good solution, eliminating the need to build
>> "hybrids". In that context, being able to consume, process and produce
>> data
>> structures relevant to those tasks can be useful. Clearly, JSON and YAML
>> are the most relevant formats.

Re: bug-bash Digest, Vol 238, Issue 2

2022-09-05 Thread Martin D Kealey
Rather than var[i1.i2.i3] I suggest a C-like var[i1][i2][i3] as that avoids
ambiguity for associative arrays whose keys might include ".", and makes it
simpler to add floating point arithmetic later.

I would like to allow space in the syntax to (eventually) distinguish
between an object with a fairly fixed set of fields and a map with an
arbitrary set of keys. Any C string - including the empty string - should
be a valid key, but a field name should have the same structure as a
variable name.

Moreover I'm not so keen on ${var.$key}; I would rather switch the
preferred syntax for associative arrays (maps) to a Perl-like ${var{key}}
so that it's clear from the syntax that arithmetic evaluation should not
occur.

Then we can write ${var[index_expression].field{$key}:-$default}.

Retaining var[key] for associative arrays would be one of the backwards
compatibility options that's only available for old-style (single-level)
lookups.

These might seem like frivolous amendments, but they deeply affect future
features; I can only highlight a few things here.

Taken together they enable expression parsing to be a continuation of the
rest of the parser, rather than a separate subsystem that has to be
switched into and out of, and so bash -n will be able to tell you about
syntax errors in your numeric expressions.

Then there won't be separate code paths for "parse and evaluate" and
"skip" when handling conditionals; instead there will just be "parse",
with "evaluate" as a separate (bypassable) step. That improves reliability
and simplifies maintenance. And caching of the parse tree could improve
performance, if that matters.

Backwards compatibility mode would attempt to parse expressions but also
keep the literal text, so that when it later turns out that the variable is
an assoc array, it can use that rather than the expression tree. This would
of course suppress reporting expression syntax errors using bash -n.

-Martin

On Mon, 5 Sep 2022, 05:49 Yair Lenga,  wrote:

> Putting aside the effort to implement, it might be important to think on
> how the h-data structure will be used by users. For me, the common use case
> will be to implement a simple, small "record" like structure to make it
> easier to write readable code. Bash will never be able to compete with
> Python/Node for large scale jobs, or for performance critical services,
> etc. However, there are many devops/cloud tasks where bash + cloud CLI
> (aws/google/azure) could be a good solution, eliminating the need to build
> "hybrids". In that context, being able to consume, process and produce data
> structures relevant to those tasks can be useful. Clearly, JSON and YAML
> are the most relevant formats.
>
> As a theoretical exercise, looking for feedback for the following, assuming
> that implementation can be done. Suggesting the following:
> * ${var.k1.k2.k3}  -> value   # Should lookup an item via h-data,
> supporting the regular modifiers ('-', for default values, '+' for
> alternate, ...)
> * var[k1.k2.k3]=value   # Set a specific key, replacing
> sub-documents, if any - e.g. removing any var[.k1.k2.k3.*]
> * var[k1.k2.k3]=(h-value)  # set a specific key to a new
> h-value
> * ${var.k1.k2.k3.*}   -> h->value   # extract h-value string that represent
> the sub-document k1.k2.k3
>
> The 'h-value' representation may be the same format that is currently used
> by the associative array. No need to reinvent here.
>
> Assuming the above are implemented, the missing pieces are "converters" to
> common formats: json, yaml, and possibly XML (yet, there is still a lot of
> those). In theory, following the 'printf' styles:
> * printjson [-v var] h-value
> * readjson var # or event var.k1
> * printyaml [-v var] h-value
> * readyaml var # or event var.k1
>
> To summarize:
> * Using '.' to identify the hierarchy of the h-data - extension to bash
> syntax.
> * Allow setting a "node" to new value, or new sub-document - may be
> extension
> * Converters to/from standard formats - can be extensions
>
> Looking for feedback
> Yair
>


Re: bug-bash Digest, Vol 238, Issue 2

2022-09-04 Thread Yair Lenga
Putting aside the effort to implement, it might be important to think on
how the h-data structure will be used by users. For me, the common use case
will be to implement a simple, small "record" like structure to make it
easier to write readable code. Bash will never be able to compete with
Python/Node for large scale jobs, or for performance critical services,
etc. However, there are many devops/cloud tasks where bash + cloud CLI
(aws/google/azure) could be a good solution, eliminating the need to build
"hybrids". In that context, being able to consume, process and produce data
structures relevant to those tasks can be useful. Clearly, JSON and YAML
are the most relevant formats.

As a theoretical exercise, I'm looking for feedback on the following,
assuming it can be implemented:
* ${var.k1.k2.k3}  -> value   # Look up an item via h-data,
supporting the regular modifiers ('-' for default values, '+' for
alternate, ...)
* var[k1.k2.k3]=value   # Set a specific key, replacing
sub-documents, if any - e.g. removing any var[.k1.k2.k3.*]
* var[k1.k2.k3]=(h-value)  # Set a specific key to a new h-value
* ${var.k1.k2.k3.*}   -> h-value   # Extract the h-value string that
represents the sub-document k1.k2.k3

The 'h-value' representation may be the same format that is currently used
by the associative array. No need to reinvent here.

Assuming the above are implemented, the missing pieces are "converters" to
common formats: JSON, YAML, and possibly XML (there is still a lot of it
around). In theory, following the 'printf' style:
* printjson [-v var] h-value
* readjson var # or even var.k1
* printyaml [-v var] h-value
* readyaml var # or even var.k1

To summarize:
* Using '.' to identify the hierarchy of the h-data - an extension to bash
syntax.
* Allowing a "node" to be set to a new value or a new sub-document - may be
an extension.
* Converters to/from standard formats - can be extensions.

Looking for feedback
Yair
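The proposed `${var.k1.k2}` syntax is not valid bash today, but the
flattened-key idea behind it can already be prototyped with an ordinary
associative array, with the hierarchy encoded in dotted keys. The sketch
below is illustrative glue (all names invented here), not an implementation
of the proposal:

```shell
#!/usr/bin/env bash
# Flatten the hierarchy into dotted keys of one associative array.
declare -A var=(
  [server.host]="example.org"
  [server.port]="8080"
)

# Lookup with a default, approximating the proposed ${var.server.host-...}:
host=${var[server.host]-localhost}
missing=${var[server.user]-nobody}

# "Sub-document" extraction: list every key under the server.* prefix.
for k in "${!var[@]}"; do
  [[ $k == server.* ]] && printf '%s=%s\n' "$k" "${var[$k]}"
done | sort
```

This prints the two `server.*` entries; the main gap compared to the
proposal is that replacing a sub-document means looping over and unsetting
the matching prefix keys yourself.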


Date: Fri, 2 Sep 2022 09:38:35 +1000
From: Chris Dunlop 
To: Chet Ramey 
Cc: tetsu...@scope-eye.net, bug-bash@gnu.org
Subject: Hierarchical data (was: Light weight support for JSON)
Message-ID: <20220901233835.ga2826...@onthe.net.au>
Content-Type: text/plain; charset=us-ascii; format=flowed

On Wed, Aug 31, 2022 at 11:11:26AM -0400, Chet Ramey wrote:
> On 8/29/22 2:03 PM, tetsu...@scope-eye.net wrote:
>> It would also help greatly if the shell could internally handle
>> hierarchical data in variables.
>
> That's a fundamental change. There would have to be a better reason to
> make it than handling JSON.

I've only a little interest in handling JSON natively in bash (jq usually
gets me there), but I have a strong interest in handling hierarchical data
(h-data) in bash.

I admit I've only had a few cases where I've been jumping through hoops to
manage h-data in bash, but that's because, once it's clear h-data is a
natural way to manage an issue, I would normally handle the problem in
perl rather than trying to force clunky constructs into a bash script. In
perl I use h-data all the time. I'm sure if h-data were available in bash
I'd be using it all the time there as well.
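For concreteness, the jq route mentioned above usually looks like the
following (the JSON document and field names are invented for illustration,
and jq must be on PATH):

```shell
#!/usr/bin/env bash
# Nested lookups delegated to jq, the common stand-in while bash itself
# has no hierarchical variables.
json='{"db":{"host":"example.org","port":5432}}'
host=$(jq -r '.db.host' <<<"$json")
port=$(jq -r '.db.port' <<<"$json")
echo "$host:$port"    # → example.org:5432
```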

Chris




Re: bug-bash Digest, Vol 237, Issue 30

2022-08-29 Thread Chet Ramey

On 8/28/22 1:17 PM, Yair Lenga wrote:

Yes, you are correct - (most/all of) those examples are "K&R".

However, given bash's important role in modern computing - isn't it time to
take advantage of new language features? This can make code more readable,
efficient and reliable.


There's no actual evidence for this assertion.


I doubt that
many users are trying to install a new bash in a system that was
built/configured 15 years ago.


You might be surprised. I have corresponded with folks who maintain and
distribute 4.3 BSD systems and want to use bash on them. (There are many
bigger problems with that than using prototypes in source code, without a
doubt.)

The oldest I've ever personally built a `modern' bash version on is
Openstep 4.2, and the most recent version I've built there is 5.0.2. That's
not nearly as big a headache as something like 4.3 BSD.



Many Java/Python/C++ projects that want to move forward do it as part of
a "major" release, in which they indicate that Java 7 (or Java 8) support
will be phased out. Same for C++ and Python.


Bash has a different set of dependencies.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/




Re: bug-bash Digest, Vol 237, Issue 30

2022-08-28 Thread Lawrence Velázquez
On Sun, Aug 28, 2022, at 1:17 PM, Yair Lenga wrote:
> Yes, you are correct - (most/all of) those examples are "K&R".
>
> However, given bash's important role in modern computing - isn't it time to
> take advantage of new language features?

Why?  What benefit would that actually provide?

> This can make code more readable,
> efficient and reliable.

In practice, there's only one person who really interacts with the bash
code.  If he doesn't think there's a problem with K&R style, then
it's not going to change.

> I doubt that
> many users are trying to install a new bash in a system that was
> built/configured 15 years ago.

You might be surprised.

-- 
vq



Re: bug-bash Digest, Vol 237, Issue 30

2022-08-28 Thread Yair Lenga
Yes, you are correct - (most/all of) those examples are "K&R".

However, given bash's important role in modern computing - isn't it time to
take advantage of new language features? This can make code more readable,
efficient and reliable. Users who are using old platforms are most likely
using a "snapshot" of tools - e.g., old gcc, make, ... etc. I doubt that
many users are trying to install a new bash in a system that was
built/configured 15 years ago.

Many Java/Python/C++ projects that want to move forward do it as part of
a "major" release, in which they indicate that Java 7 (or Java 8) support
will be phased out. Same for C++ and Python.


On Sun, Aug 28, 2022 at 12:00 PM  wrote:

> Send bug-bash mailing list submissions to
> bug-bash@gnu.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://lists.gnu.org/mailman/listinfo/bug-bash
> or, via email, send a message with subject or body 'help' to
> bug-bash-requ...@gnu.org
>
> You can reach the person managing the list at
> bug-bash-ow...@gnu.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of bug-bash digest..."
>
>
> Today's Topics:
>
>1. Bash Coding style - Adopting C99 declarations (Yair Lenga)
>2. Re: Light weight support for JSON (Yair Lenga)
>3. Re: Bash Coding style - Adopting C99 declarations (Greg Wooledge)
>
>
> ----------
>
> Message: 1
> Date: Sun, 28 Aug 2022 10:47:38 -0400
> From: Yair Lenga 
> To: bug-bash 
> Subject: Bash Coding style - Adopting C99 declarations
> Message-ID:
>  io-antwp9vxaxjvac0elnp2tm4...@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi,
>
> I've noticed Bash code uses "old-style" C89 declarations:
> * Parameters are separated from the prototype
> * Variables declared only at the beginning of the function
> * No mixed declaration/statements
> * No block local variables
>
> intmax_t
> evalexp (expr, flags, validp)
>  char *expr;
>  int flags;
>  int *validp;
> {
>   intmax_t val;
>   int c;
>   procenv_t oevalbuf;
>
>   val = 0;
>   noeval = 0;
>   already_expanded = (flags & EXP_EXPANDED);
>
>
> ---
> Curious as to the motivation of sticking to this standard for new
> development/features. Specifically, is there a requirement to keep bash
> compatible with C89 ? I believe some of those practices are discouraged
> nowadays.
>
> Yair
>
>
> --
>
> Message: 2
> Date: Sun, 28 Aug 2022 10:51:33 -0400
> From: Yair Lenga 
> To: Alex fxmbsw7 Ratchev 
> Cc: bug-bash 
> Subject: Re: Light weight support for JSON
> Message-ID:
> <
> cak3_kppv5xnwbctxacmktvgqahegubm1y7bowa7j6ygpvwo...@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Interesting point. Using an (optional) separate array can also address the
> problem of "types" - knowing which values are quoted, and which ones are
> not. This can also provide enough metadata to convert a modified associative
> table back to JSON.
>
> On Sun, Aug 28, 2022 at 9:51 AM Alex fxmbsw7 Ratchev 
> wrote:
>
> >
> >
> > On Sun, Aug 28, 2022, 15:46 Yair Lenga  wrote:
> >
> >> Sorry for not being clear. I'm looking for feedback. The solution that I
> >> have is using python to read the JSON, and generate the commands to
> build
> >> the associative array. Will have to rewrite in "C"/submit if there is
> >> positive feedback from other readers. Yair.
> >>
> >
> > ah, cool
> > i just have a suggestion, .. to store the keys in a separate array, space
> > safe
> >
> > On Sun, Aug 28, 2022 at 9:42 AM Alex fxmbsw7 Ratchev 
> >> wrote:
> >>
> >>>
> >>>
> >>> On Sun, Aug 28, 2022, 15:25 Yair Lenga  wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> Over the last few years, JSON data has become an integral part of
> >>>> processing.
> >>>> In many cases, I find myself having to automate tasks that require
> >>>> inspection of JSON response, and in few cases, construction of JSON.
> So
> >>>> far, I've taken one of two approaches:
> >>>> * For simple parsing, using 'jq' to extract elements of the JSON
> >>>> * For more complex tasks, switching to python or Javascript.
> >>>>
> >>>> Wanted to get feedback about the following "extensions" to bash that
> >>

Re: bug-bash Digest, Vol 236, Issue 8

2022-07-05 Thread Yair Lenga
Hi.

I agree that bash's local variables are less than ideal (dynamic scope vs.
lexical scope). However, we have to use what we have. In that context, using
'main' has a lot of value - documentation, declarative style, etc.

In my projects, we use a "named" main to create reusable code (e.g. a date
calculator can expose date_calc_main, which can be called as a function
after the file is sourced). Ideal? No. Productive? Yes. Fewer globals/bugs?
Yes.

One day I will rewrite the code in Python :-); probably that day will never
come. It would be interesting to look at alternatives, in a different
thread. Until then, bash is my tool.

As stated before, I will extend errfail to the top level, as not everyone
uses bash in the same way (with respect to (not) placing logic at the top
level).


Yair
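A minimal demonstration of the dynamic-scope behavior discussed in this
thread (the function and variable names are invented for illustration):

```shell
#!/usr/bin/env bash
# "local" in bash is dynamically scoped: main's x is visible to -- and
# modifiable by -- every function main calls.
process() { echo "callee sees x=$x"; x=changed; }

main() {
  local x=original
  process                    # reads and writes main's local x
  echo "after callee: x=$x"
}

main
echo "global x is ${x-unset}"
```

This prints `callee sees x=original`, then `after callee: x=changed`, and
finally `global x is unset`: the callee saw and modified main's "local",
while the true global namespace stayed clean.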




>
> Message: 5
> Date: Wed, 6 Jul 2022 13:23:14 +1000
> From: Martin D Kealey 
> To: Yair Lenga 
> Cc: Lawrence Velázquez , Martin D Kealey
> , bug-bash 
> Subject: Re: Revisiting Error handling (errexit)
> Message-ID:
>  0dd4vdyl...@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> On Wed, 6 Jul 2022 at 08:34, Yair Lenga  wrote:
>
> > in general, for complex scripts, I prefer to move the ‘main’ logic into a
> > function (‘main’, ‘run’,…). This make it possible to keep clean name
> space.
> > Otherwise, all the main variables are becoming global: see below for ‘x’.
> > With many variables, it can be ugly.
> >
> > Function main () {
> > Local x.   # x is local
> > For x in a b ; do process $x ; done
> > }
> >
> > Vs.
> > # x is global, all function will see it.
> > For x in a b ; do process $x ; done
> >
>
> Unfortunately that's not how "local" works. In both cases the variable "x"
> is visible to (and modifiable by) everything that is called from "main"; since
> everything is inside the "main" function, "local" there is quite
> indistinguishable from a true global.
>
> -Martin
>
>
> ----------
>
> Subject: Digest Footer
>
> ___
> bug-bash mailing list
> bug-bash@gnu.org
> https://lists.gnu.org/mailman/listinfo/bug-bash
>
>
> --
>
> End of bug-bash Digest, Vol 236, Issue 8
> 
>


Re: bug-bash Digest, Vol 236, Issue 5

2022-07-05 Thread Yair Lenga
Greg,

I agree with you 100%. Not trying to fix errexit behavior. The new errfail (if 
accepted) will provide better error handling (via opt-in) without breaking 
existing code.

Yair.

Sent from my iPad

> On Jul 4, 2022, at 10:00 PM, bug-bash-requ...@gnu.org wrote:
> 
> From: Greg Wooledge 
> To: bug-bash@gnu.org
> Subject: Re: Revisiting Error handling (errexit)
> Message-ID: 
> Content-Type: text/plain; charset=us-ascii
> 
>> On Mon, Jul 04, 2022 at 09:33:28PM +0300, Yair Lenga wrote:
>> Thanks for taking the time to review my post. I do not want to start a
>> thread about the problems with ERREXIT. Instead, I'm trying to advocate for
>> a minimal solution.
> 
> Oh?  Then I have excellent news.  The minimal solution for dealing with
> the insurmountable problems of errexit is: do not use errexit.
> 
> It exists only because POSIX mandates it.  And POSIX mandates it only
> because it has been used historically, and historical scripts would
> break if it were to be removed or changed.
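One of those insurmountable errexit problems, as a runnable illustration
(the function name is invented): errexit is suppressed for everything run
from a condition context, even deep inside a called function.

```shell
#!/usr/bin/env bash
set -e
# Inside "if f", errexit is disabled for the whole call, so the 'false'
# in f does not stop execution -- a classic errexit surprise.
reached=""
f() { false; reached="f kept running after false"; }

if f; then :; fi
echo "$reached"
echo "script reached the end"
```

Both echo lines run: the `false` that "should" have aborted the script is
silently ignored because f was called as an if condition.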



Re: bug-bash Digest, Vol 232, Issue 27

2022-04-01 Thread Chet Ramey

On 3/31/22 4:44 PM, Jeremy Gurr wrote:

I have put together my own bash debugger (I like it better than the
others I've seen), and wanted to have variable name auto completion in
the 'read' built-in, just like it is in the base command line. Is
there a reason that bash uses a readline that is differently
configured in the 'read' builtin versus the full featured
autocompletion available in readline at the command line? Would this
be a difficult thing to implement?


The completion available to the `read' builtin is readline's default
completion, which is what's appropriate in the vast majority of cases.

The fix/enhancement is to add an option to `read', similar to `-e', so that
using it would result in `read' using bash's command line completion
mechanism instead of readline's default.

This is the second request for something like this, and I'm looking at it
for the next release after bash-5.2 (which is currently in alpha testing).
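Until such an option exists, a homegrown debugger can at least compute the
candidates itself: `compgen -v PREFIX` lists the defined variable names
that bash's own completion would offer. The prompt wiring around it is left
out here, and the debugger variables are hypothetical:

```shell
#!/usr/bin/env bash
# compgen -v lists defined variable names matching a prefix -- the raw
# material for rolling your own variable-name completion.
my_debug_counter=0
my_debug_trace=on
matches=$(compgen -v my_debug_)
printf '%s\n' "$matches"
```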

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: bug-bash Digest, Vol 232, Issue 27

2022-03-31 Thread Jeremy Gurr
I have put together my own bash debugger (I like it better than the
others I've seen), and wanted to have variable name auto completion in
the 'read' built-in, just like it is in the base command line. Is
there a reason that bash uses a readline that is differently
configured in the 'read' builtin versus the full featured
autocompletion available in readline at the command line? Would this
be a difficult thing to implement?

-- Jeremy



Re: bug-bash Digest, Vol 218, Issue 13

2021-01-14 Thread txm

On 1/14/21 9:58 PM, Chet Ramey wrote:

On 1/11/21 11:00 AM, Thomas Mellman wrote:


But here's a bug for you, in readline:

- edit a line

- go to some character

- replace that character with another, using the "r" command.

- cruise further down the line to another character

- hit the "." repeat command

The replace operation will not be executed, but rather the "x"
operation.

This has actually improved over the years.  A while back, repeating an
earlier operation like that would get characters tangled up. Now, it
seems at least to be deterministic.


I can't reproduce this on bash-5.0 or bash-5.1.




Thank you for your response.

Perhaps I have some weird configuration error:

$ echo $BASH_VERSION
5.0.18(1)-release
$ ecxo $BSH_VERSION

In this example, I changed the "h" of echo to "x", then moved right to
the "A" of BASH and hit "."





Re: bug-bash Digest, Vol 218, Issue 13

2021-01-14 Thread Chet Ramey

On 1/14/21 4:01 PM, txm wrote:

On 1/14/21 9:58 PM, Chet Ramey wrote:

On 1/11/21 11:00 AM, Thomas Mellman wrote:


But here's a bug for you, in readline:

- edit a line

- go to some character

- replace that character with another, using the "r" command.

- cruise further down the line to another character

- hit the "." repeat command

The replace operation will not be executed, but rather the "x"
operation.

This has actually improved over the years.  A while back, repeating an
earlier operation like that would get characters tangled up. Now, it
seems at least to be deterministic.


I can't reproduce this on bash-5.0 or bash-5.1.




Thank you for your response.

Perhaps I have some weird configuration error:

$ echo $BASH_VERSION
5.0.18(1)-release
$ ecxo $BSH_VERSION

In this example, I changed the "h" of echo to "x", then moved right to
the "A" of BASH and hit "."


I can't reproduce that:

$ ./bash
$ set -o vi
$ echo $BASH_VERSION
5.0.18(9)-release
$ exho $BxSH_VERSION
bash: exho: command not found

I used a similar set of editing commands. I see the same results with bash-5.1.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: bug-bash Digest, Vol 218, Issue 13

2021-01-14 Thread Chet Ramey

On 1/11/21 11:00 AM, Thomas Mellman wrote:


But here's a bug for you, in readline:

- edit a line

- go to some character

- replace that character with another, using the "r" command.

- cruise further down the line to another character

- hit the "." repeat command

The replace operation will not be executed, but rather the "x" operation.

This has actually improved over the years.  A while back, repeating an
earlier operation like that would get characters tangled up.   Now, it
seems at least to be deterministic.


I can't reproduce this on bash-5.0 or bash-5.1.


--
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: bug-bash Digest, Vol 218, Issue 19

2021-01-13 Thread n952162



On 1/13/21 6:00 PM, bug-bash-requ...@gnu.org wrote:

and then (inevitably)

simply reports an error, because such files aren't executable.

But it is not inevitable. Using 'cp' as an example.  Assuming
you have /usr/bin in your PATH, but ~/bin is in your PATH before
/usr/bin, then try:
"touch ~/bin/cp", then
"hash -u" (to clear the hash lookup), then type
"cp", you will find that it returns the
value in /usr/bin, ignoring the non-executable file that was
first in your PATH.  So if an executable is in your PATH, it will
return that in preference to a non-executable.  Only when it can't
find an executable does it return the non-executable.



Okay, that's probably the situation I was in ... I have a cover script
for ftp, that I'd disabled.  I discovered that I had to install ftp -
once I'd done that, the problem wasn't reproducible.



As for why this is useful?  Perhaps someone just created a
script 'foo' in "~/bin", but forgot to toggle
the execution bit. The error then tells them that they forgot to
toggle the execution bit.

So it only reports the non-executable when there is no other
option -- not 'inevitably', which is useful because it reminds
people they need to toggle the 'x' bit.

Make sense?



Maybe.  But I think the original comment is relevant here:

"Supporting users who are too lazy to chmod a
file ought to be less important than supporting users who want
fine-grain control over what's executable and what's not."

But I concede that the method implemented is probably an acceptable compromise.




Re: bug-bash Digest, Vol 218, Issue 13

2021-01-12 Thread n952162

On 1/10/21 6:00 PM, bug-bash-requ...@gnu.org wrote:

Message: 3
Date: Sun, 10 Jan 2021 16:49:50 +0100
From: Ángel 
To: bug-bash@gnu.org
Subject: Re: non-executable files in $PATH cause errors
Message-ID:
<94646752576f053515ac2ba4656fe0c895f348ce.ca...@16bits.net>
Content-Type: text/plain; charset="ISO-8859-15"

On 2021-01-10 at 08:52 +0100, n952162 wrote:

Hello,

I consider it a bug that bash (and its hash functionality) includes
non-executable files in its execution look-up and then (inevitably)
simply reports an error, because such files aren't executable.

Perhaps it's there to support PATH look up for arguments to the bash
command. That would also be a bug. Why should it be okay to execute a
non-executable script? Supporting users who are too lazy to chmod a
file ought to be less important than supporting users who want
fine-grain control over what's executable and what's not.

Hello

I can't reproduce what you report.

$ mkdir foo bar
$ printf '#!/bin/sh\necho Program "$0"\n' > foo/program
$ printf '#!/bin/sh\necho Program "$0"\n' > bar/program
$ PATH="$PATH:$PWD/foo:$PWD/bar"
$ chmod +x bar/program
$ program

It is executing bar/program, not foo/program which is earlier in the
path, but not executable.

Maybe you just made the earlier program not executable, and the old
path is still being remembered? You should run hash -r after
making executable changes that will make an already-executed command
find a different program in the path (in the example above, making
foo/program executable, or removing again its +x bit).

Best regards



I unfortunately can't reproduce it, either, right now.  I can't remember
if I reconfigured something or was doing something special.  When I
encounter it again, I'll investigate it better.


But here's a bug for you, in readline:

- edit a line

- go to some character

- replace that character with another, using the "r" command.

- cruise further down the line to another character

- hit the "." repeat command

The replace operation will not be executed, but rather the "x" operation.

This has actually improved over the years.  A while back, repeating an
earlier operation like that would get characters tangled up. Now, it
seems at least to be deterministic.






Re: bug-bash Digest, Vol 218, Issue 13

2021-01-11 Thread Thomas Mellman



On 1/10/21 6:00 PM, bug-bash-requ...@gnu.org wrote:

Message: 3
Date: Sun, 10 Jan 2021 16:49:50 +0100
From: Ángel 
To: bug-bash@gnu.org
Subject: Re: non-executable files in $PATH cause errors
Message-ID:
<94646752576f053515ac2ba4656fe0c895f348ce.ca...@16bits.net>
Content-Type: text/plain; charset="ISO-8859-15"

On 2021-01-10 at 08:52 +0100, n952162 wrote:

Hello,

I consider it a bug that bash (and its hash functionality) includes
non-executable files in its execution look-up and then (inevitably)
simply reports an error, because such files aren't executable.

Perhaps it's there to support PATH look up for arguments to the bash
command.  That would also be a bug.  Why should it be okay to execute a
non-executable script?  Supporting users who are too lazy to chmod a
file ought to be less important than supporting users who want
fine-grain control over what's executable and what's not.

Hello

I can't reproduce what you report.

$ mkdir foo bar
$ printf '#!/bin/sh\necho Program "$0"\n' > foo/program
$ printf '#!/bin/sh\necho Program "$0"\n' > bar/program
$ PATH="$PATH:$PWD/foo:$PWD/bar"
$ chmod +x bar/program
$ program

It is executing bar/program, not foo/program which is earlier in the
path, but not executable.

Maybe you just made the earlier program not executable, and the old
path is still being remembered? You should run  hash -r  after
making executable changes that will make an already-executed command
find a different program in the path (in the example above, making
foo/program executable, or removing again its +x bit).

Best regards



I unfortunately can't reproduce it, either.  I can't remember if I
reconfigured something or was doing something special.  When I encounter
it again, I'll investigate it better.


But here's a bug for you, in readline:

- edit a line

- go to some character

- replace that character with another, using the "r" command.

- cruise further down the line to another character

- hit the "." repeat command

The replace operation will not be executed, but rather the "x" operation.

This has actually improved over the years.  A while back, repeating an
earlier operation like that would get characters tangled up.   Now, it
seems at least to be deterministic.





Re: bug-bash Digest, Vol 215, Issue 9

2020-10-11 Thread Robert Elz
Date:Sun, 11 Oct 2020 16:26:58 +0700
From:Budi 
Message-ID:  


  | set -n doesn't do its supposed job of checking the validity of a command

That is not what it does.   When -n is set, commands are not executed,
simply parsed.

  | $ set -n 'echo HI' & Y
  | Y

What that does is turn on the -n option, and also set $1 to 'echo HI'
which is not what you intended, I think.   Since the set command succeeds,
the "echo Y" then runs.

It isn't really clear anywhere when -n (when set) takes effect (it
is usually only ever used on the command line as in

bash -n script

to have the script parsed, but not executed).   That's what the
"check validity" is about - and note that it only checks for
syntax errors, so something like

  | $ set -n 'eco HI' & Y

even if it was done properly, couldn't work, as the shell does not
try to execute the 'eco' command when -n is in effect (assuming this
was rewritten so that an attempt would be made) so it never discovers
that there is no such command.

  | won't do the check - how to solve?

Depends on what you're really trying to do, which cannot possibly be
to discover whether "eco" is a known command or not, or at least I
hope not.

kre
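The distinction kre draws is easy to see directly: only a syntax error
trips -n, never a missing command (the temp-file handling here is just
illustrative scaffolding):

```shell
#!/usr/bin/env bash
# bash -n parses without executing: it reports syntax errors, but a
# nonexistent command is a runtime problem it can never detect.
ok=$(mktemp); bad=$(mktemp)
printf 'eco HI\n'        > "$ok"   # unknown command, but valid syntax
printf 'if true; then\n' > "$bad"  # unterminated if: a syntax error
bash -n "$ok" && echo "eco script parses fine"
bash -n "$bad" 2>/dev/null || echo "syntax error detected"
rm -f "$ok" "$bad"
```

The first check succeeds even though `eco` does not exist; only the
unterminated `if` is rejected.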




Re: bug-bash Digest, Vol 215, Issue 9

2020-10-11 Thread Budi
set -n doesn't do its supposed job of checking the validity of a command
used inside a Bash script? For example, checking an echo command:

$ set -n 'echo HI' & Y
Y

$ set -n 'eco HI' & Y
Y

won't do the check - how to solve?

On 10/10/20, bug-bash-requ...@gnu.org  wrote:
> Send bug-bash mailing list submissions to
>   bug-bash@gnu.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>   https://lists.gnu.org/mailman/listinfo/bug-bash
> or, via email, send a message with subject or body 'help' to
>   bug-bash-requ...@gnu.org
>
> You can reach the person managing the list at
>   bug-bash-ow...@gnu.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of bug-bash digest..."
>
>
> Today's Topics:
>
>1. Subject: Pressing Ctrl-C during any subshell evaluation
>   terminates the shell (Daniel Farina)
>
>
> --
>
> Message: 1
> Date: Fri, 9 Oct 2020 16:23:23 -0700
> From: Daniel Farina 
> To: bug-bash@gnu.org
> Subject: Subject: Pressing Ctrl-C during any subshell evaluation
>   terminates the shell
> Message-ID:
>   
> Content-Type: text/plain; charset="UTF-8"
>
> Configuration Informatio:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -O2 -g -pipe -Wall -Werror=format-security
> -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions
> -fstack-protector-strong -grecord-gcc-switches
> -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
> -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
> -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
> -Wno-parentheses -Wno-format-security
> uname output: Linux shrike 5.8.9-200.fc32.x86_64 #1 SMP Mon Sep 14 18:28:45
> UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-redhat-linux-gnu
>
> Bash Version: 5.0
> Patch Level: 17
> Release Status: release
>
> Description:
>
> Pressing Ctrl-C during any subshell evaluation terminates the shell.  I
> noticed this when using direnv and emacs together: Ctrl-C would not cancel
> a subprocess started by the shell, but would exit the entire shell.
>
> Relaying this from: https://github.com/direnv/direnv/issues/627
>
> Repeat-By:
>
> Per https://github.com/direnv/direnv/issues/627#issuecomment-635611930
>
> $ cat bash.rc
> eval "$(direnv hook bash)"
>
> $ bash --rcfile bash.rc
> bash$ echo $PROMPT_COMMAND
> _direnv_hook
> bash$ $(sleep 10) # pressing ^C during those 10 seconds will terminate the
> shell
> ^C
> $ # inner shell terminated
>
> Fix:
>
> No known good fix.  It does seem zsh manages the situation normally without
> much difference in approach. Direnv 2.20.0 does not have this bug, but it
> also has deficits in how it traps signals.
>
>
> --
>
> Subject: Digest Footer
>
> ___
> bug-bash mailing list
> bug-bash@gnu.org
> https://lists.gnu.org/mailman/listinfo/bug-bash
>
>
> --
>
> End of bug-bash Digest, Vol 215, Issue 9
> 
>



Re: [bug-bash] Unexpected sourcing of ~/.bashrc under ssh

2019-10-28 Thread Dr. Werner Fink
On 2019/10/24 10:47:52 -0400, Greg Wooledge wrote:
> On Thu, Oct 24, 2019 at 09:01:07AM +0200, francis.montag...@inria.fr wrote:
> >   When logged on a machine with ssh, executing a simple command CMD1
> >   that spawns a "/bin/bash -c some other command" does not source
> >   ~/.bashrc: normal behaviour.
> > 
> >   When executing "CMD1 | CMD2", the ~/.bashrc is sourced: wrong.
> 
> Bash can be built with a compile-time option that causes it to try to
> detect when it's the non-interactive child of an ssh session, and source
> the user's ~/.bashrc under those conditions.
> 
> Many Linux distributions enable this option, because they believe that
> their users expect this behavior.

That is what our bugzilla has told us over the last few years: most users and
customers expect their bash to behave the same way, (non-)interactively,
locally and remotely.

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr




Re: Bug? Bash manual not indexable by search engines

2019-05-25 Thread Eduardo A . Bustamante López
On Sat, May 25, 2019 at 02:56:43PM -0400, Richard Marmorstein wrote:
> There was discussion on Twitter today
> (https://twitter.com/PttPrgrmmr/status/1132351142938185728) about how the
> Bash manual appears to not be indexable by search engines.
> 
> https://www.gnu.org/software/bash/manual/bashref.html
> redirects to
> https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html, and
> www.gnu.org/robots.txt
> has
> "Disallow: /savannah-checkouts/"
> 
> We reasoned that this probably wasn't deliberate and wanted to report it.

Hmmm, interesting. How did you get to 
?

I went to: , which is the "landing page" (?)
for Bash. That has:

> Documentation
> 
> *Documentation for Bash* is available online, as is documentation for most GNU
> software. You may also find more information about Bash by running info bash 
> or
> man bash, or by looking at /usr/doc/bash/, /usr/local/doc/bash/, or similar
> directories on your system. A brief summary is available by running bash 
> --help.

The "Documentation for Bash" text includes a link to:
, which then links to:
 (i.e. it's "bash.html", 
not "bashref.html").

Furthermore, if I search for "bash manual" in Google (i.e.
), the top three results (for me)
are:

1. 
2. 
3. 

So, it looks like the manual IS indexable?


I searched for "https://www.gnu.org/software/bash/manual/bashref.html" in Google
too, and I can see it's referenced from a couple of 

user submitted posts, but that's it.



Bug? Bash manual not indexable by search engines

2019-05-25 Thread Richard Marmorstein
There was discussion on Twitter today
(https://twitter.com/PttPrgrmmr/status/1132351142938185728) about how the
Bash manual appears to not be indexable by search engines.

https://www.gnu.org/software/bash/manual/bashref.html
redirects to
https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html, and
www.gnu.org/robots.txt
has
"Disallow: /savannah-checkouts/"

We reasoned that this probably wasn't deliberate and wanted to report it.


Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-12 Thread Greg Wooledge
On Mon, Mar 11, 2019 at 02:26:20PM -0700, L A Walsh wrote:
> How would that break compatibility?

The same way shellshock did.  A function exported by a parent bash
process using format A could not be read by a child bash process expecting
format B.

Now, you may be thinking, "This makes no sense!  The parent bash and
the child bash should be the same shell, using the same format!"  And
most of the time, that's true.  But sometimes the child environment
is different from the parent environment.  For example, if someone has
a chrooted operating system(*) which is older/newer than the main OS,
there may be two different versions of bash between the outer and inner
environments.  A function exported from the outer environment might not
be readable by the chrooted bash.

This happened in real life, to many people.

Also, the non-traditional environment variable names broke some
implementations of at(1) which expected the output of env(1) to be
readable by a shell.  (Hint: it's not.)

I have no objection to bash adding the feature you're requesting,
although I think you may be the only person who actually cares about it.
I just want to make sure you're aware of the ramifications.

Personally I think a patch to allow exporting arrays would be of wider
interest, but I am not going to write that either.

(*) Or Docker container, etc.  The specific case that I saw more than
once was in Debian, where one release of Debian adopted the FIRST set
of shellshock patches that used BASH_FUNC_funcname() as the variable
name, while another release of Debian adopted the SECOND set of
shellshock patches that used BASH_FUNC_funcname%% as the variable name.
People with Debian 7 chrooted inside Debian 8 (or vice versa) ran into
the problem that the two patched versions of bash were not compatible
with each other.

Observe:

ebase@ebase-fla:~$ cat /etc/debian_version 
7.11
ebase@ebase-fla:~$ bash -c 'f() { :; }; export -f f; env | grep BASH_FUNC'
BASH_FUNC_f()=() {  :

root@meglin2:~# cat /etc/debian_version 
8.11
root@meglin2:~# bash -c 'f() { :; }; export -f f; env | grep BASH_FUNC'
BASH_FUNC_f%%=() {  :

The same applied to anyone who compiled their own (patched) version of
bash on a system whose /bin/bash or /usr/bin/bash used a different set
of patches.  A script running under #!/usr/local/bin/bash which exported
a function and then called a script running under #!/bin/bash might not
have the exported function working.
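The two incompatible encodings can be probed mechanically. A minimal sketch (the format names follow the two shellshock patch sets described above; the bash being tested is assumed to be on PATH):

```shell
# Detect which shellshock-era export format the local bash emits, by
# exporting a trivial function in a child bash and inspecting its env.
env_dump=$(bash -c 'f() { :; }; export -f f; env')

if   printf '%s\n' "$env_dump" | grep -q '^BASH_FUNC_f%%='; then
  fmt='funcname%%'   # second shellshock patch set
elif printf '%s\n' "$env_dump" | grep -q '^BASH_FUNC_f()='; then
  fmt='funcname()'   # first shellshock patch set
else
  fmt='unknown'
fi
echo "export format: $fmt"
```

On a current system this should report the `funcname%%` form; a Debian 7 chroot as described above would report `funcname()`.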



Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-11 Thread Chet Ramey
On 3/11/19 4:15 PM, L A Walsh wrote:
> 
> 
> On 3/6/2019 7:18 AM, Chet Ramey wrote:
>>> Except that  the bash debugger gets lost on files that don't
>>> have a real source file name. Environment is not the name of the file
>>> containing the function -- it is a nebulous, ephemeral area of a
>>> process -- but it certainly is not the repository for source files
>>> that configure bash's behavior.
>>> 
>>
>> If you don't want functions to appear in the environment, don't put them
>> in the environment. Bash is reporting accurately where it read the
>> definition for `addnums'.
>>   
> 1) Where is it documented that if you export a function, the original
> source location is thrown away by bash? 

It's unreasonable to expect that a shell that reads a function definition
from the environment retains any original file and line information. The
only thing the documentation guarantees is that functions may be exported
so subshells have them defined. There's nothing in there that says a
function's file and line number information goes along with that.

> 
> 2). Ok, so it currently gets lost.  Why shouldn't it be fixed?


> Either always keep around source+line or just when some option is set.
> To minimize impact, I'd probably store all of them in one place so I can
> do text compression and encode in some storable format -- at worst Base64.

So do a sample implementation and see how it works. You can only use the
environment, since that's the only mechanism guaranteed to get information
from one process to another.
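One way such a sample implementation could start is the second-variable approach raised elsewhere in the thread: ship the metadata through the environment next to the exported function. A hedged sketch -- the BASH_FUNCSRC_* name and the lib.sh path are invented for illustration, and no version of bash reads or writes this variable:

```shell
# Hypothetical companion variable carrying file:line for an exported
# function; everything bash-specific runs inside an explicit bash -c.
out=$(bash -c '
  addnums() { echo "$(( $1 + $2 ))"; }
  export -f addnums
  export BASH_FUNCSRC_addnums="/home/user/lib.sh:3"   # invented metadata
  # A grandchild process (or a patched bash) could recover it:
  bash -c "addnums 1 2; echo defined at \$BASH_FUNCSRC_addnums"
')
echo "$out"
```

The grandchild inherits both the function and the invented metadata variable, which is the whole mechanism a debugger-friendly patch would need to formalize.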


-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-11 Thread L A Walsh
On 3/11/2019 1:34 PM, Greg Wooledge wrote:
> It's not documented so much as blatantly obvious by looking at how it's
> implemented.
>   
---
Undocumented features are subject to change at will.  Those are called
'internals'.  How they are implemented is not necessarily pertinent to
what documented features will be supported.  I've never seen
a project guarantee their internal implementation of something that
has some sort of external spec or requirements.
>
> wooledg:~$ export -f title
> wooledg:~$ env | grep -A2 title
> BASH_FUNC_title%%=() {  local IFS=' ';
>  printf '\e]2;%s\a' "$*"
> }
>
>
> There's nowhere in that variable to store metadata such as original source
> file or line number.  You'd need a second variable at the very least.
>   

The 2nd variable is what I suggested:

  To minimize impact, I'd probably store all of them in one place so I can
  do text compression and encode in some storable format -- at worst Base64.





> It would require a modification of the function export/import code, and
> this would break compatibility *again* (just like the shellshock patches
> did when they changed the implementation).
>   
  
Nope.  If it is separate, and if it is only enabled by an option, then
child instances would not change their behavior (in this case, by
displaying filename+line for funcs with -F) unless the debug block was
written out by bash and the child bash was able to read and process the
information.  If the information is not there, or is corrupted or
unreadable, the child just ignores it as random stuff set in the
environment (by default).

How would that break compatibility?  Compatibility was broken for show.
The real bug was bash reading past the end of the function in
"foo(){;} BADCODE" -- at least according to the recent writeup I read --
but users can still insert arbitrary functions into the environment, and
compatibility need not have been broken.  For that matter, a child bash
could have been made configurable with some option to read the old-format
functions as well as the new, or only the new.

Compatibility breakage was purely optional, but such is the case when
outside people start running around yelling about "the sky", er,
"security is falling!".  Rational thought exits as the project is held
up in the light for examination, scrutiny and, often, ridicule.

I have seen that happen on more than one other project, and the
over-reaction generally results in more long-term damage than a simple
fix of the problem.

In this case, the exploit would have required the attacker to have
physical access to my in-memory cache.  Not impossible -- but improbable.
If they do have such access, a bash exploit is likely the least of my
worries.





Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-11 Thread Greg Wooledge
On Mon, Mar 11, 2019 at 01:15:16PM -0700, L A Walsh wrote:
> 1) Where is it documented that if you export a function, the original
> source location is thrown away by bash?

It's not documented so much as blatantly obvious by looking at how it's
implemented.


wooledg:~$ export -f title
wooledg:~$ env | grep -A2 title
BASH_FUNC_title%%=() {  local IFS=' ';
 printf '\e]2;%s\a' "$*"
}


There's nowhere in that variable to store metadata such as original source
file or line number.  You'd need a second variable at the very least.
It would require a modification of the function export/import code, and
this would break compatibility *again* (just like the shellshock patches
did when they changed the implementation).



Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-11 Thread L A Walsh



On 3/6/2019 7:18 AM, Chet Ramey wrote:
>> Except that  the bash debugger gets lost on files that don't
>> have a real source file name. Environment is not the name of the file
>> containing the function -- it is a nebulous, ephemeral area of a
>> process -- but it certainly is not the repository for source files
>> that configure bash's behavior.
>> 
>
> If you don't want functions to appear in the environment, don't put them
> in the environment. Bash is reporting accurately where it read the
> definition for `addnums'.
>   
1) Where is it documented that if you export a function, the original
source location is thrown away by bash?  That is not expected behavior
from reading the bash manpage.  YOU may expect it, and I understand what
you mean: a function that has been read from disk with the '-x' bit set
becomes an 'exported' function that loses its debug information.

Fine.  How is it that throwing away debug for exported functions was
considered "desirable"?  I don't recall that ever being mentioned on this
list, nor in the manpages.

2). Ok, so it currently gets lost.  Why shouldn't it be fixed?
Either always keep around source+line or just when some option is set.
To minimize impact, I'd probably store all of them in one place so I can
do text compression and encode in some storable format -- at worst Base64.
>
>   
>>> which is correct: the shell read the definition of addnums from the
>>> environment,
>>>   
>> Except... how did it get into the environment?  It wasn't there when
>> bash started.  Bash created a COPY there -- but the COPY is not
>> a SOURCE file.
>> 
>
> You exported it, putting it into the environment, losing the file and line
> information.  
>   
And the fact that it is lost is desired, designed behavior, or an oversight?
And where is it documented?  Both are semi-pointless when the real question
is how to fix it (again, likely with some option, only).  NOTE: unlike the
current implementation, files should be stored by absolute path, since if
the script changes to another directory, we still want the path to be valid.
>   
>> There's no way you can convincingly say that you believe the source came
>> out of nothing into bash's environment, so we'll hopefully avoid
>> that topic as well as bash "being the way it is" due to being kidnapped
>> by space aliens.
>> 
>
> This is gibberish.
>   
Not entirely.  Certainly you would agree the function, when initially
loaded into bash, came from some file and line number.  I was attempting
humor, thinking about an alternative where the function mysteriously
appears in the environment from some unknown origin.  Ok, so it's not
funny to you.  Fine, be a hard audience, see if you're happy!  ;~)

>   
>> Of course the debugger could recover here if bash had kept the
>> actual source lines of the function as they were read in, but, as you
>> mention, it would take more memory.  
>> 
>
> This is nonsense, unless you want to pass this hypothetical original
> text through the environment as well.  
>   
Uh...yeah.  No matter where it is passed, if it had the source, it might
present it for purposes of single-stepping through it, but that would be
a heavier-weight solution than just passing in the source+line.
>   
>> But bashdb doesn't work reliably without, at least being able to find
>> the source.  So the subject of that thread changed to:
>>
>>   "Please store source file name + line number for all functions
>>defined whether or not bashdb is running."
>>
>> Which led to your assertion that it did, except neither I nor the
>> debugger can accept 'environment line 0' as a useful source file + line
>> number at which we can find the definition of the function we want to
>> step through.
>> 
>
> OK, then don't use the environment to pass functions you want to step
> through. If you want to do that, accept that you have deliberately
> defeated the bashdb feature you want to use.
>   
---
Only deliberately if I *knew* that bash threw away debug info on export.
As it isn't documented anywhere, I'd suggest that was unlikely.  Instead,
I copied that function to an external file in the same directory (via an
absolute path), so that when I needed to step through it, I could source
it again.  Not very elegant or efficient, but it is shell.

Thanks!








Re: [bug-bash] Hidden directories breaks path expansions

2019-03-07 Thread Dr. Werner Fink
On Thu, Mar 07, 2019 at 03:42:49PM +0100, Dr. Werner Fink wrote:
> On Mon, Mar 04, 2019 at 09:00:38AM -0500, Chet Ramey wrote:
> > On 3/4/19 8:19 AM, wer...@suse.de wrote:
> > 
> > > Bash Version: 5.0
> > > Patch Level: 2
> > > Release Status: release
> > > 
> > > Description:
> > >   Since patch bash50-001 there is a regession on path expansion.
> > > The script example below shows:
> > > 
> > > bash/bash> bash tmp/bug.sh
> > >   5.0.2(1)-release
> > >   drwxr-xr-x 2 nobody root 17 Mar  4 14:08 .
> > > 
> > >   bash/bash> /dist/unpacked/sle15-x86_64.full/bin/bash tmp/bug.sh
> > >   4.4.23(1)-release
> > >   -rw-r--r-- 1 nobody root 0 Mar  4 14:10 
> > > /tmp/bugthroughpatch001/hidden/foo/bar
> > > 
> > >   Disabling patch bash50-001 solves this problem but cause
> > >   other problems. It seems as seen by strace and ltrace that
> > >   the bash with patch bash50-001 now makes a stat(2) on every
> > >   single part of the path and run onto EACCES error which cause
> > >   the regression above.
> > 
> > http://lists.gnu.org/archive/html/bug-bash/2019-02/msg00151.html
> > 
> > There is a slightly updated version of that patch attached to this message.
> 
> OK ... the hidden directories do work now ... but in the test suite
> of sed the test case sed-4.7/testsuite/subst-mb-incomplete.sh with
> 
>  print_ver_ sed
> 
>  require_en_utf8_locale_
> 
>  echo > in || framework_failure_
>  printf '\233\375\200\n' > exp-out || framework_failure_
> 
>  LC_ALL=en_US.utf8 sed $(printf 's/^/\\L\233\375\\\200/') in > out 2> err
> 
>  compare exp-out out || fail=1
>  compare /dev/null err || fail=1
> 
>  Exit $fail
> 
> does fail (YaOB).

This is the resulting log file from the testsuite

--- exp-out 2019-03-07 16:29:24.554957850 +
+++ out 2019-03-07 16:29:24.558957897 +
@@ -1 +1 @@
-\233\375\200
+L\233\375\200
FAIL testsuite/subst-mb-incomplete.sh (exit status: 1)
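The hidden-directory part of the report can be reproduced standalone. A sketch along the lines of the bug.sh mentioned above (not the original script; run as a non-root user, since root ignores the permission bits):

```shell
# Glob through a search-only (--x) directory: bash 5.0 with only patch
# bash50-001 stat()ed every path component, hit EACCES, and returned
# the literal pattern instead of expanding it.
base=$(mktemp -d)
mkdir -p "$base/hidden/foo"
touch "$base/hidden/foo/bar"
chmod 111 "$base/hidden"      # traversable, but not readable

set -- "$base"/hidden/foo/*   # only literal components cross "hidden"
echo "$1"

chmod 755 "$base/hidden"
rm -rf "$base"
```

With the follow-up patch (and in 4.4) the glob expands to `.../hidden/foo/bar`, because the unreadable directory is only traversed, never read.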

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr


signature.asc
Description: PGP signature


Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-06 Thread Chet Ramey
On 3/5/19 12:13 AM, L A Walsh wrote:

>> OK, doing that doesn't reveal any problem. If you add
>>  shopt -s extdebug; declare -F addnums
>> to prog.sh, it prints
>>
>> addnums 0 environment
>>   
> That it prints 'environment' and '0' is an issue, as the manpage says:
> 
>the -F option to declare or typeset
>will list the function names only (and optionally the source file
>and line number, if the extdebug shell option is enabled).
> 
> Following those instructions, I enabled the extdebug option before
> the function was defined -- I set a null DEBUG handler just so
> if bash called a DEBUG handler, then I'd make sure to return 0 so
> nothing would be skipped.  And...then I find out that 'extdebug'
> also starts the bash debugger.
> 
> Except that  the bash debugger gets lost on files that don't
> have a real source file name. Environment is not the name of the file
> containing the function -- it is a nebulous, ephemeral area of a
> process -- but it certainly is not the repository for source files
> that configure bash's behavior.

If you don't want functions to appear in the environment, don't put them
in the environment. Bash is reporting accurately where it read the
definition for `addnums'.


>> which is correct: the shell read the definition of addnums from the
>> environment,
> Except... how did it get into the environment?  It wasn't there when
> bash started.  Bash created a COPY there -- but the COPY is not
> a SOURCE file.

You exported it, putting it into the environment, losing the file and line
information.

> 
> There's no way you can convincingly say that you believe the source came
> out of nothing into bash's environment, so we'll hopefully avoid
> that topic as well as bash "being the way it is" due to being kidnapped
> by space aliens.

This is gibberish.

> 
> Of course the debugger could recover here if bash had kept the
> actual source lines of the function as they were read in, but, as you
> mention, it would take more memory.  

This is nonsense, unless you want to pass this hypothetical original
text through the environment as well.

> 
> But bashdb doesn't work reliably without, at least being able to find
> the source.  So the subject of that thread changed to:
> 
>   "Please store source file name + line number for all functions
>defined whether or not bashdb is running."
> 
> Which led to your assertion that it did, except neither I nor the
> debugger can accept 'environment line 0' as a useful source file + line
> number at which we can find the definition of the function we want to
> step through.

OK, then don't use the environment to pass functions you want to step
through. If you want to do that, accept that you have deliberately
defeated the bashdb feature you want to use.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-04 Thread L A Walsh



On 3/4/2019 4:53 PM, Chet Ramey wrote:
> On 3/4/19 6:44 PM, L A Walsh wrote:
>
>   
>>> What does `trace' mean here?
>>>   
>> ---
>> from the manpage:
>> "output generated when set -x is  enabled"
>> 
>
> OK. We've only been talking about function tracing to this point. There
> are several uses for the word, depending on context.
>   
---
???  Who's this 'we', kemosabe?  :-)
Ok, so function tracing enables the functions inheriting various
traps, but doesn't explicitly enable the "-x" functionality...is that
right?

I've specifically been talking about information that is printed out
from the value of PS4.  Wouldn't that exclude function and error
tracing (unless -x is automatically turned on when you turn on
function and/or error tracing)?


> OK, doing that doesn't reveal any problem. If you add
>   shopt -s extdebug; declare -F addnums
> to prog.sh, it prints
>
> addnums 0 environment
>   
That it prints 'environment' and '0' is an issue, as the manpage says:

   the -F option to declare or typeset
   will list the function names only (and optionally the source file
   and line number, if the extdebug shell option is enabled).

Following those instructions, I enabled the extdebug option before
the function was defined -- I set a null DEBUG handler just so
if bash called a DEBUG handler, then I'd make sure to return 0 so
nothing would be skipped.  And...then I find out that 'extdebug'
also starts the bash debugger.

Except that  the bash debugger gets lost on files that don't
have a real source file name. Environment is not the name of the file
containing the function -- it is a nebulous, ephemeral area of a
process -- but it certainly is not the repository for source files
that configure bash's behavior.

> which is correct: the shell read the definition of addnums from the
> environment,
Except... how did it get into the environment?  It wasn't there when
bash started.  Bash created a COPY there -- but the COPY is not
a SOURCE file.

There's no way you can convincingly say that you believe the source came
out of nothing into bash's environment, so we'll hopefully avoid
that topic as well as bash "being the way it is" due to being kidnapped
by space aliens.

Of course the debugger could recover here if bash had kept the
actual source lines of the function as they were read in, but, as you
mention, it would take more memory.  So ... if you remember,
this started out in a previous thread with my reporting that turning
on file+line functions with shopt -s extdebug gave incidental errors
in bashdb -- which you said was started in response to setting extdebug.

But bashdb doesn't work reliably without, at least being able to find
the source.  So the subject of that thread changed to:

  "Please store source file name + line number for all functions
   defined whether or not bashdb is running."

Which led to your assertion that it did, except neither I nor the
debugger can accept 'environment line 0' as a useful source file + line
number at which we can find the definition of the function we want to
step through.

So in terms of what is needed: not keeping the debug symbols for the
functions around (for bash, that's source file+line number at the least)
makes the functions only slightly easier to debug than a stripped
binary.  In those terms it is a design "deficit" (or flaw)...which
still makes it something that doesn't meet the requirements (never mind
that software engineers never know about requirements until after the
product is released) and that needs a solution or fix.

In terms of documented behavior it's also "flawed", in that bash doesn't
achieve the behavior it is documented to have (and it's cheating to "fix"
the documents to match the flawed behavior, like big companies regularly
do (MS, Google, Adobe, etc.)).

I'm not saying that they have to be stored by default either -- I was
willing to go and turn on options that I thought should store them --
it is just that they didn't work.  :-(

>  and the environment doesn't have line numbers, per se.
>   
You might think that, except why does single stepping through it increment
the line number?  ;^)

-l

This reminds me of another issue I'm dealing with: Google changing the
definition of "filtering" to exclude keeping track of the number of
copies going through the same email box, and *deleting* duplicates[*sic]
being routed to a single email collection address before being passed on
to the end user's computer.  So even though they supposedly allow you to
turn off filtering, that doesn't prevent them from deleting emails before
you get them.

* - they aren't really duplicates, as they have different headers and
delivery destinations.









Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-04 Thread Chet Ramey
On 3/4/19 6:44 PM, L A Walsh wrote:

>> What does `trace' mean here? 
> ---
> from the manpage:
> "output generated when set -x is  enabled"

OK. We've only been talking about function tracing to this point. There
are several uses for the word, depending on context.


> FWIW, the other day, I asked where 'function tracing' and 'error tracing'
> were defined (under the 'extdebug' option under 'shopt' in manpage). I'm
> still not sure what they mean (side info: I only know what execution tracing
> is. I don't know if it is the same or different than function or error
> tracing).

I posted the descriptions. They have to do with trap inheritance so the
debugger can single-step into and trace functions.

>> If I add these lines to prog.sh:
>>   
> _After_ lib.sh has been sourced (I have several functions defined and
> exported at login), run prog.sh.  Sourcing lib.sh from prog.sh won't
> duplicate the problem.

OK, doing that doesn't reveal any problem. If you add
shopt -s extdebug; declare -F addnums
to prog.sh, it prints

addnums 0 environment

which is correct: the shell read the definition of addnums from the
environment, and the environment doesn't have line numbers, per se.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-04 Thread L A Walsh



On 3/4/2019 6:16 AM, Chet Ramey wrote:
> On 3/3/19 9:53 PM, L A Walsh wrote:
>   
>> In bash 4.4.12, if I have some 'library' like functions that I
>> read in at login time, and then later call them -- under trace
>> or under bashdb, no source is shown, as bashdb (and for trace, bash)
>> doesn't seem to be able to retrieve the original source file name and
>> line number where the function was defined.
>>
>> I'm attaching/including 2 files that demonstrate this:
>> The first I will call 'lib.sh' that is sourced from my
>> /etc/profile and my /etc/bashrc if bashrc can't find the
>> function.
>>
>> ---'lib.sh'---
>> #!/bin/bash
>> # add numbers passed in and print result to stdout
>> addnums() {
>>   declare -i sum=0
>>   while (($#)); do
>> [[ $1 =~ [-0-9]+ ]] || return -1
>> sum+=$1; shift
>>   done
>>   printf "%d\n" "$sum"
>>   return 0
>> }
>> declare -fxr addnums
>>
>>
>> ---'prog.sh'---
>> #!/bin/bash
>> # prog: calls addnums on each line read from stdin
>> while read ln; do
>>   addnums $ln
>> done
>> ---
>>
>> After lib.sh has been sourced, then either trace prog.sh
>> or try bashdb and single stepping through 'addnums'.
>> 
>
> What does `trace' mean here? 
---
from the manpage:
"output generated when set -x is  enabled"

Earlier, I shared a setting of PS4 that showed the behavior:

export PS4='>${BASH_SOURCE:+${BASH_SOURCE/$HOME/\~}}'\
'#${LINENO}${FUNCNAME:+(${FUNCNAME})}> '

also from the manpage:
PS4  The value of this parameter is expanded  as  with  PS1  and  the
 value  is  printed  before  each command bash displays during an
 execution trace. ...
   ^

FWIW, the other day, I asked where 'function tracing' and 'error tracing'
were defined (under the 'extdebug' option under 'shopt' in manpage). I'm
still not sure what they mean (side info: I only know what execution tracing
is. I don't know if it is the same or different than function or error
tracing).

In general, in the domain of "computer terminology", 'tracing'
means displaying each line of source just before it is executed.
Do you know of any other definition in the "knowledge domain"
or subject of "bash shell scripting" ?  :-)

-Linda






> If I add these lines to prog.sh:
>   
_After_ lib.sh has been sourced (I have several functions defined and
exported at login), run prog.sh.  Sourcing lib.sh from prog.sh won't
duplicate the problem.




Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-04 Thread Grisha Levit
On Sun, Mar 3, 2019 at 9:56 PM L A Walsh  wrote:
> The first I will call 'lib.sh' that is sourced from my
> /etc/profile
[snip]
> declare -fxr addnums
[snip]
> ---'prog.sh'---
> #!/bin/bash
> # prog: calls addnums on each line read from stdin
> while read ln; do
>   addnums $ln
> done
> ---

It looks like you're exporting a function definition, which makes it
available to prog.sh.  Earlier you said that the reported filename was
"environment", which makes sense then -- if you run prog.sh as a
script, it only knows about the addnums function because it's in the
environment, it doesn't have any way to inherit the source filename
and line number.  If you change prog.sh to source lib.sh rather than
rely on the imported definition, you should get the debugging
information you're looking for.
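The difference between the two cases can be condensed into one self-contained check (the lib path is a throwaway temp file; extdebug is enabled so that declare -F prints file and line, per the manpage text quoted elsewhere in the thread):

```shell
lib=$(mktemp /tmp/lib_demo.XXXXXX)
printf 'addnums() { echo "$(( $1 + $2 ))"; }\n' > "$lib"

# Case 1: the child sources the file itself -- file and line survive.
src=$(bash -c "shopt -s extdebug; . '$lib'; declare -F addnums")
echo "sourced:  $src"

# Case 2: the grandchild only inherits the exported definition -- bash
# reports line 0 and the pseudo-file "environment".
env_case=$(bash -c ". '$lib'; export -f addnums; bash -c 'shopt -s extdebug; declare -F addnums'")
echo "imported: $env_case"

rm -f "$lib"
```

Case 1 prints something like `addnums 1 /tmp/lib_demo.XXXXXX`; case 2 prints `addnums 0 environment`, exactly the behavior being debated in this thread.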



Re: Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-04 Thread Chet Ramey
On 3/3/19 9:53 PM, L A Walsh wrote:
> In bash 4.4.12, if I have some 'library' like functions that I
> read in at login time, and then later call them -- under trace
> or under bashdb, no source is shown, as bashdb (and for trace, bash)
> doesn't seem to be able to retrieve the original source file name and
> line number where the function was defined.
> 
> I'm attaching/including 2 files that demonstrate this:
> The first I will call 'lib.sh' that is sourced from my
> /etc/profile and my /etc/bashrc if bashrc can't find the
> function.
> 
> ---'lib.sh'---
> #!/bin/bash
> # add numbers passed in and print result to stdout
> addnums() {
>   declare -i sum=0
>   while (($#)); do
> [[ $1 =~ [-0-9]+ ]] || return -1
> sum+=$1; shift
>   done
>   printf "%d\n" "$sum"
>   return 0
> }
> declare -fxr addnums
> 
> 
> ---'prog.sh'---
> #!/bin/bash
> # prog: calls addnums on each line read from stdin
> while read ln; do
>   addnums $ln
> done
> ---
> 
> After lib.sh has been sourced, then either trace prog.sh
> or try bashdb and single stepping through 'addnums'.

What does `trace' mean here? If I add these lines to prog.sh:

echo $BASH_VERSION
. ./lib.sh
declare -F addnums

I get

4.4.23(7)-release
addnums 3 ./lib.sh



-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Bug: Bash forgets sourcefile and linenumber of read-in functions

2019-03-03 Thread L A Walsh
In bash 4.4.12, if I have some 'library' like functions that I
read in at login time, and then later call them -- under trace
or under bashdb, no source is shown, as bashdb (and for trace, bash)
doesn't seem to be able to retrieve the original source file name and
line number where the function was defined.

I'm attaching/including 2 files that demonstrate this:
The first I will call 'lib.sh' that is sourced from my
/etc/profile and my /etc/bashrc if bashrc can't find the
function.

---'lib.sh'---
#!/bin/bash
# add numbers passed in and print result to stdout
addnums() {
  declare -i sum=0
  while (($#)); do
[[ $1 =~ [-0-9]+ ]] || return -1
sum+=$1; shift
  done
  printf "%d\n" "$sum"
  return 0
}
declare -fxr addnums


---'prog.sh'---
#!/bin/bash
# prog: calls addnums on each line read from stdin
while read ln; do
  addnums $ln
done
---

After lib.sh has been sourced, then either trace prog.sh
or try bashdb and single stepping through 'addnums'.





Re: bug-bash@gnu.org

2019-02-17 Thread Chet Ramey
On 2/14/19 4:20 PM, rugk wrote:
> Hi,
> regarding the paste security issues (pastejacking) [1] there is one last
> thing that shall be done to make it possible for terminal emulators to
> enable a secure shell by default: Enable bracket pasting mode in bash, by
> default.

That's a good reason to turn this on by default for the next version. In
the meantime, it's easy enough to turn on in a startup file for users (or
distributions) who are concerned about it.
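The startup-file change amounts to one readline variable (named `enable-bracketed-paste` as of bash 4.4 / readline 7.0; it did later become the default in bash 5.1). A sketch that verifies the setting without touching the real ~/.inputrc:

```shell
# Per-user opt-in is one line in ~/.inputrc; shown here via a throwaway
# INPUTRC file so the check is self-contained.
rc=$(mktemp)
printf 'set enable-bracketed-paste on\n' > "$rc"

# Ask an interactive bash what readline actually picked up.
setting=$(INPUTRC=$rc bash -i -c 'bind -v' 2>/dev/null | grep bracketed-paste)
echo "$setting"
rm -f "$rc"
```

Equivalently, an interactive session can enable it at runtime with `bind 'set enable-bracketed-paste on'`.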

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



bug-bash@gnu.org

2019-02-14 Thread rugk

Hi,
regarding the paste security issues (pastejacking) [1] there is one last 
thing that shall be done to make it possible for terminal emulators to 
enable a secure shell by default: Enable bracket pasting mode in bash, 
by default.


For details, see https://gitlab.gnome.org/GNOME/vte/issues/92, demo 
https://thejh.net/misc/website-terminal-copy-paste.


GNOME Terminal e.g. mitigated this, but without a proper default of the 
shell, it won't help.


Best regards,
rugk

[1] https://bugzilla.gnome.org/show_bug.cgi?id=697571





Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-21 Thread Chet Ramey
On 1/21/19 4:46 PM, Martijn Dekker wrote:

> So I think SRANDOM is the best name (or SECURE_RANDOM, though that is a
> bit long).

I'm OK with SRANDOM.


-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-21 Thread Quentin



On January 20, 2019 2:39:45 PM UTC, Martijn Dekker  wrote:
> filename_suffix() {
>   chars=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
>   length=${#chars}
>   for ((i=0; i<10; i++)) do
>     printf '%s' "${chars:$(( SECURE_RANDOM % length + 1 )):1}"
>   done
> }

The character distribution here will be biased, because ${#chars} is not a 
power-of-2.

TL;DR: discard out-of-range values instead of wrapping them with %:

  filename_suffix() {
chars=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
pow2length=64
result=
while (( ${#result} < 10 )); do
  result+="${chars:$(( SECURE_RANDOM % pow2length )):1}"
done
printf '%s' "$result"
  }

$pow2length is the next greater-or-equal power-of-2 starting from ${#chars}. It 
is fixed here for brevity, since the array is fixed as well, but could be 
computed.
If the value of $(( SECURE_RANDOM % pow2length )) is greater than the length of 
$chars, no character is added. This means that the loop may iterate more than 
10 times to yield 10 characters.

To elaborate on why % doesn't work, consider this: $RNG generates 3-bit random
values, so in the range 0-7, and I want integers in the range 0-4.  If I naively
use %, I get this distribution:

  $RNG -> $RNG % 5
  0 -> 0
  1 -> 1
  2 -> 2
  3 -> 3
  4 -> 4
  5 -> 0
  6 -> 1
  7 -> 2

Clearly I'm less likely to get 3s and 4s. The same logic applies to your use of 
% with ${#chars}.
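Tallying that mapping makes the skew concrete; a quick sketch of the 3-bit example:

```shell
# Count how often each result 0..4 appears when 0..7 is folded with % 5.
c0=0; c1=0; c2=0; c3=0; c4=0
for v in 0 1 2 3 4 5 6 7; do
  r=$(( v % 5 ))
  case $r in
    0) c0=$((c0+1)) ;; 1) c1=$((c1+1)) ;; 2) c2=$((c2+1)) ;;
    3) c3=$((c3+1)) ;; 4) c4=$((c4+1)) ;;
  esac
done
echo "hits per value 0..4: $c0 $c1 $c2 $c3 $c4"
```

Results 0-2 occur twice as often as 3 and 4.  With 62 characters folded out of a 32-bit range the bias is much smaller, but it never disappears unless out-of-range draws are rejected.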

If external tools are not an issue and some wasted entropy does not bother 
you, dd+tr usually nails this kind of random-string generation (brevity-wise, 
at least):

  dd if=/dev/urandom bs=1024 count=1 status=none | tr -d -c A-Za-z0-9

Then truncate to whatever length you need, or repeat if more characters are 
needed.

Cheers,
Quentin



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-21 Thread Martijn Dekker
Op 21-01-19 om 20:12 schreef Chet Ramey:
> On 1/20/19 9:04 PM, Rawiri Blundell wrote:
>> For what it's worth I did consider suggesting URANDOM, however I
>> figured some users may confuse it like this:
>>
>> RANDOM -> /dev/random
>> URANDOM -> /dev/urandom
>>
>> Couple that with an established base of myths about /dev/urandom, I
>> thought it might be best to suggest something else to eliminate that
>> potential confusion.
> 
> I can see that, but I think RANDOM is established enough that nobody
> assumes it has anything to do with /dev/random.

Not every shell scripter has years of experience. If you pair a RANDOM
with a URANDOM in the shell, then I do think many people will
automatically associate these with /dev/random and /dev/urandom.

Also, I think the name should describe the functionality, not the
specific way it's obtained -- because that could change at some point in
the future, and/or become system-dependent.

So I think SRANDOM is the best name (or SECURE_RANDOM, though that is a
bit long).

> If we're converging on something like URANDOM (or some other name) for a
> better RNG, I don't see the need to change the RANDOM generator.

FWIW, I agree.

- M.



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-21 Thread Chet Ramey
On 1/20/19 9:04 PM, Rawiri Blundell wrote:
> On Mon, Jan 21, 2019 at 10:54 AM Chet Ramey  wrote:
>>
>> On 1/20/19 7:52 AM, Rawiri Blundell wrote:
>>
>>> So it might be a case of restricting the usability of this change to
>>> newer kernels that have dedicated calls like getrandom() or
>>> getentropy(), and having to handle detecting/selecting those?
>>>
>>> So if this is an exercise that you're happy to entertain, and without
>>> wanting to feature-creep too much, why not something like this?
>>
>> I'd probably start with URANDOM as a 32-bit random integer read as
>> four bytes from /dev/urandom. It's trivial to create a filename from
>> that with whatever restrictions (and whatever characters) you want.
>>
> 
> For what it's worth I did consider suggesting URANDOM, however I
> figured some users may confuse it like this:
> 
> RANDOM -> /dev/random
> URANDOM -> /dev/urandom
> 
> Couple that with an established base of myths about /dev/urandom, I
> thought it might be best to suggest something else to eliminate that
> potential confusion.

I can see that, but I think RANDOM is established enough that nobody
assumes it has anything to do with /dev/random.

>>> As an aside, I can confirm the findings of a performance difference
>>> between 4.4 and 5.0 when running the script provided earlier in the
>>> discussion. At first glance it seems to be due to the switch from the
>>> old LCG to the current MINSTD RNG,
>>
>> There's no switch: the bash-4.4 generator and bash-5.0 generators are
>> identical. I'll have to do some profiling when I get a chance.
>>
> 
> I suspect that we're talking at cross purposes, but it's now neither
> here nor there.

We're only talking about the performance difference. It's hard to believe
it's due to the RNG, since that didn't change. The `switch' took place
ten years ago.
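For context, the generator both releases share is the Park-Miller "minimal standard" LCG. One step of it can be sketched in shell arithmetic; this is an illustration only, not bash's internal C code (which also seeds differently and masks the value down to RANDOM's documented 16-bit range):

```shell
# One step of the Park-Miller MINSTD generator: s' = 16807 * s mod (2^31 - 1).
# Illustrative sketch only; bash's internal implementation differs in detail.
minstd_step() {
  echo $(( (16807 * $1) % 2147483647 ))
}

s=12345
s=$(minstd_step "$s")
echo "$s"   # 16807 * 12345 mod 2147483647 = 207482415
```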

> You've expressed that RANDOM's period and seeding are issues for you.
> I think the ChaCha20 patch is a bit overkill for RANDOM's
> requirements, but would you be interested in some investigation into
> middle-ground alternatives like PCG or JSF32?

If we're converging on something like URANDOM (or some other name) for a
better RNG, I don't see the need to change the RANDOM generator.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-21 Thread Robert Elz
Date:Mon, 21 Jan 2019 09:43:17 -0500
From:Chet Ramey 
Message-ID:  <94f6225c-8de2-cd3d-c83e-0d061c8b0...@case.edu>

  | Take the linux mktemp, add the -c option,

Please don't, or at least leave out the -c option. (I don't care if mktemp
is made into a builtin; it seems unnecessary, since scripts don't tend
to make all that many temporary files, but it's harmless.) Please don't
make non-standard variations become more popular.

That just leads to less portable scripts, rather than better ones.

kre




Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-21 Thread Chet Ramey
On 1/21/19 8:48 AM, Greg Wooledge wrote:
> On Sun, Jan 20, 2019 at 03:39:45PM +0100, Martijn Dekker wrote:
>> E.g. to create a random character string for a temporary
>> file name, you could do
>>
>> filename_suffix() {
>>   chars=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
>>   length=${#chars}
>>   for ((i=0; i<10; i++)) do
>>     printf '%s' "${chars:$(( SECURE_RANDOM % length + 1 )):1}"
>>   done
>> }
>> tmpfile=/tmp/myfile.$(filename_suffix)
> 
> If we're doing wishlists here, I would much rather have a portable
> builtin mktemp command.  Have it work like the Linux mktemp(1) command
> (automatically create the file before terminating), and if you want to
> put a cherry on top, let it accept and ignore the -c (create) option
> for compatibility with the HP-UX mktemp(1) which doesn't create the file
> by default.

Take the linux mktemp, add the -c option, and turn it into a loadable
builtin. I'd be happy to ship that with the next version.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-21 Thread Greg Wooledge
On Sun, Jan 20, 2019 at 03:39:45PM +0100, Martijn Dekker wrote:
> E.g. to create a random character string for a temporary
> file name, you could do
> 
> filename_suffix() {
>   chars=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
>   length=${#chars}
>   for ((i=0; i<10; i++)) do
>     printf '%s' "${chars:$(( SECURE_RANDOM % length + 1 )):1}"
>   done
> }
> tmpfile=/tmp/myfile.$(filename_suffix)

If we're doing wishlists here, I would much rather have a portable
builtin mktemp command.  Have it work like the Linux mktemp(1) command
(automatically create the file before terminating), and if you want to
put a cherry on top, let it accept and ignore the -c (create) option
for compatibility with the HP-UX mktemp(1) which doesn't create the file
by default.

P.S. yours needs quite a lot more code (perhaps attempting to create
the file with noclobber in effect, or something similar) to be safe.
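The noclobber idea can be sketched like this (the helper name is made up; a real version would also want a retry limit and a trap for cleanup):

```shell
# Create a temp file safely: with noclobber (set -C) in effect, `>` fails
# if the file already exists, so we retry with a fresh random suffix
# until creation succeeds. Sketch only; make_tmpfile is a hypothetical name.
make_tmpfile() {
  local f
  while :; do
    f="/tmp/myfile.$RANDOM$RANDOM"          # stand-in for a better RNG
    if ( set -C; : > "$f" ) 2>/dev/null; then
      printf '%s\n' "$f"
      return 0
    fi
  done
}

tmpfile=$(make_tmpfile)
```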



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-20 Thread Rawiri Blundell
On Mon, Jan 21, 2019 at 10:54 AM Chet Ramey  wrote:
>
> On 1/20/19 7:52 AM, Rawiri Blundell wrote:
>
> > So it might be a case of restricting the usability of this change to
> > newer kernels that have dedicated calls like getrandom() or
> > getentropy(), and having to handle detecting/selecting those?
> >
> > So if this is an exercise that you're happy to entertain, and without
> > wanting to feature-creep too much, why not something like this?
>
> I'd probably start with URANDOM as a 32-bit random integer read as
> four bytes from /dev/urandom. It's trivial to create a filename from
> that with whatever restrictions (and whatever characters) you want.
>

For what it's worth I did consider suggesting URANDOM, however I
figured some users may confuse it like this:

RANDOM -> /dev/random
URANDOM -> /dev/urandom

Couple that with an established base of myths about /dev/urandom, I
thought it might be best to suggest something else to eliminate that
potential confusion.

(SRANDOM was another one I considered, has a bit of awk familiarity to it...)

> > As an aside, I can confirm the findings of a performance difference
> > between 4.4 and 5.0 when running the script provided earlier in the
> > discussion. At first glance it seems to be due to the switch from the
> > old LCG to the current MINSTD RNG,
>
> There's no switch: the bash-4.4 generator and bash-5.0 generators are
> identical. I'll have to do some profiling when I get a chance.
>

I suspect that we're talking at cross purposes, but it's now neither
here nor there.

You've expressed that RANDOM's period and seeding are issues for you.
I think the ChaCha20 patch is a bit overkill for RANDOM's
requirements, but would you be interested in some investigation into
middle-ground alternatives like PCG or JSF32?

Rawiri



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-20 Thread Chet Ramey
On 1/20/19 8:07 PM, Rawiri Blundell wrote:

> */snip*
> 
> So it looks like problem solved?

There never was a problem.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-20 Thread Rawiri Blundell
On Mon, Jan 21, 2019 at 1:36 PM Eduardo A. Bustamante López
 wrote:
>
> On Sun, Jan 20, 2019 at 05:22:12PM -0500, Chet Ramey wrote:
> > On 1/20/19 4:54 PM, Chet Ramey wrote:
> >
> > >> As an aside, I can confirm the findings of a performance difference
> > >> between 4.4 and 5.0 when running the script provided earlier in the
> > >> discussion. At first glance it seems to be due to the switch from the
> > >> old LCG to the current MINSTD RNG,
> (...)
> > So I ran a quick test.
> >
> > $ ./bash ./x3
> > iterations: 100
> > BASH_VERSION: 5.0.2(4)-maint
> > time: 9.684
> > $ ../bash-5.0/bash ./x3
> > iterations: 100
> > BASH_VERSION: 5.0.0(1)-release
> > time: 9.749
> > $ ../bash-5.0-patched/bash ./x3
> > iterations: 100
> > BASH_VERSION: 5.0.2(3)-release
> > time: 9.840
> > $ ../bash-4.4-patched/bash ./x3
> > iterations: 100
> > BASH_VERSION: 4.4.23(7)-release
> > time: 11.365
> > $ ../bash-4.4-patched/bash ./x3
> > iterations: 100
> > BASH_VERSION: 4.4.23(7)-release
> > time: 11.235
> > jenna.local(1)
> >
> > Where the script is Eduardo's iterator that just expands $RANDOM
> > N times.
> >
> > The random number generator has been the same since bash-4.0.
>
> I'm sorry, my tests were wrong. I built bash using the default `./configure'
> behavior for the `devel' branch, which I always forget uses the internal
> allocator with debugging enabled, and thus all of my times were off due to
> the additional malloc overhead.
>
> I rebuilt it with `../bash/configure --without-bash-malloc', which causes it
> to use the system's allocator, and sure enough, the timings make more sense
> now:
>
> (`build-bash-devel-malloc' is `configure --with-bash-malloc',
>  `build-bash-devel' is `configure --without-bash-malloc')
>
> | dualbus@system76-pc:~/src/gnu/build-bash-devel-malloc$ ./bash ~/test-speed.sh
> | iterations: 100
> | BASH_VERSION: 5.0.0(1)-maint
> | time: 8.765
> |
> | dualbus@system76-pc:~/src/gnu/build-bash-devel-malloc$ ../build-bash-devel/bash ~/test-speed.sh
> | iterations: 100
> | BASH_VERSION: 5.0.0(1)-maint
> | time: 3.431
> |
> | dualbus@system76-pc:~/src/gnu/build-bash-devel-malloc$ bash ~/test-speed.sh
> | iterations: 100
> | BASH_VERSION: 5.0.0(1)-release
> | time: 3.435
> |
> | dualbus@system76-pc:~/src/gnu/build-bash-4.4$ ./bash ~/test-speed.sh
> | iterations: 100
> | BASH_VERSION: 4.4.0(1)-release
> | time: 3.443

Hi,
Perfect timing - I was just about to reply to Chet's earlier message
with the following finding:

*snip*

On Mon, Jan 21, 2019 at 11:22 AM Chet Ramey  wrote:
>
> On 1/20/19 4:54 PM, Chet Ramey wrote:
>
> So I ran a quick test.
>
> [results removed for brevity]

That's interesting.  For comparison here's what I'm seeing:

▓▒░$ ~/bin/bash5/bash /tmp/randtest   # Downloaded and compiled as-is
iterations: 100
BASH_VERSION: 5.0.0(1)-release
time: 7.210
▓▒░$ ~/bin/bash5lcg/bash /tmp/randtest # With all the RANDOM related code reverted to the 4.4 code
iterations: 100
BASH_VERSION: 5.0.0(2)-release
time: 7.271
▓▒░$ ~/bin/bash5chacha/bash /tmp/randtest # With Ole's chacha patch
iterations: 100
BASH_VERSION: 5.0.0(1)-release
time: 7.443
▓▒░$ /bin/bash /tmp/randtest # Distro provided package, wait a minute...
iterations: 100
BASH_VERSION: 4.4.19(1)-release
time: 5.610

I hadn't thought of the pre-compiled package vs source compiled
difference until just now.  So I figured that while I'd downloaded
4.4's source to double check variables.c, I may as well compile and
test it:

▓▒░$ ~/bin/bash44/bash /tmp/randtest
iterations: 100
BASH_VERSION: 4.4.0(1)-release
time: 7.432

So in the best tradition of n=1 science, it looks like the difference
may be down to compilation choices?  Can anyone else replicate this?

*/snip*

So it looks like problem solved?

Rawiri



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-20 Thread Eduardo A . Bustamante López
On Sun, Jan 20, 2019 at 05:22:12PM -0500, Chet Ramey wrote:
> On 1/20/19 4:54 PM, Chet Ramey wrote:
> 
> >> As an aside, I can confirm the findings of a performance difference
> >> between 4.4 and 5.0 when running the script provided earlier in the
> >> discussion. At first glance it seems to be due to the switch from the
> >> old LCG to the current MINSTD RNG, 
(...)
> So I ran a quick test.
> 
> $ ./bash ./x3
> iterations: 100
> BASH_VERSION: 5.0.2(4)-maint
> time: 9.684
> $ ../bash-5.0/bash ./x3
> iterations: 100
> BASH_VERSION: 5.0.0(1)-release
> time: 9.749
> $ ../bash-5.0-patched/bash ./x3
> iterations: 100
> BASH_VERSION: 5.0.2(3)-release
> time: 9.840
> $ ../bash-4.4-patched/bash ./x3
> iterations: 100
> BASH_VERSION: 4.4.23(7)-release
> time: 11.365
> $ ../bash-4.4-patched/bash ./x3
> iterations: 100
> BASH_VERSION: 4.4.23(7)-release
> time: 11.235
> jenna.local(1)
> 
> Where the script is Eduardo's iterator that just expands $RANDOM
> N times.
> 
> The random number generator has been the same since bash-4.0.

I'm sorry, my tests were wrong. I built bash using the default `./configure'
behavior for the `devel' branch, which I always forget uses the internal
allocator with debugging enabled, and thus all of my times were off due to the
additional malloc overhead.

I rebuilt it with `../bash/configure --without-bash-malloc', which causes it to
use the system's allocator, and sure enough, the timings make more sense now:

(`build-bash-devel-malloc' is `configure --with-bash-malloc',
 `build-bash-devel' is `configure --without-bash-malloc')

| dualbus@system76-pc:~/src/gnu/build-bash-devel-malloc$ ./bash ~/test-speed.sh 
| iterations: 100
| BASH_VERSION: 5.0.0(1)-maint
| time: 8.765
| 
| dualbus@system76-pc:~/src/gnu/build-bash-devel-malloc$ ../build-bash-devel/bash ~/test-speed.sh
| iterations: 100
| BASH_VERSION: 5.0.0(1)-maint
| time: 3.431
| 
| dualbus@system76-pc:~/src/gnu/build-bash-devel-malloc$ bash ~/test-speed.sh 
| iterations: 100
| BASH_VERSION: 5.0.0(1)-release
| time: 3.435
| 
| dualbus@system76-pc:~/src/gnu/build-bash-4.4$ ./bash ~/test-speed.sh 
| iterations: 100
| BASH_VERSION: 4.4.0(1)-release
| time: 3.443



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-20 Thread Chet Ramey
On 1/20/19 4:54 PM, Chet Ramey wrote:

>> As an aside, I can confirm the findings of a performance difference
>> between 4.4 and 5.0 when running the script provided earlier in the
>> discussion. At first glance it seems to be due to the switch from the
>> old LCG to the current MINSTD RNG, 
> 
> There's no switch: the bash-4.4 generator and bash-5.0 generators are
> identical. I'll have to do some profiling when I get a chance.

So I ran a quick test.

$ ./bash ./x3
iterations: 100
BASH_VERSION: 5.0.2(4)-maint
time: 9.684
$ ../bash-5.0/bash ./x3
iterations: 100
BASH_VERSION: 5.0.0(1)-release
time: 9.749
$ ../bash-5.0-patched/bash ./x3
iterations: 100
BASH_VERSION: 5.0.2(3)-release
time: 9.840
$ ../bash-4.4-patched/bash ./x3
iterations: 100
BASH_VERSION: 4.4.23(7)-release
time: 11.365
$ ../bash-4.4-patched/bash ./x3
iterations: 100
BASH_VERSION: 4.4.23(7)-release
time: 11.235
jenna.local(1)

Where the script is Eduardo's iterator that just expands $RANDOM
N times.
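The script itself is not shown in the thread; a minimal stand-in that just expands $RANDOM in a loop might look like this (the iteration count and timing method are guesses, not Eduardo's actual code):

```shell
# Hypothetical reconstruction of the benchmark: expand $RANDOM repeatedly
# and report elapsed wall-clock time. Details are assumptions.
iterations=100000
echo "iterations: $iterations"
echo "BASH_VERSION: $BASH_VERSION"
start=$SECONDS
for ((i = 0; i < iterations; i++)); do
  : "$RANDOM"   # force the expansion, discard the value
done
echo "time: $((SECONDS - start))s"
```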

The random number generator has been the same since bash-4.0.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-20 Thread Chet Ramey
On 1/20/19 7:52 AM, Rawiri Blundell wrote:

> So it might be a case of restricting the usability of this change to
> newer kernels that have dedicated calls like getrandom() or
> getentropy(), and having to handle detecting/selecting those?
> 
> So if this is an exercise that you're happy to entertain, and without
> wanting to feature-creep too much, why not something like this?

I'd probably start with URANDOM as a 32-bit random integer read as
four bytes from /dev/urandom. It's trivial to create a filename from
that with whatever restrictions (and whatever characters) you want.
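Something close to that presentation can already be had in a script today, e.g. with od (the options used are POSIX):

```shell
# Read four bytes from /dev/urandom and print them as a single unsigned
# 32-bit integer, roughly what such a URANDOM variable would expand to.
rand32=$(od -An -N4 -tu4 /dev/urandom | tr -d ' ')
echo "$rand32"
```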

> As an aside, I can confirm the findings of a performance difference
> between 4.4 and 5.0 when running the script provided earlier in the
> discussion. At first glance it seems to be due to the switch from the
> old LCG to the current MINSTD RNG, 

There's no switch: the bash-4.4 generator and bash-5.0 generators are
identical. I'll have to do some profiling when I get a chance.

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-20 Thread Martijn Dekker
Op 19-01-19 om 23:10 schreef Chet Ramey:
> On 1/19/19 2:45 PM, Martijn Dekker wrote:
>> Op 16-01-19 om 02:21 schreef Quentin:
>>> If you really need some quality CSPRNG values, I'd suggest adding a
>>> $SECURE_RANDOM variable that just reads from /dev/urandom.
>>
>> IMHO, this would clearly be the correct approach. I don't know of any
>> 21st century Unix or Unix-like system that doesn't have /dev/urandom. I
>> would really like to see shells adopt this idea -- hopefully all with
>> the same variable name.
> 
> OK, this is a reasonable approach. Since /dev/urandom just generates
> random bytes, there's a lot of flexibility and we're not subject to
> any kind of backwards compatibility constraints, especially not the
> 16-bit limit. What do you think would be the best way to present that
> to a user? As a 32-bit random number? A character string you can use to
> create filenames? Some other form?

I'd say numbers would be the most useful, as these are the easiest to
convert into anything else using shell arithmetic and parameter
expansions. E.g. to create a random character string for a temporary
file name, you could do

filename_suffix() {
  chars=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
  length=${#chars}
  for ((i=0; i<10; i++)) do
    printf '%s' "${chars:$(( SECURE_RANDOM % length + 1 )):1}"
  done
}
tmpfile=/tmp/myfile.$(filename_suffix)

(which would of course already work with RANDOM but that would be
totally insecure, as in not effectively eliminating the risk of collisions).

- Martijn



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-20 Thread Rawiri Blundell
> OK, this is a reasonable approach. Since /dev/urandom just generates
> random bytes, there's a lot of flexibility and we're not subject to
> any kind of backwards compatibility constraints, especially not the
> 16-bit limit. What do you think would be the best way to present that
> to a user? As a 32-bit random number? A character string you can use to
> create filenames? Some other form?

Hi Chet, et al,
I really like this suggestion.  All recent variants of /dev/urandom
are using something like ChaCha20, Yarrow or Fortuna, and older
implementations will still be reasonably secure - using something like
Yarrow, Yarrow-like or arc4.  Ole's concerns should be well covered by
this approach.

Having said that, there appear to be some gotchas, as covered here:

* 
https://stackoverflow.com/questions/2572366/how-to-use-dev-random-or-urandom-in-c
* 
http://insanecoding.blogspot.com/2014/05/a-good-idea-with-bad-usage-devurandom.html

So it might be a case of restricting the usability of this change to
newer kernels that have dedicated calls like getrandom() or
getentropy(), and having to handle detecting/selecting those?

So if this is an exercise that you're happy to entertain, and without
wanting to feature-creep too much, why not something like this?

* $RANDOM = exactly as it is (or with an improved RNG... probably a
separate discussion now)
* $RANDINT = 32 bit integer from RANDOM's RNG i.e. this is seed-able
* $OS_RANDINT = 32 bit random integer from /dev/urandom
* $OS_RANDSTR = random character string from /dev/urandom

I suggest OS_ because it's less typing than, say, SECURE_ or SYSTEM_,
vaguely conveys the source of randomness and it might have some
familiarity with other languages. Python's os.urandom comes to mind.
SYS_ might be a good alternative, too.

Certain characters such as '!' may need to be omitted from
OS_RANDSTR's output, possibly that's a worthwhile thing to keep in
mind?

As an aside, I can confirm the findings of a performance difference
between 4.4 and 5.0 when running the script provided earlier in the
discussion. At first glance it seems to be due to the switch from the
old LCG to the current MINSTD RNG, but I've compiled 5.0 with the old
LCG code and on my hardware that performs roughly the same as the
MINSTD RNG and Ole's ChaCha20 patch. So I thought it might be
something to do with how loops are handled, but testing other tasks
shows no great difference there between 4.4 and 5.0. That's as far as
I got.

Cheers

Rawiri



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-19 Thread Chet Ramey
On 1/19/19 2:45 PM, Martijn Dekker wrote:
> Op 16-01-19 om 02:21 schreef Quentin:
>> If you really need some quality CSPRNG values, I'd suggest adding a
>> $SECURE_RANDOM variable that just reads from /dev/urandom.
> 
> IMHO, this would clearly be the correct approach. I don't know of any
> 21st century Unix or Unix-like system that doesn't have /dev/urandom. I
> would really like to see shells adopt this idea -- hopefully all with
> the same variable name.

OK, this is a reasonable approach. Since /dev/urandom just generates
random bytes, there's a lot of flexibility and we're not subject to
any kind of backwards compatibility constraints, especially not the
16-bit limit. What do you think would be the best way to present that
to a user? As a 32-bit random number? A character string you can use to
create filenames? Some other form?

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-19 Thread Martijn Dekker
Op 16-01-19 om 02:21 schreef Quentin:
> If you really need some quality CSPRNG values, I'd suggest adding a
> $SECURE_RANDOM variable that just reads from /dev/urandom.

IMHO, this would clearly be the correct approach. I don't know of any
21st century Unix or Unix-like system that doesn't have /dev/urandom. I
would really like to see shells adopt this idea -- hopefully all with
the same variable name.

- M.



Re: [bug-bash] $RANDOM not Cryptographically secure pseudorandom number generator

2019-01-15 Thread Quentin

Hello there,

I've reviewed both patches and found some things that should be either 
greatly improved, or buried some place very deep. :-p


On 2019-01-07 08:15, Ole Tange wrote:

> On Mon, Jan 7, 2019 at 12:08 AM Chet Ramey  wrote:
>
>> On 1/5/19 3:12 PM, Eduardo A. Bustamante López wrote:
>>> On Fri, Dec 28, 2018 at 10:24:50AM +0100, Ole Tange wrote:
>>> (...)
>>>> Patch attached.
>>
>> What's the period of the resulting RNG? That's the chief complaint with
>> the existing implementation.
>
> My implementation made that hard to determine.


The Salsa20-based RNG from salsa20.patch is completely broken.

Please do not use this earlier patch:
* it doesn't follow the Salsa20 specification (specifically, the Salsa20 
state layout is completely ignored)
* the RNG itself leaks part of the Salsa20 state (15 bits from each of 
the 16 32-bit integers) as its output, which is then used as input to 
generate the next set of numbers; this at the very least hugely 
facilitates full prediction of future values from a restricted sample 
of the RNG's output




> I have updated the patch: It now supports BASH_RANDOM_16 as well as 32.
>
> I have changed the algorithm to ChaCha20, as it seems that is the
> variant that Google, FreeBSD, OpenBSD, and NetBSD have chosen, and it
> seems it is a little harder to attack.


Now this new ChaCha20-based RNG sports a much better implementation (the 
state layout is mostly correct, and it does not leak its internal 
state).


A state layout issue remains in that upon reseed, the entire state gets 
altered, which is bad: the constant in `chachastate` must remain 
unaltered, otherwise this is no longer ChaCha20 (and beyond that, it is 
unnecessary to alter both the key and nonce/counter). Also, 
`stringaddseedrand()` allegedly "shakes" the bits if the seed is longer 
than the state, by calling `chacha20_block()`, but the only thing this 
actually does is to increase the counter value in the state (since with 
the switch to ChaCha20 the block primitive rightfully no longer alters 
the whole state).


However, I've got some much more fundamental concerns with this whole 
idea. In short, just because there's a top-notch stream cipher in it 
doesn't mean it's a (good) CSPRNG. And just like with ciphers, if you 
don't know what you're doing really well, don't roll your own CSPRNG.


Specifically, this ChaCha20-based RNG may be computationally secure (see 
https://crypto.stackexchange.com/a/39194), but it's neither forward 
secret nor prediction resistant (the Salsa20-based RNG, though much more 
messy, was however forward secret). Moreover, its computational security 
isn't even that strong, since it depends on the seed remaining secret. 
The initial seed being merely a combination of the current time of day 
and shell PID, it's not random enough and could be brute-forced.


All in all, this makes for a pretty weak CSPRNG, one that I would be 
unwilling to use to generate keys. To improve it, the first step would 
be to add real entropy (from /dev/urandom) to the mix, then use a vetted 
design such as HMAC-DRBG or CTR-DRBG (this one *may* play well with 
ChaCha20). These would give us something computationally secure, forward 
secret and prediction resistant.
Sadly, this would not be trivial to reconcile with the requirement that 
seeding $RANDOM with a known value gives the same random stream (this 
conflicts with the prediction resistance property of a CSPRNG). :(


In my opinion, $RANDOM doesn't need to change: being deterministic, it's 
never gonna be a CSPRNG, so the linear congruential RNG is fine. If you 
really need some quality CSPRNG values, I'd suggest adding a 
$SECURE_RANDOM variable that just reads from /dev/urandom. Implementing 
a full CTR-DRBG seems like way too much work for a shell.




> (And please feel free to clean up my code: C is a language I code once
> every 5 years or so).


(So while I'm at it: `chachastate` keeps switching types and is 
sometimes a global, sometimes a local variable. That's confusing.)




> /Ole


Cheers,

Quentin



Re: [bug-bash] Bash-5.0-beta2 available for download

2018-11-30 Thread Dr. Werner Fink
On Thu, Nov 29, 2018 at 08:52:58AM -0800, Chet Ramey wrote:
> On 11/29/18 7:09 AM, Dr. Werner Fink wrote:
> > On Tue, Nov 27, 2018 at 01:24:38PM -0500, Chet Ramey wrote:
> >> The second beta release of bash-5.0 is now available with the URL
> >>
> >> ftp://ftp.cwru.edu/pub/bash/bash-5.0-beta2.tar.gz
> >>
> > I see this
> > 
> > [ 2709s] seq.c: In function 'long_double_format':
> > [ 2709s] seq.c:166:9: error: expected ';' before 'return'
> > [ 2709s]  return ldfmt;
> 
> Thanks. What are you using that doesn't have long double?

Seen on armv7l only :)

Werner

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr




Re: [bug-bash] Bash-5.0-beta2 available for download

2018-11-29 Thread Chet Ramey
On 11/29/18 7:09 AM, Dr. Werner Fink wrote:
> On Tue, Nov 27, 2018 at 01:24:38PM -0500, Chet Ramey wrote:
>> The second beta release of bash-5.0 is now available with the URL
>>
>> ftp://ftp.cwru.edu/pub/bash/bash-5.0-beta2.tar.gz
>>
> I see this
> 
> [ 2709s] seq.c: In function 'long_double_format':
> [ 2709s] seq.c:166:9: error: expected ';' before 'return'
> [ 2709s]  return ldfmt;

Thanks. What are you using that doesn't have long double?

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/





Re: [bug-bash] Bash-5.0-beta2 available for download

2018-11-29 Thread Dr. Werner Fink
On Tue, Nov 27, 2018 at 01:24:38PM -0500, Chet Ramey wrote:
> The second beta release of bash-5.0 is now available with the URL
> 
> ftp://ftp.cwru.edu/pub/bash/bash-5.0-beta2.tar.gz
> 
I see this

[ 2709s] seq.c: In function 'long_double_format':
[ 2709s] seq.c:166:9: error: expected ';' before 'return'
[ 2709s]  return ldfmt;
[ 2709s]  ^~

With the attached patch this should go away.

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr
--- bash-5.0-beta2/examples/loadables/seq.c
+++ bash-5.0-beta2/examples/loadables/seq.c	2018-11-29 15:06:37.818582755 +
@@ -161,7 +161,7 @@ long_double_format (char const *fmt)
 strcpy (ldfmt + length_modifier_offset + 1,
 fmt + length_modifier_offset + has_L);
 #else
-strcpy (ldfmt + length_modifier_offset, fmt + length_modifier_offset)
+strcpy (ldfmt + length_modifier_offset, fmt + length_modifier_offset);
 #endif
 return ldfmt;
   }




Re: [bug-bash] Which commit for a bug in 4.3.48 which is fixed in 4.4.23

2018-09-25 Thread Dr. Werner Fink
On Mon, Sep 24, 2018 at 01:52:54PM -0400, Chet Ramey wrote:
> On 9/24/18 1:50 PM, Eduardo Bustamante wrote:
> > On Mon, Sep 24, 2018 at 4:09 AM Dr. Werner Fink  wrote:
> > (...)
> >> Reconstructed the attached patch ... seems to work
> > 
> > Out of curiosity, what problem are you trying to solve?
> 
> https://bugzilla.novell.com/show_bug.cgi?id=1107430

Yep ... or, as I prefer, https://bugzilla.opensuse.org/show_bug.cgi?id=1107430
(here a version upgrade is a no-go).

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr




Re: [bug-bash] Which commit for a bug in 4.3.48 which is fixed in 4.4.23

2018-09-24 Thread Chet Ramey
On 9/24/18 1:50 PM, Eduardo Bustamante wrote:
> On Mon, Sep 24, 2018 at 4:09 AM Dr. Werner Fink  wrote:
> (...)
>> Reconstructed the attached patch ... seems to work
> 
> Out of curiosity, what problem are you trying to solve?

https://bugzilla.novell.com/show_bug.cgi?id=1107430


-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/



Re: [bug-bash] Which commit for a bug in 4.3.48 which is fixed in 4.4.23

2018-09-24 Thread Eduardo Bustamante
On Mon, Sep 24, 2018 at 4:09 AM Dr. Werner Fink  wrote:
(...)
> Reconstructed the attached patch ... seems to work

Out of curiosity, what problem are you trying to solve?



Re: [bug-bash] Which commit for a bug in 4.3.48 which is fixed in 4.4.23

2018-09-24 Thread Dr. Werner Fink
On Fri, Sep 21, 2018 at 01:11:38PM +0200, Dr. Werner Fink wrote:
> Hi,
> 
> with 4.3.48 the line
> 
>   T="";echo ">${T//*/ }<"
> 
> leads to
> 
>   ><
> 
> but with 4.4.23 the correct result is given back
> 
>   > <
> 
> in the git repo I do not find any useful log entry for this

Reconstructed the attached patch ... seems to work

-- 
  "Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool." -- Edward Burr
Fix: `*' matches any string, including the null string, so that e.g.

   T=""
   echo ">${T//*/ }<"

works again, that is, returns the string "> <"

---
 lib/glob/gmisc.c |    4 ++--
 subst.c          |   20 +++++++++++++++++---
 2 files changed, 19 insertions(+), 5 deletions(-)

--- subst.c
+++ subst.c 2018-09-24 10:46:21.913346656 +
@@ -4396,7 +4396,7 @@ match_pattern (string, pat, mtype, sp, e
   size_t slen, plen, mslen, mplen;
 #endif
 
-  if (string == 0 || *string == 0 || pat == 0 || *pat == 0)
+  if (string == 0 || pat == 0 || *pat == 0)
 return (0);
 
 #if defined (HANDLE_MULTIBYTE)
@@ -6453,6 +6453,7 @@ get_var_and_type (varname, value, ind, q
 {
   if (value && vtype == VT_VARIABLE)
{
+ *varp = find_variable (vname);
  if (quoted & (Q_DOUBLE_QUOTES|Q_HERE_DOCUMENT))
*valp = dequote_string (value);
  else
@@ -6642,6 +6643,8 @@ pat_subst (string, pat, rep, mflags)
*   with REP and return the result.
*   2.  A null pattern with mtype == MATCH_END means to append REP to
*   STRING and return the result.
+   *   3.  A null STRING with a matching pattern means to append REP to
+   *   STRING and return the result.
* These don't understand or process `&' in the replacement string.
*/
   if ((pat == 0 || *pat == 0) && (mtype == MATCH_BEG || mtype == MATCH_END))
@@ -6663,17 +6666,27 @@ pat_subst (string, pat, rep, mflags)
}
   return (ret);
 }
+  else if (*string == 0 && (match_pattern (string, pat, mtype, &s, &e) != 0))
+{
+  replen = STRLEN (rep);
+  ret = (char *)xmalloc (replen + 1);
+  if (replen == 0)
+   ret[0] = '\0';
+  else
+   strcpy (ret, rep);
+  return (ret);
+}
 
   ret = (char *)xmalloc (rsize = 64);
   ret[0] = '\0';
 
-  for (replen = STRLEN (rep), rptr = 0, str = string;;)
+  for (replen = STRLEN (rep), rptr = 0, str = string; *str;)
 {
   if (match_pattern (str, pat, mtype, &s, &e) == 0)
break;
   l = s - str;
 
-  if (rxpand)
+  if (rep && rxpand)
 {
   int x;
   mlen = e - s;
@@ -6682,6 +6695,7 @@ pat_subst (string, pat, rep, mflags)
mstr[x] = s[x];
   mstr[mlen] = '\0';
   rstr = strcreplace (rep, '&', mstr, 0);
+ free (mstr);
   rslen = strlen (rstr);
 }
   else
--- lib/glob/gmisc.c
+++ lib/glob/gmisc.c2018-09-24 10:46:30.673185840 +
@@ -53,7 +53,7 @@ match_pattern_wchar (wpat, wstring)
   wchar_t wc;
 
   if (*wstring == 0)
-return (0);
+return (*wpat == L'*');/* XXX - allow only * to match empty string */
 
   switch (wc = *wpat++)
 {
@@ -230,7 +230,7 @@ match_pattern_char (pat, string)
   char c;
 
   if (*string == 0)
-return (0);
+return (*pat == '*');  /* XXX - allow only * to match empty string */
 
   switch (c = *pat++)
 {
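The behavior change can be exercised directly from the shell (a minimal reproduction; the "> <" output is what a fixed bash, such as 4.4.23 or a build with this patch, prints):

```shell
T=""
# On the affected 4.3.48 builds the null string never matches '*',
# so nothing is substituted and this prints "><".
# With the fix, '*' matches the empty string, the replacement " "
# is inserted, and this prints "> <".
echo ">${T//*/ }<"
```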




Re: bug-bash

2018-02-09 Thread Nolan

On 02/09/2018 05:54 AM, Eduardo A. Bustamante López wrote:

On Thu, Feb 08, 2018 at 05:51:06PM -0800, Nolan wrote:

On 02/08/2018 05:04 PM, Eduardo Bustamante wrote:

On Thu, Feb 8, 2018 at 4:23 PM, Nolan <4030...@gmail.com> wrote:

I have found a 'result' of a command that cannot be a feature.

Enter command, command executes, prints exit at the prompt.

Goes to next line in terminal showing the "#" prompt.

A "whoami" says root.

Is this known?
Do you need screen captures of my terminal session?

Nolan



What command are you running? Also run: type -a COMMAND_THAT_YOU_ARE_RUNNING

Most likely, you're running something that causes your shell to exit,
and if you logged in as root and then changed to your user, it
explains why you're left at a root owned shell.



I always log in a user, and only change to root as needed.

See the attached screen captures.

I was reading the Bash manual and practicing entering
the commands. Thereby learning them.

If you need anything else, just holler.

Thanks for the reply.
Nolan

ps: the 671 byte file is #1



root@databank:~# su nolan
nolan@databank:/root$ cd ~/bash
nolan@databank:~/bash$ exec < ls.sh
nolan@databank:~/bash$ ls -lA
total 40
-rw-r--r-- 1 nolan nolan   671 Feb  8 01:32 bug_in_bash.\?
-rw-r--r-- 1 nolan nolan 7 Feb  8 01:25 ls.sh
-rwxr-xr-x 1 nolan nolan   156 Feb  7 02:10 myCopy
-rwxr-xr-x 1 nolan nolan   130 Feb  7 02:39 myHello
-rw------- 1 nolan nolan 12288 Feb  7 02:39 .myHello.swp
-rw-r--r-- 1 nolan nolan    74 Jan 29 13:14 test8
-rw-r--r-- 1 nolan nolan   275 Feb  7 02:20 testFile
-rwxr-xr-x 1 nolan nolan   144 Feb  7 02:33 varassign
nolan@databank:~/bash$ exit
root@databank:~# whoami
root
root@databank:~#




nolan@databank:~/bash$ echo ls -lA >> ls.sh
nolan@databank:~/bash$ read < ls.sh
nolan@databank:~/bash$ readline < ls.sh
bash: readline: command not found
nolan@databank:~/bash$ exec < ls.sh
nolan@databank:~/bash$ ls -lA
total 36
-rw-r--r-- 1 nolan nolan 7 Feb  8 01:25 ls.sh
-rwxr-xr-x 1 nolan nolan   156 Feb  7 02:10 myCopy
-rwxr-xr-x 1 nolan nolan   130 Feb  7 02:39 myHello
-rw------- 1 nolan nolan 12288 Feb  7 02:39 .myHello.swp
-rw-r--r-- 1 nolan nolan    74 Jan 29 13:14 test8
-rw-r--r-- 1 nolan nolan   275 Feb  7 02:20 testFile
-rwxr-xr-x 1 nolan nolan   144 Feb  7 02:33 varassign
nolan@databank:~/bash$ exit
root@databank:~# whoami
root
root@databank:~#


Please keep the bug-bash mailing list in your replies.


There's no bug here.  You log into the machine as 'root', then 'su' into your
user; then, you run 'exec < ls.sh', which changes the input of the current
shell from terminal input to whatever is in 'ls.sh'. Once the shell consumes
the 'ls.sh' file, it detects the end-of-file, and exits, leaving you at the
'root' shell, where you started.


Also, for general inquiries, use help-bash,  not bug-bash.


Sorry about that, I hit the wrong reply.
I must have forgotten which terminal I was in,
as I only log in as user 'nolan'
and `su' to root as needed, and then su back to 'nolan'.
I do keep 3 or 4 ttys open.
Thanks for the reply.




Re: bug-bash

2018-02-09 Thread Eduardo A . Bustamante López
On Thu, Feb 08, 2018 at 05:51:06PM -0800, Nolan wrote:
> On 02/08/2018 05:04 PM, Eduardo Bustamante wrote:
> > On Thu, Feb 8, 2018 at 4:23 PM, Nolan <4030...@gmail.com> wrote:
> > > I have found a 'result' of a command that cannot be a feature.
> > > 
> > > Enter command, command executes, prints exit at the prompt.
> > > 
> > > Goes to next line in terminal showing the "#" prompt.
> > > 
> > > A "whoami" says root.
> > > 
> > > Is this known?
> > > Do you need screen captures of my terminal session?
> > > 
> > > Nolan
> > > 
> > 
> > What command are you running? Also run: type -a COMMAND_THAT_YOU_ARE_RUNNING
> > 
> > Most likely, you're running something that causes your shell to exit,
> > and if you logged in as root and then changed to your user, it
> > explains why you're left at a root owned shell.
> > 
> 
> I always log in a user, and only change to root as needed.
> 
> See the attached screen captures.
> 
> I was reading the Bash manual and practicing entering
> the commands. Thereby learning them.
> 
> If you need anything else, just holler.
> 
> Thanks for the reply.
> Nolan
> 
> ps: the 671 byte file is #1

> root@databank:~# su nolan
> nolan@databank:/root$ cd ~/bash
> nolan@databank:~/bash$ exec < ls.sh
> nolan@databank:~/bash$ ls -lA
> total 40
> -rw-r--r-- 1 nolan nolan   671 Feb  8 01:32 bug_in_bash.\?
> -rw-r--r-- 1 nolan nolan 7 Feb  8 01:25 ls.sh
> -rwxr-xr-x 1 nolan nolan   156 Feb  7 02:10 myCopy
> -rwxr-xr-x 1 nolan nolan   130 Feb  7 02:39 myHello
> -rw------- 1 nolan nolan 12288 Feb  7 02:39 .myHello.swp
> -rw-r--r-- 1 nolan nolan    74 Jan 29 13:14 test8
> -rw-r--r-- 1 nolan nolan   275 Feb  7 02:20 testFile
> -rwxr-xr-x 1 nolan nolan   144 Feb  7 02:33 varassign
> nolan@databank:~/bash$ exit
> root@databank:~# whoami
> root
> root@databank:~# 
> 

> nolan@databank:~/bash$ echo ls -lA >> ls.sh
> nolan@databank:~/bash$ read < ls.sh
> nolan@databank:~/bash$ readline < ls.sh
> bash: readline: command not found
> nolan@databank:~/bash$ exec < ls.sh
> nolan@databank:~/bash$ ls -lA
> total 36
> -rw-r--r-- 1 nolan nolan 7 Feb  8 01:25 ls.sh
> -rwxr-xr-x 1 nolan nolan   156 Feb  7 02:10 myCopy
> -rwxr-xr-x 1 nolan nolan   130 Feb  7 02:39 myHello
> -rw------- 1 nolan nolan 12288 Feb  7 02:39 .myHello.swp
> -rw-r--r-- 1 nolan nolan    74 Jan 29 13:14 test8
> -rw-r--r-- 1 nolan nolan   275 Feb  7 02:20 testFile
> -rwxr-xr-x 1 nolan nolan   144 Feb  7 02:33 varassign
> nolan@databank:~/bash$ exit
> root@databank:~# whoami
> root
> root@databank:~# 

Please keep the bug-bash mailing list in your replies.


There's no bug here.  You log into the machine as 'root', then 'su' into your
user; then, you run 'exec < ls.sh', which changes the input of the current
shell from terminal input to whatever is in 'ls.sh'. Once the shell consumes
the 'ls.sh' file, it detects the end-of-file, and exits, leaving you at the
'root' shell, where you started.


Also, for general inquiries, use help-bash,  not bug-bash.
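The mechanism is easy to observe without disturbing the current shell: start a child bash whose standard input is a file, and it executes the file's contents and exits at end-of-file, exactly as 'exec < ls.sh' does to the running shell (a minimal sketch; the /tmp path and file contents are illustrative):

```shell
# The child shell reads commands from the redirected stdin, runs
# them, hits end-of-file, and exits -- control returns to whoever
# started it, just like the 'exec < ls.sh' case above.
printf 'echo hello from the redirected input\n' > /tmp/demo-input.sh
bash < /tmp/demo-input.sh
echo "child shell exited with status $?"
```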



Re: bug-bash

2018-02-08 Thread Eduardo Bustamante
On Thu, Feb 8, 2018 at 4:23 PM, Nolan <4030...@gmail.com> wrote:
> I have found a 'result' of a command that cannot be a feature.
>
> Enter command, command executes, prints exit at the prompt.
>
> Goes to next line in terminal showing the "#" prompt.
>
> A "whoami" says root.
>
> Is this known?
> Do you need screen captures of my terminal session?
>
> Nolan
>

What command are you running? Also run: type -a COMMAND_THAT_YOU_ARE_RUNNING

Most likely, you're running something that causes your shell to exit,
and if you logged in as root and then changed to your user, it
explains why you're left at a root owned shell.



bug-bash

2018-02-08 Thread Nolan

I have found a 'result' of a command that cannot be a feature.

Enter command, command executes, prints exit at the prompt.

Goes to next line in terminal showing the "#" prompt.

A "whoami" says root.

Is this known?
Do you need screen captures of my terminal session?

Nolan



Re: [BUG] Bash segfaults on an infinitely recursive function (resend)

2017-10-05 Thread Dan Douglas
On 10/05/2017 02:29 PM, Dan Douglas wrote:
> ...

Another band-aid might be to build bash with -fsplit-stack. Hardly
worth mentioning as it doesn't fix anything - you just run out of memory
instead of overflowing a fixed-size stack, should someone actually want
that for some reason.





Re: [BUG] Bash segfaults on an infinitely recursive function (resend)

2017-10-05 Thread Dan Douglas
On 09/25/2017 01:38 PM, Eric Blake wrote:
> On 09/24/2017 12:53 PM, Shlomi Fish wrote:
> 
>>
>> I see. Well, the general wisdom is that a program should not ever segfault, 
>> but
>> instead gracefully handle the error and exit.
> 
> This is possible by installing a SIGSEGV handler that is able to
> gracefully exit the program when stack overflow is detected (although
> such a handler is EXTREMELY limited in what it is able to safely do); in
> fact, the GNU libsigsegv library helps in this task, and is used by some
> other applications (such as GNU m4 and GNU awk) that also can cause
> infinite recursion on poor user input. However, Chet is not obligated to
> use it (even though the idea has been mentioned on the list before).
> 
>> Perhaps implement a maximal
>> recursion depth like zsh does.
> 
> Bash does, in the form of FUNCNEST, but you have to opt into it, as
> otherwise it would be an arbitrary limit, and arbitrary limits go
> against the GNU coding standards.
> 
> By the way, it is in general IMPOSSIBLE to write bash so that it can
> handle ALL possible bad user scripts and still remain responsive to
> further input.  Note that in my description of handling SIGSEGV above
> that I mention that it is only safe to gracefully turn what would
> otherwise be the default core dump into a useful error message - but
> bash STILL has to exit at that point, because you cannot guarantee what
> other resources (including malloc locks) might still be on the stack,
> where a longjmp back out to the main parsing loop may cause future
> deadlock if you do anything unsafe.  If you think you can make bash
> gracefully handle ALL possible bad inputs WITHOUT exiting or going into
> an infloop itself, then you are claiming that you have solved the
> Halting Problem, which any good computer scientist already knows has
> been proven to be undecidable.
> 

If a shell (that's interpreted) crashes due to overflowing its process's
actual call stack it can only be because the shell's "call_function"
function (or its callees) call call_function, and call_function is not
itself tail-recursive so the C compiler can't eliminate it. It should
be perfectly possible to implement that without any recursion so the
shell's stack representation (presumably on the heap) can grow without
affecting the real stack for EVERY call to any trivial shell function. I
don't know what kind of major surgery would be required on bash to fix
that. libsigsegv would only be a band-aid.





Re: [BUG] Bash segfaults on an infinitely recursive function (resend)

2017-10-04 Thread Shlomi Fish
Hi all,

On Mon, 25 Sep 2017 13:38:01 -0500
Eric Blake  wrote:

> On 09/24/2017 12:53 PM, Shlomi Fish wrote:
> 
> > 
> > I see. Well, the general wisdom is that a program should not ever segfault,
> > but instead gracefully handle the error and exit.  
> 
> This is possible by installing a SIGSEGV handler that is able to
> gracefully exit the program when stack overflow is detected (although
> such a handler is EXTREMELY limited in what it is able to safely do); in
> fact, the GNU libsigsegv library helps in this task, and is used by some
> other applications (such as GNU m4 and GNU awk) that also can cause
> infinite recursion on poor user input. However, Chet is not obligated to
> use it (even though the idea has been mentioned on the list before).
> 
> > Perhaps implement a maximal
> > recursion depth like zsh does.  
> 
> Bash does, in the form of FUNCNEST, but you have to opt into it, as
> otherwise it would be an arbitrary limit, and arbitrary limits go
> against the GNU coding standards.
>

thanks for all the replies! All I can suggest is that FUNCNEST be given a
reasonable default that can be overridden or unset. Not sure if this
is an acceptable solution.

Regards,

Shlomi Fish 
> By the way, it is in general IMPOSSIBLE to write bash so that it can
> handle ALL possible bad user scripts and still remain responsive to
> further input.  Note that in my description of handling SIGSEGV above
> that I mention that it is only safe to gracefully turn what would
> otherwise be the default core dump into a useful error message - but
> bash STILL has to exit at that point, because you cannot guarantee what
> other resources (including malloc locks) might still be on the stack,
> where a longjmp back out to the main parsing loop may cause future
> deadlock if you do anything unsafe.  If you think you can make bash
> gracefully handle ALL possible bad inputs WITHOUT exiting or going into
> an infloop itself, then you are claiming that you have solved the
> Halting Problem, which any good computer scientist already knows has
> been proven to be undecidable.
> 



-- 
-
Shlomi Fish   http://www.shlomifish.org/
http://www.shlomifish.org/humour/bits/New-versions-of-the-GPL/

“If it’s not bloat, it’s not us.”, said Richard Stallman, the colourful head of
the GNU project, and started to sing the Free Software song.
— “The GNU Project Will Integrate GNU Guile into GNU coreutils”

Please reply to list if it's a mailing list post - http://shlom.in/reply .



Re: [BUG] Bash segfaults on an infinitely recursive function (resend)

2017-09-25 Thread Eric Blake
On 09/24/2017 12:53 PM, Shlomi Fish wrote:

> 
> I see. Well, the general wisdom is that a program should not ever segfault, 
> but
> instead gracefully handle the error and exit.

This is possible by installing a SIGSEGV handler that is able to
gracefully exit the program when stack overflow is detected (although
such a handler is EXTREMELY limited in what it is able to safely do); in
fact, the GNU libsigsegv library helps in this task, and is used by some
other applications (such as GNU m4 and GNU awk) that also can cause
infinite recursion on poor user input. However, Chet is not obligated to
use it (even though the idea has been mentioned on the list before).

> Perhaps implement a maximal
> recursion depth like zsh does.

Bash does, in the form of FUNCNEST, but you have to opt into it, as
otherwise it would be an arbitrary limit, and arbitrary limits go
against the GNU coding standards.

By the way, it is in general IMPOSSIBLE to write bash so that it can
handle ALL possible bad user scripts and still remain responsive to
further input.  Note that in my description of handling SIGSEGV above
that I mention that it is only safe to gracefully turn what would
otherwise be the default core dump into a useful error message - but
bash STILL has to exit at that point, because you cannot guarantee what
other resources (including malloc locks) might still be on the stack,
where a longjmp back out to the main parsing loop may cause future
deadlock if you do anything unsafe.  If you think you can make bash
gracefully handle ALL possible bad inputs WITHOUT exiting or going into
an infloop itself, then you are claiming that you have solved the
Halting Problem, which any good computer scientist already knows has
been proven to be undecidable.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org





Re: [BUG] Bash segfaults on an infinitely recursive function (resend)

2017-09-25 Thread Greg Wooledge
On Sun, Sep 24, 2017 at 08:53:46PM +0300, Shlomi Fish wrote:
> I see. Well, the general wisdom is that a program should not ever segfault, 
> but
> instead gracefully handle the error and exit.

This only applies to applications, not to tools that let YOU write
applications.

I can write a trivial C program that gcc will compile into a program
that segfaults.  That doesn't mean gcc has a bug.  It means my C program
has a bug.

Likewise, if you write a shell script that causes a shell to recurse
infinitely and exceed its available stack space, the bug is in your
script, not in the shell that faithfully tried to run it.

(See also Chet's two replies pointing to FUNCNEST.)
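The gcc analogy can be made concrete (a sketch assuming gcc is installed; the /tmp paths are illustrative):

```shell
# gcc compiles this deliberately buggy program without complaint;
# running it segfaults. The bug is in the C source, not in gcc --
# just as a segfault from runaway recursion is a bug in the script,
# not in the shell that faithfully ran it.
cat > /tmp/crash.c <<'EOF'
int main(void) { char *p = 0; *p = 'x'; return 0; }
EOF
gcc -o /tmp/crash /tmp/crash.c
/tmp/crash
echo "exit status: $?"   # 128 + SIGSEGV = 139 on Linux
```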



Re: [BUG] Bash segfaults on an infinitely recursive function (resend)

2017-09-24 Thread Chet Ramey
On 9/24/17 1:53 PM, Shlomi Fish wrote:

> I see. Well, the general wisdom is that a program should not ever segfault, 
> but
> instead gracefully handle the error and exit. Perhaps implement a maximal
> recursion depth like zsh does. 

Perhaps read the documentation about the FUNCNEST variable. You get to
decide how much recursion you want.

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: [BUG] Bash segfaults on an infinitely recursive function (resend)

2017-09-24 Thread Shlomi Fish
Hi all,

On Sun, 24 Sep 2017 19:24:10 +0300
Pierre Gaston <pierre.gas...@gmail.com> wrote:

> On Sun, Sep 24, 2017 at 5:01 PM, Shlomi Fish <shlo...@shlomifish.org> wrote:
> 
> > Hi all,
> >
> > With bash git master on Mageia v7 x86-64, bash on Debian Stable and other
> > reported systems:
> >
> > shlomif@telaviv1:~$ /home/shlomif/apps/bash/bin/bash -c 'run() { run; } ;
> > run'
> > Segmentation fault (core dumped)
> > shlomif@telaviv1:~$
> >  
> 
> This, or some variant, has been reported multiple times.
> Like in most programming languages, you can easily write programs that
> behave badly; in this case you are exhausting the stack, as there is no
> tail call optimization.
> 

I see. Well, the general wisdom is that a program should not ever segfault, but
instead gracefully handle the error and exit. Perhaps implement a maximal
recursion depth like zsh does. Also see the first item at
https://www.joelonsoftware.com/2007/02/19/seven-steps-to-remarkable-customer-service/
about permanently fixing reported problems at their core instead of dealing
with user reports and requests time and again.

Regards,

Shlomi

> see for instance
> https://lists.gnu.org/archive/html/bug-bash/2012-09/msg00073.html
> and the following discussion
> https://lists.gnu.org/archive/html/bug-bash/2012-10/threads.html#5



-- 
-
Shlomi Fish   http://www.shlomifish.org/
https://youtu.be/GoEn1YfYTBM - Tiffany Alvord - “Fall Together”

Chuck Norris does not keep any numbers on his mobile phone’s address book.
Instead, he memorised the entire phone directory.
— http://www.shlomifish.org/humour/bits/facts/Chuck-Norris/

Please reply to list if it's a mailing list post - http://shlom.in/reply .



Re: [BUG] Bash segfaults on an infinitely recursive function

2017-09-24 Thread Chet Ramey
On 9/24/17 9:25 AM, Shlomi Fish wrote:
> Hi all,
> 
> With bash git master on Mageia v7 x86-64, bash on Debian Stable and other
> reported systems:
> 
> shlomif@telaviv1:~$ /home/shlomif/apps/bash/bin/bash -c 'run() { run; } ; run'
> Segmentation fault (core dumped)
> shlomif@telaviv1:~$ 
> 
> note that this is not a fork bomb as no processes are spawned, and it is also
> not an out-of-memory problem. I expect bash to fail on this, but it ought not
> to segfault.

This has come up many times in the past.

You wrote a recursive function that eventually exceeds your shell's
stack space allocation, and the kernel sends it a SIGSEGV.

If you want to constrain function recursion, look at the FUNCNEST
shell variable.
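A quick demonstration of the FUNCNEST behavior, run in a child bash so the runaway function cannot affect the current session:

```shell
# With FUNCNEST set, bash aborts the recursion with an error once
# the nesting level is exceeded, instead of letting the process
# exhaust its stack and die on SIGSEGV (exit status 139).
bash -c 'FUNCNEST=100; run() { run; }; run'
echo "exit status: $?"   # an ordinary nonzero status, not 139
```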

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/



Re: [BUG] Bash segfaults on an infinitely recursive function (resend)

2017-09-24 Thread Pierre Gaston
On Sun, Sep 24, 2017 at 5:01 PM, Shlomi Fish <shlo...@shlomifish.org> wrote:

> Hi all,
>
> With bash git master on Mageia v7 x86-64, bash on Debian Stable and other
> reported systems:
>
> shlomif@telaviv1:~$ /home/shlomif/apps/bash/bin/bash -c 'run() { run; } ;
> run'
> Segmentation fault (core dumped)
> shlomif@telaviv1:~$
>

This, or some variant, has been reported multiple times.
Like in most programming languages, you can easily write programs that
behave badly; in this case you are exhausting the stack, as there is no
tail call optimization.

see for instance
https://lists.gnu.org/archive/html/bug-bash/2012-09/msg00073.html
and the following discussion
https://lists.gnu.org/archive/html/bug-bash/2012-10/threads.html#5


[BUG] Bash segfaults on an infinitely recursive function (resend)

2017-09-24 Thread Shlomi Fish
Hi all,

With bash git master on Mageia v7 x86-64, bash on Debian Stable and other
reported systems:

shlomif@telaviv1:~$ /home/shlomif/apps/bash/bin/bash -c 'run() { run; } ; run'
Segmentation fault (core dumped)
shlomif@telaviv1:~$ 

note that this is not a fork bomb as no processes are spawned, and it is also
not an out-of-memory problem. I expect bash to fail on this, but it ought not
to segfault.

also:

shlomif@telaviv1:~$ dash -c 'run() { run; } ; run'
Segmentation fault (core dumped)
shlomif@telaviv1:~$ zsh -c 'run() { run; } ; run'
run: maximum nested function level reached

I hereby put the reproducing code under the
https://creativecommons.org/choose/zero/ .

Credit is due to rosa, ongy, anEpiov and other people of freenode's
##programming channel for inspiring this bug report and helping to diagnose it.

Regards,

Shlomi



[BUG] Bash segfaults on an infinitely recursive function

2017-09-24 Thread Shlomi Fish
Hi all,

With bash git master on Mageia v7 x86-64, bash on Debian Stable and other
reported systems:

shlomif@telaviv1:~$ /home/shlomif/apps/bash/bin/bash -c 'run() { run; } ; run'
Segmentation fault (core dumped)
shlomif@telaviv1:~$ 

note that this is not a fork bomb as no processes are spawned, and it is also
not an out-of-memory problem. I expect bash to fail on this, but it ought not
to segfault.

also:

shlomif@telaviv1:~$ dash -c 'run() { run; } ; run'
Segmentation fault (core dumped)
shlomif@telaviv1:~$ zsh -c 'run() { run; } ; run'
run: maximum nested function level reached

I hereby put the reproducing code under the
https://creativecommons.org/choose/zero/ .

Credit is due to rosa, ongy, anEpiov and other people of freenode's
##programming channel for inspiring this bug report and helping to diagnose it.

Regards,

Shlomi
-- 
-
Shlomi Fish   http://www.shlomifish.org/
Free (Creative Commons) Music Downloads, Reviews and more - http://jamendo.com/

For every A, Chuck Norris is both A and not-A.
Chuck Norris is freaking everything.
— http://www.shlomifish.org/humour/bits/facts/Chuck-Norris/




Re: [bug-bash] Named fifo's causing hanging bash scripts

2015-01-16 Thread Chet Ramey

On 1/13/15 4:29 AM, Dr. Werner Fink wrote:

 Bash Version: 4.3
 Patch Level: 33
 Release Status: release

 Description:
 Named fifo's causing hanging bash scripts like

 while IFS=| read a b c ; do
   [shell code]
 done < <(shell code)

 can cause random hangs of the bash. An strace shows that the bash
 stays in wait4()

 And when you attach to one of the hanging bash processes using gdb, what
 does the stack traceback look like?
 
 Yes (and sorry for the wrong email address as this was done on a clean 
 virtual system)
 
 there are two hanging bash processes together with the find command:
 
 werner   19062  0.8  0.0  11864  2868 ttyS0    S+   10:21   0:00 bash -x
 /tmp/brp-25-symlink
 werner   19063  0.0  0.0  11860  1920 ttyS0    S+   10:21   0:00 bash -x
 /tmp/brp-25-symlink
 werner   19064  0.2  0.0  16684  2516 ttyS0    S+   10:21   0:00 find . -type
 l -printf %p|%h|%l\n
 
 the gdb -p 19062 and gdb -p 19063 show
 
 (gdb) bt
 #0  0x7f530818a65c in waitpid () from /lib64/libc.so.6
 #1  0x0042b233 in waitchld (block=block@entry=1, wpid=19175) at 
 jobs.c:3235
 #2  0x0042c6da in wait_for (pid=pid@entry=19175) at jobs.c:2496

What do ps and gdb tell you about pid 19175 (and the corresponding pid in
the call to waitchld in the other traceback)?  Running, terminated, reaped,
other?

Chet


-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
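For reference, the reported script shape reads '|'-separated records from a process substitution; a stand-in producer is used below, since the hang itself is a race that this sketch does not reproduce:

```shell
# Same shape as the report: split each line on '|' into three fields.
# In the real script the producer is: find . -type l -printf '%p|%h|%l\n'
while IFS='|' read -r path dir target; do
    printf 'path=%s dir=%s target=%s\n' "$path" "$dir" "$target"
done < <(printf 'a|b|c\nd|e|f\n')
```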



Re: [bug-bash] Named fifo's causing hanging bash scripts

2015-01-16 Thread Dr. Werner Fink
On Fri, Jan 16, 2015 at 09:22:36AM -0500, Chet Ramey wrote:
 On 1/13/15 4:29 AM, Dr. Werner Fink wrote:
 
  Bash Version: 4.3
  Patch Level: 33
  Release Status: release
 
  Description:
  Named fifo's causing hanging bash scripts like
 
  while IFS=| read a b c ; do
[shell code]
  done < <(shell code)
 
  can cause random hangs of the bash. An strace shows that the
  bash
  stays in wait4()
 
  And when you attach to one of the hanging bash processes using gdb, what
  does the stack traceback look like?
  
  Yes (and sorry for the wrong email address as this was done on a clean 
  virtual system)
  
  there are two hanging bash processes together with the find command:
  
  werner   19062  0.8  0.0  11864  2868 ttyS0    S+   10:21   0:00 bash -x
  /tmp/brp-25-symlink
  werner   19063  0.0  0.0  11860  1920 ttyS0    S+   10:21   0:00 bash -x
  /tmp/brp-25-symlink
  werner   19064  0.2  0.0  16684  2516 ttyS0    S+   10:21   0:00 find .
  -type l -printf %p|%h|%l\n
  
  the gdb -p 19062 and gdb -p 19063 show
  
  (gdb) bt
  #0  0x7f530818a65c in waitpid () from /lib64/libc.so.6
  #1  0x0042b233 in waitchld (block=block@entry=1, wpid=19175) at 
  jobs.c:3235
  #2  0x0042c6da in wait_for (pid=pid@entry=19175) at jobs.c:2496
 
 What do ps and gdb tell you about pid 19175 (and the corresponding pid in
 the call to waitchld in the other traceback)?  Running, terminated, reaped,
 other?

  d136:~ # ps 10942
PID TTY  STAT   TIME COMMAND
  d136:~ #

... the process does not exist anymore. I guess this could belong to
the sed commands of the script.  The other thread is showing

  d136: # ps 10922
PID TTY  STAT   TIME COMMAND
  13177 pts/1    S+     0:00 find . -type l -printf %p|%h|%l\n

and the backtrace shows here

 0x7fccae8d4860 in __write_nocancel () from /lib64/libc.so.6
 #0  0x7fccae8d4860 in __write_nocancel () from /lib64/libc.so.6
 #1  0x7fccae86f6b3 in _IO_new_file_write () from /lib64/libc.so.6
 #2  0x7fccae86ed73 in new_do_write () from /lib64/libc.so.6
 #3  0x7fccae8704e5 in __GI__IO_do_write () from /lib64/libc.so.6
 #4  0x7fccae86fbe1 in __GI__IO_file_xsputn () from /lib64/libc.so.6
 #5  0x7fccae8416e0 in vfprintf () from /lib64/libc.so.6
 #6  0x7fccae8eec05 in __fprintf_chk () from /lib64/libc.so.6
 #7  0x004106d5 in ?? ()
 #8  0x0040a11b in ?? ()
 #9  0x0040afa9 in ?? ()
 #10 0x0040b0a6 in ?? ()
 #11 0x00409bfe in ?? ()
 #12 0x00409bfe in ?? ()
 #13 0x00404199 in ?? ()
 #14 0x00403911 in ?? ()
 #15 0x7fccae81cb05 in __libc_start_main () from /lib64/libc.so.6
 #16 0x004039dd in ?? ()

which IMHO could be related to the fact that the output of find is no longer being read(?)


 
 Chet

Werner

-- 
  Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool. -- Edward Burr




Re: [bug-bash] Named fifo's causing hanging bash scripts

2015-01-16 Thread Chet Ramey

On 1/16/15 10:32 AM, Dr. Werner Fink wrote:
 On Fri, Jan 16, 2015 at 09:22:36AM -0500, Chet Ramey wrote:
 On 1/13/15 4:29 AM, Dr. Werner Fink wrote:

 Bash Version: 4.3
 Patch Level: 33
 Release Status: release

 Description:
 Named fifo's causing hanging bash scripts like

 while IFS=| read a b c ; do
   [shell code]
 done < <(shell code)

 can cause random hangs of the bash. An strace shows that the
 bash
 stays in wait4()

 And when you attach to one of the hanging bash processes using gdb, what
 does the stack traceback look like?

 Yes (and sorry for the wrong email address as this was done on a clean 
 virtual system)

 there are two hanging bash processes together with the find command:

 werner   19062  0.8  0.0  11864  2868 ttyS0    S+   10:21   0:00 bash -x
 /tmp/brp-25-symlink
 werner   19063  0.0  0.0  11860  1920 ttyS0    S+   10:21   0:00 bash -x
 /tmp/brp-25-symlink
 werner   19064  0.2  0.0  16684  2516 ttyS0    S+   10:21   0:00 find .
 -type l -printf %p|%h|%l\n

 the gdb -p 19062 and gdb -p 19063 show

 (gdb) bt
 #0  0x7f530818a65c in waitpid () from /lib64/libc.so.6
 #1  0x0042b233 in waitchld (block=block@entry=1, wpid=19175) at 
 jobs.c:3235
 #2  0x0042c6da in wait_for (pid=pid@entry=19175) at jobs.c:2496

 What do ps and gdb tell you about pid 19175 (and the corresponding pid in
 the call to waitchld in the other traceback)?  Running, terminated, reaped,
 other?
 
   d136:~ # ps 10942
 PID TTY  STAT   TIME COMMAND
   d136:~ #
 
  ... the process does not exist anymore. I guess this could belong to
 the sed commands of the script.  

This is why I need to be able to reproduce it.  If the process got reaped,
when would it have happened and why would the call to wait_for() have
found a valid CHILD struct for it?  The whole loop runs with SIGCHLD
blocked, so it's not as if the signal handler could have reaped the
child out from under it.  I have questions but no way to find answers.
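For reference, the reported pattern boils down to a loop of this shape. This is a minimal sketch, not the original /tmp/brp-25-symlink script: the input lines are made up, and plain dirname stands in for the script's dirname_int helper. On a stock bash this completes normally; the hang above only appeared on a build compiled with -DMUST_UNBLOCK_CHLD=1.

```shell
#!/bin/bash
# Minimal sketch of the reported pattern: a while/read loop fed by
# process substitution, with a command substitution in the body that
# forks a child (as seen in the command_substitute frame of the
# backtrace).  Input lines and names are illustrative only.
count=0
while IFS='|' read -r link link_dir link_dest; do
    parent=$(dirname "$link")   # command substitution forks a child
    count=$((count + 1))
done < <(printf '%s|%s|%s\n' ./a/b . b ./c/d . d)
echo "$count"
```

Because the loop body runs in the current shell (only the feeding command is in the process substitution), any hang in wait4() shows up in the script's own process, matching the ps output above.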


- -- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (Darwin)

iEYEARECAAYFAlS5MjoACgkQu1hp8GTqdKvN5ACeK9XEiIQ1glUHC4hEF3ZTKJjL
dUkAoI6nnxKypXP3MFns6/TyaOHNmHL5
=x3Ck
-END PGP SIGNATURE-



Re: [bug-bash] Named fifo's causing hanging bash scripts

2015-01-16 Thread Dr. Werner Fink
On Fri, Jan 16, 2015 at 10:46:02AM -0500, Chet Ramey wrote:
 
  What do ps and gdb tell you about pid 19175 (and the corresponding pid in
  the call to waitchld in the other traceback)?  Running, terminated, reaped,
  other?
  
d136:~ # ps 10942
  PID TTY  STAT   TIME COMMAND
d136:~ #
  
  ... the process does not exist anymore. I guess that this could belong to
  the sed commands of the script.  
 
 This is why I need to be able to reproduce it.  If the process got reaped,
 when would it have happened and why would the call to wait_for() have
 found a valid CHILD struct for it?  The whole loop runs with SIGCHLD
 blocked, so it's not as if the signal handler could have reaped the
 child out from under it.  I have questions but no way to find answers.

OK, thanks for your effort ... I've stripped the spec file down step by step,
and the hangs disappeared once I commented out -DMUST_UNBLOCK_CHLD=1 (mea
culpa) ... many thanks for your help!

Werner

-- 
  Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool. -- Edward Burr


signature.asc
Description: Digital signature


Re: [bug-bash] Named fifo's causing hanging bash scripts

2015-01-16 Thread Jonathan Hankins
Dr. Fink,

Have you tried getting rid of the stderr redirect on your find command to
make sure find isn't showing any errors?

If you eliminate most of the inside of your while loop, does it still
hang?  For example:

while IFS=| read link link_dir link_dest; do
echo $link,$link_dir,$link_dest
done < <(find . -type l -printf '%p|%h|%l\n' 2>/dev/null)

-Jonathan Hankins


On Fri, Jan 16, 2015 at 9:46 AM, Chet Ramey chet.ra...@case.edu wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 1/16/15 10:32 AM, Dr. Werner Fink wrote:
  On Fri, Jan 16, 2015 at 09:22:36AM -0500, Chet Ramey wrote:
  On 1/13/15 4:29 AM, Dr. Werner Fink wrote:
 
  Bash Version: 4.3
  Patch Level: 33
  Release Status: release
 
  Description:
  Named fifo's causing hanging bash scripts like
 
  while IFS=| read a b c ; do
[shell code]
  done < <(shell code)
 
  can cause random hangs of the bash.  An strace shows that the bash
  stays in wait4()
 
  And when you attach to one of the hanging bash processes using gdb,
 what
  does the stack traceback look like?
 
  Yes (and sorry for the wrong email address as this was done on a clean
 virtual system)
 
  there are two hanging bash processes together with the find command:
 
  werner   19062  0.8  0.0  11864  2868 ttyS0S+   10:21   0:00 bash
 -x /tmp/brp-25-symlink
  werner   19063  0.0  0.0  11860  1920 ttyS0S+   10:21   0:00 bash
 -x /tmp/brp-25-symlink
  werner   19064  0.2  0.0  16684  2516 ttyS0S+   10:21   0:00 find
 . -type l -printf %p|%h|%l\n
 
  the gdb -p 19062 and gdb -p 19063 show
 
  (gdb) bt
  #0  0x7f530818a65c in waitpid () from /lib64/libc.so.6
  #1  0x0042b233 in waitchld (block=block@entry=1, wpid=19175)
 at jobs.c:3235
  #2  0x0042c6da in wait_for (pid=pid@entry=19175) at
 jobs.c:2496
 
  What do ps and gdb tell you about pid 19175 (and the corresponding pid
 in
  the call to waitchld in the other traceback)?  Running, terminated,
 reaped,
  other?
 
d136:~ # ps 10942
  PID TTY  STAT   TIME COMMAND
d136:~ #
 
  ... the process does not exist anymore. I guess that this could belong
 to
  the sed commands of the script.

 This is why I need to be able to reproduce it.  If the process got reaped,
 when would it have happened and why would the call to wait_for() have
 found a valid CHILD struct for it?  The whole loop runs with SIGCHLD
 blocked, so it's not as if the signal handler could have reaped the
 child out from under it.  I have questions but no way to find answers.


 - --
 ``The lyf so short, the craft so long to lerne.'' - Chaucer
  ``Ars longa, vita brevis'' - Hippocrates
 Chet Ramey, ITS, CWRU    c...@case.edu
 http://cnswww.cns.cwru.edu/~chet/
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.11 (Darwin)

 iEYEARECAAYFAlS5MjoACgkQu1hp8GTqdKvN5ACeK9XEiIQ1glUHC4hEF3ZTKJjL
 dUkAoI6nnxKypXP3MFns6/TyaOHNmHL5
 =x3Ck
 -END PGP SIGNATURE-




-- 

Jonathan Hankins    Homewood City Schools

The simplest thought, like the concept of the number one,
has an elaborate logical underpinning. - Carl Sagan

jhank...@homewood.k12.al.us



Re: [bug-bash] Named fifo's causing hanging bash scripts

2015-01-13 Thread Dr. Werner Fink
On Mon, Jan 12, 2015 at 11:50:56AM -0500, Chet Ramey wrote:
 On 1/12/15 9:55 AM, wer...@linux-8jdz.site wrote:
  Configuration Information [Automatically generated, do not change]:
  Machine: x86_64
  OS: linux-gnu
  Compiler: gcc -I/home/abuild/rpmbuild/BUILD/bash-4.3 
  -L/home/abuild/rpmbuild/BUILD/bash-4.3/../readline-6.3
  Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
  -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-suse-linux-gnu' 
  -DCONF_VENDOR='suse' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' 
  -DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib   -fmessage-length=0 
  -grecord-gcc-switches -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector 
  -funwind-tables -fasynchronous-unwind-tables -g  -D_GNU_SOURCE 
  -DRECYCLES_PIDS -Wall -g -Wuninitialized -Wextra -Wno-unprototyped-calls 
  -Wno-switch-enum -Wno-unused-variable -Wno-unused-parameter 
  -Wno-parentheses -ftree-loop-linear -pipe -DBNC382214=0 
  -DMUST_UNBLOCK_CHLD=1 -DIMPORT_FUNCTIONS_DEF=0 -fprofile-use
  uname output: Linux d136 3.15.0-rc7-3-desktop #1 SMP PREEMPT Wed May 28 
  15:39:51 UTC 2014 (96f5b60) x86_64 x86_64 x86_64 GNU/Linux
  Machine Type: x86_64-suse-linux-gnu
  
  Bash Version: 4.3
  Patch Level: 33
  Release Status: release
  
  Description:
  Named fifo's causing hanging bash scripts like
  
  while IFS=| read a b c ; do
[shell code]
  done < <(shell code)
  
  can cause random hangs of the bash.  An strace shows that the bash
  stays in wait4()
 
 And when you attach to one of the hanging bash processes using gdb, what
 does the stack traceback look like?

Yes (and sorry for the wrong email address as this was done on a clean virtual 
system)

there are two hanging bash processes together with the find command:

werner   19062  0.8  0.0  11864  2868 ttyS0S+   10:21   0:00 bash -x 
/tmp/brp-25-symlink
werner   19063  0.0  0.0  11860  1920 ttyS0S+   10:21   0:00 bash -x 
/tmp/brp-25-symlink
werner   19064  0.2  0.0  16684  2516 ttyS0S+   10:21   0:00 find . -type l 
-printf %p|%h|%l\n

the gdb -p 19062 and gdb -p 19063 show

(gdb) bt
#0  0x7f530818a65c in waitpid () from /lib64/libc.so.6
#1  0x0042b233 in waitchld (block=block@entry=1, wpid=19175) at 
jobs.c:3235
#2  0x0042c6da in wait_for (pid=pid@entry=19175) at jobs.c:2496
#3  0x004302e1 in command_substitute (string=string@entry=0x22ccd80 
dirname_int $link, 
quoted=quoted@entry=1) at subst.c:5534
#4  0x004704db in param_expand (string=string@entry=0x22cc8d0 
$(dirname_int $link), 
sindex=sindex@entry=0x7fff39f90ef0, quoted=quoted@entry=1, 
expanded_something=expanded_something@entry=0x0, 
contains_dollar_at=contains_dollar_at@entry=0x7fff39f90f20, 
quoted_dollar_at_p=quoted_dollar_at_p@entry=0x7fff39f90f00, 
had_quoted_null_p=had_quoted_null_p@entry=0x7fff39f90f10, pflags=0) at 
subst.c:7970
#5  0x00471123 in expand_word_internal (word=word@entry=0x22cc1a0, 
quoted=quoted@entry=1, 
isexp=isexp@entry=0, 
contains_dollar_at=contains_dollar_at@entry=0x7fff39f91080, 
expanded_something=expanded_something@entry=0x0) at subst.c:8393
#6  0x0047130f in expand_word_internal (word=word@entry=0x7fff39f91120, 
quoted=quoted@entry=0, 
isexp=isexp@entry=0, contains_dollar_at=contains_dollar_at@entry=0x0, 
expanded_something=expanded_something@entry=0x0) at subst.c:8548
#7  0x00472daf in call_expand_word_internal (e=0x0, c=0x0, i=0, q=0, 
w=0x7fff39f91120) at subst.c:3299
#8  expand_string_assignment (string=string@entry=0x22cb159 \$(dirname_int 
$link)\, quoted=quoted@entry=0)
at subst.c:3387
#9  0x00473110 in expand_string_if_necessary (string=optimized out, 
string@entry=0x22cb159 \$(dirname_int $link)\, quoted=quoted@entry=0, 
func=func@entry=0x472d50 expand_string_assignment) at subst.c:3092
#10 0x00473349 in do_assignment_internal (word=0x22cbbe0, expand=1) at 
subst.c:2823
#11 0x0047776a in do_word_assignment (flags=optimized out, 
word=optimized out) at subst.c:2912
#12 expand_word_list_internal (eflags=optimized out, list=optimized out) at 
subst.c:9669
#13 expand_words (list=0x) at subst.c:9280
#14 0x00461093 in execute_simple_command (simple_command=0x22c1ed0, 
pipe_in=pipe_in@entry=-1, 
pipe_out=pipe_out@entry=-1, async=async@entry=0, 
fds_to_close=fds_to_close@entry=0x22ccce0)
at execute_cmd.c:4001
#15 0x004629fc in execute_command_internal (command=0x22bc9e0, 
asynchronous=asynchronous@entry=0, 
pipe_in=pipe_in@entry=-1, pipe_out=pipe_out@entry=-1, 
fds_to_close=fds_to_close@entry=0x22ccce0)
at execute_cmd.c:788
#16 0x00462ba6 in execute_connection (fds_to_close=0x22ccce0, 
pipe_out=-1, pipe_in=-1, asynchronous=0, 
command=0x22c0bd0) at execute_cmd.c:2497
#17 execute_command_internal (command=command@entry=0x22c0bd0, 
asynchronous=asynchronous@entry=0, 
pipe_in=pipe_in@entry=-1, 

Re: [bug-bash] Troublesome checkwinsize (none) behaviour

2013-07-15 Thread Dr. Werner Fink
On Sat, Jul 13, 2013 at 02:52:07AM -0700, Linda Walsh wrote:
 Not even putting an underscore in front or back of it.  'path' is
 not an uncommon name for shell scripts to use.
 
 Also, I assume you know that suse scripts export COLUMNS in places
 like  /etc/profile, /etc/csh.login and /etc/ksh.kshrc...

Not for normal login sessions, or do you work on/with an iSeries?

 Perhaps one of those is propagating to the error cases mentioned?

No it is not.

Werner

-- 
  Having a smoking section in a restaurant is like having
  a peeing section in a swimming pool. -- Edward Burr


pgp7e1hUU8ydH.pgp
Description: PGP signature


Re: bug: bash 4.2.20 impossibly slow

2012-03-18 Thread Somchai Smythe
On 3/16/12, Chet Ramey chet.ra...@case.edu wrote:
 On 3/14/12 2:14 PM, Somchai Smythe wrote:
 Hello,

 I am reporting a problem with performance, not correctness.

 While preparing some examples for a course lecture where I code the
 same algorithm in many languages to compare languages, I ran some code
 and while it was  reasonably quick with ksh, it would just apparently
 hang at 100% cpu in bash.  I finally let it run overnight and it does
 complete correctly in bash, but what takes ksh less than a minute
 takes bash 6 1/2 hours to complete (and keeping one core at 100% the
 entire 6.5 hours) on the same hardware.  I suspect there may be some
 special way to compile bash that I don't know about that maybe works
 with arrays differently, so I am reporting this.  I am not subscribed, so
 please cc: me.  I cannot use bashbug since my university blocks
 outgoing mail.  I used exactly the same file unmodified for the tests
 in ksh and bash.  My hope is that bash would be at least 'competitive'
 and complete it without being more than 10x slower.  As it is, I
 cannot use bash in the lecture (for this) since it is only 3 hours
 long and the program won't complete in that amount of time.

 BLS2 $bash --version
 GNU bash, version 4.2.20(2)-release (x86_64-unknown-linux-gnu)
 Copyright (C) 2011 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later
 http://gnu.org/licenses/gpl.html

   [...]

 The program run was a simple prime number sieve program using an array
 with only 50 elements:

 #! /bin/bash

 if ((${#}==1))
 then
   n=${1}
 else
   n=50
 fi
 for ((i=1;i<=n;i++))
 do
   ((a[i]=1))
 done
 for ((i=2;i<=(n/2);i++))
 do
   for ((j=2;j<=(n/i);j++))
   do
  ((k=i*j))
  ((a[k]=0))
   done
 done
 for ((i=1;i<=n;i++))
 do
   if ((a[i]!=0))
   then
 printf "%d\n" ${i}
   fi
 done
 exit 0

 When run like:
 time bash ./sieve.sh

 .
 499717
 499729
 499739
 499747
 499781
 499787
 499801
 499819
 499853
 499879
 499883
 499897
 499903
 499927
 499943
 499957
 499969
 499973
 499979

 real396m9.884s
 user395m43.102s
 sys 0m8.913s

   [...]

 Computer is a Core2 Duo with 3GB of ram running a pure64bit linux
 distribution with kernel 3.2.9 with gcc 4.5.3 and glibc 2.14.1.

 Both programs get the same answer, so this is not a correctness issue,
 but instead a performance issue.

   [...]

 Experimenting a bit shows me that at  elements bash is still
 reasonably fast, but at 2 elements it takes:

 real0m39.077s
 user0m38.807s
 sys 0m0.150s

 For that, ksh takes:

 real0m1.631s
 user0m1.560s
 sys 0m0.007s

 Perhaps that shorter total time still shows the problem dramatically
 enough that runs that size can be used to track down the problem
 without having to wait hours for the test runs.

 As I said in my earlier reply, the bash array implementation uses sparse
 doubly-linked lists.  Inserts that don't append a value to the end of
 the array take O(N) instead of O(1), since you have to search the list
 to find the right spot to insert the new element.

 There is a lot that can be done to accommodate sequential insertion
 patterns, which fits the sieve algorithm pretty well.  It would be
 pretty simple, since the array code already maintains a pointer to the
 last reference; starting the search for the spot to insert at the last
 reference should reduce the search time considerably.

 That makes a pretty big difference.  Starting at  elements, but
 removing the echos to minimize output:

 (bash)
 real  0m1.833s
 user  0m1.704s
 sys   0m0.118s

 (bash-4.2.24)
 real  0m2.728s
 user  0m2.603s
 sys   0m0.114s

 With 2 elements:
 (bash)
 real  0m6.957s
 user  0m6.437s
 sys   0m0.377s

 (bash-4.2.24)
 real  0m16.520s
 user  0m16.113s
 sys   0m0.387s

 You start seeing real differences at 9 elements (for what it's worth,
 loading up the array initially takes about 1.2s of that total):

 (bash)
 real  0m32.329s
 user  0m30.845s
 sys   0m1.376s

 (bash-4.2.24)
 real  2m46.972s
 user  2m45.095s
 sys   0m1.590s

 At this point I quit testing bash-4.2.24; I don't have *that* much time.
 The rest of these are just with the modified development sources, and not
 in a really rigorous way, since I was using the machine for other things
 at the time.

 29 elements:
 real  3m24.318s
 user  3m19.463s
 sys   0m4.768s

 39 elements:
 real  5m27.585s
 user  5m20.870s
 sys   0m6.511s

 49 elements:
 real  8m9.997s
 user  8m0.963s
 sys   0m8.553s

 It's never going to be O(1) unless I rewrite the whole array module, and
 I'm not saying that the bash builtin malloc's memory allocation patterns
 don't have an effect, but small changes (which I've attached) can make a
 big difference.

 Chet

Thank you very much for the patch; it makes a very dramatic difference
for this use case.  In my testing, it went from 6.5 hours to just over
20 minutes.

I guess this wasn't the default since it may not be great for random
access, but I hope that 

Re: bug: bash 4.2.20 impossibly slow

2012-03-16 Thread Chet Ramey
On 3/14/12 2:14 PM, Somchai Smythe wrote:
 Hello,
 
 I am reporting a problem with performance, not correctness.
 
 While preparing some examples for a course lecture where I code the
 same algorithm in many languages to compare languages, I ran some code
 and while it was  reasonably quick with ksh, it would just apparently
 hang at 100% cpu in bash.  I finally let it run overnight and it does
 complete correctly in bash, but what takes ksh less than a minute
 takes bash 6 1/2 hours to complete (and keeping one core at 100% the
 entire 6.5 hours) on the same hardware.  I suspect there may be some
 special way to compile bash that I don't know about that maybe works
 with arrays differently, so I am reporting this.  I am not subscribed, so
 please cc: me.  I cannot use bashbug since my university blocks
 outgoing mail.  I used exactly the same file unmodified for the tests
 in ksh and bash.  My hope is that bash would be at least 'competitive'
 and complete it without being more than 10x slower.  As it is, I
 cannot use bash in the lecture (for this) since it is only 3 hours
 long and the program won't complete in that amount of time.
 
 BLS2 $bash --version
 GNU bash, version 4.2.20(2)-release (x86_64-unknown-linux-gnu)
 Copyright (C) 2011 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html

[...]

 The program run was a simple prime number sieve program using an array
 with only 50 elements:
 
 #! /bin/bash
 
 if ((${#}==1))
 then
   n=${1}
 else
   n=50
 fi
  for ((i=1;i<=n;i++))
  do
    ((a[i]=1))
  done
  for ((i=2;i<=(n/2);i++))
  do
    for ((j=2;j<=(n/i);j++))
    do
   ((k=i*j))
   ((a[k]=0))
    done
  done
  for ((i=1;i<=n;i++))
  do
    if ((a[i]!=0))
    then
  printf "%d\n" ${i}
    fi
  done
 exit 0
 
 When run like:
 time bash ./sieve.sh
 
 .
 499717
 499729
 499739
 499747
 499781
 499787
 499801
 499819
 499853
 499879
 499883
 499897
 499903
 499927
 499943
 499957
 499969
 499973
 499979
 
 real396m9.884s
 user395m43.102s
 sys 0m8.913s
 
[...]

 Computer is a Core2 Duo with 3GB of ram running a pure64bit linux
 distribution with kernel 3.2.9 with gcc 4.5.3 and glibc 2.14.1.
 
 Both programs get the same answer, so this is not a correctness issue,
 but instead a performance issue.
 
[...]

 Experimenting a bit shows me that at  elements bash is still
 reasonably fast, but at 2 elements it takes:
 
 real0m39.077s
 user0m38.807s
 sys 0m0.150s
 
 For that, ksh takes:
 
 real0m1.631s
 user0m1.560s
 sys 0m0.007s
 
 Perhaps that shorter total time still shows the problem dramatically
 enough that runs that size can be used to track down the problem
 without having to wait hours for the test runs.

As I said in my earlier reply, the bash array implementation uses sparse
doubly-linked lists.  Inserts that don't append a value to the end of
the array take O(N) instead of O(1), since you have to search the list
to find the right spot to insert the new element.

There is a lot that can be done to accommodate sequential insertion
patterns, which fits the sieve algorithm pretty well.  It would be
pretty simple, since the array code already maintains a pointer to the
last reference; starting the search for the spot to insert at the last
reference should reduce the search time considerably.
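The access-pattern difference is easy to exercise from the shell itself. This is an illustrative sketch, not the attached patch: ascending indices always land at the position of the last reference, while scattered indices force the list search whose cost the patch reduces. The array names and size here are arbitrary.

```shell
#!/bin/bash
# Illustrative sketch of the two assignment patterns discussed above.
# Sequential assignment appends at the tail of the sparse linked list;
# scattered assignment (like the sieve's a[i*j]=0 inner loop) forces a
# search for the insertion point.  Size is arbitrary.
n=2000
unset seq_arr rand_arr
for ((i = 0; i < n; i++)); do
    seq_arr[i]=1                 # ascending indices: insert at the end
done
for ((i = 0; i < n; i++)); do
    rand_arr[RANDOM % n]=1       # scattered indices: triggers the search
done
echo "${#seq_arr[@]} sequential assignments done"
```

Timing the two loops with `time` on a pre-patch bash shows the scattered case growing much faster with n, which is the effect the sieve script ran into.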

That makes a pretty big difference.  Starting at  elements, but
removing the echos to minimize output:

(bash)
real0m1.833s
user0m1.704s
sys 0m0.118s

(bash-4.2.24)
real0m2.728s
user0m2.603s
sys 0m0.114s

With 2 elements:
(bash)
real0m6.957s
user0m6.437s
sys 0m0.377s

(bash-4.2.24)
real0m16.520s
user0m16.113s
sys 0m0.387s

You start seeing real differences at 9 elements (for what it's worth,
loading up the array initially takes about 1.2s of that total):

(bash)
real0m32.329s
user0m30.845s
sys 0m1.376s

(bash-4.2.24)
real2m46.972s
user2m45.095s
sys 0m1.590s

At this point I quit testing bash-4.2.24; I don't have *that* much time.
The rest of these are just with the modified development sources, and not
in a really rigorous way, since I was using the machine for other things
at the time.

29 elements:
real3m24.318s
user3m19.463s
sys 0m4.768s

39 elements:
real5m27.585s
user5m20.870s
sys 0m6.511s

49 elements:
real8m9.997s
user8m0.963s
sys 0m8.553s

It's never going to be O(1) unless I rewrite the whole array module, and
I'm not saying that the bash builtin malloc's memory allocation patterns
don't have an effect, but small changes (which I've attached) can make a
big difference.

Chet
-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, ITS, CWRU    c...@case.edu    http://cnswww.cns.cwru.edu/~chet/
*** ../bash-4.2-patched/array.c	2009-03-29 22:16:43.0 -0400
--- array.c	
