Re: bash 5.0.2(1) multiline command in history bug
Thank you, but it definitely happens in the up-to-date MacPorts bash-5.0.2(1) distribution. I'll try looking at their patches, building a completely unpatched 5.0.2, and contacting the MacPorts developers to let them know if it is a problem with their patches. It uses Apple's EditLine, not readline, which could have something to do with it.

Thanks & Best Regards, Jason

On 04/02/2019, Chet Ramey wrote:
> On 2/4/19 3:22 AM, Jason Vas Dias wrote:
>> Good day -
>>
>> Under bash 4.4.23, with emacs history editing enabled, I can do:
>>
>> $ echo '1
>> > 2
>> > 3
>> > '
>> 1
>> 2
>> 3
>> $
>>
>> and I can then press the up-arrow (move-up / history-previous) key,
>> and the same command, including embedded new lines in the arguments,
>> is echoed back to me, and I can press Enter to repeat exactly that
>> command (scroll up in history and repeat last command).
>>
>> Now, with bash-5.0.2, this capability is removed: scrolling up in
>> the history, if the previous command had a multi-line argument,
>> shows the multiline argument folded, like:
>>
>> $ echo '1 2 3 '
>
> I can't reproduce this.  I get:
>
> $ echo $BASH_VERSION
> 5.0.2(2)-release
> $ echo '1
> > 2
> > 3'
> 1
> 2
> 3
> [C-P here]
> $ echo '1
> 2
> 3'
>
> --
> ``The lyf so short, the craft so long to lerne.'' - Chaucer
> ``Ars longa, vita brevis'' - Hippocrates
> Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
bash 5.0.2(1) multiline command in history bug
Good day -

Under bash 4.4.23, with emacs history editing enabled, I can do:

$ echo '1
> 2
> 3
> '
1
2
3
$

and I can then press the up-arrow (move-up / history-previous) key, and the same command, including embedded new lines in the arguments, is echoed back to me, and I can press Enter to repeat exactly that command (scroll up in history and repeat last command).

Now, with bash-5.0.2, this capability is removed: scrolling up in the history, if the previous command had a multi-line argument, shows the multiline argument folded, like:

$ echo '1 2 3 '

and, even worse, it has actually edited the command to remove the new lines, so it runs a different command (without new lines) when repeating a historical command - very bad! History is now unreliable and broken in bash!

I use multi-line sed commands frequently, which bash 5.0.2 now stores incorrectly in the history file and is incapable of repeating. So bash-5.0.2 has been made essentially unusable for entering commands that have arguments which contain new lines, and it now edits historical commands unconditionally & automatically, without user initiation of command editing.

Can either of these new behaviors be disabled in bash 5.0.2 ?

Thanks & Best Regards,
Jason Vas Dias
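Two shopt options govern how multi-line commands are stored in the history, so checking them is a reasonable first diagnostic (an editor's sketch; it does not by itself explain the recall-editing regression reported above):

```shell
#!/bin/bash
# cmdhist: save all lines of a multi-line command as a single
#          history entry
# lithist: with cmdhist on, keep the embedded newlines instead of
#          folding the command onto one line with semicolons
shopt -s cmdhist lithist
shopt cmdhist lithist    # show the resulting settings
```

With both options on, a multi-line quoted argument is recalled with its newlines intact, which is the 4.4.23 behaviour the mail describes.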
q// / qq// syntax or built-ins please ?
Please, in some future version of bash, could it provide support / help for avoiding "quoting hell", in such situations as:

$ echo "
 Error: Cannot find file '/missing/file_1'.
 Error: Cannot find file '/missing/file_2'.
" | while read line; do
    cmd='if [[ '$'"'"$line"$'"'' =~ ^[^'"\\'"']*['"\\'"']([^'"\\'"']+)['"\\'"'][^'"\\'"']*$ ]]; then echo ${BASH_REMATCH[1]}; fi;'
    echo "$cmd" | bash -
done

See what I have to do to match lines containing a non-empty single-quoted string? I.e. I just want to cut-and-paste such lines from the output of some application, weed out the empty lines, and print the single-quoted string in lines containing them (only!), with a simple bash command.

If you replace 'echo "$cmd" | bash -' with 'eval "$cmd"', it does not work, because the double-quotes which I had painstakingly inserted with '$'"'' get removed somehow by eval - why is this? I.e. only if "$line" is empty does bash evaluate the text 'if [[ "" = ... ]]; then ...'; else, for the lines I want to match, it would evaluate e.g.:

+ eval 'if [[ Error: Cannot find file '\''/missing_file_1'\''.\" =~ ^[^\\\'\'']*[\\\'\'']([\\\'\'']+)[\\\'\''][^\\\'\'']*$ ]]; then echo ${BASH_REMATCH[1]}; fi;'
+ set +x
(nothing printed - the single quotes are stripped)
+ eval 'if [[ \"\" =~ ^[^\\\'\'']*[\\\'\'']([\\\'\'']+)[\\\'\''][^\\\'\'']*$ ]]; then echo ${BASH_REMATCH[1]}; fi;'
++ [[ "" =~ ^[^\']*[\']([\']+)[\'][^\']*$ ]]
+ set +x

I think bash needs some kind of 'q/.../' and 'qq/.../' syntax / built-ins, or whatever syntax its author likes, like Perl has, whereby the single quote ("'") or double quote ('"') respectively is totally ignored within the '/.../' parameter strings, which should use a different quoting character than '"' or "'" to delineate them. If the author won't develop these, I will & will send a patch.

Regards, Jason
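For the record - this is an editor's sketch, not part of the original mail - the usual way out of this particular quoting hell is to keep the ERE in a shell variable; [[ =~ ]] treats an unquoted variable on its right-hand side as a regex, so no nested escaping is needed:

```shell
#!/bin/bash
# Print the single-quoted string from each line that contains one,
# skipping lines that do not match (including empty lines).
re="'([^']+)'"
while read -r line; do
    if [[ $line =~ $re ]]; then
        echo "${BASH_REMATCH[1]}"
    fi
done <<'EOF'

 Error: Cannot find file '/missing/file_1'.
 Error: Cannot find file '/missing/file_2'.

EOF
```

This prints /missing/file_1 and then /missing/file_2, with no eval and no nested sub-shell.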
Re: 4.3.42 : shopt -s lastpipe & "no record of process" warnings
OK, I found one fix - you probably won't like it, but it fixes the problem, and all the test cases run in 'make test' still work - especially 'run_jobs' and 'run_lastpipe'. The patch simply removes the error from the FIND_CHILD macro in jobs.c, so it now reads (jobs.c @ line 2377):

#define FIND_CHILD(pid, child) \
  do \
    { \
      child = find_pipeline (pid, 0, (int *)NULL); \
      if (child == 0) \
        { \
          give_terminal_to (shell_pgrp, 0); \
          UNLOCK_CHILD (oset); \
          restore_sigint_handler (); \
          return (termination_state = 0); \
        } \
    } \
  while (0)

So I've removed the manufactured "No record of process X" error altogether, and yet all test cases still pass - it seems it was unnecessary.

Will a fix for this issue appear in a 'bash43-43+' patch file or in the forthcoming bash-4.4 release? I think it should, as it is rather unfair of bash to make its internal bookkeeping errors appear as if they could be user programming errors.

Regards, Jason

On 05/06/2016, Jason Vas Dias <jason.vas.d...@gmail.com> wrote:
> The strace log shows that process 8277 is the
> bash subshell that runs f(), forks to create 8278,
> which forks to execve "some_executable";
> process 8277 then reads the 8278 output on a pipe,
> so it knows when the pipe is closed, and the strace
> log shows 8277 first does a wait4, which returns 8278,
> but does NOT get a SIGCHLD event for 8278.
> Maybe this is the problem? Since the wait has
> already succeeded, no SIGCHLD will be generated.
> 8277 evidently is not aware that its 8278 child has exited,
> and goes on to issue a further wait4 for it, which
> returns -1 with errno==ECHILD, and then emits the
> message:
> "xxx.sh: line 46: No record of process 8278".
> Line 46 in my script consists of
> function f()
> .
> This seems buggy to me - I'll try developing a
> patch to fix it and will post back here.
> Regards, Jason
>
> On 05/06/2016, Jason Vas Dias <jason.vas.d...@gmail.com> wrote:
>> With a build of bash 4.3 patchlevel 42
>> on a Linux x86_64 system I am getting
>> warning messages on stderr when
>> running a certain script, like:
>> "wait_for: no record of process 8278".
>> Running bash with the script as
>> input under strace shows that process 8277
>> does a successful wait4(-1,...) which DOES
>> return pid 8278. So why is bash complaining
>> it has no record of it?
>> Is bash getting its book-keeping wrong here?
>> The script is not using any background
>> jobs with '&' or using the 'wait' built-in.
>> It is simply doing something like:
>>
>> shopt -s lastpipe;
>> set -o pipefail;
>> function f()
>> {
>>   some_executable "$@" | {
>>     while read line; do { ... ; } done;
>>   }
>>   return 0;
>> }
>> ...
>> f $args | { while read result; do ...; done ; }
>>
>> So I'd expect the initial bash process to run
>> a subshell bash to invoke the f() function,
>> which runs a command child that execve-s
>> "some_executable", parsing its output and writing
>> to the subshell bash on a pipe, which writes to the
>> parent bash on a pipe, which parses it & does whatever.
>> Without the lastpipe option, this would be the
>> other way round - the parent would run f, and
>> its output would be parsed in the subshell
>> running the f output parsing loop.
>> All this seems to work OK, but why the warning
>> message about "no record of process X"?
>> Or is this message indicating something has
>> gone seriously wrong?
>> Thanks in advance for any replies,
>> Regards,
>> Jason
Re: 4.3.42 : shopt -s lastpipe & "no record of process" warnings
The strace log shows that process 8277 is the bash subshell that runs f(), forks to create 8278, which forks to execve "some_executable"; process 8277 then reads the 8278 output on a pipe, so it knows when the pipe is closed, and the strace log shows 8277 first does a wait4, which returns 8278, but does NOT get a SIGCHLD event for 8278.

Maybe this is the problem? Since the wait has already succeeded, no SIGCHLD will be generated. 8277 evidently is not aware that its 8278 child has exited, and goes on to issue a further wait4 for it, which returns -1 with errno==ECHILD, and then emits the message:

"xxx.sh: line 46: No record of process 8278".

Line 46 in my script consists of 'function f()'. This seems buggy to me - I'll try developing a patch to fix it and will post back here.

Regards, Jason

On 05/06/2016, Jason Vas Dias <jason.vas.d...@gmail.com> wrote:
> With a build of bash 4.3 patchlevel 42
> on a Linux x86_64 system I am getting
> warning messages on stderr when
> running a certain script, like:
> "wait_for: no record of process 8278".
> Running bash with the script as
> input under strace shows that process 8277
> does a successful wait4(-1,...) which DOES
> return pid 8278. So why is bash complaining
> it has no record of it?
> Is bash getting its book-keeping wrong here?
> The script is not using any background
> jobs with '&' or using the 'wait' built-in.
> It is simply doing something like:
>
> shopt -s lastpipe;
> set -o pipefail;
> function f()
> {
>   some_executable "$@" | {
>     while read line; do { ... ; } done;
>   }
>   return 0;
> }
> ...
> f $args | { while read result; do ...; done ; }
>
> So I'd expect the initial bash process to run
> a subshell bash to invoke the f() function,
> which runs a command child that execve-s
> "some_executable", parsing its output and writing
> to the subshell bash on a pipe, which writes to the
> parent bash on a pipe, which parses it & does whatever.
> Without the lastpipe option, this would be the
> other way round - the parent would run f, and
> its output would be parsed in the subshell
> running the f output parsing loop.
> All this seems to work OK, but why the warning
> message about "no record of process X"?
> Or is this message indicating something has
> gone seriously wrong?
> Thanks in advance for any replies,
> Regards,
> Jason
4.3.42 : shopt -s lastpipe & "no record of process" warnings
With a build of bash 4.3 patchlevel 42 on a Linux x86_64 system I am getting warning messages on stderr when running a certain script, like:

"wait_for: no record of process 8278".

Running bash with the script as input under strace shows that process 8277 does a successful wait4(-1,...) which DOES return pid 8278. So why is bash complaining it has no record of it? Is bash getting its book-keeping wrong here?

The script is not using any background jobs with '&' or using the 'wait' built-in. It is simply doing something like:

shopt -s lastpipe;
set -o pipefail;
function f()
{
  some_executable "$@" | {
    while read line; do { ... ; } done;
  }
  return 0;
}
...
f $args | { while read result; do ...; done ; }

So I'd expect the initial bash process to run a subshell bash to invoke the f() function, which runs a command child that execve-s "some_executable", parsing its output and writing to the subshell bash on a pipe, which writes to the parent bash on a pipe, which parses it & does whatever. Without the lastpipe option, this would be the other way round - the parent would run f, and its output would be parsed in the subshell running the f output parsing loop.

All this seems to work OK, but why the warning message about "no record of process X"? Or is this message indicating something has gone seriously wrong?

Thanks in advance for any replies,
Regards,
Jason
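As background on lastpipe (an editor's sketch, not from the mail): with the option set and job control off - the default in scripts - the last element of a pipeline runs in the current shell instead of a subshell, which is why the process layout above differs from the default:

```shell
#!/bin/bash
# With lastpipe, 'read var' runs in the current shell, so the
# variable survives the pipeline; without it, read runs in a
# subshell and $var would be empty afterwards.
shopt -s lastpipe
echo hello | read var
echo "var=$var"    # prints var=hello
```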
bash-4.3.33 regexp bug
Good day list, Chet -

I think this is a bug:

( set -x; tab=$'\011'; s='some text: 1.2.3'; if [[ $s =~ ^some text:[\ $tab]+([0-9.]+) ]]; then echo ${BASH_REMATCH[1]}; fi )
-bash: syntax error in conditional expression
-bash: syntax error near `$tab]+([0-9.]+)'

Do you agree? If not, what sort of regexp should I use to match ':[spacetab]+[0-9]+'? The problem happens regardless of whether I use the $tab variable or a literal '\'$'\011' sequence (sorry, I can't type tab in this mailer).

Thanks in advance for any replies,
Regards, Jason
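For reference (not in the original mail): the unquoted space terminates the conditional expression, which is what provokes the syntax error; the standard remedy is to keep the whole ERE in a variable, which [[ =~ ]] expands unquoted as a regex:

```shell
#!/bin/bash
# Store the regex - spaces, tab and all - in a variable, then
# match against the unquoted variable.
tab=$'\011'
s='some text: 1.2.3'
re="^some text:[ $tab]+([0-9.]+)"
if [[ $s =~ $re ]]; then
    echo "${BASH_REMATCH[1]}"    # prints 1.2.3
fi
```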
[contrib]: setpgrp + killpg builtins
Dear bash developers -

It is very difficult to overcome the problems caused by the scenario described within this email without something like the enclosed 'setpgrp pid pgrp' and 'killpg pgrp sig' bash loadable builtins. Without them, or options to change signal handling for simple commands, it is too easy to create orphan processes, and too difficult to find a workaround to prevent orphan processes being created, as in the following scenario:

1. An invoker.sh process runs a job.sh bash script in a separate process, which runs a long-running (or non-terminating!) 'Simple Command' (not a shell Job) - call it nterm.sh.
2. After a while, the originator decides that the job has timed out, kills its process (the instance of bash running job.sh), and then exits.
3. The long-running nterm.sh process is left still running as an orphan, and would become a zombie if it tries to exit.

I tested this with the latest bash-4.3.33 and with bash-4.2.

The problem is that most shell scripts use just simple commands, not background jobs - changing a large number of scripts to use asynchronous background jobs for every simple command that may potentially not terminate (due to, for example, NFS hangs) is not an option. Simple commands will run in their own process groups in interactive mode, or in that of the parent in non-interactive mode, and will not be killed when their parent job.sh exits, because the parent has no background pid to wait for, so cannot wait for them.

This is demonstrated by the attached shell scripts in the nterm-demo.tar file (nterm-demo/*):

invoker.sh: forks off job.sh, waits for it to time out, and kills it
job.sh:     runs nterm.sh as a simple command
nterm.sh:   a non-terminating process
killpg.c:   killpg built-in

To demonstrate:

$ tar -xpf nterm_demo.tar
$ cd nterm_demo
$ BASH_BUILD_DIR=... BASH_SOURCE_DIR=... make

Example output is:

gcc -fPIC -O3 -g -I. -I/home/jvasdias/src/3P/bash -I/home/jvasdias/src/3P/bash/lib -I/home/jvasdias/src/3P/bash/builtins -I/home/jvasdias/src/3P/bash/include -I/home/jvasdias/src/3P/bash-4.30-ubuntu -I/home/jvasdias/src/3P/bash-4.30-ubuntu/lib -I/home/jvasdias/src/3P/bash-4.30-ubuntu/builtins -c -o setpgid.o setpgid.c
gcc -shared -Wl,-soname,$@ setpgid.o -o setpgid
gcc -fPIC -O3 -g -I. -I/home/jvasdias/src/3P/bash -I/home/jvasdias/src/3P/bash/lib -I/home/jvasdias/src/3P/bash/builtins -I/home/jvasdias/src/3P/bash/include -I/home/jvasdias/src/3P/bash-4.30-ubuntu -I/home/jvasdias/src/3P/bash-4.30-ubuntu/lib -I/home/jvasdias/src/3P/bash-4.30-ubuntu/builtins -c -o killpg.o killpg.c
...
gcc -shared -Wl,-soname,$@ killpg.o -o killpg
bash -c ./invoker.sh 0- 21 | tee
./invoker.sh: hB : 11524
JOB: 11528
./job.sh: 11528: pgid : 11510
./job.sh: 11528: pgid now : 11528
./nterm.sh: hB: 11535 : pgid: 11528
non-terminating command 11535 (11528) still running.
./invoker.sh: timeout - killing job: 11528
Terminated
./nterm.sh: 11535: exits 143
./job.sh: 11528: exits 143
./invoker.sh: 11524: exiting.
$

To demonstrate the problem, make the built-ins not be found:

$ make show_the_bug
unset BASH_LOADABLES_DIR; ./invoker.sh 0- 21 | tee
./invoker.sh: hB : 11670
Demonstrating the bug. Please kill the nterm.sh process manually.
JOB: 11672
./job.sh: 11672: pgid : 11668
job.sh will be killed, but nterm.sh will not.
./nterm.sh: hB: 11676 : pgid: 11668
non-terminating command 11676 (11668) still running.
./invoker.sh: timeout - killing job: 11672
non-terminating command 11676 (11668) still running.
non-terminating command 11676 (11668) still running.
^Cmake: *** [show_the_bug] Interrupt

Fortunately, make carefully cleans up and kills 11676 silently. If one types at the command line or in a shell script:

$ BASH_LOADABLES_DIR='' ./invoker.sh

then it is really hard to kill the resulting nterm.sh process - one has to use kill -9 $nterm_pid.

So, please give scripts some means of saying "if I am killed, kill my current simple command", even in interactive mode, with some new shopt option, or provide something like the killpg / setpgid built-ins attached.

Thanks, Regards, Jason

nterm_demo.tar
Description: Unix tar archive
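A workaround that stays within stock bash (an editor's sketch, not the attached builtins): run the would-be simple command as a background job and kill it from a trap, so killing job.sh takes the child down too - though, as the mail argues, retrofitting this into every script is the impractical part:

```shell
#!/bin/bash
# job.sh sketch: run the long-running command in the background,
# record its pid, and kill it from a trap when job.sh itself is
# terminated or exits, so no orphan survives.
trap 'kill "$child" 2>/dev/null' TERM INT EXIT

sleep 1000 &        # stands in for the non-terminating nterm.sh
child=$!
wait "$child"
```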
test '-v' - associative vs. normal array discrepancy - a bug ?
Good day -

Please could anyone explain why the first command below produces no output:

$ ( declare -A a=([a]=1); if [ -v a ]; then echo yes; fi )
$ ( declare -a a=([0]=1); if [ -v a ]; then echo yes; fi )
yes
$

There does not appear to be any documentation about different behaviour of -v for associative vs. normal arrays:

  -v varname
     True if the shell variable varname is set (has been assigned a value).

Should that be amended to:

  -v varname
     True if the shell variable varname is not an associative array and is set (has been assigned a value).

or is this a bug with -v? I'm using bash-4.3.30(1), but it appears also to be an issue with bash-4.3.11(1).

Regards, Jason
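For context (not from the original mail): applied to a bare array name, -v effectively tests element 0, which the associative array above lacks; counting elements with ${#a[@]} behaves uniformly for both kinds:

```shell
#!/bin/bash
# ${#a[@]} is the number of set elements for both indexed and
# associative arrays, so this test does not depend on element 0.
declare -A assoc=([a]=1)
declare -a indexed=([0]=1)
(( ${#assoc[@]}   )) && echo "assoc is non-empty"
(( ${#indexed[@]} )) && echo "indexed is non-empty"
```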
Re: test '-v' - associative vs. normal array discrepancy - a bug ?
Thanks to all who replied.

I would really like -v to do as it is documented to do:

  -v  True if the shell variable varname is set (has been assigned a value)

To me, the fact that -v does not return true if the variable is an array and does not have element 0 - or element '0' in the case of assocs - means it does not behave as documented. Either its behaviour should be changed to return true if an array is non-empty (contains ANY non-empty element), or the documentation should be changed to document -v's behaviour for both normal and associative arrays.

This is the function I was using to replace -v if bash version 4.3 is detected - it does more than -v in that it actually returns the value of the variable, but it works for any array or array member, and does not care if the array does not have element 0 or member '0':

function is_defined
{
    if (( $# == 0 )); then return 1; fi
    local v=$1
    if [[ $v =~ \[ ]]; then
        local val
        eval 'val=${'$v'}'
        if [ x$val = x ]; then return 1; fi
        echo -n ${val//[\'\]/}
        return 0
    else
        local val=$(declare -p $v 2>/dev/null)
        if [[ $val =~ ^declare[\ \ xaAgri\-]+[\ \ ]*[^\ \ \=]+[\=]([^\=].*)[\ \ ]*$ ]]; then
            val=${BASH_REMATCH[1]/()/}
        else
            return 1
        fi
        echo -n ${val//[\'\]/}
    fi
}

I thought the intent of -v was to free us of such code. I think there is a need for some built-in or test option that can tell us whether a variable has been set or not, regardless of whether it is an array, or whether its 0th element or member '0' is set or not. To get this currently, we'd have to test if the variable is an array, whether it is an associative or normal array, and have different methods for each case. [ ${#v[@]} -gt 0 ] does not work if $v is a normal variable.

It would be nice if -v could mean 'has any non-empty member' for both associative and normal arrays - I think Piotr's suggestion to make it do this is a good one.
Thanks, Regards, Jason

On 11/19/14, Piotr Grzybowski <narsil...@gmail.com> wrote:
> On Wed, Nov 19, 2014 at 8:25 PM, Chet Ramey <chet.ra...@case.edu> wrote:
>> No. There's a way to test for a non-zero number of elements directly.
> Sure thing, I just had a feeling that Jason wants to use -v instead.
> I can turn it into:
>   -E VAR  True if the shell variable VAR is an empty array ;-)
> cheers, pg
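Also worth noting alongside this thread (editor's addition): in recent bash versions -v accepts a subscripted name, giving a per-key existence test for associative arrays:

```shell
#!/bin/bash
declare -A a=([key]=1)
# -v with an explicit subscript tests that one element,
# independently of whether element '0' exists.
[[ -v a[key] ]]     && echo "a[key] is set"
[[ -v a[missing] ]] || echo "a[missing] is not set"
```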
Re: Bash-4.3 Official Patch 25
Good day Chet, bash-list -

I just checked out the latest git head, applied the bash43-025 patch, and built:

$ ./bash --version
GNU bash, version 4.3.25(3)-release (x86_64-unknown-linux-gnu)
...

which PASSED its 'make check' test suite, both under Ubuntu 14.04.1 LTS and under RHEL-6.5+, on an x86_64 (Haswell) 8-core platform.

But now there is an issue - bash seems to lose its idea of stdout / stderr being a terminal within read loops, as illustrated by this test script (/tmp/t.sh):

#!/bin/bash
tty
echo $'1\n2' > test.list
while read line; do tty; done < test.list

Its output illustrates the problem:

$ ./bash /tmp/t.sh
/dev/pts/6
not a tty
not a tty

This bug seems to have infected the latest Ubuntu bash release also, which was created and pushed out today with the bash43-025 fix for the CVE-2014-6271 issue:

$ /bin/bash /tmp/t.sh
/dev/pts/6
not a tty
not a tty

(/bin/bash is from the bash-4.3-7ubuntu1.1 package). But /dev/fd/1 remains the same file:

#!/bin/bash
tty
ls -l /dev/fd/1
echo $'1\n2' > test.list
while read line; do tty; ls -l /dev/fd/1; done < test.list

Its output under Ubuntu bash:

$ /bin/bash /tmp/t.sh
/dev/pts/6
lrwx------ 1 jvasdias jvd 64 Sep 25 14:47 /dev/fd/1 -> /dev/pts/6
not a tty
lrwx------ 1 jvasdias jvd 64 Sep 25 14:47 /dev/fd/1 -> /dev/pts/6
not a tty
lrwx------ 1 jvasdias jvd 64 Sep 25 14:47 /dev/fd/1 -> /dev/pts/6

This is rather confusing! Any ideas what may be the issue here?

Thanks, Regards, Jason

On 9/24/14, Chet Ramey <chet.ra...@case.edu> wrote:

BASH PATCH REPORT
=================

Bash-Release: 4.3
Patch-ID: bash43-025

Bug-Reported-by: Stephane Chazelas <stephane.chaze...@gmail.com>
Bug-Reference-ID:
Bug-Reference-URL:

Bug-Description:

Under certain circumstances, bash will execute user code while processing the environment for exported function definitions.
Patch (apply with `patch -p0'):

*** ../bash-4.3-patched/builtins/common.h 2013-07-08 16:54:47.0 -0400
--- builtins/common.h 2014-09-12 14:25:47.0 -0400
***************
*** 34,37 ****
--- 49,54 ----
  #define SEVAL_PARSEONLY 0x020
  #define SEVAL_NOLONGJMP 0x040
+ #define SEVAL_FUNCDEF 0x080  /* only allow function definitions */
+ #define SEVAL_ONECMD  0x100  /* only allow a single command */
  /* Flags for describe_command, shared between type.def and command.def */

*** ../bash-4.3-patched/builtins/evalstring.c 2014-02-11 09:42:10.0 -0500
--- builtins/evalstring.c 2014-09-14 14:15:13.0 -0400
***************
*** 309,312 ****
--- 313,324 ----
      struct fd_bitmap *bitmap;
+     if ((flags & SEVAL_FUNCDEF) && command->type != cm_function_def)
+       {
+         internal_warning ("%s: ignoring function definition attempt", from_file);
+         should_jump_to_top_level = 0;
+         last_result = last_command_exit_value = EX_BADUSAGE;
+         break;
+       }
+
      bitmap = new_fd_bitmap (FD_BITMAP_SIZE);
      begin_unwind_frame ("pe_dispose");
***************
*** 369,372 ****
--- 381,387 ----
      dispose_fd_bitmap (bitmap);
      discard_unwind_frame ("pe_dispose");
+
+     if (flags & SEVAL_ONECMD)
+       break;
    }
  }

*** ../bash-4.3-patched/variables.c 2014-05-15 08:26:50.0 -0400
--- variables.c 2014-09-14 14:23:35.0 -0400
***************
*** 359,369 ****
    strcpy (temp_string + char_index + 1, string);
!   if (posixly_correct == 0 || legal_identifier (name))
!     parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST);
!
!   /* Ancient backwards compatibility. Old versions of bash exported
!      functions like name()=() {...} */
!   if (name[char_index - 1] == ')' && name[char_index - 2] == '(')
!     name[char_index - 2] = '\0';
    if (temp_var = find_function (name))
--- 364,372 ----
    strcpy (temp_string + char_index + 1, string);
!   /* Don't import function names that are invalid identifiers from the
!      environment, though we still allow them to be defined as shell
!      variables. */
!   if (legal_identifier (name))
!     parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST|SEVAL_FUNCDEF|SEVAL_ONECMD);
    if (temp_var = find_function (name))
***************
*** 382,389 ****
      report_error (_("error importing function definition for `%s'"), name);
    }
-   /* ( */
-   if (name[char_index - 1] == ')' && name[char_index - 2] == '\0')
-     name[char_index - 2] = '(';  /* ) */
  }
  #if defined (ARRAY_VARS)
--- 385,388 ----

*** ../bash-4.3-patched/subst.c 2014-08-11 11:16:35.0 -0400
--- subst.c 2014-09-12 15:31:04.0 -0400
Re: Bash-4.3 Official Patch 25
Oops, sorry - this issue is nothing to do with the bash43-025 patch - I just verified that the same issue occurs with bash 4.1.2(1).

The issue was that a script that does an 'stty' command was failing when run in a 'while read ...' loop. It wasn't using 'stty -F', so it was trying to stty on stdin, which was the list file.

Sorry, my mistake - a nasty coincidence that it was the first thing I tried with the new bash version.

Regards, Jason

On 9/25/14, Jason Vas Dias <jason.vas.d...@gmail.com> wrote:
> Good day Chet, bash-list - I just checked out the latest git head,
> applied the bash43-025 patch, and built GNU bash, version
> 4.3.25(3)-release (x86_64-unknown-linux-gnu), which PASSED its
> 'make check' test suite, both under Ubuntu 14.04.1 LTS and under
> RHEL-6.5+, on an x86_64 (Haswell) 8-core platform.
> But now there is an issue - bash seems to lose its idea of
> stdout / stderr being a terminal within read loops ...
> This is rather confusing! Any ideas what may be the issue here?
> Thanks, Regards, Jason
>
> On 9/24/14, Chet Ramey <chet.ra...@case.edu> wrote:
>> BASH PATCH REPORT
>> Bash-Release: 4.3
>> Patch-ID: bash43-025
>> Bug-Reported-by: Stephane Chazelas <stephane.chaze...@gmail.com>
>> Bug-Description: Under certain circumstances, bash will execute user
>> code while processing the environment for exported function definitions.
>> ...
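The underlying pitfall - inside 'while ... done < file' the loop's stdin is the file, so tty, stty, or read in the body see the file, not the terminal - can be avoided by reading the file on a different descriptor (an editor's sketch, not from the original mails):

```shell
#!/bin/bash
# Read the list on fd 3, leaving fd 0 (stdin) untouched, so
# commands in the loop body that probe stdin still see the
# terminal when there is one.
printf '1\n2\n' > test.list
while read -u 3 line; do
    echo "got: $line"   # stty/tty here would consult fd 0, not test.list
done 3< test.list
rm -f test.list
```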
need ability to tell if array is associative or not - bug?
Good day list -

There seems to be no way of testing if an array variable is associative or not, yet attempting to make associative assignments to a normal array results in a syntax error. I have something like:

declare -xr TYPE_ARRAY=0 TYPE_ASSOC=1
function f()
{
  declare -n an_array=$1
  local type=$2
  case $type in
    $TYPE_ASSOC) an_array['some_value']=1;
}
Re: need ability to tell if array is associative or not - bug?
Sorry - forget the bit about indirect expansion - the error only occurs if the array is originally declared not associative:

$ ( function f() { local an_array=$1; local value='1.0'; local ev='['''value''']=''$value'; eval ${an_array}='('$ev')'; }; declare -a my_array; set -x; f my_array )
+ f my_array
+ local an_array=my_array
+ local value=1.0
+ local 'ev=['\''value'\'']='\''1.0'\'''
+ eval 'my_array=(['\''value'\'']='\''1.0'\'')'
++ my_array=(['value']='1.0')
bash: 1.0: syntax error: invalid arithmetic operator (error token is .0)

$ ( function f() { local an_array=$1; local value='1.0'; local ev='['''value''']=''$value'; eval ${an_array}='('$ev')'; }; declare -A my_array; set -x; f my_array )
+ f my_array
+ local an_array=my_array
+ local value=1.0
+ local 'ev=['\''value'\'']='\''1.0'\'''
+ eval 'my_array=(['\''value'\'']='\''1.0'\'')'
++ my_array=(['value']='1.0')
$

(no error). And just evaluating:

$ ( declare -a my_array; my_array=(['value']='1.0') )

gives no error either, but there is no 'value' subscript.

On 8/29/14, Jason Vas Dias <jason.vas.d...@gmail.com> wrote:
> Actually, this appears to be a bit more involved.
What I was actually trying to work out was the bash error that occurs with:

$ ( function f() { local an_array=$1; local value='1.0'; local v=value; local ev='['''value''']=''${!v}'; eval ${an_array}='('$ev')'; }; declare -a my_array; set -x; f my_array; )
+ f my_array
+ local an_array=my_array
+ local value=1.0
+ local v=value
+ local 'ev=['\''value'\'']='\''1.0'\'''
+ eval 'my_array=(['\''value'\'']='\''1.0'\'')'
++ my_array=(['value']='1.0')
bash: 1.0: syntax error: invalid arithmetic operator (error token is .0)

This error does not happen if the array was originally declared associative:

$ ( function f() { local an_array=$1; local value='1.0'; local v=value; local ev='['''value''']=''${!v}'; eval ${an_array}='('$ev')'; }; declare -A my_array; set -x; f my_array )
+ f my_array
+ local an_array=my_array
+ local value=1.0
+ local v=value
+ local 'ev=['\''value'\'']='\''1.0'\'''
+ eval 'my_array=(['\''value'\'']='\''1.0'\'')'
++ my_array=(['value']='1.0')

Nor does the error happen if indirect expansion is not used:

$ ( function f() { local an_array=$1; local value='1.0'; local v=$value; local ev='['''value''']=''$v'; eval ${an_array}='('$ev')'; }; declare -A my_array; set -x; f my_array )
+ f my_array
+ local an_array=my_array
+ local value=1.0
+ local v=1.0
+ local 'ev=['\''value'\'']='\''1.0'\'''
+ eval 'my_array=(['\''value'\'']='\''1.0'\'')'
++ my_array=(['value']='1.0')
$

So I was wondering if there is any way to determine if an array was originally declared associative or not. But it appears that there is a bash bug here that is triggered only if the array was originally declared not associative and an indirect expansion is involved in setting an array member. The end result expression being evaluated:

++ my_array=(['value']='1.0')

should never involve an arithmetic expression, and should be valid regardless of whether the array is associative or not.

Any ideas what might be going on here?
Thanks in advance, Jason

On 8/29/14, Jason Vas Dias <jason.vas.d...@gmail.com> wrote:
> Sorry, mailer sent previous mail before I was ready. Reposting.
>
> Good day list -
>
> There seems to be no way of testing if an array variable is associative or not. I have something like:
>
> declare -xr TYPE_ARRAY=0 TYPE_ASSOC=1
> function f()
> {
>   declare -n an_array=$1
>   local type=$2
>   case $type in
>     $TYPE_ASSOC) an_array['some_value']=1;;
>     $TYPE_ARRAY) an_array[0]=1;;
>   esac
> }
>
> Now, if I call:
>
> declare -a my_array=(); f my_array $TYPE_ASSOC;
>
> I'll end up with no 'some_value' subscript in the array.
>
> It would be great if bash could provide some '-A' conditional expression operator to test if a variable is an associative array or not. Or perhaps 'declare -A identifier' could return non-zero if 'identifier' was not previously defined as an associative array, as declare -F does for functions? Or is there some way to test if a variable is an associative array or not?
>
> Thanks, Regards, Jason
>
> On 8/29/14, Jason Vas Dias <jason.vas.d...@gmail.com> wrote:
>> Good day list -
>> There seems to be no way of testing if an array variable is associative
>> or not, yet attempting to make associative assignments to a normal
>> array results in a syntax error. I have something like:
>> declare -xr TYPE_ARRAY=0 TYPE_ASSOC=1
>> function f() { declare -n an_array=$1; local type=$2;
>>   case $type in $TYPE_ASSOC) an_array['some_value']=1; }
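To answer the question the thread keeps returning to (this helper is an editor's sketch, not from the mails): the declared type can be recovered from the output of declare -p, which begins with 'declare -A' for associative arrays (assuming no additional attributes reorder the flag letters; a stricter check could parse the flags field):

```shell
#!/bin/bash
# is_assoc NAME: succeed iff NAME was declared as an associative array.
is_assoc() {
    [[ $(declare -p "$1" 2>/dev/null) == "declare -A"* ]]
}

declare -A m=([k]=1)
declare -a n=(1 2 3)
is_assoc m && echo "m is associative"
is_assoc n || echo "n is not associative"
```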
'declare' does not honor '-e' in command substituted assignments - a bug ?
Good day bash list -

I don't understand why this emits any output:

$ ( set -e; declare v=$(false); echo 'Should not get here'; )
Should not get here
$

While this does not:

$ ( set -e; v=$(false); echo 'Should not get here'; )
$

Shouldn't declare / typeset behave like the normal variable assignment statement wrt command substitution? It does not seem to be documented anywhere if it is not.

I'm using bash-4.3.18(1)-release, compiled from GIT under RHEL 6.4 (gcc-4.4.7) for x86_64 - I've also tested the default RHEL 6.4 bash-4.1.2(1)-release and the latest 4.3.22(1)-release with the same results.

Actually, this problem seems to apply to all built-ins:

$ ( set -e ; echo $(false); echo 'not ok')
not ok
$

I can't seem to find this behaviour documented anywhere. The same behaviour happens in posix mode. I'd appreciate an explanation as to why this behavior is not a bug.

Thanks & Regards, Jason

test_-e.sh
Description: Bourne shell script
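For anyone hitting this: the exit status of the command substitution is overwritten by the exit status of the 'declare' builtin itself (which succeeds), so 'set -e' never sees the failure. A hedged sketch of the usual workaround is to separate declaration from assignment:

```shell
#!/usr/bin/env bash
set -e

# 'declare v=$(false)' succeeds because declare's own status (0) replaces
# the substitution's status. Declaring first and assigning separately
# keeps the substitution's status visible to set -e and to tests.
declare v
if ! v=$(false); then
    echo "substitution failed, and we can see it"
fi

v=$(echo ok)     # plain assignment: status comes from the substitution
echo "v=$v"
```

The same reasoning covers 'local', 'export', and 'readonly' with an assignment argument: each is a builtin command whose own status wins.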
unhelpful effect of '!' prefix on commands in conditionals - a bug ?
Having defined a function _F to return a non-zero return status:

$ function _F () { return 255; }

I'd expect to be able to test this return status in an if clause - but this is what happens:

$ if ! _F ; then echo '_F returned: '$?; fi
_F returned: 0

whereas if I just run _F inline, the return status is available:

$ _F; echo $?
255

Interestingly, if I don't use '!' in the conditional, I can access the return status:

$ if _F ; then echo OK; else echo '_F returned: '$?; fi
_F returned: 255

This is with bash:

$ bash --version
GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
...

from an up-to-date copy of Scientific Linux release 6.5 for x86_64.

This behavior seems to me to be an instant source of confusion and bugs - does anyone agree with me that this is a bug? Is this really mandated by the standards? Is there some other variable I could test to retrieve $? if it has been mangled by a '!' in the conditional? Is there any other conclusion than: if you want to access the return status of a function in an if clause, don't use '!' in the conditional?

Any responses / suggestions gratefully received.

Thanks & Regards, Jason
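This one is indeed mandated: '!' inverts the exit status of the pipeline, so inside the 'then' branch $? is the status of the whole negated test (0), not of _F. A small sketch of the usual workaround: capture the status into a variable before branching on it.

```shell
#!/usr/bin/env bash
_F() { return 255; }

# '!' inverts the pipeline status, so inside 'if ! _F; then ...' the
# value of $? is 0. Capture _F's own status first, then branch on that.
_F && rc=0 || rc=$?

if (( rc != 0 )); then
    echo "_F returned: $rc"
fi
```

The 'cmd && rc=0 || rc=$?' idiom also plays well with 'set -e', since the function call is part of an and-or list and cannot abort the script.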
bash-4.2(42) on AIX 6.1 has broken command expansion in double-quoted strings
Hi bash gurus -

bash appears to be broken on AIX 6.1 - I'd really appreciate some advice. With bash-4.2 patchlevel 42 (the latest available as of today from ftp.gnu.org), on AIX, the resultant bash built with gcc-4.7.2 and system ld + as ends up being broken wrt command output in strings:

$ ./bash
bash-4.2$ echo $(echo hello)
bash: command substitution: line 2: syntax error near unexpected token `)'
bash: command substitution: line 2: `echo hello)'
bash-4.2$ v=$(date)
bash: command substitution: line 13: syntax error near unexpected token `)'
bash: command substitution: line 13: `date)'

I've tried compiling with / without libiconv, and either to use AIX /usr/lib/libcurses or the latest libncurses (from invisible-island.net), with gcc-4.7.2 and the latest versions of m4 + bison and/or byacc, and with / without all combinations of the '--disable-nls --without-libiconv-prefix --enable-readline --with-installed-readline' configuration options, but with the same result.

Also, the stock IBM /opt/freeware bash 4.1 comes with a bug that disables TAB-completion when the path resolves to an NFS mount point. Does anyone know if there is a way to disable this?

Thanks in advance for any response -
Best Regards, Jason Vas Dias <jason.vas.d...@gmail.com>
Re: why must non-standard $IFS members be treated so differently ?
Thanks Dan - The plot thickens - Yes, you're right, I had $IFS mistakenly set to ':' in the shell in which I ran 'count_args'. Without this IFS setting, I get a count of 4:

$ env -i PATH=/bin:/usr/bin HOME=${HOME} /bin/bash --norc
$ count_args 1 2 3\ 4
4
$ IFS=: count_args 1 2 3\ 4
3

This to me is strange, as I've asked bash not to use ' ' as a delimiter when $IFS==':', but it is doing so! And shouldn't '3\ 4' be a single string in any case, regardless of IFS? If word splitting is not doing any escaping, why not - shouldn't it be doing so? Escaping works in filenames, so why not in word-splitting?

Thanks & Regards, Jason

On Sun, Jul 29, 2012 at 4:19 PM, Dan Douglas <orm...@gmail.com> wrote:
> On Sunday, July 29, 2012 03:23:29 PM Jason Vas Dias wrote:
>> echo $(count_args 1 2 3\ 4)
>
> I should also have mentioned that I couldn't reproduce this case. You
> should be getting 4 here in your example, not 3. I have the same Bash
> version. Are you sure you were echoing '${#v[@]}' and not '${#@}', and
> also that you did not set IFS=: for count_args? If you use exactly the
> function you sent with the default IFS then you should get 4 here.
> --
> Dan Douglas

RE: why must non-standard $IFS members be treated so differently ?

> Good day Chet, list -
>
> I'm concerned about the difference in output of these functions with
> the example input given on the '$' prefixed line below (with
> 4.2.29(2)-release (x86_64-unknown-linux-gnu)):
>
>   function count_args { v=($@); echo ${#v[@]}; }
>   function count_colons { IFS=':' ; v=($@); echo ${#v[@]}; }
>
> $ echo $(count_args 1 2 3\ 4) $(count_colons 1:2:3\:4)
> 3 4
>
> It appears to be impossible for an item delimited by 'X' to contain an
> escaped 'X' ('\X') if 'X' is not a standard delimiter (' ', 'TAB').
> Quoting doesn't seem to help either:
>
> $ echo $(count_args 1 2 3\ 4) $(count_colons 1:2:3':4')
> 3 4
>
> To me, this appears to be a bug. But I bet you're going to tell me it
> is a feature? Please explain.
Thanks & Regards, Jason

BTW, documentation on $IFS does not appear to mention this issue:

   Word Splitting
       The shell scans the results of parameter expansion, command
       substitution, and arithmetic expansion that did not occur within
       double quotes for word splitting.

       The shell treats each character of IFS as a delimiter, and splits
       the results of the other expansions into words on these characters.
       If IFS is unset, or its value is exactly <space><tab><newline>, the
       default, then sequences of <space>, <tab>, and <newline> at the
       beginning and end of the results of the previous expansions are
       ignored, and any sequence of IFS characters not at the beginning or
       end serves to delimit words. If IFS has a value other than the
       default, then sequences of the whitespace characters space and tab
       are ignored at the beginning and end of the word, as long as the
       whitespace character is in the value of IFS (an IFS whitespace
       character). Any character in IFS that is not IFS whitespace, along
       with any adjacent IFS whitespace characters, delimits a field. A
       sequence of IFS whitespace characters is also treated as a
       delimiter. If the value of IFS is null, no word splitting occurs.
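The behaviour the quoted manual text implies can be shown concretely (a sketch; 'count' is a throwaway helper): the backslash in '3\ 4' is consumed by the shell parser at the command line, but a backslash that comes out of an expansion is ordinary data, so word splitting neither honours nor removes it.

```shell
#!/usr/bin/env bash
count() { echo "$#"; printf '<%s>' "$@"; echo; }

count 1 2 3\ 4     # 3 args: the escape is syntax, '3 4' is one word
s='3\ 4'
count $s           # 2 args: <3\> <4> -- the backslash is just a character
count "$s"         # 1 arg:  quoting, not escaping, is what survives expansion
```

(The thread's count of 4 from count_args comes from one more splitting step: 'v=($@)' expands $@ unquoted, so the argument '3 4' is re-split inside the function.)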
Re: why must non-standard $IFS members be treated so differently ?
Thanks Andreas - I guess your answer mostly explains my issue - except for one thing:

>> And shouldn't '3\ 4' be a single string in any case, regardless of IFS?
>
> It is. But if field splitting is applied to it it will be split in two
> words when $IFS contains a space.

This was really the point of my question - why, if escaping is permitted and an escape is shell syntax, does word-splitting not honor escapes, ESPECIALLY if the character being escaped is a character in $IFS? Would it be much work to make word-splitting honor escapes? Why is this issue of escaping not being enabled during word-splitting not documented anywhere?

Thanks & Regards, Jason

On Sun, Jul 29, 2012 at 9:05 PM, Andreas Schwab <sch...@linux-m68k.org> wrote:
> Jason Vas Dias <jason.vas.d...@gmail.com> writes:
>
>> Thanks Dan - The plot thickens - Yes, you're right, I had $IFS
>> mistakenly set to ':' in the shell in which I ran 'count_args'.
>> Without this IFS setting, I get a count of 4:
>>
>> $ env -i PATH=/bin:/usr/bin HOME=${HOME} /bin/bash --norc
>> $ count_args 1 2 3\ 4
>> 4
>> $ IFS=: count_args 1 2 3\ 4
>> 3
>>
>> This to me is strange, as I've asked bash not to use ' ' as a
>> delimiter when $IFS==':', but it is doing so!
>
> IFS does not change the shell syntax. It only controls field splitting
> as applied to the result of expansions. Compare:
>
> $ bash -c 'IFS=:; echo a:b:c'
> a:b:c
> $ bash -c 'IFS=:; a=a:b:c; echo "$a" $a'
> a:b:c a b c
> $ bash -c 'IFS=:; a=a:b:c; b=$a; echo "$b" $b'
> a:b:c a b c
>
> In the last example the assignment b=$a doesn't undergo field
> splitting, so the colons are still preserved.
>
>> And shouldn't '3\ 4' be a single string in any case, regardless of IFS?
>
> It is. But if field splitting is applied to it it will be split in two
> words when $IFS contains a space.
>
>> If word splitting is not doing any escaping, why not - shouldn't it
>> be doing so?
>
> Escape characters are part of the shell syntax. They are never special
> when they result from expansions, unless they are reinterpreted as
> shell input through eval.
>
> Andreas.
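Since escapes can never survive expansion, the reliable way to carry words containing IFS characters is quoting at parse time and, for lists of words, an array expanded with quotes -- a sketch ('count' is an illustrative helper):

```shell
#!/usr/bin/env bash
count() { echo "$#"; }

arr=(1 2 '3 4')          # the space is protected by quoting at parse time
count ${arr[@]}          # unquoted: '3 4' is re-split  -> 4
count "${arr[@]}"        # quoted: one word per element -> 3
```

"${arr[@]}" (like "$@") expands each element as exactly one word regardless of IFS, which is the property escaping cannot provide.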
--
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
Re: readline expansion: 'cp $v/TAB' - 'cp \$v/' (no expansion) - why?
Sorry, I tried 4.2-29(2) at home and it DOES correctly complete '$SRC/TAB' - of course, it helps if $SRC contains the name of an existing directory, which it did not at work. Thanks! A most interesting list.

Next question: any plans to allow us to export array or associative store variables? i.e. represent them in *environ passed to programs, with an integration with glibc getenv() to support this?

Thanks & Regards, Jason

On Thu, Jul 5, 2012 at 12:59 PM, Jason Vas Dias <jason.vas.d...@gmail.com> wrote:
> hi Eric - thanks for your response, but bash-4.2.29(2), just built from
> latest patches from gnu on x86_64 ubuntu, does NOT restore the old
> behavior, but now refuses to tab-expand at all - 'cat $SRC/TAB' on the
> command line still produces no output. Is there some option I need to
> give to enable it? Thanks!
>
> On 5 Jul 2012 13:24, Eric Blake <ebl...@redhat.com> wrote:
>> On 07/05/2012 04:58 AM, Jason Vas Dias wrote:
>>> Now it changes the input string into 'less \$SRC/' and prevents tab
>>> expansion as would be done without use of any variables. Would anyone
>>> know how to restore the old behavior with bash 4.2.2 + readline 6.2
>>> (linux ubuntu 12.04)?
>>
>> Yep - upgrade to bash 4.2.29 (that is, patchlevel 2 is missing the fix
>> in patchlevel 29 that restores the behavior you want).
>> ftp://ftp.gnu.org/pub/gnu/bash/bash-4.2-patches/bash42-029
>>
>> This topic comes up frequently on this list.
>> --
>> Eric Blake <ebl...@redhat.com> +1-919-301-3266
>> Libvirt virtualization library http://libvirt.org
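On the export question: arrays still cannot pass through the environment. A common workaround (a sketch, not a bash feature; CFG_DEF is an invented name) is to serialize the array definition with 'declare -p' into an exported scalar and rebuild it in the child with eval -- with the usual caveat that eval'ing an environment variable trusts whoever set it.

```shell
#!/usr/bin/env bash
declare -A cfg=([host]=example [port]=80)

# Serialize the whole array definition into one exported string,
# e.g. "declare -A cfg=([host]=\"example\" [port]=\"80\" )".
export CFG_DEF=$(declare -p cfg)

# A child shell re-creates the array by evaluating the definition.
bash -c 'eval "$CFG_DEF"; echo "${cfg[host]}:${cfg[port]}"'
```

This round-trips both indexed and associative arrays, since 'declare -p' records the -a/-A attribute along with the values.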
readline expansion: 'cp $v/TAB' - 'cp \$v/' (no expansion) - why?
my #1 bash gripe is that newer versions do not expand command lines containing '$' in emacs readline editing mode. I used to be able to do:

$ export SRC=../somedir
$ less ${SRC}/TAB

(TAB meaning press the horizontal tab key) and emacs-mode readline would expand, displaying the contents of ../somedir. Now it changes the input string into 'less \$SRC/' and prevents tab expansion as would be done without use of any variables. Would anyone know how to restore the old behavior with bash 4.2.2 + readline 6.2 (linux ubuntu 12.04)?

Thanks & Regards, Jason
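For later readers of this thread: the fix that eventually shipped made the old behavior optional. Bash 4.2 patch 29 (bash42-029) introduced the 'direxpand' shopt, off by default, which restores expand-the-variable filename completion -- a sketch:

```shell
# Restore pre-4.2 readline behavior: expand $SRC (and other words) to
# their values during filename completion instead of escaping the '$'.
shopt -s direxpand

# With direxpand enabled, typing:  less ${SRC}/<TAB>
# rewrites the line to:            less ../somedir/  and completes inside it.
```

Putting the shopt line in ~/.bashrc makes it the default for interactive shells.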
Re: readline expansion: 'cp $v/TAB' - 'cp \$v/' (no expansion) - why?
hi Eric - thanks for your response, but bash-4.2.29(2), just built from latest patches from gnu on x86_64 ubuntu, does NOT restore the old behavior, but now refuses to tab-expand at all - 'cat $SRC/TAB' on the command line still produces no output. Is there some option I need to give to enable it? Thanks!

On 5 Jul 2012 13:24, Eric Blake <ebl...@redhat.com> wrote:
> On 07/05/2012 04:58 AM, Jason Vas Dias wrote:
>> Now it changes the input string into 'less \$SRC/' and prevents tab
>> expansion as would be done without use of any variables. Would anyone
>> know how to restore the old behavior with bash 4.2.2 + readline 6.2
>> (linux ubuntu 12.04)?
>
> Yep - upgrade to bash 4.2.29 (that is, patchlevel 2 is missing the fix
> in patchlevel 29 that restores the behavior you want).
> ftp://ftp.gnu.org/pub/gnu/bash/bash-4.2-patches/bash42-029
>
> This topic comes up frequently on this list.
> --
> Eric Blake <ebl...@redhat.com> +1-919-301-3266
> Libvirt virtualization library http://libvirt.org