Re: With DEBUG trap, resizing window crashes script

2023-05-25 Thread L A Walsh

On 2023/05/10 13:13, Eduardo Bustamante wrote:

If you wish for the current shell to continue running after a terminal
resize, then set the signal disposition for SIGWINCH to ignore.
  


---
You can also display the new size:

alias my=declare
showsize () {\
 my s=$(stty size); s="(${s// /x})"  ;\
 printf "%s" "$s${s//?/$'\b'}"   ;\
}
trap showsize SIGWINCH
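
(Per the quoted advice, ignoring the signal instead is a one-liner:)

trap '' SIGWINCH    # empty action == ignore the signal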




Re: [sorry for dups] Re: why difference between interactive+script doing same thing?

2023-03-28 Thread L A Walsh

On 2023/03/27 16:52, Greg Wooledge wrote:


Each function has its own private set of positional parameters ("$@" array)
independent of the main script's "$@".  If you want the function to see
a copy of the script's arguments, you need to pass "$@" to it.
  

---
Yeah, forgot that.  Fact was in area of self-memory insufficiently redundant
to survive ischemic event 16 months ago.

Changed to:

fnname "$@"

fnname () {
 my -a cmd=("$@")
 ... ssh ... "${cmd[@]}"
}
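
A quick way to see the behavior Greg describes (names here are illustrative):

set -- a b c                            # script's positional params
show () { echo "inside: $# args: $*"; }
show                                    # -> inside: 0 args:
show "$@"                               # -> inside: 3 args: a b c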




Re: [sorry for dups] Re: why difference between interactive+script doing same thing?

2023-03-27 Thread L A Walsh

On 2023/03/27 13:05, L A Walsh wrote:

That "$@" is not going to work the way you want it to in the general
case. 


---
While I got rid of $@ in some test versions, it was back in
a later version, so that may be the main flaw at this point.
Will need time to clean this mess up...




Re: [sorry for dups] Re: why difference between interactive+script doing same thing?

2023-03-27 Thread L A Walsh

On 2023/03/27 13:28, Greg Wooledge wrote:

On Mon, Mar 27, 2023 at 01:05:33PM -0700, L A Walsh wrote:
  

filter_ssh() {
   ign0='ssh: connect to host \w+ port 22: Connection refused'
   ign1='(agent returned different signature type ssh-rsa)'
   ign2='(ssh_exchange_identification: read: Connection reset by peer)'
   ign3='(packet_write_wait: Connection to 192.168.3.12 port 22: Broken pipe)'
   #ign="$ign1|$ign2|$ign3"
   ign="$ign1"
#ssh -n -T "$user@$host" "$@" |& readarray output |&
#grep -Pv "$ign"
   readarray output< <(ssh -n -T "$user@$host" "$@" 2>&1)
   echo "Read ${#output[@]} lines"
   for o in "${output[@]}"; do
   if [[ $o =~ $ign ]]; then continue; fi
   printf "%s" "$o"
   done
}



  

filter_ssh



You're calling filter_ssh with no arguments, but trying to use "$@"
inside it to generate the ssh command.
  
Isn't "$@" still valid?  Originally I didn't have func-filterssh, it was 
inline.

I put it in a func to allow further debugging -- but you are right,
it's messed up.

You're also using dots inside a regex without backslashing them.  And
I'm not sure about the parentheses -- can't tell whether you meant those
to be literal or not.  They don't appear to serve any purpose if they're
not literal (there's no | inside them for example), but you didn't
backslash them either... very confusing.
  
Finally, using \w here is a libc extension and will only work on your

system, not necessarily other systems.  Just FYI.
  


As for the regex, I was originally trying to use grep -P to filter
the lines; when that didn't work, the filtering got migrated inside the function.

Note, this hacked version only searches for the 1st ignore msg, so the one
with the dots isn't included -- the parens were for grouping only, not capturing.


The '|' was to make a union of all the msgs, but it is commented out
to "simplify" things.  Also note, the \w was from an attempt to use grep -P.
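
For reference, a sketch of a portable rewrite of the first pattern, using a
POSIX bracket expression in place of the perl-ish \w (which the ERE engine
behind [[ =~ ]] doesn't portably support):

ign0='ssh: connect to host [[:alnum:]_.-]+ port 22: Connection refused'
msg='ssh: connect to host box1 port 22: Connection refused'
[[ $msg =~ $ign0 ]] && echo matched     # -> matched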

Will have to look over and see what simplifications are causing what --
that's why I went with a separate test script, apart from the script I had
tried to simplify -- since my problem occurs in the separate test script
apart from all the problems you point out in the original src script.

FWIW -- of the multiple copies of that note that got sent to the list, try
to only look at the latest date-stamp; since my post didn't appear, I examined
it and tried to fix some of the problems I saw... including hard-coding the
command sent to ssh (so $@ isn't the problem).





oh, there they are! :-( (was: Re: missing emails posted to list)

2023-03-27 Thread L A Walsh

On 2023/03/26 10:40, L A Walsh wrote:
This is mostly a test to see if this makes it through to the list.
Something else I tried to post recently never showed up, and ...
  

... (sigh)
All showed up at once, once the config was fixed...*sigh*




[sorry for dups] Re: why difference between interactive+script doing same thing?

2023-03-27 Thread L A Walsh

On 2023/03/27 12:39, Greg Wooledge wrote:

You aren't showing the actual commands that the script is running, so
we have no way to verify that whatever the script is doing is identical
to what you were doing interactively.

Also:

  

   readarray output< <(ssh -n -T "$user@$host" "$@" 2>&1)



That "$@" is not going to work the way you want it to in the general
case. 


   Well, I'm first trying to get the specific case to work.  Not doing
anything other than running scripts on the remote machine, like 
"bin/sfc_check".


Script follows(tnx):

#!/bin/bash -u
# gvim=:SetNumberAndWidth
cd ~
# run a command 'on_host' as user
set -o pipefail
shopt -s expand_aliases
PS4='>${BASH_SOURCE:+${BASH_SOURCE/$HOME/\~}}#${LINENO}${FUNCNAME:+(${FUNCNAME})}> '

alias my='declare ' int='my -i '  array='my -a '



host_resolvable() {
   if dig $host|grep -P 'IN\sA\s\d' >& /dev/null; then
   return 0
   fi
   return 1
}


host_up () {
   if (($# < 1)); then
   echo "Internal usage error: host_up needs hostname" >&2
   exit 1
   fi
   my host="$1"
   int stat=0
   int dflt_chks=3
   int num_chks=dflt_chks
   if (($#>1)); then
   num_chks=$2
   # if num checks not reasonable, silently set to default value
   if ((num_chks<1)); then num_chks=dflt_chks; fi
   fi
   while ((num_chks-- >=1)); do
   if ping -c 1 -w $((1+dflt_chks-num_chks)) $host >& /dev/null; then
   rm -f "$sf"
   return 0
   fi
   done
   if [[ ! -f $sf ]]; then
   echo $(date +%s) > "$sf"
   echo "ping $host failed.  Is it up?"
   return 1
   fi
}


filter_ssh() {
   ign0='ssh: connect to host \w+ port 22: Connection refused'
   ign1='(agent returned different signature type ssh-rsa)'
   ign2='(ssh_exchange_identification: read: Connection reset by peer)'
   ign3='(packet_write_wait: Connection to 192.168.3.12 port 22: Broken pipe)'

   #ign="$ign1|$ign2|$ign3"
   ign="$ign1"
#ssh -n -T "$user@$host" "$@" |& readarray output |&
#grep -Pv "$ign"
   readarray output< <(ssh -n -T "$user@$host" "$@" 2>&1)
   echo "Read ${#output[@]} lines"
   for o in "${output[@]}"; do
   if [[ $o =~ $ign ]]; then continue; fi
   printf "%s" "$o"
   done
}


check_bins() {
   for i in "${needed_bins[@]}"; do
   alias $i="$(type -P "$i")"
   if ((${#BASH_ALIASES[$i]}==0)); then
   printf >&2 "%d: error: $i not found\n" $LINENO
   exit 1
   fi
   done
}



array needed_bins=(dig grep ping rm ssh date)
check_bins

export HOME=/Users/law.Bliss

my user='Bliss\law'
my host=""

if [[ $0 =~ on_host ]]; then
   host=$1
   echo >&2 "Using host $host"
   shift
elif [[ $0 =~ on_(a-zA-Z0-9) ]]; then
   host=${BASH_REMATCH[1]}
fi

my sf=~law/.on_$host

host_resolvable || exit 1

if ! host_up $host; then
   printf "%d: host $host, not up\n" $LINENO
   echo "host $host, not up"
   exit 1
else
   printf "%d: host $host, up\n" $LINENO
fi

my -a output

filter_ssh

my -p output





why difference between interactive+script doing same thing?

2023-03-27 Thread L A Walsh

Don't know that this is a bug -- there is likely some reason why there's
a  difference in interactive vs. script execution. Certainly is annoying!



I'm trying to develop a script to help me run commands on a remote
system. Seems obvious -- it is ssh based, but for me ssh generates
1 warning message 'reliably' that I want to filter out.

Thus the infrastructure.

Thing is, -- the code to read output from ssh, stops after the
error message if the script (or ssh|filter) is running automatically
(in a script).  When I run the same commands interactively, I get the
full output.

The error that gets filtered (in an array):

declare -a output=([0]=$'warning: agent returned different signature type ssh-rsa (expected rsa-sha2-512)\r\n')


The salient part is

   readarray output< <(ssh -n -T "$user@$host" "$@" 2>&1)
   echo "Read ${#output[@]} lines"
   for o in "${output[@]}"; do
   if [[ $o =~ $ign ]]; then continue; fi
   printf "%s" "$o"
   done

Interactively, I get the error followed by the expected output, but in script,
I get 1 line -- the error message.  Example:

interactive:

readarray out< <(ssh -n -T "$user@$host" 'printf "1\n2\n3\n4\n5\n"' 2>&1); printf "%s" "${out[@]}"
warning: agent returned different signature type ssh-rsa (expected rsa-sha2-512)

1
2
3
4
5

In script:

on_host athenae 'printf "1\n2\n3\n4\n5\n"' 
Using host athenae

103: host athenae, up
Read 1 lines
declare -a output=([0]=$'warning: agent returned different signature type ssh-rsa (expected rsa-sha2-512)\r\n')


(but no 1,2,3,4,5 output)

Why would this work interactively, but not in script?

I can post the entire script, but didn't want to go overboard if possible.

Thanks.



why difference between interactive+script doing same thing?

2023-03-27 Thread L A Walsh

Don't know that this is a bug -- there maybe some reason why there's
a  difference in interactive vs. script execution...certainly isn't helpful
in trying to develop a script though.


I'm trying to develop a script to help me run commands on a remote
system. Seems obvious -- it is ssh based, but for me ssh generates
1 warning message 'reliably' that I want to filter out.

Thus the infrastructure.

Thing is, -- the code to read output from ssh, stops after the
error message if the script (or ssh|filter) is running automatically
(in a script).  When I run the same commands interactively, I get the
full output.

The error that gets filtered (in an array):

declare -a output=([0]=$'warning: agent returned different signature type ssh-rsa (expected rsa-sha2-512)\r\n')


The salient part is

   readarray output< <(ssh -n -T "$user@$host" "$@" 2>&1)
   echo "Read ${#output[@]} lines"
   for o in "${output[@]}"; do
   if [[ $o =~ $ign ]]; then continue; fi
   printf "%s" "$o"
   done

Interactively, I get the error followed by the expected output, but in script,
I get 1 line -- the error message.  Example:

interactive:

readarray out< <(ssh -n -T "$user@$host" 'printf "1\n2\n3\n4\n5\n"' 2>&1); printf "%s" "${out[@]}"
warning: agent returned different signature type ssh-rsa (expected rsa-sha2-512)

1
2
3
4
5

In script:

on_host athenae 'printf "1\n2\n3\n4\n5\n"' 
Using host athenae

103: host athenae, up
Read 1 lines
declare -a output=([0]=$'warning: agent returned different signature type ssh-rsa (expected rsa-sha2-512)\r\n')


(but no 1,2,3,4,5 output)

Why would this work interactively, but not in script?

I can post the entire script, but didn't want to go overboard if possible.

Thanks.




missing emails posted to list

2023-03-27 Thread L A Walsh
This is mostly a test to see if this makes it through to the list.
Something else I tried to post recently never showed up, and I didn't get
back an error message.  Also didn't show in bash list archives.

Subject line of missing email:
 Why difference between interactive+script doing same thing?


Strange.




Re: curiosity: 'typeset -xr' vs. 'export -r'

2022-12-12 Thread L A Walsh




On 2022/12/11 20:47, Lawrence Velázquez wrote:

This happens because "declare"/"typeset" creates local variables
within functions.  Using -g works around this...

$ Export() { declare -gx "$@"; }
$ Export -r foo=1
$ declare -p foo
declare -rx foo="1"

...but now "Export" always creates global variables, rather than
scoping as "declare" and your alias-based version does.  On the
other hand, "export" also creates global variables, so in a sense
the workaround version is more consistent.

$ f() { export "$@"; }
$ f var=1
$ declare -p var
declare -x var="1"
  


I see, but you still can't use "-r" w/export.  I think the -r flag would
get dropped in any shell it was exported to anyway; in that case, one might
expect an error from "typeset -xr" along the lines of
"-r flag won't be exported".

Side curious: If one uses -g, does it cause the var to be defined
in all intervening functions as well as the top(global)
and current scope?
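
A quick check suggests -g assigns at the global scope only; an intervening
function's local still shadows it:

$ g() { declare -g v=global; }
$ f() { local v=local; g; echo "in f: $v"; }
$ f
in f: local
$ echo "top: $v"
top: global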

NOTE: The original question was about allowing "-r" with export:

$ f() { export "$@"; }
$ f -r var=1
export: -r: invalid option
export: usage: export [-fn] [name[=value] ...] or export -p

Seems like it would get rid of an unnecessary error message
and maybe an inconsistency with "typeset -xr".


Thanks for the info on using -g in the function.  I haven't
used -g too much since one of my machines still used bash-v3 and
I don't think -g appeared until 4.x (don't quote me on that though).








curiosity: 'typeset -xr' vs. 'export -r'

2022-12-11 Thread L A Walsh

 This is mostly a 'nit', but I noticed I had
   "typeset -xr"
 in one of my scripts to mean export+read-only and
 was wondering why
   "export -r"
 was disallowed (err message):

bash: export: -r: invalid option
export: usage: export [-fn] [name[=value] ...] or export -p

This seems to be an unnecessary "make-wrong", no?  I.e.
would it cause some syntactic or semantic problem in bash,
if it were allowed?

I suppose one could create an alias (despite advice that
functions are "better" -- in this case a function doesn't work).
I'm using ':;' for PS1, so cut/paste works:

 PS1=':; '

:; Export () {
:;   typeset -x "$@"
:; }
:; Export -r foo_xr=1

:; typeset -p foo_xr
-bash: typeset: foo_xr: not found

#   vs. alias implementation:

:; alias Export='typeset -x '
:; Export -r al_foo_xr=1

:; typeset -p al_foo_xr
declare -rx al_foo_xr="1"

Please forgive the noise if this has already been addressed as my bash
is not fully up to date.  Thanks!
-linda




Re: Handling files with CRLF line ending

2022-12-06 Thread L A Walsh

On 2022/12/06 10:57, Chris Elvidge wrote:

Yair, how about using the Python installed in the WSL instance.
  

---
   Oh, I wondered why Python used CRLF, but nothing else did.

   What version of python are you using?  The Python for WSL,
the python for cygwin, or the python for Windows?  If you are
using python for Windows, I'd *sorta* expect it to use CRLF, but
would expect WSL or Cygwin versions to use just 'LF'.  Similarly w/bash --
I haven't tested it, but I'd expect bash compiled for windows
(using mingw toolchain) to use CRLF, but LF for WSL or Cygwin.

Are you using both tools for the same OS and subsys and having
them conflict?





Re: Parallelization of shell scripts for 'configure' etc.

2022-06-18 Thread L A Walsh

On 2022/06/13 15:39, Paul Eggert wrote:
In many Gnu projects, the 'configure' script is the biggest barrier to 
building because it takes so long to run. Is there some way that we 
could improve its performance without completely reengineering it, by 
improving Bash so that it can parallelize 'configure' scripts?
  


I don't know what type of instrumentation you've done over configure,
but before investing much time in optimization, it might be interesting
to know where most of the time is being spent.

I.e. CPU or I/O -- and what types of I/O: actual test I/O or executable
load time.


The reason I say that is that having run configure for the same projects
on linux and on cygwin -- I note that the cygwin version is MUCH slower
doing the same work on the same machine. 


A big slowdown in cygwin is loading & starting a new executable/binary.
I.e. loading 100 programs 10x each will take disproportionately more time
on cygwin due to its exec-load penalty (since windows has no fork, all of
the memory-space duplication, and later copies-on-write, has to be done
manually in cygwin -- very painful).  But note that one of the big boosts
in shell scripts can come from using a batching or parallel option of a
util vs. feeding in file/pathnames one at a time, like
'find -exec rm {} \;' does -- see the sketch below.
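
E.g. (GNU find/xargs assumed for the last line; the '+' form is POSIX):

find . -name '*.o' -exec rm {} \;       # one fork+exec of rm per file
find . -name '*.o' -exec rm {} +        # rm gets many paths per invocation
find . -name '*.o' -print0 | xargs -0 -P"$(nproc)" rm   # batched and parallel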

Similarly, a big speed up in configure might be to use the bundled version
of coreutils (all binaries in 1 image, invoked via different command names),
and put that in the same binary as bash, perhaps via a loadable command,
with any following coreutil calls being routed "in-binary" to the
already-loaded version.  Of course it would likely not be trivial assuring
all the commands can be re-invoked, so they get their necessary
initializations redone on each "in-image" launch, but keeping all the
coreutil binaries in memory would, I think, be a big win even if it wasn't
multi-threaded.

Of course it might be of benefit if the various utils were all thread safe,
so a more powerful dispatcher could use multi-threading w/o worries, but
just eliminating most of the redundant util-loads might be a huge win by
itself.  That's sorta why I was wondering how much perf-profiling had been
done on configure...

Anyway -- just some random thoughts...


For ideas about this, please see PaSh-JIT:

Kallas K, Mustafa T, Bielak J, Karnikis D, Dang THY, Greenberg M, 
Vasilakis N. Practically correct, just-in-time shell script 
parallelization. Proc OSDI 22. July 2022. 
https://nikos.vasilak.is/p/pash:osdi:2022.pdf


I've wanted something like this for *years* (I assigned a simpler 
version to my undergraduates but of course it was too much to expect 
them to implement it) and I hope some sort of parallelization like this 
can get into production with Bash at some point (or some other shell if 
Bash can't use this idea).


  




Re: parameter expansion null check fails for arrays when [*] or [@] is used

2022-03-23 Thread L A Walsh




On 2022/03/23 09:49, Chet Ramey wrote:

On 3/23/22 7:56 AM, Robert Elz wrote:

  

You might not like the terminology, but it is what it is,
and you don't get to arbitrarily redefine it, unless you
change your name to Humpty Dumpty.



Bonus points for the "Through the Looking Glass" reference.

  

Alice will be just fine...





Re: parameter expansion null check fails for arrays when [*] or [@] is used

2022-03-23 Thread L A Walsh




On 2022/03/23 00:25, Ilkka Virta wrote:



The POSIX phraseology is that "null" means the empty string.


   POSIX phraseology applies to the POSIX language.

In 'C'

char *s = NULL;
is not the same as
char *s = "";

They aren't the same at the machine level nor at the language level.

In perl:
my $s = undef;
is not the same as
my $s = "";

Off hand, I don't know of any computer language that
equates them (doesn't mean there is none).

undef = unset; null may be set but empty.
ANSI defines '\0' as NUL.  Is that the same as null?





Re: for loop over parameter expansion of array can miss resulted empty list

2022-03-22 Thread L A Walsh




On 2022/03/22 14:04, Lawrence Velázquez wrote:

On Tue, Mar 22, 2022, at 4:53 PM, L A Walsh wrote:
  

On 2022/03/21 03:40, Alex fxmbsw7 Ratchev wrote:


i solve this by shopt -s nullglob
  
  

Repeat-By:
   Code: x=("/"); for i in "${x[@]%/}"; do echo "i is '$i'"; done
   Result: none
   Expected result: i is ''



if you have nullglob set, then that is not the correct result.



The original bug report said nothing about nullglob.
  


Sorry, I got sidetracked.  In this case it wouldn't matter: if you have no
directories where you are running this, then nothing will match, and it
will be a null (empty) expression.  The 'for i in X [Y [Z]]' statement will
execute once for each non-null value after the 'in'.  If there are no
expressions, then it won't execute.  Thus there should be no output.

If you wanted to gather up dir names in the current directory, I'd
use something like:


readarray -t -d '' dirs< <(find . -maxdepth 1 -type d -print0)
dirs=("${dirs[@]#./}")

Sorry for my getting sidetracked (it happens a lot).








Re: for loop over parameter expansion of array can miss resulted empty list

2022-03-22 Thread L A Walsh

On 2022/03/22 13:53, L A Walsh wrote:

On 2022/03/21 03:40, Alex fxmbsw7 Ratchev wrote:
  

i solve this by shopt -s nullglob
  


Repeat-By:
   Code: x=("/"); for i in "${x[@]%/}"; do echo "i is '$i'"; done
   Result: none
   Expected result: i is ''
  

BTW -- try adding "-u" on your bash line.
Then you'll see what is really null vs. containing a ''.
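
E.g. (the empty-array case changed in bash-4.4, which stopped treating an
empty array as unbound):

$ bash -uc 'x=(""); echo "<${x[@]%/}>"'
<>
$ bash -uc 'x=(); echo "<${x[@]%/}>"'
bash: x[@]: unbound variable     # bash <= 4.3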




Re: parameter expansion null check fails for arrays when [*] or [@] is used

2022-03-22 Thread L A Walsh

On 2022/03/21 05:45, Andreas Luik wrote:

Description:
Bash fails to correctly test for parameter to be unset or null when the 
parameter is an array reference [*] or [@].

Repeat-By:

myvar[0]=
echo "${myvar[0]:+nonnull}"
 -> OK
echo "${myvar[*]:+nonnull}"
nonnull -> not OK, because "${myvar[*]}" is null
  

myvar[*] = ('')  -- element 0 contains an empty string, so not null.


echo "${myvar[@]:+nonnull}"
nonnull -> likewise not OK, because "${myvar[@]}" is null
  

myvar[@] = ''  (displays all elements, each quoted).  Since
element 0 contains an empty string, it is quoted and isn't null.




  




Re: Corrupted multibyte characters in command substitutions fixes may be worse than problem.

2022-02-06 Thread L A Walsh





On 2022/02/06 09:26, Frank Heckenbach wrote:

On 2022/01/02 17:43, Frank Heckenbach wrote:



Why would you? Aren't you able to assess the severity of a bug
yourself? Silent data corruption is certainly one of the most severe
kind of bugs ...
  

---
That's debatable, BTW, as I was reminded of a similar
passthrough of what one might call 'invalid input' w/o warning,



I think you misunderstood the bug. It was not about passing through
invalid input or fixing it. It was about bash corrupting valid input
(if an internal buffer boundary happened to fall within a UTF-8
sequence)
  


I see that the cause of the bug you reported was due to entirely different
circumstances.  The question I might have is: if bash is returning input,
should bash scan that input for validity?  For example, suppose bash read
these values from a 'read' (spaces separating bytes):
1) first case is relatively clear:
   read (len=2):  0x41 0x31
   returned:      A1
2) read (len=4):  0x41 0x31 0x00 0x00
   returned:      ???  A1 or nothing?  error or warning message?

In the case of bash with an environment having LC_CTYPE C.UTF-8 or
en_US.UTF-8:

read: 0xC3 (len=1), i.e. Ã ('A' w/tilde in a legacy 8-bit latin-compatible
charset), but invalid if bash processes the environment setting of
en_US.UTF-8.

Should bash process it as legacy input or invalid UTF-8?  Either way, what
should it return?  A UTF-8 char (hex 0xC3 0x83) transcoded from the latin
value of A-tilde, or keep the binary value the same (return 0xC3)?  Should
it return a warning message, and if it does, should it return NUL for the
returned value because the input was erroneous?

I.e. should bash try to scan input for validity?  Should it treat such
input as a legacy ANSI/8-bit charset, or should it try to decode legacy
inputs into Unicode when the environment indicates it should be using
Unicode values?  On decode errors, should it issue a warning, and if so,
should it return the original unencoded value, NUL, or a decoded Unicode
value?

If bash is returning a value corrupted by a memory overlap (overlapping
stack values), should it be testing the returned value for validity
(especially if the environment suggests it should be returning Unicode
values)?

I.e. if there was corruption -- either from reading a NUL unexpectedly or
from incorrectly encoded Unicode values -- and warnings were "on", the
corruption might be noticed.  Even if noticed, what should bash return?  A
binary DWORD value that makes no sense as a string, either ASCII or
Unicode, like 0x00 0x41 0x00 0xC1 -- maybe an attempt at 'AÁ' in UTF-16 on
Windows -- which is where my original bug occurred: reading a registry
value that could easily be UTF-16 encoded, with the user shell running
under cygwin in a Unicode C.UTF-8 environment.

I.e. bash might be expected to return different results based on the
environment it was running in, the encoding that environment specified, or
whether bash was expecting the reduced ASCII character set.

Depending on what one thinks bash 'should do', and what environment it is
running in, one can get very different results, which is why I balked at
bash issuing warnings in some cases and not others, and at whether it
returned the original binary values or some sanitized version.

At the time, due to the warning being issued, the read 'failed' and a
sanitized version was returned -- both responses preventing reading the
desired value.  If bash detected invalid Unicode sequences, it might help
detect memory-based corruption, or it might sanitize such sequences before
returning them -- either way possibly causing harm, through silence or
through breaking compatibility.

I just thought it might be desirable to be consistent about what is done,
or to have it controlled via an option (be strict and warn, or ignore and
don't warn).

If it's decided to ignore (not test input for validity) and not issue a
warning as the default action, then the warning for null bytes seems like
it should be removed -- consistent with bash not testing read input for
validity.







Re: Corrupted multibyte characters in command substitutions fixes may be worse than problem.

2022-02-05 Thread L A Walsh

On 2022/01/02 17:43, Frank Heckenbach wrote:

Chet Ramey wrote:

  

After all, we're talking about silent data corruption, and now I
learn the bug is known for almost a year, the fix is known and still
hasn't been released, not even as an official patch.
  

If you use the number of bug reports as an indication of urgency,



Why would you? Aren't you able to assess the severity of a bug
yourself? Silent data corruption is certainly one of the most severe
kind of bugs ...

---
That's debatable, BTW, as I was reminded of a similar
passthrough of what one might call 'invalid input' w/o warning,
resulting in code that worked in a specific circumstance to a change
in bash issuing a warning that resulted in breaking code, that, at that
point, worked as expected.

Specifically, it involved reading a value typically in the range
50 <= x <= 150 from an active file (like a value from /proc that varies
based on OS-internal values), where the data was stored in a quad, i.e. a
little-endian DWORD, so the value was in the 2 least significant bytes with
the most significant bytes following (in a higher position) in memory, like:
Byte# => 00 01 02 03; for value 100 decimal:
hex   => 64 00 00 00

The working code expected to see 0x64 followed by 0x00 which it
used as string terminator.
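
A minimal sketch of that pattern (the path is made up, and the per-version
behavior is as described in this thread, not re-verified here):

printf '\x64\x00\x00\x00' > /tmp/dpi    # LE DWORD holding 100
read -r dpi < /tmp/dpi                  # older bash: dpi='d' (0x64), NULs dropped
printf '%d\n' "'$dpi"                   # -> 100 (char code of 'd')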

Chet "fixed" this silent use of 0x00 as a string terminator to no longer
ignore it, but have bash issue a warning message, which caused the
"read < fn" to fail and return 0 instead of the ascii character 'd', which
the program had interpret as the DPI value of the user's screen.

It took some debugging and hack-arounds to find another way to access the
data.  So what some might have called silent data corruption -- because
bash silently treated the NUL-terminated datum's NUL as a string
terminator -- my program took as logical behavior.  I complained about the
change, remarking that if bash was going to sanitize returned values (in
that case checking for what should have been an ascii value, with NUL not
being among the allowed string characters), bash might also be saddled with
checking for invalid Unicode sequences and warning about them as well.
Regardless of the source of the corruption, some programs might expect to
get a raw byte sequence rather than some encoded form, with the difference
in interpretation causing noticeable bugs.

For example, the name part of the email address that Chet replied to was
"Ángel", where the first char in an 8-bit Latin code page is "Latin Capital
Letter A with Acute" (0xC1).  While this worked and was passed through as a
binary 0xC1 in perl 5.8.0, it was "fixed" in 5.8.1 and later so the byte is
translated into perl's internal form as U+00C1.  On output, that gets
written as the bytes 0xC1 0x81, which is invalid Unicode (it should have
been 0xC3 0x81); and since it's written at the first byte position, the
error is propagated throughout the rest of the field.

In the 5.8.0 version, perl's non-conversion of the 8-bit latin input
resulted in a working filter.  The fixed version resulted in the widely
touted "perl-unicode bug", which exists to this day (for backwards
compatibility).

So silently returning values as-is, without modifying them, may result in
working code; but modifying the returned values after programs are written
that depend on the literal byte stream can cause a different set of
annoying problems.  In that conversation, the idea of sanitizing UTF-8
input was raised, but as a costly endeavor for existing code.






Re: Incorrect alias expansion within command substitution

2022-02-03 Thread L A Walsh




On 2022/02/03 11:02, Chet Ramey wrote:

On 2/2/22 10:18 PM, Robert Elz wrote:
  

 Date:Wed, 02 Feb 2022 17:18:08 -0800
 From:L A Walsh 
 Message-ID:  <61fb2d50.7010...@tlinx.org>

   | My posix non-conformance issue has to do with bash not starting with
   | aliases enabled by default in all default invocations.

If you're using aliases in scripts, then just stop doing that.
There's no need for it, it just makes your script harder to
follow.  Simply expand any aliases you'd use interactively,
and you will no longer care about this.



There's no problem with using aliases in private scripts you write for your
own purposes.

Going against the POSIX standard in enabling aliases on startup would be no
problem if it was in your private shell for your own purposes...

 If you want to impose some personal style you find easier to
read, so be it. Even advocating for that style is ok, if you don't expect to be 
taken seriously unless you provide evidence of concrete benefits.
  

---
   If you want to impose your personal bash startup options on the general
shell used by default on most systems, so be it.  But please don't
expect to be taken seriously when you arbitrarily disable default
posix options at startup unless you provide evidence of concrete benefits.

   In a similar way, when one does a 'read var < fn' and you decide to add
a warning if 'fn' ended with a binary '0', that would be fine if it only
affected you, but you added the warning claiming it was solving some
problem complained about by some users.  When I asked for concrete evidence
of benefits, or for the large number of users asking for that, I was
attacked.  Yet you ask to be taken seriously when you implement changes
that no one had asked for on bug-bash.


The issue is expecting others to understand or be able to help with scripts
written in that style, or expect them to invest time learning it. That's
just not reasonable.  


If you are claiming it takes them significant time to learn what

shopt -s expand_aliases; alias int='declare -i '

means, how do you expect them to learn shell basics from 'man bash'?

More than one person has commented on the level of clarity of the bash
documentation.


I have yet to find someone who doesn't understand what
'int var=1' means.

Documentation that has confused more than one poster on this list
is hardly a standard one should aspire to.  It is, at best, 'terse' --
versus my alias usage that attempts to improve clarity.






Re: Incorrect alias expansion within command substitution

2022-02-03 Thread L A Walsh




On 2022/02/03 07:02, Alex fxmbsw7 Ratchev wrote:



On Thu, Feb 3, 2022, 04:20 Robert Elz <mailto:k...@munnari.oz.au>> wrote:


Date:Wed, 02 Feb 2022 17:18:08 -0800
From:    L A Walsh mailto:b...@tlinx.org>>
Message-ID:  <61fb2d50.7010...@tlinx.org
<mailto:61fb2d50.7010...@tlinx.org>>

  | My posix non-conformance issue has to do with bash not
starting with
  | aliases enabled by default in all default invocations.

If you're using aliases in scripts, then just stop doing that.



   Uh, No.


u're tralla

There's no need for it, it just makes your script harder to
follow.  


---
   I write scripts for myself.  Aliases make scripts easier to create,
modify and maintain.  You can type 'declare -i x=1'; I prefer 'int x=1'.
I find 'declare -xxyz' harder to write, read and parse than 'my',
'array', 'map'.


We aren't talking 100 aliases, but these:
alias my='declare ' int='my -i ' array='my -a ' map='my -A '

are used in nearly all my scripts.

I use functions frequently I start off many of my scripts:
unalias include >& /dev/null
if [[ $(type -t include) != function ]]; then source ~/bin/include; fi
include stdalias

Even though I have an stdalias.shh file, I rarely use the aliases in it --
except for the ones listed above.  It depends on the script, but having
aliases matching types can help in more complex scripts.  I can define
something with an alias, then check if a parameter matches its 'type'.  My
Types.shh include file even includes a built-in test suite:

includes a builtin-test suite:

 Types.shh

Types v0.2 include file. Usage (in a script):

include Types

Cmdline options:
-h for this help
-t for self-test


 Types.shh -t

test01: Define & Identify Basic Types: PASS
test02: define w/aliases+combos+identify: ...
 0. type(myarray)=array  : PASS
 1. type(mymap)=map  : PASS
 2. type(myint)=int  : PASS
 3. type(int_array)=array Int: PASS
 4. type(int_map)=map Int: PASS
 5. type(exp_int_array)=array Int Export : PASS
 6. type(exp_int_map)=map Int Export : PASS
 7. type(low_array)=array Lower  : PASS
 8. type(up_array)=array Upper   : PASS
 9. type(exp_int_const)=int Const Export : PASS

---
The above routines break down type-defining aliases so I can check that
parameters passed to a subroutine are of the right type.  E.g. if I had
passed something declared with alias 'int_map' to a function, I can test
whether it is a 'map' (hash) with keys that point to integers.

For dual types (like declare -al), I can use
array Lower foo=(...)
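
A rough sketch of how such a check can work, using 'declare -p' output
rather than the (unshown) Types.shh internals, and assuming the my/map
aliases from above are in effect:

typeof () {                     # crude: first matching flag wins
    my rep
    rep=$(declare -p "$1" 2>/dev/null) || { echo none; return 1; }
    case ${rep#declare -} in
        A*) echo map ;;
        a*) echo array ;;
        i*) echo int ;;
        *)  echo string ;;
    esac
}
map m; m[k]=v
typeof m                        # -> map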


Simply expand any aliases you"d use interactively,
and you will no longer care about this.


---
   Except every time I type or read them, which is sorta the point.

There is no way you can tell me that:
declare var='v'
declare -i ivar=1

are more clear than:

my var='v'
int ivar=1




Re: Incorrect alias expansion within command substitution

2022-02-02 Thread L A Walsh




On 2022/02/02 08:50, Chet Ramey wrote:

On 2/2/22 8:25 AM, L A Walsh wrote:

  

I.e. My bash is posix compliant by default w/r/t aliases:



It's not, and that's how this whole issue got started. You're running
bash-4.4. POSIX requires the following to work:

alias switch=case
echo $(switch foo in foo) echo ok 2;; esac )

and it simply doesn't, whether you run in posix mode or not.
  

-
You are right in that I was entirely in left field.  However, w/r/t
starting with aliases enabled by default when bash starts (interactive or
not), I would prefer bash follow posix rules.

While I compile my bash to follow posix rules, I can't quite write my
general scripts to expect that, since stock bash doesn't start that way.

I missed the original problem being talked about here.

My posix non-conformance issue has to do with bash not starting with
aliases enabled by default in all default invocations.

While BASH_ALIASES is inherited I can't specify a set of aliases that I can
expect to just 'work' when bash starts.

For that matter, I can't expect my own maps (arrays with non-integer
indices) to work in child processes.

I've tried to suggest various improvements over the years, and don't
understand the resistance to all the suggestions.

I will admit that my focus is utility and usability rather than security,
ever since the attack on bash function injections, but I would have
suggested using a shared-memory file owned by root to hold a key (checksum
key) for functions and secured variables.  Perhaps not ideal, but, I
believe, workable.


Unfortunately, all of my ideas/works after last Thanksgiving have suffered
from a decrease in mental function due to a nasty stroke that affected my
visual cortex -- affecting both eyes and image processing.  Since I have
been highly visually oriented, many of my memories, and my ability to
visualize my code and even see or read a line at a time, are impaired,
requiring me to read word-by-word, which horribly slows down reading and
virtually eliminates the ability to skim text -- the result being I often
miss entire phrases, even sections.  Apparently, from cat scan and MRIs,
that stroke, though one of the worst, was only 1 of several picked up by
the diagnostics.



So if it looks like I missed something -- I probably did.  I also sometimes
have gaps in a logical chain of thought, because I thought it, but missed
putting it into words.


So I am most sorry for missing key points in arguments, as well as missing
understanding of what to you are obvious points of joining logic.


I will try to continue to increase my due diligence, but will most
assuredly fail at times.


My apologies.
Ms. Linda Walsh
(aka Astara)
(at) tlinx.org




Sadly, this gives some rampant examples to point out my logical flaws and
my missing basic points in a discussion.

I apologize in advance for my many...



Re: Incorrect alias expansion within command substitution

2022-02-02 Thread L A Walsh

On 2022/02/01 07:50, Chet Ramey wrote:



"Historically some shells used simple parenthesis counting to find the
terminating ')' and therefore did not account for aliases. However, such
shells never conformed to POSIX, which has always required recursive
parsing (see XCU 2.3 item 5)."
  


   It doesn't require recursive parsing if the aliases declared
in the parent shell are expanded in the child shell -- i.e. if
bash followed the posix requirement that alias expansion default to
on, then the aliases in the BASH_ALIASES data structure would
automatically be imported into the child shell and expanded, and this
type of complicated parsing problem wouldn't occur.  It seems like the
reason this error is occurring is that someone is trying to work around
bash's incompatibility with the posix standard by not starting with
aliases turned on by default.



So this seems like behavior that should be conditional on posix mode to
preserve backwards compatibility.
  

You mean continue to create workaround(s) for bash's non-conformance w/r/t
initial alias expansion.

Why not nip the problem at the source -- make bash posix-conformant by
starting w/alias expansion turned on?

In the long run it will prove simpler and provide
compatibility with existing posix-compliant shells.





Re: How to display parameters in a function backtrace?

2022-02-02 Thread L A Walsh




On 2022/02/02 06:43, Alex fxmbsw7 Ratchev wrote:

i had gdb new version

---
This is about looking at a backtrace of shell functions in shell

gdb may show a backtrace of shell functions, but gdb isn't
a shell debugger, but one for the source code of bash.

If you look at my backtrace function -- it shows a backtrace of
shell functions, not a backtrace of bash itself.

In that regard, I'm asking how to look at the parameters to a function
written in shell (not parameters to the 'c' functions inside the bash
source.

Sorry if this is/was confusing.






Re: Bash not escaping escape sequences in directory names

2022-02-02 Thread L A Walsh

BTW, thinking about how this problem would arise

On 2022/01/20 22:43, Lawrence Velázquez wrote:

Depends what you consider to be an issue. Personally, I would be
less than pleased if my whole terminal turned red just because I
changed into a directory that happened to have a weird name.
  

Since we are talking about directory names making their way
into PS1 via output of 'cd'.  One might ask why you have a
directory name on your system that would include such escape
sequences.

For directory name used in a prompt to be a problem, you have to
have created that directory or allowed someone else to create that
directory on your system.

Instead of worrying about effects of dirnames on PS1, one might worry
about how the directory names were created in the first place, and then
worry about why one would deliberately 'cd' into such a directory.








How to display parameters in a function backtrace?

2022-02-02 Thread L A Walsh

I was trying to find the parameters to a function that gave an error, so
I 'turned on' (included) my backtracing routine, which showed:

./24bc: line 46: printf: `txt=\nRed,': not a valid identifier
STOPPING execution @ "printf ${_:+-v $_} '\x1b[48;2;%s;%s;%sm' $1 $2 $3" in "setBgColor()" at ./24bc #46

 called in setBackBlack() at ./24bc #56
 called in title() at ./24bc #74
 called in main() at ./24bc #81


I was all, 'rats, it's not showing the params'.  I wondered why, looked at
the code, and realized that while I can backtrace function, source
line+file, I didn't know how to display the params that were used when the
function was called.

Is there, or maybe 'can there', be a BASH var like
FUNCPARMS[1]="(P1 P2 P3)"

Of course it would be handy if bash could parse this w/o using an
intermediate var:

echo "${${FUNCPARMS[@]}[1]}"   =>
  #  (tmp=${FUNCPARMS[@]}   tmp=(P1 P2 P3) )
  #  echo "${tmp[1]}" 
P1


---
But even without the enhanced parsing output, it would still
be useful if bash had & filled in the param vars.

FUNCPARMS would parallel 'FUNCNAME', i.e.
FUNCPARMS[1] would hold the quoted array "()" PARMS for FUNCNAME[1].
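
FWIW, bash already records something close to this when extended debugging
mode is on: under 'shopt -s extdebug' it maintains BASH_ARGC/BASH_ARGV with
each frame's arguments (stored in reverse order within a frame).  A minimal
sketch (not my backtrace code):

#!/bin/bash
shopt -s extdebug               # enables BASH_ARGC / BASH_ARGV bookkeeping

show_args () {
    declare -i i j off=0
    for ((i = 0; i < ${#BASH_ARGC[@]}; i++)); do
        declare out=''
        for ((j = BASH_ARGC[i] - 1; j >= 0; j--)); do
            out+=" ${BASH_ARGV[off + j]}"   # un-reverse this frame's args
        done
        printf '%s(%s )\n' "${FUNCNAME[i]:-main}" "$out"
        ((off += BASH_ARGC[i])) || :        # skip past this frame
    done
}

f () { show_args; }
f P1 P2 P3      # -> show_args( )  f( P1 P2 P3 )  main( )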

FWIW, my primitive backtrace function (source or "include")  follows:


#!/bin/bash -u

shopt -s expand_aliases
alias my='declare ' int='my -i '

backtrace () {
 trap  ERR
 int   level=0
 my    cmd fn src ln
 int   iter=0
 while {
 cmd=$BASH_COMMAND; fn=${FUNCNAME[level+1]:-}
 src=${BASH_SOURCE[level+1]:-} ln=${BASH_LINENO[level]:-}
 }; do
   [[ $fn && $src && $ln ]] && {
 if ((iter++==0)); then
   printf 'STOPPING execution @ "%s" in "%s()" at %s #%s\n' "$cmd" "$fn" "$src" "$ln"

 else
   printf '  called in %s() at %s #%s\n' "$fn"  "$src" "$ln"
 fi
 level+=1
 continue;
   }
   exit 1
 done
}
set -o errtrace
trap backtrace ERR
trap backtrace QUIT
set -E





Re: Incorrect alias expansion within command substitution

2022-02-02 Thread L A Walsh

On 2022/01/31 20:40, Martijn Dekker wrote:

On the latest code from the devel branch:
GNU bash, versie 5.2.0(35)-alpha (x86_64-apple-darwin18.7.0)

Reproducer script:

shopt -s expand_aliases
alias let='let --'
set -x
let '1 == 1'
: $(let '1 == 1')

Output:

+ let -- '1 == 1'
++ let -- -- '1 == 1'
foo: line 5: let: --: syntax error: operand expected (error token is "-")
+ :

The alias is incorrectly expanded in the command substitution, 
duplicating the '--' argument and causing a syntax error.
  


I can't say for sure, but it would be interesting if anyone else
has this result in a bash with aliases on by default:

I.e. My bash is posix compliant by default w/r/t aliases:

 env -i /bin/bash --noprofile --norc

bash-4.4$ shopt -p expand_aliases
shopt -s expand_aliases

and it doesn't show the above error:

bash-4.4$ alias let='let --'
bash-4.4$ set -x
bash-4.4$ let '1 == 1'
+ let -- '1 == 1'
bash-4.4$ : $(let '1 == 1')
++ let -- '1 == 1'
+ :

It may not be the case, but to me it looked like the alias for 'let' had
been disabled in the $() subshell, as per standard bash behavior of
disabling aliases on startup.

I.e. if you configure bash to be posix compliant w/r/t aliases on
shell startup, this seems to fix the above problem.







Re: Bash not escaping escape sequences in directory names

2022-01-26 Thread L A Walsh

On 2022/01/22 12:48, Andreas Kusalananda Kähäri wrote:
The shell even keeps the PS1 variable's value from its inherited 
environment

without sanitizing it.
  




This is a requirement of the unix/posix model: 'fork' creates a new process
that is a new, unfiltered, unsanitized copy of the original.

All memory, not just environment vars, is an exact copy of the parent.


If a child process did not contain an exact copy of its parent after
a fork, it would be a serious problem.

What you are insinuating is a problem -- "the shell even keeps the PS1
variable's value from its inherited environment without sanitizing it" --
is a requirement not just for PS1, but for all memory.

If you don't like that feature, you can move to an entirely different
OS -- like 'NT' (at the base of windows), where new processes are created
by a special OS function that has no automatic importation of anything from
the previous process.  Everything is created anew, with nothing in the new
process automatically inherited from its predecessor.

If a shell or any process didn't inherit its parent's memory, it would be
a serious error.






Re: Bash not escaping escape sequences in directory names

2022-01-22 Thread L A Walsh

On 2022/01/20 22:20, Lawrence Velázquez wrote:

On Fri, Jan 21, 2022, at 12:22 AM, L A Walsh wrote:
  

On 2022/01/18 22:31, Alex fxmbsw7 Ratchev wrote

Fix:  [sanitizing the prompt].  


Sanitizing? What's that?
Especially in a way that won't break existing legal usages?



Curious what "existing legal usages" there are for allowing a change
of working directory to result in arbitrary escape sequences being
sent to your terminal.
  


   Arbitrary?  Are you asking me?  I asked for a definition of "sanitary"
that wouldn't break existing legal usages.  If path->prompt transformations
resulted in "random" escape sequences, I wouldn't find them very useful,
but whether or not my path transformations would fit your definition of
"sanitary" is another matter.

Someone gave an example of crafting a prompt that changed color (say to
red) so as to suggest a root prompt.  Where does anyone get the idea that a
red prompt = a root prompt?  That's a recent _feature_ created by altering
the path prompt.  My pathprompt code turns the path prompt red when it
detects UID==0.  I could just as easily have it turn orange if your current
directory was based in "/sbin".

I also put my tty, username, host and my "spwd" in my tty's titlebar.  That
and the color include tty-specific escape sequences to set color, set the
titlebar and to return from those settings.  Those sequences are specific
to each terminal.


So I would ask which user-controlled prompts are "illegal" such that they
would be sanitized?  The user controls their own prompt.  What
transformations would you disallow that wouldn't trample on some user's
choice of a prompt?

My current prompt *includes* the output of my 'spwd' function (and has for
several years).  It is _included_ in the prompt.  There is code in my
prompt to change its color, change the window title, and include the
hostname among other things:

/etc/local/bash_prompt.sh
#!/bin/bash -u
# vim=:SetNumberAndWidth

shopt -s expand_aliases
alias my='declare ' int='my -i ' array='my -a ' map='my -A '
setx() { trap unsetx EXIT; set -x; } ; unsetx() { set +x;}



# spwd
# - return a shortened path when displayed path
#   would take up > 50% width of the screen
array _als=( "_e=echo -En"  "ret=return" )
alias "${_als[@]}"
#   dpf =  string of print formats to use
#  in printing out path-parts for prompt
#  (eval'd in spwd to make array)
export __dpf__='local -a PF=(
   "/$1/$2/$3/…/\${$[$#-1]}/\${$#}"
   "/$1/$2/…/\${$[$#-1]}/\${$#}"
   "/$1/…/\${$[$#-1]}/\${$#}"
   "/$1/…/\${$#}"
   "…/\${$#}"
   "…" )'

spwd () {  my _f_=""  ;\
 [[ ${_f_:=${-//[^x]/}} ]] && set +$_f_  ;\
 (($#))|| { set "${PWD:=$(echo -En $( \
 eval "{,{,/usr}/bin/}pwd 2>&-||:" ))}"  ;\
 (($#)) || ret 1; }  ;\
 int w=COLUMNS/2 ;\
 ( printf -v _p "%s" "$1" ; export IFS=/ ;\
   set $_p; shift; unset IFS ;\
   t="${_p#$_home_prefix}"   ;\
   int tl=${#t}  ;\
   if (($#<=6 && tl

Re: Bash not escaping escape sequences in directory names

2022-01-20 Thread L A Walsh

On 2022/01/18 22:31, Alex fxmbsw7 Ratchev wrote

Fix:
Haven't looked deeply into the bash internals but sanitizing the directory
name (along with other user-controlled substitutions in the prompt) should
work.


Sanitizing? What's that?
Especially in a way that won't break existing legal usages?





Re: bash conditional expressions

2021-12-25 Thread L A Walsh

On 2021/11/12 01:36, Mischa Baars wrote:

Hi All,

All of my makefiles only compile source files and link object files that
are NEW, as in the modification timestamp is newer than OR EQUAL TO the
access timestamp, such that when I include a new source file into a project
or produce a new object file during compilation of the project, it does not
take more time than required.
  

-
   You realize the updating of the atime field is variable based on file
system mount options, like 'noatime', 'strictatime' 'relatime', 'atime'.

You mention bash's -N flag which compares mtime+atime, but more important
to how that works is how the file system is mounted.

If the defaults changed in your distribution, the behavior of bash's -N
would change as well, completely independent of bash.

If you are running on a file system mounted with 'noatime', bash's -N
would be of little use...  Not sure about the other atime variants.  If
you use 'strictatime' I'd expect it to work as you expect, but there are
performance penalties with that.

Looking forward to a fix!
  


You might want to try 'strictatime' on the file systems where you need
strict behavior, but that does create a performance penalty across the
whole file system.

Better is to use bash's -nt or -ot: those compare the modification times
and don't suffer from 'atime' behavior differences.
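
E.g. a make-like guard that depends only on mtimes (filenames illustrative):

if [[ src.c -nt src.o ]]; then      # true if src.c is newer, or src.o is missing
    cc -c src.c -o src.o
fi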






Re: Zero-length indexed arrays

2021-12-22 Thread L A Walsh






On 2021/12/21 20:07, Greg Wooledge wrote:

On Tue, Dec 21, 2021 at 10:48:07PM -0500, Dale R. Worley wrote:
  

Lawrence Velázquez  writes:


Did you mean to say that ${#FOO[*]} causes an error?  Because
${FOO[*]} does not, a la $*:
  

The case that matters for me is the Bash that ships with "Oracle Linux".
Which turns out to be version 4.2.46(2) from 2011, which is a lot older
than I would expect.  But it *does* cause an error in that verison:

$ ( set -u ; FOO=() ; echo "${FOO[@]}" )
bash: FOO[@]: unbound variable




I would recommend not using set -u.  It's not as bad as set -e, but it's
still pretty bad.  It breaks what would otherwise be valid scripts, and
the breakage is not always easy to predict, as you've now seen.


   Not using '-u' in bash is akin to not using 'strict' in perl: you can
put in misspelled variables and not detect them -- and then wonder why
your scripts don't work.

If you adhere to always requiring a definition of a variable, then using
'-u' is never a problem.  But without '-u', a misspelled variable in a
script goes undetected and shows up only as unintended behavior.
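
(For the empty-array case on older bash, where "${FOO[@]}" itself trips
'-u', the usual guard is the ${arr[@]+...} expansion; a sketch:)

set -u
FOO=()
echo "${FOO[@]+"${FOO[@]}"}"    # expands to nothing instead of erroring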


The key is to always require definitions, which is why I always rely on
alias my='declare '
to shorten my declares.





Re: Why should `break' and `continue' in functions not break loops running outside of the function?

2021-10-30 Thread L A Walsh




On 2021/10/30 09:07, Robert Elz wrote:

oguzismailuy...@gmail.com said:
   | `break' is not a keyword in the shell, but a special command.

That's true.   However, 99%**1 of script writers don't see
it that way; they believe it is just like "if" or
"while" or "done" or "return".

That's why the "This is counterintuitive." I would guess - in most
cases, the issue isn't quite like yours, but more like

  

---
   Something that supports break/continue being dynamically
scoped: the variables used in the loop are dynamically
scoped.  As long as the function can access the local variables,
it is "part" of the loop.

   Bash/shell provides dynamic scoping, by default, for
referenced (but undeclared) variables.  So why wouldn't
someone expect the functions that can change all the variables
in the loop to also be able to modify loop params?
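
A small example of that default dynamic scoping (the names are mine):

 # 'bump' sees and modifies the caller's loop variable, since
 # 'i' is never declared local:
 bump() { ((i+=2)); }
 for ((i=0; i<10; i++)); do
    echo "i=$i"
    bump        # loop now advances by 3 per pass: prints 0 3 6 9
 done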


ps: the NetBSD shell continues to work the way that 
you want, and does so by deliberate choice**2 - our test

suite has a whole stack of tests to make sure this
continues to all work "correctly" (doesn't
accidentally get changed).
  


**1+**2:
So you are saying that NetBSD shell script writers are less
than 1% of script writers?  Is NetBSD's market share that
low?  Just curious.

Regardless -- it points, at least, to it being something
that I would think should be shopt'd if nothing else.

I find the inconsistent application of loop parameters
to be, at least, a wart -- i.e. loop variables are dynamically
propagated to called functions, but loop control "verbs" aren't.


Perl is a bit schizoid in this area:

#!/usr/bin/perl
use warnings; use strict;  # (nonstandard 'use P' swapped for printf)
my $x;
sub foo() {
   if ($x>=2 && $x<4) { next; }
   if ($x==5) { $x=9; }
   if ($x>=11) { last;}
}
for ($x=0;$x<20;++$x){
   printf "b4 foo x=%s\n", $x;
   foo;
   printf "after foo x=%s\n", $x;
}
---
results in dynamically scoped execution with commentary:

b4 foo x=0
after foo x=0
b4 foo x=1
after foo x=1
b4 foo x=2
Exiting subroutine via next at /tmp/lex.pl line 5.
b4 foo x=3
Exiting subroutine via next at /tmp/lex.pl line 5.
b4 foo x=4
after foo x=4
b4 foo x=5
after foo x=9
b4 foo x=10
after foo x=10
b4 foo x=11
Exiting subroutine via last at /tmp/lex.pl line 7.

-
I think a shopt would be more flexible.  Having
loop vars be dynamic, but verbs not, seems inconsistent.




Re: Arbitrary command execution in shell - by design!

2021-10-29 Thread L A Walsh




On 2021/10/29 12:33, Greg Wooledge wrote:

On Fri, Oct 29, 2021 at 11:59:02AM -0700, L A Walsh wrote:
  

How much lameness Chet has introduced into bash to accommodate
the wrong users.



This is quite unfair.  

Huh?  It's true--look at how functions have to be stored in
the environment because someone was able to hack "their
own system" where they already have unrestricted shell access.

If that isn't ugly or lame, what is?  But it was the fault
of taking actions that a hacker could do on a system where
they already had shell access to corrupt their own environment.

If permissions and paths were correctly set to never execute
files owned by that user, I don't see how that exploit
would gain root access and affect anyone other than the user
who was injecting the code into their own function.

Asking how much lameness Chet had to introduce, says nothing
about fault.  Think about functions and how they look in the
environment.  I never thought those things were a mis-design
in bash, but an abuse by people who shouldn't have access to
the system they supposedly could exploit.
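
For reference, this is roughly what an exported function looks like in
the environment (the encoded name below is how post-Shellshock bash
stores it; older bash used the bare function name):

 $ f() { echo hi; }; export -f f
 $ env | grep -A1 BASH_FUNC
 BASH_FUNC_f%%=() {  echo hi
 }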

There was nothing wrong with functions on a system
where the only shell user was one's self.  Giving shell access
to untrusted people is a problem of a specific system's
security "policy". 


I don't think bash functions should have been hacked to try to
disable malformed functions as introduced from the environment.
It may be true that doing so gives a site using bash better
defense from attackers, but functions were not at fault --
since for the malicious user -- they need access to the system
shell to use their exploit.  If you give out shell access in
a stock unix/linux system, it has generally been considered you've
given up a large part of your security.

Maybe if your system has mandatory security features like
FLASK, Windows mandatory integrity labeling, SMAC (linux
security option), where getting root doesn't equal game over,
then it might be possible to allow shell access "in general",
but the unix/linux shell was designed for a time of trusted
users on a relatively closed or private academic system
where the focus was on making things easier, not needing to
post guards at every port, etc.

There have been other similar bugs (that weren't really bugs)
involving symlinks that the hacker could "magically" place
in root's path (right...), or in samba -- where, using linux
extensions on a linux-based file server, one could create
symlinks on a system where samba "widelinks" were enabled
to allow symlinks to be followed across partitions (but only
on partitions where widelinks were enabled, and only if the user
was able to use samba-extended features on a particular
share).


People threw a hissy fit, but a minority of samba users (self
included) didn't see it as a problem since our security policy
gave shell access to users (mainly me & few others) who
had "shares" on the linux-based domain server.  I.e. anything
the user could do with samba they could do in the server's
shell which they could also log into. I use my linux server
as an extension to my desktop, so full access isn't a big deal.

But in Windows, it is far more rare to let users have server-shell
access to the servers their users import shares from.  Their
security policy didn't allow for users having server access, so
that a user could control the links on the server was a problem.

I suggested the name for the feature 'allow user controlled symlinks'
or something similar.  The samba folks eventually re-enabled
the feature under the name "allow-insecure-widelinks"*bleh*.
Most inelegant.  It's that type of inelegance that had to be
inserted into bash to make bash more secure in an unsecure
environment that I refer to.  If someone wants to hack their
own shell, so what?  I'm just not going to let them write
shell scripts for me.






Re: Arbitrary command execution in shell - by design!

2021-10-29 Thread L A Walsh




On 2021/10/29 05:01, Greg Wooledge wrote:

On Fri, Oct 29, 2021 at 12:48:57PM +0300, Ilkka Virta wrote:
  

Not that I'm sure the upper one is still safe against every input. I think 
issues with associative array keys have been
discussed on the list before.


Sadly, yes.  Bash is the exploding barbed wire death match of programming 
languages.  Every single feature of bash is capable of hurting you if you use 
it in the naive or obvious way.
  
Bash is a command-line console language designed to execute commands
locally in the context of the user.  Local user access to a console,
from a security standpoint, is generally equated with root access --
game over as far as being secure.  People have the wrong expectations
if they expect the 'language that allows you all-access to your machine'
to be 'secure' when random users are permitted to use it.

Perl was developed with features that encompass the commands in shell
with facilities (like taint) that help the developer to catch
mis-use of raw-environment (including user) input.

If you need to develop a script for generic-untrusted third parties,
a shell-script of any sort is not a good solution.

Shell is good for non-malicious control of local system events.  As
soon as anyone untrusted can execute a shell script or has
generic shell access, you have lost system security. 


Stop blaming and shaming shell for doing what it was intended to
do, by using it in a hostile-user environment.  If you aren't
smart enough to choose the right language for a given purpose,
then you need to hire a trained programmer who does.  Note --
universities train computer scientists in design -- like collecting
requirements (including security) and making decisions for
further development.  That doesn't mean all graduates are equal.
Experience helps, but experience w/o any grounding in theory,
context, and what has been done before is very often wasted.

Note -- perl isn't a panacea.  It was developed as a tool, but
is, in its later life, being randomly changed to suit the wants
of its current set of developers (some who think documentation is
the same thing as code, and that the perl language should read the
documentation of those who write extensions, modules or packages).
I wouldn't personally recommend using a perl > 5.16.3, since with
5.18 and above, various incompatibilities with older perls were
(and are being) introduced.

Also, note, python isn't a shell-scripting language.  It may be
getting popular, but it was designed more for applications than for
operating system control.  It's more of a learning language -- and
is increasingly being used as such, replacing 'pascal' -- another
teaching language that has historically been used in schools.

But stop thinking bash has security bugs because it lets users
(who have shell access and arbitrary root access to their machine)
"do things".  Because that's what it was designed for.

How much lameness Chet has introduced into bash to accommodate
the wrong users.





Re: Incorrect LINENO with exported nested functions with loops

2021-10-06 Thread L A Walsh




On 2021/10/05 16:25, Tom Coleman wrote

Repeat-By:
Below is a sample script to replicate the bug in LINENO.
  

---
1st prob: script doesn't work as is.

 /tmp/lnno

/tmp/lnno: line 23: syntax error near unexpected token `done'
/tmp/lnno: line 23: ` done'

added line numbers and indenting, and uncommented the 2nd for loop to
prevent the above syntax error:


01 #!/bin/bash
02 # vim=:SetNumberAndWidth
03 setx() { trap unsetx EXIT; set -x; } ; unsetx() { set +x;}
04 export PS4='>${BASH_SOURCE:+${BASH_SOURCE/$HOME/\~}}#${LINENO}${FUNCNAME:+(${FUNCNAME})}> '
05 setup() {
06   DO_BACKUP() {
07 for d in 1 2 3; do
08   break
09 done
10   }
11   export -f DO_BACKUP
12 }
13 run() {
14  for i in 1; do
15#for j in 1; do  # Uncomment this 'j' loop instead of the next and LINENO is correct
16for ((j=1; j<=2; j++)); do  # Uncomment this 'j' loop and LINENO is incorrect
17true
18done
19  done
20 }
21 setx
22 setup
23 run
24
25 # vim  ts=2 sw=2


The LINENO variable for the 'i' loop which is printed out by the
PS4 is incorrect when that DO_BACKUP function is exported, and the second
'j' loop is uncommented.

-like this, right?:

 /tmp/lnno
/tmp/lnno#22> setup
/tmp/lnno#11(setup)> export -f DO_BACKUP
/tmp/lnno#23> run
/tmp/lnno#25(run)> for i in 1
/tmp/lnno#16(run)> (( j=1 ))
/tmp/lnno#16(run)> (( j<=2 ))
/tmp/lnno#17(run)> true
/tmp/lnno#16(run)> (( j++ ))
/tmp/lnno#16(run)> (( j<=2 ))
/tmp/lnno#17(run)> true
/tmp/lnno#16(run)> (( j++ ))
/tmp/lnno#16(run)> (( j<=2 ))
/tmp/lnno#1> unsetx
/tmp/lnno#3(unsetx)> set +x





 If you instead uncomment the first 'j' loop, the
LINENO variables are correct. If DO_BACKUP does not have a loop inside it, the 
LINENO variables are all correct.
  


   So you are saying that using the 1st j loop instead of the 2nd,
then whether or not "DO_BACKUP" has a loop inside, LINENO is correct?
So that is what one would expect, right?

   So the main problem above is the line after
"/tmp/lnno#23> run", where the "for i in 1" is listed as being on
"/tmp/lnno#25" (line 25)?

And if DO_BACKUP is not exported, then all the line numbers are
correct, like:

 /tmp/lnno
/tmp/lnno#23> setup
/tmp/lnno#24> run
/tmp/lnno#14(run)> for i in 1
/tmp/lnno#15(run)> for j in 1
/tmp/lnno#16(run)> DO_BACKUP
/tmp/lnno#7(DO_BACKUP)> for d in 1 2 3
/tmp/lnno#8(DO_BACKUP)> break
/tmp/lnno#18(run)> true
/tmp/lnno#1> unsetx
/tmp/lnno#3(unsetx)> set +x



Yeah, that's interesting, have you tried the latest version
of bash to see if it does the same thing?  I still need to compile it.

If you export functions, they can do weird things with
numbering.

Like at your bash prompt, try:


imafunc() { echo $LINENO; }; export -f imafunc
echo $LINENO; imafunc; echo $LINENO







#!/bin/bash
export PS4='+L$LINENO + '
setup() {
DO_BACKUP() {
  for d in 1 2 3; do
break
  done
}
export -f DO_BACKUP
}
run() {
for i in 1; do
#for j in 1; do  # Uncomment this 'j' loop instead of the next and LINENO is correct
#for ((j=1; j<=2; j++)); do  # Uncomment this 'j' loop and LINENO is incorrect
true
done
done
}
set -x
setup
run



Example output with first 'j' loop uncommented:

$ bash run.sh
+L20 + setup
+L9 + export -f DO_BACKUP
+L21 + run
+L12 + for i in 1
+L13 + for j in 1
+L15 + true

Example output with the second 'j' loop uncommented:

$ bash run.sh
+L20 + setup
+L9 + export -f DO_BACKUP
+L21 + run
+L5 + for i in 1
+L14 + (( j=1 ))
+L14 + (( j<=1 ))
+L15 + true
+L14 + (( j++ ))
+L14 + (( j<=1 ))


Regards,
Tom Coleman
  




Re: ?maybe? RFE?... read -h ?

2021-09-06 Thread L A Walsh




On 2021/09/05 20:54, Lawrence Velázquez wrote:

On Sun, Sep 5, 2021, at 11:11 PM, Dale R. Worley wrote:
  

L A Walsh  writes:


I know how -h can detect a symlink, but I was wondering, is
there a way for bash to know where the symlink points (without
using an external program)?
  

My understanding is that it has been convention to use the "readlink"
program for a very long time, so there's never been much demand to add
it to bash. 


   ???  convention has nearly all of the builtins as local
programs. Since 'read' (or "read -l") isn't a local program, what
are you saying?

 Of course, looking at the options to readlink shows that
there are several different meanings of "where a symlink points".



   Irk! I just wanted the raw data! (sigh), like 'ls -l' gives:

# /bin/ls -l named.pid
... named.pid -> named.d/named.pid
# /bin/ls -l named.d
... named.d -> ../lib/named/var/run/named

Sure I could (and usually do when I need to) parse output of
ls -l, but that's tedious and error prone.

The distribution ships with a "realpath" loadable builtin, FWIW.
  


I didn't know that... um, my bash isn't quite there yet:

Ishtar:/> enable -f /opt/local/lib/bash/realpath realpath

-bash: enable: cannot open shared object /opt/local/lib/bash/realpath: /opt/local/lib/bash/realpath: cannot open shared object file: No such file or directory

Ishtar:/> whence realpath
realpath is /usr/bin/realpath
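
Noting a sketch for later, assuming a typical install layout: the
loadables usually land in the build's pkglibdir (/usr/lib/bash on many
distros), and bash 4.4+ will search BASH_LOADABLES_PATH for the file:

 BASH_LOADABLES_PATH=/usr/lib/bash enable -f realpath realpath  # dir is distro-dependent
 realpath /bin/bash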





?maybe? RFE?... read -h ?

2021-09-04 Thread L A Walsh

I know how -h can detect a symlink, but I was wondering, is
there a way for bash to know where the symlink points (without
using an external program)?

Like if I'm running a script and check if something
is a symlink to a dir that isn't there, is there a way
to read the value of a symlink like a "read -h":

chk_val_bin_bash_lnk () {
 if ! [[ -f /bin/bash ]]; then
   if [[ -h /bin/bash ]]; then
 read -h target /bin/bash
 if [[ $target == /usr/bin/bash ]]; then
   /bin/mount /usr
   return 0
 fi
   fi
 fi
 return 1
}





Re: LINENO is affected by where it is

2021-09-04 Thread L A Walsh




On 2021/09/01 02:36, David Collier wrote:

Version:

GNU bash, version 5.0.3(1)-release (arm-unknown-linux-gnueabihf)

Raspberry Pi using Raspbian.

Installed from repo?

LINENO goes backwards when run sync
  

LINENO isn't the number of lines executed, but is the
line number in the source file it is running in (or the line
in a function, for functions already in memory before you
access it).  Some examples:

Ishtar:/tmp/d> echoln() {
 echo "LINENO=$LINENO"
 }

Ishtar:/tmp/d> echoln
LINENO=1
Ishtar:/tmp/d> echoln
LINENO=1
Ishtar:/tmp/d> echo $LINENO
262
Ishtar:/tmp/d> saveln="val of lineno=$LINENO"
Ishtar:/tmp/d> echo run some stuff
run some stuff
Ishtar:/tmp/d> echo "lineno=$LINENO, not $saveln"
lineno=265, not val of lineno=263



echo "== At this point \$LINENO has correctly counted

---
LINENO doesn't "count"; it is a passive record of where
you used the variable "LINENO" (subject to some restrictions, like
resetting to 1 in functions).


Does that clarify anything?




Re: efficient way to use matched string in variable substitution

2021-08-24 Thread L A Walsh




On 2021/08/24 05:06, Greg Wooledge wrote:

Looks like the efficiency of "read -ra" vs. a shell loop just about makes
up for the system calls used for the here string (f6 and f7 are almost
tied in overall speed, with f6 just a *tiny* bit faster).  Good to know.

  


If you set your TIMEFORMAT -- I put this in my login scripts:
export TIMEFORMAT="%2Rsec %2Uusr %2Ssys (%P%% cpu)"

the 4th field can give a feel for parallelism:

fn1() { /usr/bin/ls -1 /usr/bin >/tmp/tt && wc </tmp/tt ; }
fn2() { /usr/bin/ls -1 /usr/bin  | wc ; }

time fn1 >/dev/null
0.01sec 0.01usr 0.00sys (101.56% cpu)
time fn2 >/dev/null
0.01sec 0.00usr 0.00sys (134.03% cpu)




Re: efficient way to use matched string in variable substitution

2021-08-23 Thread L A Walsh




On 2021/08/23 12:10, Greg Wooledge wrote:

On Mon, Aug 23, 2021 at 11:36:52AM -0700, L A Walsh wrote:
  

Starting with a number N, is there
an easy way to print its digits into an array?



"Easy"?  Or "efficient"?  Your subject header says one, but your body
says the other.
  

Efficient, in my vocabulary, also includes my time in coding, typing
and remembering ... i.e. it's not just limited to computer time. :-)

However, thanks for the examples!  I do appreciate them!

The problem with using timing tests on interpreted code, is that
often what takes 'X' time today, may take more or less tomorrow
after many refactorings, patches and code-restructuring.

That isn't to say I don't use the same methods at times on specific
problems... oh well...

Computer algorithms, coding styles, languages and benchmarks are pretty
fleeting these days...not to mention different on different platforms and
by different compilers...sigh.

I remember counting clock cycles in assembler code...oi!



efficient way to use matched string in variable substitution

2021-08-23 Thread L A Walsh

Starting with a number N, is there
an easy way to print its digits into an array?
I came up with a few ways, but thought this
would be nice (with '\1' or '$1' being what was matched
in the 1st part), this could be statement:

arr=(${N//[0-9]/\1 })
 or
arr=(${N//[0-9]/$1 })

Instead of using loops (my=declare):


 n=988421
 for x in 0 1 2 3 4 5 6 7 8 9;do n=${n//$x/$x }; done
 arr=($n)
 my -p arr

declare -a arr=([0]="9" [1]="8" [2]="8" [3]="4" [4]="2" [5]="1")

or w/substrings:


 for ((d=0; d<${#n};d+=1)); do arr+=(${n:$d:1}); done
 my -p arr

declare -a arr=([0]="9" [1]="8" [2]="8" [3]="4" [4]="2" [5]="1")

Not a big thing, but having some way for the match of an RE
to be specified in the output would be handy...





Re: use-cases promote thinking of limited application

2021-08-22 Thread L A Walsh




On 2021/08/22 19:14, Koichi Murase wrote:

I'd guess Ilkka has asked the use case for this particular output
format, i.e., the quoted fields inside a single word.  If the purpose
is organizing the data, I would naturally expect the result in the
following more useful format in separate words without quoting:


<456 789>

<123 456>
  


   Exactly -- "you" would expect that "some other format" would
better meet the specific use-case - which is often used as a reason
to not implement the specific feature.

   Example (with a different util): with 'ls', I wanted it to
list the filename and file-size (in bytes or scaled with a binary
prefix) in a multi-column format similar to "ls -s" (except that
"ls -s" shows the 'allocated size', not the actual file size).


   To get bytes I tried "--block-size=1" and found it ignored the
user-specified block-size.  I asked for ls to be "upgraded"(fixed) to use
the block-size. (https://debbugs.gnu.org/cgi/bugreport.cgi?bug=49994).
However, they wanted to know my "use-case" and I was informed that
my use-case (of having ls list name+size) was "too rare" to justify
fixing the problem.  Oi!


  

Anyway, in my experience, asking 'why' or for 'use-cases' seems more often
a way to rule out or determine relative importance, but is almost always
an inaccurate way to do so.



I think it is still valid to ask the known/existing use cases when
someone doesn't know the existing use cases, which doesn't rule out
the other use cases.  In particular, I think Ilkka has asked about the
intended use case, i.e., the original motivation of adding this
feature in Bash 5.1.  It doesn't rule out other usages.
  

Perhaps not, but it often rules out a need to address a specific
use-case until others run into similar or more onerous examples
of the problem.





use-cases promote thinking of limited application

2021-08-22 Thread L A Walsh

On 2021/08/19 02:15, Ilkka Virta wrote:

$ declare -A A=([foo bar]="123 456" [adsf]="456 789")
$ printf "<%s>\n" "${A[@]@K}"


Interesting. I wonder, what's the intended use-case for this?
  

---
Does it matter?: Organizing data.

In this case, the data may be organized by pairs.

If you have a list of data:

   (name sally age 38 name joe age 39 name jill age 14 name jane age 13),

one can infer several possible ways to interpret the data.  Some ways
seem very likely, while others are less clear.

   One could assert a pair or duple relation, but one could infer
several other N-tuples with variable sizes of 'N', most likely even.
It's not hard to see quadruples, but it could be that N=8, 16, or
a more complex variation.

To iterate (over the data), one could place all the meaning
in how the 'for' statement is listed.  One could also supply meaning in
how the data is listed.

It's not likely one would have variably-sized tuples but one could.  Though,
ideally, one would have names embedded in the "structure" to help in
interpretation, like:

   ("type" item)   where item may be one "thing", or a [list of things], or
an {association of things} or (another data structure).  Even "type" could
be multiple types of things.

   If you wanted pairs, you  could specify that when you enumerate with
varying degrees of flexibility, like:

for (k, v) in LIST  .. pull 2 members from list at a time
or
for ((a b) (c d)) in LIST... etc.  One could imagine extending such
syntax to provide a variety of data relations -
(PID (path filename) (user group) ...) etc.  If you want to ask for
use-cases, don't expect to come up with a complete list any time soon.

Anyway, in my experience, asking 'why' or for 'use-cases' seems more often
a way to rule out or determine relative importance, but is almost always
an inaccurate way to do so.  Just because someone wants it for purpose
'X', doesn't mean it won't be valuable in 100 other ways.

Think of lightning: it may seem of limited use, but if thought of as
a form of electricity, it might be realized to be more useful than anyone
previously imagined.





Re: An alias named `done` breaks for loops

2021-08-17 Thread L A Walsh




On 2021/08/14 17:05, Kerin Millar wrote:

On Sat, 14 Aug 2021 15:59:38 -0700
George Nachman  wrote:

  
This does not constitute a valid test case for two reasons. Firstly, 
aliases have no effect in scripts unless the expand_aliases shell 
option is set.


1)  I frequently use for loops in an interactive shell to act on arrays.
   Arrays are the easiest way to not have spaces breaking args
i.e.:
2)  I almost always start my scripts with a templated prologue:


 cat ~/bash/template

#!/bin/bash -u
# vim=:SetNumberAndWidth
export PS4='>${BASH_SOURCE:+${BASH_SOURCE/$HOME/\~}}#\
${LINENO}${FUNCNAME:+(${FUNCNAME})}> '
shopt -s expand_aliases
alias my='declare ' int='my -i ' array='my -a ' map='my -A '
setx() { trap unsetx EXIT; set -x; } ; unsetx() { set +x;}

## (prog goes here)
# vim: ts=2 sw=2







Re: bash-5.1.8 does not compile anymore on HP-UX due to invalid shell syntax

2021-08-17 Thread L A Walsh




On 2021/08/17 04:02, Osipov, Michael (LDA IT PLM) wrote:

Folks,

this is basically the same issue as I reported in readline: 
https://lists.gnu.org/archive/html/bug-readline/2021-08/msg0.html


The bad hunk seems not to be POSIX shell compliant.

I think your eyes are fooling you.  I looked at the link below, and it
has both ":+" and "+".
The table on that page has 8 lines after the title line, right?

If you would think of the 8 lines as 4 pairs, like:

lines   pair
1+2     1
3+4     2
5+6     3
7+8     4

The pairs are about 4 related operations.  If you let P = the oPerator,
then the odd lines are about ':P'
and the even lines are about 'P' (no colon).

The Pairs from 1-4 are about the operators: '-', '=', '?', '+'

Pair 4 shows the effects of ':+' and '+'.

Isn't that what you are talking about?


Yeah -- w/o the ':' looks a bit 'off', but it has a separate meaning
and has been around for a long time.

(I first encountered it when porting 1980's era scripts)
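
A quick demonstration of the pair-4 difference, for anyone following
along:

 unset v; echo "${v+set}${v:+nonempty}"     # prints nothing
 v="";    echo "${v+set} ${v:+nonempty}"    # prints "set"
 v=x;     echo "${v+set} ${v:+nonempty}"    # prints "set nonempty"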





should bashdb be included w/bash?

2021-07-24 Thread L A Walsh

Not entirely sure how, but have been running 5.1.8(3)-release which seems
fine...up until I wanted to single step a script...
bashdb ...and saw
/usr/share/bashdb/command/set_sub/dollar0.sh: line 23: enable: dynamic 
loading not available

(/tmp/rearrange_files.sh:4):
4:  shopt -s expand_aliases
bashdb<0>


dynamic loading not available?...poke poke poke...um oh:
bash debugger, bashdb, release 4.4-0.94

doesn't seem to be same as bash...hmm...that's probably not good.

Probably my local distro ...looked for a bashdb package...nope.
oh...it's just out there in the wild, sorta, well on a gnu related
site...similar.

Guess it wasn't ever packaged as a separate thing.

Does it need to be?  (Yeah, maybe?, I don't care, but some might)
But maybe it should be included w/bash (at least as far as source
goes), and become maybe an --enable-bashdb option at build time?

(or --disable-bashdb to include by default?).
Isn't debugging a default in many (most?) languages?
perl, as an interpreted language seems to include perldb.

At least if the option is included @ build time, it isn't so
likely that it would _accidently_ be left out... which would be
useful/convenient IMLBO...(In_My_Lazy_Butt_Opinion) 


Eh?  Doesn't seem like it would be hard... ???





Re: simple prob made into a security tragedy...oh well.... ;^/

2021-07-01 Thread L A Walsh

On 2021/06/29 19:11, Eli Schwartz wrote:


This is a ridiculous argument and you know it. You, personally, are
writing code which does not get used in security contexts, which is your
right. This in no way means that refusing to quote variables which
"cannot be word-split" stops *any* security errors. The "illegal input"
was not related to the security bypass (as Greg points out, removing the
space prevents word splitting and executes the same security bypass code).

Your response should have been:
  


More likely "is", if I needed security I wouldn't likely write in
a script language, but more like with audit, w/Biba integrity and
Bell-LaPadula sensitivity models that we planned to port to linux, I'd have
written it in 'C'.
Trix, or Trusted IRIX, was certified for C2+ under the then-current
Orange Book standard.  It even had a 128-bit luid, which later
implementers changed to a less parallel 'loginuid', mainly for auditing.

I'd been presenting sgi's security plan at the linux security conference in
France, as well as some presentation in London. It seems I was good at
explaining what had been a confusing security model in the place of
my then manager.  I wasn't good at politics, but my manager prided himself
on his bookshelf copy of Machiavelli's, 'The Prince' as having everything
a manager needed to know...  among other things, for him to be able to
put a sensitivity+integrity Policy, 'SMACK' in the linux kernel.


Instead you are arguing in bad faith... 

---
   You are arguing about a 1-liner that took unfiltered output
from locate to search for keywords.  You wanna work that up into
bad faith, good luck.

your code is flawed, it doesn't
correctly handle indexed arrays with spaces in the key and doesn't
forbid them either.
  

What are you talking about?

 njobs() { printf ${1:+-v $1} "%s\n" "$(jobs |wc -l)"; }

I don't see any arrays, let alone indexed.



  

This won't protect against all code injections, of course;

---
   It does in the target environment.  The key is to look at the
security policy requirements and environment before going off and making
assumptions about "faith" that might bounce back when used for
design issues relating to a 1-line search expression.




Re: simple prob?

2021-06-30 Thread L A Walsh

On 2021/06/29 16:51, Greg Wooledge wrote:

On Tue, Jun 29, 2021 at 04:29:05PM -0700, L A Walsh wrote:
  

njobs() { printf ${1:+-v $1} "%s\n" "$(jobs |wc -l)"; }



  

   Which is detected as "illegal input" and disallowed.  If you don't enable
some security errors, they can't be as easily introduced.



Are you *still* insisting that your failure to quote is a SECURITY
FEATURE?

Come *on*!
  


   In this case, not quoting was deliberately intended as variable
names wouldn't need it.  Any security consideration was purely
secondary.  I'm an avid quoter where it is needed, but I no longer
quote for the sake of quoting as I once did.  In a similar manner
I try to not overuse parentheses, just for the sake of it.

   As I stated before, my scripts are most often for myself.  If I needed
security, I'd probably write in a compiled language rather than a
scripting one.

unicorn:~$ njobs() { printf ${1:+-v $1} "%s\n" "$(jobs |wc -l)"; }
unicorn:~$ njobs 'x[0$(date>&2)]'
Tue Jun 29 19:49:16 EDT 2021

All I had to do was remove the space.  You're not even trying.

Your failure to quote is simply a failure.  If you want to prevent
code injection attacks, you need to sanity-check the input.

There is no other way.

  




Re: simple prob?

2021-06-29 Thread L A Walsh

On 2021/06/29 15:49, Greg Wooledge wrote:

On Tue, Jun 29, 2021 at 02:58:28PM -0700, L A Walsh wrote:
  

njobs() { printf ${1:+-v $1} "%s\n" "$(jobs |wc -l)"; }

Using that with your input:

njobs 'x[0$(date >&2)]'

bash: printf: `x[0$(date': not a valid identifier



This is because you didn't quote "$1".


   $1 should never be quoted -- it is an identifier, and as such
cannot contain a space.  By quoting it, you are allowing inputs that
would otherwise be filtered out because they are not valid variable
names.


  Since you only ever tested
the cases where $1 was a valid variable name


   It is only designed to work with $1 being an optional, valid 
variable name.

Anything else should fail.  There are times when putting quotes around a
var will enable problems.

, you never ran into that particular result... until now.
  


   I never ran into "invalid input" because I didn't program it to
accept anything other than a variable name there.


As you can see, the unquoted $1 underwent word splitting, so you're
effectively running printf -v 'x[0$(date' '>&2)]' '%s\n' "...".
  


   Which is detected as "illegal input" and disallowed.  If you don't
enable some security errors, they can't be as easily introduced.  I
assert that you should never put quotes around something that is
supposed to be a variable name, since valid variable names cannot be
word-split.  Many of the items on your website about bash cautions are
because bash disallows some sketchy constructs.

That's not a bash caveat, but a bash feature!



This won't protect against all code injections, of course; only the
ones that contain a whitespace character.
  


   Nothing protects against "all" 'X' or "everything", especially when
talking about security.  Security is best handled as a layered
approach, with different layers protecting against different things.
There's no such thing as a 'Security Magic Bullet'.  Just because
NOTHING protects against "ALL" [anything] is not a reason to take
extreme action.  In fact, realizing that NOTHING is perfect is one of
the better defenses against over-reacting in response to supposed
security flaws.







Re: simple prob?

2021-06-29 Thread L A Walsh

On 2021/06/29 14:02, Greg Wooledge wrote:

declare, printf -v, local -n, eval -- they're mostly equivalent. Some
of them may prevent *some* possible code injections, but none of them
prevent *all* possible code injections.

unicorn:~$ njobs2() { printf -v "$1" %s 42; }
unicorn:~$ njobs2 'x[0$(date >&2)]'
Tue Jun 29 17:00:29 EDT 2021

  

That's not what I see in my version:

njobs() { printf ${1:+-v $1} "%s\n" "$(jobs |wc -l)"; }

Using that with your input:

njobs 'x[0$(date >&2)]'

bash: printf: `x[0$(date': not a valid identifier

Perhaps some solutions provide more resistance to problems than
others.

FWIW, I would be using 'njobs' in a script where I'm giving it
the input. 


No matter which one of these you choose, you still have to sanity-check
the input.  Or else declare that you do not care if the user shoots their
own foot off 

The user has no access to internal functions inside a script. Though I
do take many precautions against my future self(ves).






Re: simple prob?

2021-06-29 Thread L A Walsh

On 2021/06/29 13:35, Greg Wooledge wrote:

unicorn:~$ njobs() { local _n=$(jobs | wc -l); eval "$1=\$_n"; }

---
   ARG...I thought about that and rejected it because I
thought the "jobs|wc -l" would be in a sub-shell and not pick up
the background jobs!  Double arg, this works as well:
sjobs() { local j; read j < <(jobs|wc -l);
printf ${1:+-v $1} "%s\n" "$j"; }


About the only thing that doesn't work are variations on
jobs|wc-l|read n -- since if read is in current process jobs doesn't
pick up the children, but if jobs is in current process, then read
isn't, and 'n' is lost.

Of course I was focusing on -s/-u lastpipe to provide some variation
but that was the only variation that would seem to be guaranteed not
to work.  Oi!

Now you just need to add sanity-checking on the argument of njobs, to
avoid whatever code injection the malicious caller wants to perform.
  


   That would be 'me', so I'm going to rule out malicious
code injection! :-), but that's also why I used printf to
write the output, the worst that could happen is some varname
is overwritten with the answer, no?


And maybe consider adding "make sure $1 isn't _n" to the list of your
sanity checks, since the caller can't be permitted to use that particular
variable name.
  


Simpler -- don't use a variable:

  njobs() { printf ${1:+-v $1} "%s\n" "$(jobs |wc -l)"; }

---
The use of the builtin expansion off of the parameter has the side
benefit of printing the answer to output if no var is given.

So...hmmm...how is it that jobs picks up the right answer in what
would seem to be a subshell?  Special cased?
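
A minimal check (it is special-cased -- the substitution reports the
parent shell's jobs):

 sleep 99 &
 echo "$(jobs | wc -l)"    # prints 1 -- it sees the parent's job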




unicorn:~$ njobs _n
unicorn:~$ echo "$_n"


(At least it didn't go into an infinite loop!)

  




Re: simple prob?

2021-06-29 Thread L A Walsh

On 2021/06/29 13:35, Eli Schwartz wrote:

Well, if you don't think this is a bug in bash, but something you need
help figuring out, maybe you'd prefer to use the "help-bash" list?
  


   Actually

  - 
The original message was received at Tue, 29 Jun 2021 13:06:34 -0700

The following addresses had permanent fatal errors -

   (reason: 550-Callout verification failed:)
---
I had it backwards...oh well.  I did try there first!





simple prob?

2021-06-29 Thread L A Walsh

I hope a basic question isn't too offtopic.
Say I have some number of jobs running:


 jobs|wc -l

3
---
in a function (have tried shopt -s/-u lastpipe; neither way worked)
njobs() {
jobs |wc -l
}

 njobs

3

Would like to pass a varname to njobs to store the answer in, like:
njobs() {
jobs|wc -l
#magic puts val in $v
printf ${1:+-v $1} "%s\n" "$v"
}

So I can run:


 njobs n

echo "$n"
3

---
How can I put the output into '$v'
*without* using a temporary file?

This seems so basic, yet its eluding me.
Could someone throw me a clue-stick? 


Tnx!

p.s. - a trivial util func producing jobs:

resleep() { alias my="declare " int="my -i "
int n=${1:-3} t=${2:-99}; echo "$n jobs @ ${t}s:"
while ((0 < n--)); do sleep "$t" & done; }







Re: Prefer non-gender specific pronouns

2021-06-07 Thread L A Walsh

On 2021/06/06 04:48, Léa Gris wrote:

Le 06/06/2021 à 11:33, Ilkka Virta écrivait :
In fact, that generic 'they' is so common and accepted, that you just 
used

it yourself
in the part I quoted above.


Either you're acting in bad faith, or you're so confused by your 
gender-neutral delusion that you don't remember that in normal 
people's grammar, "they" is a plural pronoun.

Not in America:

They has been officially recognized as correct by several key bodies 
such as the Associated Press. Similarly, the Chicago Manual of Style now 
notes that the singular "they" is common in informal communication 
(while acknowledging that it has yet to attain the same ubiquity in 
formal spaces).


Merriam-Webster:
a —used with a singular indefinite pronoun antecedent: "No one has to go
if they don't want to."  "Everyone knew where they stood …" — E. L. Doctorow


b —used with a singular antecedent to refer to an unknown or unspecified
person: "An employee with a grievance can file a complaint if they need
to."  "The person who answered the phone said they didn't know where she was."


c —used to refer to a single person whose gender is intentionally not
revealed: "A student was found with a knife and a BB gun in their backpack
Monday, district spokeswoman Renee Murphy confirmed.  The student, whose
name has not been released, will be disciplined according to district
policies, Murphy said.  They also face charges from outside law
enforcement, she said." — Olivia Krauth


d —used to refer to a single person whose gender identity is nonbinary
(see nonbinary sense c): "I knew certain things about … the person I was
interviewing. … They had adopted their gender-neutral name a few years
ago, when they began to consciously identify as nonbinary—that is,
neither male nor female.  They were in their late 20s, working as an
event planner, applying to graduate school." — Amy Harmon


APA endorses the use of “they” as a singular third-person pronoun in the 
seventh edition of the Publication Manual of the American Psychological 
Association. This means it is officially good practice in scholarly 
writing to use the singular “they.”Oct 31, 2019



3 different sources say 'they' is fine for use in a singular context.

How many more authorities do you think it will take before most people
are convinced "they" is the preferred sex-indefinite pronoun in
current, or modern use?

I remember back when I was in grammar school this not being the case,
deary, the way this non-dead language, English, keeps changing.  You
certainly wouldn't find this happening in Latin! ;-)






Re: Prefer non-gender specific pronouns

2021-06-07 Thread L A Walsh

On 2021/06/06 07:19, Alain D D Williams wrote:

The important thing is that there is no intention to oppress/denigrate/...

But it does _suggest_ that the default user is a male.
or, speaking about historical use, that the default user was
male.  The problem comes when someone reads gendered language
often and long enough, it becomes natural in one's speech and
writing to follow that example.




Re: Prefer non-gender specific pronouns

2021-06-05 Thread L A Walsh

On 2021/06/05 08:35, Oğuz wrote:

5 Haziran 2021 Cumartesi tarihinde Vipul Kumar <
kumar+bug-b...@onenetbeyond.org> yazdı:

  

Hi,

Isn't it a good idea to prefer non-gender specific pronoun (like "their"
instead of "his") at following places in the reference manual?




No it's not.
  


  Perhaps you would be more comfortable saying 'her'?





Re: RFE - support option for curses idea of term [tab]size.

2021-04-29 Thread L A Walsh




On 2021/04/26 17:16, Chet Ramey wrote:

On 4/26/21 7:19 PM, L A Walsh wrote:

I'm not clear if the termcap lib has this or not; when the curses
library is in use, it supports the idea of reading and setting
the term [tab]size.


Can't you do this with `stty size' already? 

Setting size: sometimes, depends on the Terminal, but having readline
know about where tabs expand to is only handled through libcurses and
not device driver, which I believe is where stty asserts its effects.





Users can set this with the 'tabs' program included in the
curses package. 


Readline is tab-agnostic, or tab-stop-agnostic, in a sense.  It
performs tab expansion itself during redisplay, and currently uses a
tab stop of 8.  That's not user-settable.

---
   It doesn't always do it correctly, because it doesn't always know
where it is in a line.  As such, it has a bug that would be fixed by
having it really know where it was (from talking with libcurses)
as well as what the tabstops were really set to.

   I.e. it would be more user-friendly if readline considered the knowledge
of the terminal[-emulator] that it is running under when possible.  Taking
pride in doing the wrong thing, that doesn't match the terminal's settings,
shouldn't really be considered a plus or a design goal.

That readline can't be used with a variable font or tabsize seems like
a limitation associated with only relating to computers through a fixed-size
window w/fixed-sized characters & fixed tabstops.

[It] Probably isn't going to disappear overnight, but I'm not sure
relating to a computer in fixed-sized display units is likely to remain
a fixed property of human<->computer interactions...






RFE - support option for curses idea of term size.

2021-04-26 Thread L A Walsh

I'm not clear if the termcap lib has this or not; when the curses
library is in use, it supports the idea of reading and setting
the term [tab]size.

Users can set this with the 'tabs' program included in the
curses package.  Besides supporting  X/Open standards for
tabs for some specific languages, it also supports setting
tabstops to regular sizes and/or clearing them.
A sample manpage: 
https://www.ibm.com/docs/en/zos/2.2.0?topic=descriptions-tabs-set-tab-stops


It can set or clear them on arbitrary terms supported in the
curses library & database.

Very few program writers use an 8-column tab for indentation --
something that was meant to be a default size in lieu of
actually setting the tabstops to the values needed for a given
application.
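
For example (assuming the ncurses 'tabs' utility is installed):

 tabs -4          # set a tab stop every 4 columns
 tabs 1,5,9,17    # or set explicit stops
 tabs -8          # restore the traditional default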









Re: Changing the way bash expands associative array subscripts

2021-04-13 Thread L A Walsh

On 2021/04/06 08:52, Greg Wooledge wrote:

In that case, I have no qualms about proposing that unset 'a[@]' and
unset 'a[*]' be changed to remove only the array element whose key is
'@' or '*', respectively, and screw backward compatibility.  The current
behavior is pointless and is nothing but a surprise landmine.  Anyone
relying on the current behavior will just have to change their script.
  


   So echo ${a[@]} = expansion of all, but
unset a[@] would only delete the 1 element w/key '@'.
How do I echo the 1 element with key '@'?

Creating arbitrary definitions of behavior for similar syntax
seems like a sign of random feature-ism.





Re: incorrect character handling

2021-04-06 Thread L A Walsh

On 2021/03/30 13:54, Lawrence Velázquez wrote:

Further reading:
https://mywiki.wooledge.org/BashPitfalls#echo_.22Hello_World.21.22
  

---
   I find that disabling history expansion via '!' at bash-build
time is the ideal solution, since someone preferring 'csh' would
likely still be using csh or something compatible -- and bash isn't
generally recognized as being csh-compatible, but rather posix-sh
compatible.







Re: SIGCHLD traps shouldn't recurse

2021-04-06 Thread L A Walsh




On 2021/04/06 00:23, Oğuz wrote:
On Monday, 5 April 2021, L A Walsh wrote:


On 2021/04/03 00:41, Oğuz wrote:

but I don't think it's useful at all because the number of pending
traps keeps piling up, and there is no way to reset that number. If
there is no real use case for recursive SIGCHLD traps (which I can't
think of any), I think this should change; no SIGCHLD trap should be
queued while a SIGCHLD trap is already in progress.


They have to be queued.  Else how do you process their ending?  It is
something that takes very little time.  The parent is given the chance
to collect child accounting info and the child's status.  How long do
you think that should take?  Sigchld handlers do bookkeeping, not
significant computation.  Are you asking what happens if
the application is misdesigned from the beginning to need
more CPU than is available at a child's death?  If that happens, then
the application needs to be fixed.





Re: SIGCHLD traps shouldn't recurse

2021-04-05 Thread L A Walsh

On 2021/04/03 00:41, Oğuz wrote:

 but I
don't think it's useful at all because the number of pending traps keeps
piling up, and there is no way to reset that number. If there is no real
use case for recursive SIGCHLD traps (which I can't think of any), I think
this should change; no SIGCHLD trap should be queued while a SIGCHLD trap
is already in progress.
  
  
   So what happens if a child dies while you are servicing the previous
child?

From the perl ipc page:

sub REAPER {
  my $child;
  while (($waitedpid = waitpid(-1, WNOHANG)) > 0) {
  logmsg "reaped $waitedpid" . ($? ? " with exit $?" : "");
  }
  $SIG{CHLD} = \&REAPER;
}
---
   The while loop grabs finished task stats before returning.


With this change:

diff --git a/trap.c b/trap.c
index dd1c9a56..5ce6ab4f 100644
--- a/trap.c
+++ b/trap.c
@@ -643,6 +643,8 @@ void
 queue_sigchld_trap (nchild)
      int nchild;
 {
+  if (sigmodes[SIGCHLD] & SIG_INPROGRESS)
+    return;
   if (nchild > 0)
     {
       catch_flag = 1;

bash behaves this way:

$ trap uname chld
$ uname -sr
Linux 5.4.0-66-generic
Linux
$ uname -sr
Linux 5.4.0-66-generic
Linux
$

and I think this is what average user would expect. Whether there's a
better fix is beyond me though.
  


   Looks like your uname is called twice, whereas some langs (perl)
tried to auto-cleanup such things.






Re: Defaults -- any

2021-03-30 Thread L A Walsh

On 2021/03/29 20:04, Greg Wooledge wrote:

On Mon, Mar 29, 2021 at 07:25:53PM -0700, L A Walsh wrote:
  


   I have both /etc/profile and /etc/bashrc call my configuration
scripts.  Are there common paths that don't call one of those?



A vanilla bash compiled from GNU sources with no modifications will
not source /etc/bash.bashrc or /etc/bashrc or any other such file.
  



   So this manpage text is wrong (focusing on paths in /etc):

   When  bash is invoked as an interactive login shell, or as a non-inter-
   active shell with the --login option, it first reads and executes  com-
   mands  from  the file /etc/profile.  Please note that the file
   /etc/profile includes an autodetection shell code w(h)ether it has to
   source /etc/bash.bashrc.

   When  bash is invoked as an interactive login shell, or as a non-inter-
   active shell with the --login option, it first reads and executes  com-
   mands  from  the file /etc/profile, if that file exists.
   Please note that the file /etc/profile includes an autodetection
   shell  code w(h)ether  it  has  to source /etc/bash.bashrc.



The SYS_BASHRC feature is off by default, and must be enabled at
compile time.  Many Linux distributions enable it, but there are
surely systems with bash installed which have not enabled it.  So
that's one thing.
  


   Not really pertinent how my distro compiles bash, as it's on my
short list of programs I compile myself.



a graphical Display Manager does not read /etc/profile.
  

---
   SGI took pains to have the graphical display manager read the login
scripts.  I think suse does as well.  I log in with a Windows GUI (it
doesn't read *nix login scripts).

   I require a specific environment to be set up in my shells or things
don't function the way I want.

   This is true whether I log in as root or as a normal user.




So that's the second thing.  Now imagine a system with a vanilla bash,
and a Display Manager login.
  


   Are you sure about that vanilla thing?  In my shell script that calls
configure, my enable/disable/with and without items are put into arrays
for later expansion.  Just looking at those:

if [[ $pkg == bash ]]; then
 declare -a enable=( alias arith-for-command array-variables
   brace-expansion casemod-attributes casemod-expansions command-timing
   cond-command cond-regexp coprocesses debugger directory-stack
   disabled-builtins dparen-arithmetic extended-glob
   extended-glob-default glob-asciiranges-default help-builtin
   history job-control multibyte net-redirections 
   process-substitution progcomp prompt-string-decoding readline

   select single-help-strings static-link )

 declare -a disable=(nls rpath )

 # declare -a with=( curses pic gnu-malloc ltdl-lib=/usr/lib64)
  
 declare -a with=( gnu-ld )


 declare -a without=(bash-malloc)
else ## readline


Which item is responsible for enabling processing of
/etc/bash.bashrc and /etc/profile?  I'm pretty sure I don't know
which item it is...(sorry).  FWIW, I know net-redirections on linux
is wishful thinking.
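
For reference, a sketch of what I think controls it: the
/etc/bash.bashrc path isn't a ./configure flag at all, but a define in
config-top.h that distros turn on themselves:

 # in config-top.h of the vanilla source, commented out by default:
 #    /* #define SYS_BASHRC "/etc/bash.bashrc" */
 # distros that read /etc/bash.bashrc uncomment (or patch) it before building.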

Thanks!




Re: Default PS1

2021-03-29 Thread L A Walsh

On 2021/03/29 14:39, Greg Wooledge wrote:

On Mon, Mar 29, 2021 at 01:49:41PM -0700, L A Walsh wrote:
  

Or, what do you mean by 'default'?  Is it sufficient
to set it in the system /etc/profile so it is the default
for all users when logging in?



Sadly, that won't work.  There are plenty of *extremely* common paths
from power-on to shell which do not read /etc/profile.
  


   I have both /etc/profile and /etc/bashrc call my configuration
scripts.  Are there common paths that don't call one of those?

  

  





Re: zsh style associative array assignment bug

2021-03-29 Thread L A Walsh

On 2021/03/28 11:02, Eric Cook wrote:

On 3/28/21 7:02 AM, Oğuz wrote:
  

As it should be. `[bar]' doesn't qualify as an assignment without an
equals sign; the shell thinks you're mixing two forms of associative
array assignment there.

In the new form, that a key is listed inside a compound assignment alone
implies that it was meant to be assigned a value. In my mind, `a=(foo
123 bar)' translates to `a=([foo]=123 [bar]=)'. It makes sense.



That is the point that I am making: in typeset -A ary=([key]=) an
explicit empty string is the value, but in the case of typeset -A
ary=([key]) it was historically an error. So why should a key without
a value now be acceptable?
  

---
   Maybe historically it was an error and now it is fixed?
   As for perl, if you turn on warnings, the 3 element assignment
is treated as a warning:

perl -we 'my %hash=(qw(a b c));'
Odd number of elements in hash assignment at -e line 2.

   But errors usually prevent execution, so you have no choice about
whether to allow an odd number of elements.  However, if it is
not an error, you can check the input to verify that there are an even
number of elements and handle it as you wish. 


Also, note, an assignment with nothing after the '=' sign is valid
syntax in bash.  So why should it fail here?





Re: Default PS1

2021-03-29 Thread L A Walsh

On 2021/03/29 04:04, ილია ჩაჩანიძე wrote:

How can I set default PS1 variable from source code?
  

---
   What do you mean, "from source code"?

E.g I want it to display:
My-linux-distro $
And not:
Bash-5.1 $
  

---
Does the procedure documented in the bash man page not work?

Or, what do you mean by 'default'?  Is it sufficient
to set it in the system /etc/profile so it is the default
for all users when logging in?
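
If "default" means the compiled-in fallback, a sketch of where that
lives (from a look at the bash source tree):

 # the built-in prompts are defines in config-top.h:
 #    #define PPROMPT "\\s-\\v\\$ "     /* the "bash-5.1$ " default */
 #    #define SPROMPT "> "
 # change PPROMPT there and rebuild to change the compiled-in default.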






Re: Wanted: quoted param expansion that expands to nothing if no params

2021-03-24 Thread L A Walsh

On 2021/03/23 21:41, Lawrence Velázquez wrote:

On Mar 23, 2021, at 11:43 PM, Eli Schwartz  wrote:

It's not clear to me how you expect this to differ from the existing
behavior of "$@" or "${arr[@]}", which already expands to nothing
rather than an actual "" parameter.


The original message does recall the behavior of the earliest Bourne
shells [1][2], but that is surely not relevant here, given the use
of ((...)). Right? RIGHT???
  


   Hmmm...Now that I try to show an example, I'm not getting
the same results.  Grrr.  Darn Heisenbugs.

*sigh*




Wanted: quoted param expansion that expands to nothing if no params

2021-03-23 Thread L A Walsh

Too often I end up having to write something like
if (($#)); then cmd "$@"
else cmd   # cmd = a function or executable call
fi

It would be nice to have an expansion that preserves arg boundaries
but that expands to nothing when there are 0 parameters
(because whatever gets called still sees "" as a parameter)

So, example, something like:

$~ == "$@"  # for 1 or more params
$~ == no params when 0 params,  # so for the above if/else/endif
one could just use 1 line:

   cmd $~

My examples used ~, as I didn't think it was used anywhere.

I can't control how called programs will handle / deal with a
present, but empty parameter, which is why I thought something that
expands to nothing in the empty case would seem ideal.

Anyone else have a trivial solution for this problem?
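
For the archives: the old Bourne-shell workaround for exactly this,
with 'cmd' standing in for whatever gets called:

 cmd ${1+"$@"}   # all args, or nothing at all when $# is 0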





Re: Changing the way bash expands associative array subscripts

2021-03-15 Thread L A Walsh

On 2021/03/15 17:12, Chet Ramey wrote:
I'm kicking around a change 
This means that, given the following script,


declare -A a
key='$(echo foo)'
a[$key]=1
a['$key']=2
a["foo"]=3

What do folks think?
  

---
   Looks like a flexible way to deal with some of the side effects
of the double-dequoting.  While the above might not look straightforward
to those used to the current behavior, it's easier to explain
and gives the variety needed to expand on keys for each of the
situations. 


   That said, I'd want to see those examples in the manpage.
FWIW -- I seem to remember that the manpage could use some more
simple examples in a few places.


Chet

  




Re: is it normal that set -x unset commands dont display special chars in the content

2021-02-28 Thread L A Walsh

On 2021/02/28 14:13, Chet Ramey wrote:

On 2/27/21 6:14 AM, Alex fxmbsw7 Ratchev wrote:
  


These code fragments have nothing to do with each other. Why not include
a self-contained example that includes relevant `stuff' in what you're
passing to `unset'?
  

cuz he's trollin us?




Re: building 5.1.3 -- some probs...

2021-02-24 Thread L A Walsh




On 2021/02/23 14:10, Chet Ramey wrote:

On 2/22/21 10:09 PM, L A Walsh wrote:
  

export _home_prefix=${HOME%/*}/



I can't reproduce it, though I'm sure this is the line where it
crashes for you. What is HOME set to?
  

HOME=/home/law
so  _home_prefix will be '/home'

You are thinking it died there because the trace
stopped there?

Such an innocent looking statement...



Re: building 5.1.3 -- some probs...

2021-02-22 Thread L A Walsh

(Doi!) Forgot it was executing initial rc scripts.
Turned on -x since last statement seems pretty mundane.
Also 6 statements after where it claimed it crashed, is
a custom function for printing pwd for the prompt.

I've tried with different compile ops (optim vs. dbg).
with builtin readline and included readline
with bash-malloc set, and unset..

This isn't blocking anything, so no time pressure.
Thanks! & let me know if you want anything else...


locale has:

LANG=
LC_CTYPE=C.UTF-8
LC_NUMERIC=C
LC_TIME=C
LC_COLLATE=C
LC_MONETARY=C
LC_MESSAGES=C
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=
---

On 2021/02/22 18:22, Chet Ramey wrote:

You could start with the actual command that's dumping core (something in
one of your startup files) and your locale. A backtrace from the core dump
would help, too, but not as much.

attach is output from bash -x .. last line displayed in that
was line#288.
Starting from line 288, about 4 statements down are
the defines for a "short"-pwd (spwd)
for my prompt. 


It might have been in that function -- as it seems it is the
only thing of any complexity. I skipped blank lines and some
comments.  Line number is line after the line# comment.

#line 285:
export LC_COLLATE=C
export MOUNT_PRINT_SOURCE=1
# line 288:
if [[ ! ${qUSER:-""} ]]; then
 printf -v qUSER "%q" $USER
 export qUSER
fi
# line 293:
export _home_prefix=${HOME%/*}/

# return a shortened path when displayed path would
# take up > 50% width of the screen
# line 306:
declare -a _als=( "_e=echo -En"  "ret=return" )
alias "${_als[@]}"
export __dpf__='local -a PF=(
   "/$1/$2/$3/…/\${$[$#-1]}/\${$#}"
   "/$1/$2/…/\${$[$#-1]}/\${$#}"
   "/$1/…/\${$[$#-1]}/\${$#}"
   "/$1/…/\${$#}"
   "…/\${$#}"
   "…" )'
line 314:
spwd () {  my _f_=""  ;\
 [[ ${_f_:=${-//[^x]/}} ]] && set +$_f_  ;\
 (($#))|| { set "${PWD:=$(echo -En $( \
 eval "{,{,/usr}/bin/}pwd 2>&-||:" ))}"  ;\
 (($#)) || ret 1; }  ;\
 int w=COLUMNS/2 ;\
 ( printf -v _p "%s" "$1" ; export IFS=/ ;\
   set $_p; shift; unset IFS ;\
   t="${_p#$_home_prefix}"   ;\
   int tl=${#t}  ;\
   if (($#<=6 && tl

gdb ./bash core
Reading symbols from ./bash...done.
[New LWP 42448]
Core was generated by `./bash -x'.
Program terminated with signal SIGABRT, Aborted.
#0  0x003000238d8b in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: zypper install glibc-debuginfo-2.29-1.2.x86_64

(gdb) where
#0  0x003000238d8b in raise () from /lib64/libc.so.6
#1  0x003000222612 in abort () from /lib64/libc.so.6
#2  0x00451933 in programming_error ()
#3  0x004e8d93 in xbotch (mem=mem@entry=0x54e460, e=e@entry=2,
    s=s@entry=0x507308 "free: called with unallocated block argument",
    file=file@entry=0x4ef0b0 "subst.c", line=line@entry=4751) at malloc.c:376
#4  0x004e8f1a in internal_free (mem=0x54e460,
    file=0x4ef0b0 "subst.c", line=4751, flags=flags@entry=1) at malloc.c:974
#5  0x004e9b58 in sh_free (mem=<optimized out>, file=<optimized out>,
    line=<optimized out>) at malloc.c:1380
#6  0x0049e3d5 in sh_xfree ()
#7  0x00467829 in remove_pattern ()
#8  0x00468c8a in parameter_brace_remove_pattern ()
#9  0x00471b4e in parameter_brace_expand ()
#10 0x004726fa in param_expand ()
#11 0x00473a71 in expand_word_internal ()
#12 0x00476fa2 in shell_expand_word_list ()
#13 0x00477292 in expand_word_list_internal ()
#14 0x004760d5 in expand_words ()
#15 0x00445d15 in execute_simple_command ()
#16 0x0043f850 in execute_command_internal ()
#17 0x004a4f54 in parse_and_execute (string=<optimized out>,
   from_file=from_file@entry=0x5f7390 "/etc/local/aliases.sh",
   flags=flags@entry=20) at evalstring.c:489
#18 0x004a4306 in _evalfile (
   filename=0x5f7390 "/etc/local/aliases.sh", flags=14) at evalfile.c:285
#19 0x004a441d in source_file (
   filename=filename@entry=0x5f7390 "/etc/local/aliases.sh",
   sflags=sflags@entry=0) at evalfile.c:380
#20 0x004aebf7 in source_builtin (list=0x5f80b0) at ./source.def:195
#21 0x00446b1e in execute_builtin ()
#22 0x00447a1e in execute_builtin_or_function ()
#23 0x00446420 in execute_simple_command ()
#24 0x0043f850 in execute_command_internal ()
#25 0x0043ecda in execute_command ()
#26 0x00442aeb in execute_connection ()
#27 0x0043fc92 in execute_command_internal ()
#28 0x0043ecda in execute_command ()
#29 0x00444791 in execute_if_command ()
---Type <return> to continue, or q <return> to quit---
#30 0x0043fb8f in execute_command_internal ()
#31 0x004a4f54 in parse_and_execute (string=<optimized out>,
   from_file=from_file@entry=0x5fd150 "/etc/local/bashrc.sh",
   flags=flags@entry=20) at evalstring.c:489
#32 

building 5.1.3 -- some probs...

2021-02-22 Thread L A Walsh

I'm trying to build bash 5.1.3, and at first
I tried w/bash-malloc, but got:
/bash-5.1> ./bash

malloc: subst.c:4751: assertion botched
free: called with unallocated block argument
Aborting...Aborted (core dumped)
---

Another prob which seems a bit odd -- more than once, on the
first build after a 'make clean' + rerun of configure, then
doing a 'make -j 6', I've gotten a:
/usr/bin/gcc  -DPROGRAM='"bash"' -DCONF_HOSTTYPE='"x86_64"' 
-DCONF_OSTYPE='"linux-gnu"' -DCONF_MACHTYPE='"x86_64-pc-linux-gnu"' 
-DCONF_VENDOR='"pc"' -DLOCALEDIR='"//share/locale"' -DPACKAGE='"bash"' 
-DSHELL -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib-fpic 
-march=native -pipe -fpic -march=native -pipe -fpic -march=native -pipe 
-fpic -march=native -pipe -fpic -march=native -pipe -fpic -march=native 
-pipe -fpic -march=native -pipe -fpic -march=native -pipe -flto 
-Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb 
-flto -Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 
-ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed 
-Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb -flto 
-Wl,--no-as-needed -Og -g3 -ggdb  -c stringlib.c
bashline.c:65:10: fatal error: builtins/builtext.h: No such file or 
directory

#include "builtins/builtext.h"  /* for read_builtin */
 ^
compilation terminated.
make: *** [Makefile:101: bashline.o] Error 1
make: *** Waiting for unfinished jobs
/usr/lib64/gcc/x86_64-suse-linux/8/../../../../x86_64-suse-linux/bin/ld: 
total time in link: 0.020174

./mkbuiltins -externfile builtext.h -structfile builtins.c \
   -noproduction -D .   ./alias.def ./bind.def ./break.def 
./builtin.def ./caller.def ./cd.def ./colon.def ./command.def 
./declare.def ./echo.def ./enable.def ./eval.def ./getopts.def 
./exec.def ./exit.def ./fc.def ./fg_bg.def ./hash.def ./help.def 
./history.def ./jobs.def ./kill.def ./let.def ./read.def ./return.def 
./set.def ./setattr.def ./shift.def ./source.def ./suspend.def 
./test.def ./times.def ./trap.def ./type.def ./ulimit.def ./umask.def 
./wait.def ./reserved.def ./pushd.def ./shopt.def ./printf.def 
./complete.def ./mapfile.def

make[1]: Leaving directory '/home/tools/bash/bash-5.1/builtins'

However, I then find that if I do a 'make' with no -j, it finally ends w/no
error, but on running I still get a core dump:

bash-5.1> ./bash

malloc: subst.c:4751: assertion botched
free: called with unallocated block argument
Aborting...Aborted (core dumped)

The config I tried:
declare -a enable=(
   alias arith-for-command array-variables
   brace-expansion
   casemod-attributes casemod-expansions command-timing
   cond-command cond-regexp coprocesses
   debugger
   directory-stack disabled-builtins
   dparen-arithmetic
   extended-glob extended-glob-default function-import
   glob-asciiranges-default
   help-builtin history
   job-control
   multibyte
   net-redirections
   process-substitution progcomp prompt-string-decoding
   readline select single-help-strings
)

declare -a disable=( nls rpath )
declare -a with=( gnu-ld installed-readline bash-malloc)
declare -a without=()
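
(Those arrays are presumably fed to configure along these lines -- a
guess at the build wrapper, not part of the report:)

 ./configure \
     "${enable[@]/#/--enable-}"  "${disable[@]/#/--disable-}" \
     "${with[@]/#/--with-}"  "${without[@]/#/--without-}"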


FWIW, I just tried a make clean followed by a plain make (no
parallel), and it did build, but with the same core dump.  Also,
my final 'link' step for bash:
make[1]: Leaving directory '/home/tools/bash/bash-5.1/lib/malloc'
rm -f bash
/usr/bin/gcc -L./builtins -L./lib/readline -L./lib/readline -L./lib/glob 
-L./lib/tilde -L./lib/malloc -L./lib/sh  -fpic -march=native -pipe -fpic 
-march=native -pipe -fpic -march=native -pipe -fpic -march=native -pipe 
-fpic -march=native -pipe -fpic -march=native -pipe -fpic -march=native 
-pipe -fpic -march=native -pipe -flto -Wl,--no-as-needed -Og -g3 -ggdb 
-flto -Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 
-ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed 
-Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb -flto 
-Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb 
-Wl,--default-imported-symver -Wl,--default-symver -Wl,--stats -fpic 
-march=native -pipe -fpic -march=native -pipe -fpic -march=native -pipe 
-fpic -march=native -pipe -fpic -march=native -pipe -fpic -march=native 
-pipe -fpic -march=native -pipe -flto -Wl,--no-as-needed -Og -g3 -ggdb 
-flto -Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 
-ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed 
-Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb -flto 
-Wl,--no-as-needed -Og -g3 -ggdb -Wl,--default-imported-symver 
-Wl,--default-symver -Wl,--stats -fpic -march=native -pipe -fpic 
-march=native -pipe -fpic -march=native -pipe -fpic -march=native -pipe 
-fpic -march=native -pipe -fpic -march=native -pipe -flto 
-Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb 
-flto -Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed -Og -g3 
-ggdb -flto -Wl,--no-as-needed -Og -g3 -ggdb -flto -Wl,--no-as-needed 
-Og -g3 -ggdb 

Re: Are these bash.v.posix diffs still useful? They _seem_ dated...

2021-01-31 Thread L A Walsh




On 2021/01/31 10:54, Chet Ramey wrote:

On 1/30/21 6:50 PM, L A Walsh wrote:

First behavior: How is it beneficial for bash to
store a non-executable in the command-hash?



Probably not very, but it's not all that harmful. The `checkhash' option 
overrides this.
  

---
   Does checkhash do anything other than correct these two
cases where bash doesn't auto-correct an invalid entry in
the hash of commands?


   Perhaps checkhash is no longer necessary either if bash
removes the older functionality.

   Didn't think it was harmful other than being an opportunity
to clean up something no longer needed.  I believe one of the
problems you listed with bash is that it was too big & slow.
Surely eliminating unused/unneeded quirks would be a step,
albeit a small one, in addressing that.

   Or are you arguing to keep cruft and unnecessary complexity
where possible? ;^)





Are these bash.v.posix diffs still useful? They _seem_ dated...

2021-01-30 Thread L A Walsh



Since this page, "https://tiswww.case.edu/php/chet/bash/POSIX", doesn't
seem to be version specific, I'm assuming these are
in the latest bash version.


I don't understand the benefit of the differences involving
hashed-commands and recovery behavior. It seemed like these
behaviors may have served a purpose at one time, but now seem
more likely to create an unnecessary failure case.

First behavior: How is it beneficial for bash to
store a non-executable in the command-hash?

And second, related behavior: Not searching for an alternative
in the PATH if the old hashed value stops working.

Is there a reason why the non-posix behavior should remain
or might it be, perhaps, more desirable for the bash behavior
to match the posix behavior?








Re: . and .. are included where they were excluded before

2021-01-26 Thread L A Walsh

On 2021/01/26 09:08, Chet Ramey wrote:


That's the real question: whether or not `.' should match @(?|.?), even
when dotglob is enabled (and yes, both patterns have to be in there). There
isn't really any other. Since it doesn't match ? when dotglob is enabled,
there's an obvious inconsistency there, and that's what I'll look at.
  


I don't see it as being inconsistent, since with

shopt -u dotglob

the pattern must match the literal 1st byte (dot).  From there we are no
longer looking at default behavior for "." to match '?', but at normal
"glob-RE" rules.

.foo won't be matched by ????, but will be matched by .???

Note, it's not just about . and .. -- it affects all files
starting with 1 or 2 dots.
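
(A quick illustration, in an empty directory and with default options,
so a non-matching glob prints itself:)

 mkdir /tmp/g && cd /tmp/g && touch .foo
 shopt -u dotglob
 echo ????    # no match -- the leading dot must be literal: prints ????
 echo .???    # matches: prints .foo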






Re: Feature Request: scanf-like parsing

2021-01-21 Thread L A Walsh

On 2021/01/21 21:29, William Park wrote:

Since you're dealing with strings, only %s, %c, and
%[] are sufficient.
  

You can't read numbers in sscanf?
_Might_ be nice to be able to read a float as well,
even though it would need to be accessed/stored as a
string.  It would complement the ability to write out a
floating point value using %f from a string.

Why would you do that?  To use the floating point
round of printf to get float-vals to round up.

Eh, would prolly want to scan it in as 2 integers now
that I think about it (before + after the decimal).
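
(A sketch of that two-integer idea with what bash already has; the
variable names are just illustrative:)

 f='3.14'
 if [[ $f =~ ^([0-9]+)\.([0-9]+)$ ]]; then
     int=${BASH_REMATCH[1]} frac=${BASH_REMATCH[2]}   # 3 and 14
 fi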




Re: History -r breaks on FIFOs

2021-01-21 Thread L A Walsh

On 2021/01/21 11:43, Chet Ramey wrote:

On 1/21/21 11:18 AM, Merijn Verstraaten wrote:
  

The history command doesn't seem to work when given a FIFO instead of a file. I was trying to load 
history from FIFO using either 'history -r <(echo "$hist")' or 'echo "$hist" | 
history -r /dev/stdin', but neither of these seem to work, due to the command being unable to handle 
FIFOs.



Correct. The history file reading code requires the history file be a
regular file. This is because the technique it uses to read and process
the file (mmap or read the file in one read into a large buffer, then
move around in the buffer) doesn't work with non-regular files.
  

There are two stages in readline -- being able to read previous
history and then recording to a history file.  Isn't it possible
to load from one place (which would get copied to the 2nd as older history)
and then bash move around to delete dups as needed in the new file?
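
(In the meantime, the usual workaround is to stage the history in a
regular file first -- a sketch, reusing the $hist from the report above:)

 tmp=$(mktemp) &&
     printf '%s\n' "$hist" > "$tmp" &&
     history -r "$tmp"
 rm -f "$tmp"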





Re: non-executable files in $PATH cause errors

2021-01-12 Thread L A Walsh

On 2021/01/09 23:52, n952162 wrote:

Hello,

I consider it a bug that bash (and its hash functionality) includes
non-executable files in its execution look-up

But bash doesn't have an execution lookup.
It has a PATH lookup, and a completion lookup (for executables
when appropriate), but the closest thing to an execution
lookup might be "type -a" to look for how a command is executed.

In the completion lookup and in the execution lookup (type -a),
it only lists executable files. 




 and then (inevitably)
simply reports an error, because such files aren't executable.

But it is not inevitable. Using 'cp' as an example.  Assuming
you have /usr/bin in your PATH, but ~/bin is in your PATH before
/usr/bin, then try:
"touch ~/bin/cp", then
"hash -r" (to clear the hash lookup), then type
"cp", you will find that it returns the
value in /usr/bin, ignoring the non-executable file that was
first in your PATH.  So if an executable is in your PATH, it will
return that in preference to a non-executable.  Only when it can't
find an executable does it return the non-executable.
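
(A minimal demonstration of the above, assuming ~/bin precedes /usr/bin
in PATH:)

 touch ~/bin/cp    # non-executable 'cp' early in PATH
 hash -r           # forget remembered locations
 type -p cp        # prints /usr/bin/cp -- the executable one wins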

As for why this is useful?  Perhaps someone just created a
script 'foo' in "~/bin", but forgot to toggle the execution
bit.  The error then tells them exactly that.

So it only reports the non-executable when there is no other
option -- not 'inevitably', which is useful because it reminds
people they need to toggle the 'x' bit.

Make sense? 







Re: bash doesn't display user-typed characters; can interfere with COPY/PASTE

2020-12-09 Thread L A Walsh




On 2020/12/08 06:07, Chet Ramey wrote:

On 12/7/20 8:02 PM, L A Walsh wrote:

  

The problem is that bash isn't displaying a 'tab' character where
one was typed.  



It's readline and redisplay. Readline expands tabs to spaces using an
internal tab stop of 8. This allows it to be sure of the physical cursor
location, especially when you're doing things like wrapping lines, and
insulates it from varying terminal behavior.
  

*snark* That's nice, why not just display 'X' instead of spaces?  Wouldn't
that also insulate readline from varying terminal behavior? *not really, 
but...*


I'm not sure it is the place of an in-line editor to override terminal
features.


However, readline is an editor, and most editors allow setting the tab
stops (as well as whether or not to use hard tabs or expand them).  If
readline has to "insulate" itself, then, just like vi/vim, have the
tabstop and whether to expand be set in a startup file like .inputrc.
Right now, .inputrc has the facility to define how characters are to be
interpreted; why not put the option to expand w/spaces in there as well,
along with what a tab character expands to (or maps to)?

Bash also overrides existing standards with regard to tabs and wrapping.
It seems that many or most terminals (xterm-compat, linux console-compat,
etc.) don't wrap to the next line when a tab is pressed.  The reasoning
for that was that tab is supposed to skip to the next field on the same
line; wrapping is beyond the scope of what tabbing is for.
  

With many (most?) terminal windows these days, especially
Unicode-enabled ones, the terminal has to read what is on the screen to
be able to read the binary code of whatever is displayed on the screen,
Otherwise, it wouldn't be able to read typed unicode.



This is not relevant to the issue.
  
  
   It was meant to illustrate that terminals are using the binary
representation of the characters typed -- and that arbitrarily changing
that representation (like tabs->spaces) will mess up / corrupt the user's
output stream.





Re: [EXT] Re: Bash 5.1: rl_readline_version = 0x801 (shouldn't it be 0x0801 ?)

2020-12-09 Thread L A Walsh

On 2020/12/08 04:40, Greg Wooledge wrote:

On Tue, Dec 08, 2020 at 09:47:05AM +0100, Andreas Schwab wrote:
  

On Dez 07 2020, Testing Purposes wrote:



From an integer standpoint, I know that 08 (with one leading zero) is the
same as 8.
  

Nope, it is a syntax error.



In a bash math context or in C, 08 is indeed a syntax error, as it is
an octal integer constant with an invalid digit ("8").

In mathematics, 08 is just 8.
  


But the subject says 0x08(01), not 08.





Re: bash doesn't display user-typed characters; can interfere with COPY/PASTE

2020-12-08 Thread L A Walsh

On 2020/12/08 06:28, Greg Wooledge wrote:

The end result is that it's basically impossible to preserve the original
whitespace of your source material across a terminal copy/paste operation.
So don't count on that.
  


   If you use a random terminal to copy/paste, sure, but if you use a
specific product that maintains fidelity, then it's not true.


   Especially nice are term-progs that automatically reflow text *as you
resize* the terminal.  If 100 characters are written to an 80-column
terminal, the line wraps; when you expand the width, you get back the
original white space.  That's why programs that don't preserve what you
wrote are annoying.  And note -- you see the reflow in real-time as you
change dimensions, not just at the end.

   Imagine working on a terminal that only displayed upper case even if
the differentiation was saved when stored.  You can't really see the text
"as it is" when you enter it or re-edit it.  With bash putting something
different on the display than what is really there, you get things like
this (tab chars between each letter):
 echo "a b   c   d   e   f   g   h   
i   j   k   l   m   n   o   p   q   
r   s   t   a   b   c   d   e   f   
g   h   i   j   k   l   m   n   o   
p   q   r   s   t"
a b c d e f g h i j k l m n o p q r s t a b c d e f g h i j k l m n o p 
q r s t


If you re-edit a line with tabs in it that displays like it does in the
bottom line above (tabs every 2 spaces), the re-edited "line" takes up 4
lines.





bash doesn't display user-typed characters; can interfere with COPY/PASTE

2020-12-07 Thread L A Walsh

If I type in (<tab> + <enter> are keypresses):

if [[ '<tab>' == $'\t' ]]; then echo ok; else echo notok; fi<enter>

bash displays:

if [[ ' ' == $'\t' ]]; then echo ok; else echo notok; fi
ok


if I now copy the 'if' line and paste it

if [[ ' ' == $'\t' ]]; then echo ok; else echo notok; fi
notok

if I take the same line from an editor like gvim, it works.
If the test line is in a file, and I use 'cat file' and copy/past the
resulting line, it works.

It is only when bash is displaying the line that it doesn't work.

The problem is that bash isn't displaying a 'tab' character where
one was typed.  With many (most?) terminal windows these days, especially
Unicode-enabled ones, the terminal has to read what is on the screen to
be able to read the binary code of whatever is displayed on the screen;
otherwise, it wouldn't be able to read typed unicode.

Can this be fixed -- maybe with an option in 'shopt' for those who might
be using a non-expanding terminal?  Anyone using an xterm/linux-compatible
terminal should get the expansions from their terminal.


Where this can be even more annoying is if your terminal's response to a tab
is different than that used on old-hardware terminals.

Thanks,
-l









Re: is it a bug

2020-11-16 Thread L A Walsh

On 2020/11/16 11:02, Alex fxmbsw7 Ratchev wrote:

on my way for a new paste

Anytime you start going over multiple lines in an alias, you
need to consider the use of a function, where 'need' would ideally
increase in proportion to the number of lines you are including.

For increased readability, I took out 'aal' being prepended to
every variable except the two usages of 'aal[...]' where I substituted
'foo' for 'aal'.  NOTE: the code seems to use 'foo' (or 'aal') without it
being initialized. 


The main oddity I found was that if (in my version),
   t=""
is on the same line as 'res=()', I get the error about an unexpected ';',
since it doesn't seem to parse the 'for' statement's "for", so the
semicolon after it is unexpected.

For some strange reason, the parsing of the array doesn't break off
at the space, in fact, when the lines are split, the alias seems
to lose some spacing (note 'res=t='):

 executing alias +ax
 res=t=
 alias -- "t="

This seemed to be the minimum difference between a working & non working
case.  Either (for no error):
 an_alias='res=()
   t=""
   for ci in "${!foo[@]}"; do \

or (to reproduce error):
 an_alias='res=() t=""
   for ci in "${!foo[@]}"; do \

error I got was:
./aldef.sh: array assign: line 23: syntax error near unexpected token `;'
./aldef.sh: array assign: line 23: `res=() t=""

It is doing something weird -- I suspect that
alias expansion is expanding the right side as 'text' and not
as code that gets interpreted when the substitution occurs.

Try reserving aliases for where one is needed (something a function
can't do) or where it helps legibility.

Hope this helps... oh, I'm including the version that gives an error.
To see the version that works, split the 't=""' onto the line
below the 'res=()'.

-linda





aldef.sh
Description: Bourne shell script


Re: [ping] declare -c still undocumented.

2020-11-13 Thread L A Walsh

On 2020/11/13 09:01, Chet Ramey wrote:

On 11/12/20 6:19 PM, Léa Gris wrote:
  
declare -c to capitalize first character of string in variable 



Thanks for the reminder. I keep forgetting to turn this off. It's too late
for bash-5.1, but I have it tagged to flip to disabled by default in
config-top.h in bash-5.2.
  

---
   Is it replaced with something else?






Re: find len of array w/name in another var...(bash 4.4.12)

2020-10-20 Thread L A Walsh




On 2020/10/20 01:29, Andreas Kusalananda Kähäri wrote:


In bash 4.3+, I would make your "ar" variable a name reference variable
instead:

$ ar1=(1 2 3 44)
$ declare -n ar=ar1
$ echo "${#ar[@]}"
4

  

Ya, I was trying to use the 'byname' feature for older/wider support...sigh





find len of array w/name in another var...(bash 4.4.12)

2020-10-20 Thread L A Walsh

There's got to be an easier way to do this, but I'm not remembering or
finding it:

First tried the obvious:
declare -a ar1=([0]="1" [1]="2" [2]="3" [3]="44")
an=ar1
echo ${#!an[@]}
-bash: ${#!an[@]}: bad substitution

This works but feels kludgy

an=ar1
eval echo \${#$an[@]}
4


I thought the !name was supposed to take the place
of using $an, but haven't seen a case where !an works where
an points to an array name.
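
(One form that does work without eval or namerefs is putting the
subscript into the variable's value -- a sketch; 'copy' is just
illustrative:)

 an='ar1[@]'
 copy=( "${!an}" )     # indirect expansion of all the elements
 echo "${#copy[@]}"    # 4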

Is there a place in the bash manpage that gives an example of using !name
where name points to an array?

Thanks...
-l





Re: How get last commnda history line number directly

2020-09-10 Thread L A Walsh

On 2020/09/09 11:32, Greg Wooledge wrote:

No. I said PS1='\! ' and that's what I meant. Not some other random
variable.

unicorn:~$ PS1='\! '
502 echo hi
hi
503 
  

===
Thanks for clarifying. That does work for me.  I still had
it stuck in my head as having to do with general variable
usage as you mentioned in one of your earlier responses.

Moderately amusing short story -- when first encountering
question of how to get number of last command, I wrote
a shell script to do it -- way too much work for much
too little benefit.

Never let it be said that I still can't overwhelm
myself with trees in looking for a forest -- though
maybe not for long if one is living on the west coast --
there will be quite a few less of them (trees and forests)
to get in the way unless we get some early winter rains.

Can't believe how yellow the sky has been the past few days
due to the smoke.






Re: How get last commnda history line number directly

2020-09-09 Thread L A Walsh

On 9/8/2020 5:11 AM, Greg Wooledge wrote:

On Sun, Sep 06, 2020 at 01:18:22PM -0700, L A Walsh wrote:
  

as it's pure & directly viable in PS1 env. var.
PS1=`echo \!`
  

---
Doesn't work if you don't have '!' support (I generally found
'!' more often did unwanted things, but I never used csh):



You're talking about csh-style history expansion (set -o histexpand),
right?  That has nothing to do with the expansion of \! within PS1.

I've got histexpand disabled, and PS1='\! ' gives the number just fine.
  

Like this?
# t='\! '
# echo $t
\!
---
As I said, doesn't work if you don't have '!' support. 



Re: How to do if any (if no so a feature request) to do break continue

2020-09-09 Thread L A Walsh

On 9/2/2020 2:31 AM, almahdi wrote:
As break 2 is  disrupting and exiting loop twice, 

How is breaking something not somewhat inherently disrupting?


from current and next outer
scope then how to do if any (or as a feature request) to do
/break continue/
that is to break then immediately continue on reach its next outer scope ?
  

---
   Ick
   What is wrong with break 2?  It breaks out of the loop enclosing the
current loop.  Same with continue 2 -- it continues with the next item
in the loop enclosing the current loop.


   Additionally, for greater clarity, why not allow:

loopA:
  for ((i=0; ++i<7;)); do

loopB:
    while read nstring restline; do
      ...
      if [[ $nstring =~ ^[0]+7$ ]]; then break loopA; fi
    done
  done

# labels would be immediately before a loop that is the target
# of a break/continue
Maybe more clear?
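
For comparison, the same control flow as it has to be written today with
the numeric form:

 for ((i=0; ++i<7;)); do
     while read nstring restline; do
         # ... body ...
         if [[ $nstring =~ ^[0]+7$ ]]; then break 2; fi
     done
 done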

  





Yup!: Re: Incrementing variable=0 with arithmetic expansion causes Return code = 1

2020-09-07 Thread L A Walsh
On 8/28/2020 1:00 AM, Gabriel Winkler wrote:
> # Causes error
> test=0
> ((test++))
> echo $?
> 1
> echo $test
> 1
>   
"((...))" is the "test form" for an integer expression.  That means
it will return true if the value of the interior, "...", of "((...))"
is non-zero and return false if ((... == 0)).  Remember "$?==0" == true
and "$?!=0" == false,

so if "..." is an expression, then:

      ...        is 0        is non-0
    ((...))      is false    is true
      $?         is 1        is 0

deciding that ((non-zero)) = true was consistent with most programmers'
expectations from other languages that 0 was false and non-zero was
true (which is opposite of the final value of "$?").
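
(A quick illustration -- pre-incrementing sidesteps the surprise when
the starting value is 0:)

 test=0
 ((test++)); echo $?   # 1: post-increment evaluated to the OLD value, 0
 ((++test)); echo $?   # 0: pre-increment evaluated to the NEW value, 2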






Re: How get last commnda history line number directly

2020-09-06 Thread L A Walsh

On 2020/09/01 05:32, Greg Wooledge wrote:

On Tue, Sep 01, 2020 at 02:14:33AM -0700, almahdi wrote:


> How to get last commnda history line number?

as it's pure & directly viable in PS1 env. var.
PS1=`echo \!`
  

---
Doesn't work if you don't have '!' support (I generally found
'!' more often did unwanted things, but I never used csh):

> echo thisline
thisline

> x='\!'; echo "${x@P}"
2993

> history 3
2990  0906@125642:echo thisline
2991  0906@125713:x='\!'; echo "${x@P}"
2992  0906@125721:history 3
---
i.e. it was on line 2990, not 2993


unicorn:~$ x='\!'; echo "${x@P}"
590


On 2020/09/01 06:05, Chet Ramey wrote:

On 9/1/20 5:14 AM, almahdi wrote:
  
How to get last commnda history line number 



HISTCMD
   The history number, or index in the history list, of the current
   command.  

---
   Seem to need to subtract 2 from that num (is this right?)
   (in "4.4.12"):

> echo "this is on what history line? ($HISTCMD)"
this is on what history line? (2985)

> echo $(($HISTCMD-2))
2983

> history 3
2983  echo "this is on what history line? ($HISTCMD)"
2984  echo $(($HISTCMD-2))
2985  history 3

Seems to only give the correct line from history by subtracting 2?
Maybe it's different in 5.x?
-l




Re: About speed evaluation

2020-09-06 Thread L A Walsh

On 2020/09/06 05:52, Oğuz wrote:

On Sunday, 6 September 2020, almahdi wrote:

  

How slower find -exec repetitively calling bash -c'' for invoking a
function
inside, compare to pipe repeated




   How I read that (dunno if it is right) was:
"How much slower is 'find -exec' than repetitively calling 'bash -c' for
the purpose of invoking a function inside (the script or target of what
'exec' or '-c' executes) compared to piping all the answers to something
like 'xargs' receiving all of the args and either:
1) executing as many as possible/invocation, or
2) executing 1 script for each 'item' that is listed by find?

   i.e. seems to be wanting to know tradeoffs between

   find <args> -exec func_in_script {} \;   AND
   find <args> -exec bash -c 'func_in_script "{}"' \;   AND
   find <args> -print0 | xargs -0 func_in_script, where
      func is called in a loop for each arg passed to it

   That was my interpretation.  One note...in most cases, you aren't
calling 'pipe repeated'.  The pipe is at the end of the find, and is
created *once* between the output of find and the input of the next
program (ex. xargs)
   Given my interpretation (which may be incorrect), note that
the first 2, above, both require starting a new copy of bash
for each object found.  That load+start time will vary depending
on what OS you are using and what version of each OS.  You can't
really answer "how much" as there isn't a general answer, but
in almost all cases (likely all), the answer that only invokes
one post-processor on all the args (the one where all answers
are piped into xargs) will be ***significantly*** faster than any
implementation that calls the processing function repeatedly (as in
the first two examples, above).
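
(A sketch of the two shapes being compared; 'process' is a stand-in for
whatever work is done per file:)

 # one new bash per file found (slow):
 find . -type f -exec bash -c 'process "$1"' _ {} \;

 # one reader looping over all the files (much faster); assumes
 # 'process' is an external command or an exported function:
 find . -type f -print0 |
     xargs -0 bash -c 'for f in "$@"; do process "$f"; done' _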

   So the cryptic, one-word answer would be "***significantly***".


   This is my interpretation and answer to what appears to be
the question.


all work correctly, so this ask all just on speed evaluation or comparison



   Using relative speed, invoking the post-processor only once
would normally be the fastest answer, by far, "comparatively" speaking.


(1st answer suggest function inside script to be invoked by -exec
2nd answer suggest solution by some core utils pipings ).


--
This needs some grammar, I didn't understand it all.
  


- Linda





Syntax error in a Map/Hash initializer -- why isn't this supported?

2020-08-10 Thread L A Walsh
I wanted to use a map that looked like this:

declare -A switches=([num]=(one two three)), where '(one two three)'
is an associated list.  Ideally, I could access it like other arrays:
for types in ${switches[num][@]}; do ...
or
switches[num]=(one two three)   # gives:
  -bash: switches[num]: cannot assign list to array member
or
echo ${switches[num][0]}   # (="one")

I defaulted to going around it by making it a string, like:
switches[num]="one|two|three"
or
switches[num]="(one two three)" but why?  It seems obvious that bash
knows what I'm trying to do, so why not just do it?

Some nested constructs seem to work:
> b=(1 2 3)
> a=(4 5 6)
> echo ${a[${b[1]}]}
6

but more often than not, they don't.  Is there a reason to disallow such?
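
(The least-kludgy workaround today seems to be keeping the value a
string and splitting on use -- a sketch:)

 declare -A switches
 switches[num]='one two three'
 read -ra types <<< "${switches[num]}"   # split back into a real array
 echo "${types[0]}"                      # one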



Re: problem with extra space; setting? cygwin only?

2020-06-25 Thread L A Walsh
But that wouldn't follow the email response instructions of posting your
response above the previous email, or of lists where attachments are not
allowed.  It also requires putting the 'to-be-protected-text' in a separate
file, on the same computer** or on the local computer (depending on which
email system you are using) so it can be attached by the
mailer.

** - if your examples are on one computer and your desktop is on another
computer, you have to make sure the example ends up on the same computer
your email client is running on (even if the two might look the same, as
with the same versioned client running on a remote machine via 'X' as the
one you normally run on your desktop).

Sure it's all doable, if you aren't in a hurry and none of your defaults
have changed, which is unlikely when you are using a temporary (hopefully)
email system like gmail.

Life is rarely perfect.


On Wed, Jun 24, 2020 at 7:12 PM Dale R. Worley  wrote:

> If you have code to send and formatting is important, put it in a file
> and attach it to the messages.  Almost all mail systems transmit
> attached files without damaging them.
>
> Dale
>

