Re: Suggested BASH Improvement

2024-09-18 Thread Greg Wooledge
On Wed, Sep 18, 2024 at 10:51:51 -0400, Chet Ramey wrote:
> On 9/17/24 7:50 PM, BRUCE FOWLER via Bug reports for the GNU Bourne Again
> SHell wrote:
> > An interesting problem I ran into recently:
> > I have a shell script that I run about once a month that
> > "screen-scrapes" from the output of another program using the
> > substring capability, e.g. ${data_line:12:2}. This is pulling
> > out the two-digit month ranging from "01" to "12".
> > This worked fine, even giving the right answers, for
> > months earlier in the year. Then came August, and it went
> > sideways because the leading "0" was forcing the number to be
> > interpreted as octal. My first reaction was, What's going on,
> > this has run just fine for months. The second reaction was,
> > WTF, who uses octal anymore? But I understand it is because
> > of C-language compatibility. I could use the [base#]n form
> > but that gets awkward.
> 
> Thanks for the proposal. I think the [base#]n syntax is reasonable
> here without adding a new shell option.

Do recall, however, that negative numbers break the workaround suggested
by Andreas.

$((10#${line:12:2}))

works only for unsigned numeric values.  With a hyphen in the first
column, you get

hobbit:~$ line=-1
hobbit:~$ echo $((10#$line))
bash: 10#: invalid integer constant (error token is "10#")

In an earlier message,
Pierre gives a workaround that includes handling of the (optional) sign:

$((${num%%[!+-]*}10#${num#[-+]}))

Whether this is "reasonable" is of course subjective.
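
Applied to the original month-scraping case, that works out to
something like this (a sketch; data_line is the variable from the
report above):

num=${data_line:12:2}                        # e.g. "08", or "-1"
month=$(( ${num%%[!+-]*}10#${num#[-+]} ))    # sign-aware base-10 value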



Re: Question on $IFS related differences (Was: Question on $@ vs $@$@)

2024-09-18 Thread Greg Wooledge
On Wed, Sep 18, 2024 at 08:05:10 +0300, Oğuz wrote:
> On Wed, Sep 18, 2024 at 4:19 AM Steffen Nurpmeso  wrote:
> >
> 
> It boils down to this:
> 
>   f(){ echo $#;}; set "" "" ""; IFS=x; f $*
> 
> bash, NetBSD and FreeBSD sh, and ksh88 all agree and print 2. pdksh
> prints 3 but mksh and oksh print 1. dash, ksh93, yash, and zsh print
> 0.

At the risk of sounding like a broken record, using an unquoted $* or $@
in a context where word splitting occurs is just *begging* for trouble.
Please don't do this in your scripts.  All of these implementation
differences and possible bugs will just stop mattering, if you stop
using questionable shell features.

If you want to pass along your positional parameters to a function,
use "$@" with quotes.  This will pass each parameter as a separate
argument to the function, with no modifications.  It should work in
every post-Bourne shell (if it doesn't, that's a bug).  This is almost
always what you want.

If you want to join all of your positional parameters together into
a single string, use "$*" with quotes.  The first character of IFS
will be inserted between each pair of parameters.  This is sometimes
useful when writing messages to log files, or to produce a simple
row of delimited fields (not a full-blown CSV file, though).
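
A minimal illustration of the two forms (a sketch; forward, log_args,
and some_command are hypothetical names):

forward() { some_command "$@"; }     # each parameter passed through intact

log_args() {
    local IFS=' '
    printf '%s\n' "args: $*"         # parameters joined by the first IFS char
}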



Re: (question) fast split/join of strings

2024-09-17 Thread Greg Wooledge
On Tue, Sep 17, 2024 at 17:00:16 +0200, alex xmb sw ratchev wrote:
> plz what does 'local -' do , its newer to me
> i forgot all about it already

   local [option] [name[=value] ... | - ]
          For each argument, a local variable named name is created,
          and assigned value.  The option can be any of the options
          accepted by declare.  When local is used within a function,
          it causes the variable name to have a visible scope
          restricted to that function and its children.  If name is -,
          the set of shell options is made local to the function in
          which local is invoked: shell options changed using the set
          builtin inside the function are restored to their original
          values when the function returns.

It's supposed to let you make things like "set -f" local to the function
in which they appear.  I've never actually used it yet.
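
For example, it lets a function flip options without leaking them
(a sketch; requires bash 4.4 or later):

split_csv() {
    local -            # set options are restored when the function returns
    set -f             # globbing off, but only inside split_csv
    local IFS=,
    arr=( $1 )         # split on commas into the global array 'arr'
}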



Re: (question) fast split/join of strings

2024-09-17 Thread Greg Wooledge
On Tue, Sep 17, 2024 at 16:07:58 +0200, alex xmb sw ratchev wrote:
> savedifs=${IFS@A} savedifs=${savedifs:- unset -v IFS }
> str=1,2,3 IFS=, arr=( $str ) joined=${arr[*]}
> eval "$savedifs"

Using unquoted $str in an array expansion to do the splitting has a
couple drawbacks:

1) Globbing (filename expansion) is done unless you turn it off.

hobbit:~$ str=1,2,*.xml,4
hobbit:~$ IFS=, arr=( $str ); declare -p arr
declare -a arr=([0]="1" [1]="2" [2]="passwd.5.xml" [3]="4")

   Workaround: set -f, but now you have an extra shell setting to
   manage (do you do a set +f later, or do you wrap it in a function
   and try to use "local -", or do you use a subshell, or do you simply
   leave globbing disabled for the whole script...).

2) Pitfall 47 still applies.

hobbit:~$ bash
hobbit:~$ str=1,2,,4,
hobbit:~$ IFS=, arr=( $str ); declare -p arr
declare -a arr=([0]="1" [1]="2" [2]="" [3]="4")

   Same workaround as all the others -- add an extra delimiter to the
   end of the input string before splitting it.  If there is no empty
   field at the end, then the extra delimiter gets eaten.  If there
   is an empty field, then the extra delimiter preserves it before
   being eaten.
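
Putting the two workarounds together (a sketch, assuming the default
IFS beforehand):

str=1,2,,4,
set -f
IFS=, arr=( $str, )    # extra trailing , preserves the final empty field
set +f; unset IFS
declare -p arr         # [0]="1" [1]="2" [2]="" [3]="4" [4]=""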



Re: (question) fast split/join of strings

2024-09-17 Thread Greg Wooledge
On Tue, Sep 17, 2024 at 02:56:05 -0400, Lawrence Velázquez wrote:
> This question is more appropriate for help-bash than bug-bash.
> 
> On Tue, Sep 17, 2024, at 2:21 AM, William Park wrote:
> > For splitting, I'm aware of
> >  old="a,b,c"
> >  IFS=, read -a arr <<< "$old"

Pitfall 47 is relevant here.

The other thing you need to watch out for when using IFS and <<< to do
array splitting is an internal newline character.  The read command
will stop reading when it sees a newline.  The workaround for that is to
use -d '' to set the line delimiter to the NUL byte.  Without a trailing
NUL byte in the input stream, read returns a status of 1 (failure).
This is not a problem... unless you were insane enough to use set -e.

Of course, you can add a NUL byte to the input stream, but that
means you need to replace <<< "$old" with something like
< <(printf '%s\0' "$old") .
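
Putting that together (a sketch):

old=$'a,b\nc,d'
IFS=, read -r -d '' -a arr < <(printf '%s\0' "$old")
declare -p arr    # [0]="a" [1]=$'b\nc' [2]="d" -- newline preserved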

Another way to do splitting is to use readarray/mapfile with -d.
Pitfall 47 still applies here, as demonstrated:

hobbit:~$ mapfile -t -d , array < <(printf %s "a,b,c,"); declare -p array
declare -a array=([0]="a" [1]="b" [2]="c")

hobbit:~$ input="a,b,c,"
hobbit:~$ mapfile -t -d , array < <(printf %s, "$input"); declare -p array
declare -a array=([0]="a" [1]="b" [2]="c" [3]="")

A third way to do splitting is to loop over the input string using
parameter expansions.

# Usage: split_str separator input_string
# Stores results in the output array 'split'.
split_str() {
local sep=$1 str=$2
[[ $1 && $2 ]] || return
split=()
while [[ $str = *"$sep"* ]]; do
split+=("${str%%"$sep"*}")
str=${str#*"$sep"}
done
split+=("$str")
}

The biggest advantage of looping like this is that the separator may be
a multi-character string, instead of just one character:

hobbit:~$ split_str ' - ' 'foo - bar - cricket bat - fleur-de-lis - baz'
hobbit:~$ declare -p split
declare -a split=([0]="foo" [1]="bar" [2]="cricket bat" [3]="fleur-de-lis" 
[4]="baz")

This is probably the slowest choice, but it's by far the safest (has
the fewest surprise pitfalls) and the most flexible.
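
The join direction is the same idea in reverse, and also allows a
multi-character separator (a sketch; join_str is a hypothetical
counterpart to split_str):

# Usage: join_str separator element...
# Stores the result in the output variable 'joined'.
join_str() {
    (( $# >= 2 )) || return
    local sep=$1 out=$2
    shift 2
    local elem
    for elem in "$@"; do
        out+=$sep$elem
    done
    joined=$out
}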



Re: bash builtins mapfile issue - Unexpected parameter passing of causes rce

2024-09-14 Thread Greg Wooledge
On Sat, Sep 14, 2024 at 19:46:21 +0800, ~ via Bug reports for the GNU Bourne 
Again SHell wrote:
> Dear bug-bash team:
>   I hope this email finds you well. During my recent security 
> assessment of bash, I identified a potential security vulnerability that I 
> believe may impact the security of your product and its users.
> here is details:
> 1、mapfile -C xxx will call run_callback
> 2、evil "execstr" parameter  passing causes rce
> mapfile.def
> 
> for example in bash shell:
> echo -e 
> "line1\nline2\nline3\nline4\nline5\nline6\nline7\nline8\nline9\nline10" > 
> test.txt
> mapfile -t -C "whoami #111" -c 5 my_array < test.txt 
> 
> I want to assign a CVE ID to the vulnerability

What vulnerability?  If you use an option that passes a command to be
evaluated, and then that command gets evaluated, it's working as you
requested.

If you don't want mapfile to run a callback after reading items, then
don't use the callback option.
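
The callback exists for things like progress reporting.  A legitimate
use looks something like this (a sketch; 'progress' is a hypothetical
function):

progress() { printf 'about to assign element %d\n' "$1"; }
mapfile -t -C progress -c 5 my_array < test.txt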

Also, please stop posting in HTML.  Your message is difficult to read.



Re: Question on differences on quoted $* (Was: Re: Question on $@ vs $@$@)

2024-08-23 Thread Greg Wooledge
On Sat, Aug 24, 2024 at 02:15:48 +0200, Steffen Nurpmeso wrote:
>   a() {
> echo $#,1=$1,2=$2,"$*",$*,
>   }

You didn't read a word I said, did you?

*Sigh.*



Re: Question on $@ vs $@$@

2024-08-22 Thread Greg Wooledge
On Fri, Aug 23, 2024 at 01:28:49 +0200, Steffen Nurpmeso wrote:
>   a() { echo $#,1=$1,2=$2,"$*",$*,; }
>   set -- a b c
>   echo 2
>   IFS=:; echo "$*"$*; a $* "$*";

Your printing function "a" is highly questionable.  It's got unquoted
word expansions that'll do who knows what, and it's also using echo
which may interpret contents and alter them.

If you want to see the arguments given, I recommend this instead (assuming
bash):

args() {
if (($#)); then
printf '%d args:' "$#"
printf ' <%s>' "$@"
echo
else
echo "0 args"
fi
}

If you need it to work in sh, replace (($#)) with something like
test "$#" != 0.  You could also write a script named args which performs
the same steps, so that it can be invoked from anywhere.  Here's mine:


hobbit:~$ cat bin/args
#!/bin/sh
printf "%d args:" "$#"
test "$#" -gt 0 && printf " <%s>" "$@"
echo


> Why does bash (aka sh(1)) word splits
> 
>   IFS=:; echo "$*"$*; a $* "$*";
> ^
> this
> 
> unquoted $* at SPACE into three different tokens?

Unquoted $* is expanded in three passes.  The first pass generates one word
per parameter.  The second pass applies word-splitting (based on IFS) to
each word.  The third pass applies globbing (pathname expansion) to each
of *those* words.

So, you're expanding $* to <a> <b> <c>.  Since none of those words
contains any IFS characters, or any globbing characters, that's the
final list.

You're therefore calling a with 4 parameters:

a "a" "b" "c" "a:b:c"

> then bash gives me
> 
>   2
>   a:b:ca b c
>   4,1=a,2=b,a:b:c:a:b:c,a b c a b c,

Your printing function contains an unquoted $*, so we repeat the same
procedure.  You have 4 parameters, so $* is first expanded to a list
of one word per parameter: <a> <b> <c> <a:b:c>.

Next, since IFS is set to : at this point, the fourth word is split into
a new list of words, and now you have <a> <b> <c> <a> <b> <c>.

Finally, globbing would apply.  In this case, there are no globbing
characters, so no further changes occur.

Ultimately, you call echo with the following arguments:

echo "4,1=a,2=b,a:b:c:a:b:c,a" "b" "c" "a" "b" "c,"

echo doesn't care about IFS, so it writes these arguments with spaces
between them, and therefore you see the output that you see.

Unquoted $* and $@ are almost always errors.  You should not use them
without an extremely good reason.

In 90% of cases, what you want is "$@" which preserves the parameter
list.  In 9% of cases, you want "$*" which concatenates all the
parameters together into a single string, for reporting.



Re: Bash History Behavior Suggestion

2024-08-19 Thread Greg Wooledge
On Mon, Aug 19, 2024 at 15:52:22 -0400, supp...@eggplantsd.com wrote:
> I would suggest:
> 2. Restrict up-arrow completion to the history of present session.

This is going to be an *extremely* unpopular suggestion.

Though, I must wonder: do you literally mean *only* the up-arrow (or
Ctrl-P or ESC k), or do you also include searching from within
the interactive shell's editing modes (Ctrl-R or ESC /)?

My own usage pattern includes using ESC / to bring up a historic command
from a past session.  I do this a *lot*.  I suspect others may share that
pattern, albeit possibly with the emacs key binding instead of the vi one.

I don't know how common it is to use the up arrow (or equivalent) to
bring up a command from a past session.  I've done it a few times --
usually when I'm trying to answer someone's question, and I launch a
temporary shell window to do so, get my answer, paste it to them, close
the window -- and then suddenly they change the question.  So I open
another new window, and bring up the command(s) I was just using in the
previous window, modify them, etc.



Re: $@ in function gives error

2024-08-17 Thread Greg Wooledge
On Sat, Aug 17, 2024 at 12:41:45 +0200, Freek de Kruijf wrote:
> Apparently I have a problem with the concept of $@, I see it as list of zero 
> or more non-whitespaced elements, and quotes around it makes it into a single 
> element. Like a parameter p with a content of zero or more non-whitespaced 
> elements, where the quotes make in into a single element.
> 
> Thanks for teaching me this concept. It must have some meaning in more 
> complicated situations.

In a nutshell:

"$*"   All of the parameters joined together with spaces between them.
"$@"   All of the parameters expanded as a list of zero or more words.
$* Usually a bug.
$@ Usually a bug.

The unquoted forms undergo THREE rounds of expansion.  First, each
parameter is expanded as a separate word, like "$@" does.  Then, each
of those words undergoes word splitting using IFS, to generate a second
list of words.  Finally, each of those words undergoes filename expansion
(globbing) to generate a third list of words.  The final result is that
third list.

You almost never want that.

Here's an illustration of how they work:

hobbit:~$ set -- "a b" "" "files matching foo.p*"
hobbit:~$ printf '<%s> ' "$*" ; echo
<a b  files matching foo.p*> 
hobbit:~$ printf '<%s> ' "$@" ; echo
<a b> <> <files matching foo.p*> 
hobbit:~$ printf '<%s> ' $* ; echo
<a> <b> <files> <matching> <foo.p*> 
hobbit:~$ printf '<%s> ' $@ ; echo
<a> <b> <files> <matching> <foo.p*> 

As documented, "$*" gives one big string, "$@" gives a list of strings,
and the unquoted forms give... that.



Re: $@ in function gives error

2024-08-16 Thread Greg Wooledge
On Fri, Aug 16, 2024 at 18:59:15 +0200, freek--- via Bug reports for the GNU 
Bourne Again SHell wrote:
> #!/bin/bash
> init() {
> [ -n "$@" ] && echo $@
> }
> init $@

You have multiple errors in this script.

The first error is that you used $@ without quotes, twice.  If you want
to preserve the argument list and pass it along without changing anything,
you need to use "$@" with quotes.

init "$@"

The second error is that you're expanding "$@" in a context where a single
string is expected.  [ -n "$x" ] works, because x is a string variable,
and it expands to exactly one word.

"$@" on the other hand expands to any number of words.  Zero or more.

If you want to check whether you've got any parameters, "$@" is not what
you want to check in the first place.  Use $# instead.  $# expands to the
number of parameters.

[ "$#" != 0 ] && echo "$*"

The third error is that if you want to *print* the argument list, you
most likely don't want echo "$@" or its unquoted variant.  "$*" expands
to a single string, with all of the arguments concatenated together,
with a space (or the first character of IFS, if you've changed IFS)
between arguments.

However, this may not be what you want.  Also, echo performs interpretation
of the data that you pass it, which may *also* not be what you want.

If you want to show the arguments in a human-readable way, you have to
figure out how they should look.  I'm personally fond of putting <>
around each argument.

init() {
if [ "$#" != 0 ]; then
printf '<%s> ' "$@"
echo
fi
}
init "$@"

That's one way to write this.  Here's what you get when you call it:

$ init 'argument one' '' arg3 arg4
<argument one> <> <arg3> <arg4> 



Re: Question on $@ vs $@$@

2024-08-14 Thread Greg Wooledge
On Wed, Aug 14, 2024 at 17:58:15 +0300, Oğuz wrote:
> On Wed, Aug 14, 2024 at 5:23 PM Robert Elz  wrote:
> > However, as ksh93 makes "" from this
> > expansion, and so probably ksh88 might have done as well
> 
> No, both Sun and SCO variants expand "$@$@" to zero fields when $# is 0.

HP-UX 10.20 as well:

# set --
# printf '<%s> ' START "$@" END; echo
<START> <END> 
# printf '<%s> ' START "$@$@" END; echo
<START> <END> 
# uname -a
HP-UX vandev B.10.20 A 9000/778 2000153729 two-user license




Re: Question on $@ vs $@$@

2024-08-14 Thread Greg Wooledge
On Wed, Aug 14, 2024 at 11:04:08 +0200, Marc Chantreux wrote:
> > We know what "$@" is supposed to do.  And something like "x${@}y" is
> > well-defined also -- you simply prefix "x" to the first word, and append
> > "y" to the final word.
> 
> > But I don't know how "$@$@" is supposed to be interpreted.  I do not see
> > anything in the official wording that explains how it should work.
> 
> As the doc you just mention said: let's set A B C
> 
>  "x$@y"   =>   "xA" "B" "Cy"
> 
> So the only consistent behavior I see is
> 
>  "$@$@"   =>   "A" "B" "CA" "B" "C"
>                         ^-("$3$1")
> 
>  "$@ $@"  =>   "A" "B" "C A" "B" "C"
>                         ^-("$3 $1")
> 
> I'm really curious: do you see another one ?

The most obvious would be to treat "$@$@" as if it were "$@" "$@",
generating exactly two words for each positional parameter:

<A> <B> <C> <A> <B> <C>

As a human trying to read this expression and figure out what it means,
I keep returning to the documentation.  "... the expansion of the first
parameter is joined with the beginning part of the original word" and
"... the expansion of the last parameter is joined with the last part
of the original word".

If there are *two* instances of $@ within the same word, then the final
parameter of the *first* $@ is supposed to be "joined with the last
part of the original word".  But the "last part of the original word"
is another list expansion, not a string!  What does it even mean for
the final parameter to be "joined" with a list expansion?

Meanwhile, the first parameter of the *second* $@ is supposed to be
"joined with the beginning part of the original word".  But the "beginning
part of the original word" is once again a list, not a string.

I can't see an unambiguous way to reconcile those.

So, neither of these results would shock me:

<A> <B> <CA B C>    (treat it like "$@$*")

<A B CA> <B> <C>    (treat it like "$*$@")

I also wouldn't be shocked if a shell were to say "screw this, I'm just
going to treat it like "$*$*" and give you one big word":

<A B CA B C>

I wouldn't consider that a *correct* result according to my reading of
the documentation, but also, it wouldn't shock me if some shell did it
that way out of desperation.

The documentation clearly never considered what should happen if the
script uses "$@$@", and I've gotta say, I've been doing shell stuff
for about 30 years now, and this is the first time *I've* ever seen
it come up.

I'd still love to know what the script's intent is.



Re: Question on $@ vs $@$@

2024-08-13 Thread Greg Wooledge
On Wed, Aug 14, 2024 at 02:45:25 +0200, Steffen Nurpmeso wrote:
> I include bug-bash even though i think bash is correct, but there
> lots of people of expertise are listening, so, thus.
> Sorry for cross-posting, nonetheless.
> Given this snippet (twox() without argument it is)
> 
>   one() { echo "$# 1<$1>"; }
>   two() { one "$@"; }
>   twox() { one "$@$@"; }
>   two
>   two x
>   twox
>   twox x

So... what's the question?  You didn't actually ask anything.

As far as I can tell from the bash man page, "$@$@" does not appear to
be well-defined.  From the man page:

   @      Expands to the positional parameters, starting from one.  In
          contexts where word splitting is performed, this expands
          each positional parameter to a separate word; if not within
          double quotes, these words are subject to word splitting.
          In contexts where word splitting is not performed, this
          expands to a single word with each positional parameter
          separated by a space.  When the expansion occurs within
          double quotes, each parameter expands to a separate word.
          That is, "$@" is equivalent to "$1" "$2" ...  If the
          double-quoted expansion occurs within a word, the expansion
          of the first parameter is joined with the beginning part of
          the original word, and the expansion of the last parameter
          is joined with the last part of the original word.  When
          there are no positional parameters, "$@" and $@ expand to
          nothing (i.e., they are removed).

We know what "$@" is supposed to do.  And something like "x${@}y" is
well-defined also -- you simply prefix "x" to the first word, and append
"y" to the final word.

But I don't know how "$@$@" is supposed to be interpreted.  I do not see
anything in the official wording that explains how it should work.

Therefore, *my* question is: what are you trying to do?

Given a series of positional parameters, such as

set -- '' "two words" foobar

what do you expect "$@$@" to expand to?  Bash 5.2 gives me

hobbit:~$ set -- '' "two words" foobar
hobbit:~$ printf '<%s> ' "$@$@"; echo
<> <two words> <foobar> <two words> <foobar> 

which appears to concatenate the last word of the list and the first
word of the list -- a reasonable output, I would say.  Here's a clearer
look:

hobbit:~$ set -- a '' b
hobbit:~$ printf '<%s> ' "$@$@"; echo
<a> <> <ba> <> <b> 

I can't complain about this result.  But at the same time, I can't say
"this is the best possible result".  Other interpretations seem equally
valid.  I just wonder what your intent was, in using the "$@$@" expansion
in the first place.



Re: printf inconsistent results for %.0f

2024-08-12 Thread Greg Wooledge
On Mon, Aug 12, 2024 at 16:30:26 +0200, Laur Aliste wrote:
> Configuration Information:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -g -O2 -Werror=implicit-function-declaration
> -fstack-protector-strong -fstack-clash-protection -Wformat
> -Werror=format-security -fcf-protection -Wall
> uname output: Linux p14s 6.9.12-amd64 #1 SMP PREEMPT_DYNAMIC Debian
> 6.9.12-1 (2024-07-27) x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
> 
> Bash Version: 5.2
> Patch Level: 21
> Release Status: release
> 
> Description:
> One of my old script that's been in use for 10+ years is using
> built-in bash printf
> for rounding float value to int via `printf`; as of today (Aug 12)
> it started
> returning erroneous and inconsistent results.

Arch Linux, by any chance?

https://lists.gnu.org/archive/html/bug-bash/2024-07/msg00034.html

https://gitlab.archlinux.org/archlinux/packaging/packages/bash/-/issues/3



Re: 'wait -n' with and without id arguments

2024-08-09 Thread Greg Wooledge
On Fri, Aug 09, 2024 at 15:20:52 -0400, Zachary Santer wrote:
> On Fri, Aug 9, 2024 at 2:52 PM Greg Wooledge  wrote:
> >
> > The problem is that our entire understanding of what "wait -n" DOES has
> > been annihilated.  We thought it would "trigger" exactly once for every
> > completed background process, regardless of whether they completed
> > before or after calling "wait -n", which would allow the writing of an
> > N-jobs-at-a-time thing.  It turns out this is incorrect.
> 
> My understanding is that it actually does this reliably, but only when
> it's in a script. The attached was basically my take on your example,
> obviously with infinite dummy tasks and sleeping in the parent shell.
> Feel free to mess around with it. The loop only terminates when the
> script is sourced from the interactive shell.
> 
> If it wasn't made clear in the earlier discussion that what bash is
> doing that makes this unreliable is only a factor in the interactive
> shell, then I wasn't the only one missing something.

If that's true, then there are *two* things to complain about.  First,
that the "wait -n" behavior is surprising, in a very bad way.  Second,
that something changes in a subtle and *incredibly* hard to pinpoint way
between interactive and non-interactive shells.

Hell, let's call it three things to complain about.  Third, that there
is no documentation that explains any of this clearly.



Re: 'wait -n' with and without id arguments

2024-08-09 Thread Greg Wooledge
On Fri, Aug 09, 2024 at 13:59:58 -0400, Zachary Santer wrote:
> I don't necessarily understand why someone would call 'wait -n' from
> the interactive shell, so I don't really know what the desired
> behavior would be when they do so. Would be nice if other people want
> to chime in on that point.

The only use case I'm aware of for "wait -n" was implementing a thing
that runs N processes at a time, launching one new process every time
an existing one completes.

I could easily see someone writing a function for this and putting it in
their .bashrc and using it from an interactive shell, though I haven't
done this myself.

The problem is that our entire understanding of what "wait -n" DOES has
been annihilated.  We thought it would "trigger" exactly once for every
completed background process, regardless of whether they completed
before or after calling "wait -n", which would allow the writing of an
N-jobs-at-a-time thing.  It turns out this is incorrect.

Now that we know "wait -n" is a gigantic race condition with no
predictability, it's hard for me to come up with any legitimate use
for it.

The best N-jobs-at-a-time implementation now is probably GNU xargs -0 -P,
but it's a VERY distant second place behind the mythical wait -n loop.
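
For example, something like this (a sketch; my_job stands in for the
real task, and must be an executable command, not a shell function):

printf '%s\0' "${array[@]}" | xargs -0 -P 5 -n 1 my_job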



Re: whats wrong , exit code 11 on android termux

2024-08-06 Thread Greg Wooledge
On Tue, Aug 06, 2024 at 20:59:08 +0200, alex xmb sw ratchev wrote:
> maybe its this i completly dunno
> 
> i guess i must start use devel branch ?
> 
> On Tue, Aug 6, 2024, 20:45 Grisha Levit  wrote:
> 
> > On Tue, Aug 6, 2024, 14:19 alex xmb sw ratchev  wrote:
> >
> >> ~ $ alias tm='timemark+=( $EPOCHREALTIME )'
> >> ~ $ tm
> >>
> >> [Process completed (signal 11) - press Enter]

As a temporary workaround, you could write a function instead of an alias.



Re: [bug #66068] built-in printf function not working with float

2024-08-06 Thread Greg Wooledge
On Tue, Aug 06, 2024 at 05:58:56 -0400, anonymous wrote:
> jesusm@liet:[~]$ bash --version
> GNU bash, version 5.2.32(1)-release (x86_64-slackware-linux-gnu)
> Copyright (C) 2022 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later 
> 
> This is free software; you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> jesusm@liet:[~]$ printf "%f\n" 1.2
> -nan

For whatever it's worth, I can't duplicate this with upstream bash 5.2.32
compiled on Debian 12 amd64.  It might be a problem in the poster's libc,
the poster's compiler, or some Linux distribution's patches.

The poster might be using "x86_64-slackware-linux-gnu" as shown in
the output, but there's no way to contact them to get confirmation or
additional details (like "which version of Slackware").



Re: Backticked, nested command will steal piped stdin

2024-07-30 Thread Greg Wooledge
On Tue, Jul 30, 2024 at 14:06:21 -0400, Dale R. Worley wrote:
> In your case, you probably want
> 
> >   $ seq 3 | head -n $(

Re: if source command.sh & set -e issue

2024-07-24 Thread Greg Wooledge
On Wed, Jul 24, 2024 at 20:53:33 +0300, Mor Shalev via Bug reports for the GNU 
Bourne Again SHell wrote:
> *if source command.sh ; then  echo passfi*
> Or, similarly:
> 
> *source command.sh && echo pass*

Remember how -e is defined:

  -e  Exit  immediately  if a pipeline (which may consist of a
  single simple command), a list, or  a  compound  command
  (see SHELL GRAMMAR above), exits with a non-zero status.
  The shell does not exit if the  command  that  fails  is
  part  of  the command list immediately following a while
  or until keyword, part of the test following the  if  or
  elif  reserved  words, part of any command executed in a
  && or || list except the command following the final  &&
  or ||, any command in a pipeline but the last, or if the
  command's return value is being inverted with !.


With that in mind, let's re-establish our setup, with two variants for
the initial script:


hobbit:~$ cat script1.sh
#!/bin/bash
echo script.sh begin
source command.sh && echo pass
echo script.sh end
hobbit:~$ cat script2.sh
#!/bin/bash
echo script.sh begin
source command.sh
test $? = 0 && echo pass
echo script.sh end
hobbit:~$ cat command.sh
set -e
echo command.sh start
false
echo command.sh end


Now compare:


hobbit:~$ bash-5.2 script1.sh
script.sh begin
command.sh start
command.sh end
pass
script.sh end
hobbit:~$ bash-5.2 script2.sh
script.sh begin
command.sh start


So, we can see what happened here.  In script1.sh, command.sh is sourced
by a command which is "part of any command executed in a && or || list
except the command following the final && or ||".  And therefore, set -e
does not trigger.

In script2.sh, the source command is NOT part of the command list
following while/until, nor part of the test following if/elif, nor any
part of a &&/|| list.  And set -e triggers.

So this would appear to be part of an intended change, to make the
behavior of -e satisfy the documented requirements.

Please remember, -e is *not* intended to be useful, nor is it intended
to be intuitive.  It's intended to be *bug compatible* with whatever
interpretation the POSIX committee has agreed upon this year.  This
interpretation changes over time, so the behavior of -e also changes.



Re: if source command.sh & set -e issue

2024-07-24 Thread Greg Wooledge
On Wed, Jul 24, 2024 at 16:23:35 +0300, Mor Shalev via Bug reports for the GNU 
Bourne Again SHell wrote:
> script.sh contain:
> if source command.sh ; then
>   echo pass
> else
>   echo fail
> fi
> command.sh contain 'set -e' at start. so command.sh should exit once detect
> fail.
> 
> once calling ./script.sh it looks like command.sh dont handle 'set -e'
> correctly and it continues the script till the end anyway. (btw, it works
> correctly at version 4.2.46(2)-release (x86_64-redhat-linux-gnu)

Words like "correctly" lose all their meaning when set -e enters the
picture.  I think what you really meant is "as I expected".

set -e *rarely* works as one expects.

Reproducing what I think you're doing:

hobbit:~$ cat script.sh
#!/bin/bash
if source command.sh ; then
  echo pass
else
  echo fail
fi
false
echo ending script.sh
hobbit:~$ cat command.sh
set -e
false
echo still in command.sh
hobbit:~$ ./script.sh
still in command.sh
pass
hobbit:~$ bash-4.2 script.sh
hobbit:~$ bash-5.0 script.sh
still in command.sh
pass
hobbit:~$ bash-4.4 script.sh
still in command.sh
pass
hobbit:~$ bash-4.3 script.sh
still in command.sh
pass

So, it would appear that the behavior changed between 4.2 and 4.3.  I'll
let someone else try to dig up the reasoning behind the change.  I'm more
concerned with your misconceptions about set -e.

Your bug report implies that you believe "command.sh" is a separate
script, which can "exit".  But this isn't the case.  You're reading the
lines of command.sh within the same shell process that's reading the
parent ./script.sh.

If command.sh were actually to run an "exit" command, your entire script
would exit, not just command.sh.  If you want to terminate the sourced
file but *not* the whole script, you'd need to use "return" instead of
"exit".

Moreover, since you've run "set -e" inside a sourced file, this turns
on set -e for your whole script.  If you examine $- after the source
command returns, you'll see that it contains "e".
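
You can see this by inspecting $- after the source returns (a sketch):

source command.sh
case $- in *e*) echo "set -e is now on in the parent script" ;; esac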

Now, look once more at how bash 4.2 behaved:

hobbit:~$ bash-4.2 script.sh
hobbit:~$ 

There's no output at all.  Neither "pass" nor "fail" -- because the
entire script exited.  Presumably as a result of the "false" command
inside command.sh, after set -e had been turned on.

Is that *really* what you wanted?  It certainly doesn't sound like it,
based on your bug report.



Re: [Bug] Array declaration only includes first element

2024-07-18 Thread Greg Wooledge
On Thu, Jul 18, 2024 at 00:00:17 +, Charles Dong via Bug reports for the 
GNU Bourne Again SHell wrote:
> - Declare an array: `a=(aa bb cc dd)`
> - Print this array: `echo $a` or `printf $a`

$a is equivalent to ${a[0]}.  That's not how you print an entire array.

The easiest way to print an array is to use "declare -p":

hobbit:~$ a=(aa bb cc dd)
hobbit:~$ declare -p a
declare -a a=([0]="aa" [1]="bb" [2]="cc" [3]="dd")

If you want something a little less noisy, you can use "${a[*]}" to
serialize the whole array to a single string/word, or "${a[@]}" with
the double quotes to expand it to a list of words.

hobbit:~$ echo "<<${a[*]}>>"
<<aa bb cc dd>>
hobbit:~$ printf '<<%s>> ' "${a[@]}"; echo
<<aa>> <<bb>> <<cc>> <<dd>> 

See also the BashFAQ entry on arrays.



Re: waiting for process substitutions

2024-07-12 Thread Greg Wooledge
On Sat, Jul 13, 2024 at 07:40:42 +0700, Robert Elz wrote:
> Please just change this, use the first definition of "next job to
> finish" - and in the case when there are already several of them,
> pick one, any one - you could order them by the time that bash reaped
> the jobs internally, but there's no real reason to do so, as that
> isn't necessarily the order the actual processes terminated, just
> the order the kernel picked to answer the wait() sys call, when
> there are several child zombies ready to be reaped.

This would be greatly preferred, and it's how most people *think*
wait -n currently works.

The common use case for "wait -n" is a loop that tries to process N jobs
at a time.  Such as this one:

greg@remote:~$ cat ~greybot/factoids/wait-n; echo
Run up to 5 processes in parallel (bash 4.3): i=0 j=5; for elem in 
"${array[@]}"; do (( i++ < j )) || wait -n; my_job "$elem" & done; wait

If two jobs happen to finish simultaneously, the next call to wait -n
should reap one of them, and then the call after that should reap
the other.  That's how everyone wants it to work, as far as I've seen.

*Nobody* wants it to skip the job that happened to finish at the exact
same time as the first one, and then wait for a third job.  If that
happens in the loop above, you'll have only 4 jobs running instead of 5
from that point onward.



Re: pwd and prompt don't update after deleting current working directory

2024-07-11 Thread Greg Wooledge
On Fri, Jul 12, 2024 at 10:26:54 +0700, Robert Elz wrote:
> Is it supposed to continually run "stat($PWD, ...)" forever (after
> all the directory might be removed from elsewhere while you're in
> the middle of typing a command - what do you expect to happen then?)

It's even worse: let's say a new option is added, to have bash stat $PWD
before or after every command is executed.  If the stat fails, then
bash changes directory.

Then let's say you write a script, and run it under bash using this
new option.  If the script's working directory is unlinked, and this
new option triggers, then bash will change its working directory.

This could happen in between *any* pair of commands.  The script won't
even know that it happened, and won't be expecting it.

Essentially, this would make it impossible to use any relative pathnames
safely.  A script has to *know* what its working directory is, or it has
to use only absolute pathnames.  Otherwise, something like this:

cd "$mydir" || exit
touch "$tmpfile"
...
rm -f "$tmpfile"

could end up removing a temp file (accessed via a relative pathname)
from the wrong directory, because the working directory changed before
the rm command was executed.
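
One defensive pattern is to build absolute pathnames up front (a
sketch, reusing the variables from the example above):

cd "$mydir" || exit
tmpfile=$mydir/scratch.$$    # absolute, so a surprise cwd change can't misdirect it
touch "$tmpfile"
...
rm -f "$tmpfile"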



Re: Local variable can not unset within shell functions

2024-07-11 Thread Greg Wooledge
On Thu, Jul 11, 2024 at 15:39:41 -0400, Lawrence Velázquez wrote:
> I won't speculate about the issue, but your subject goes too far.
> The variable really is unset here:
> 
>   % cat /tmp/x.bash
>   x() {
>   local x=y
>   declare -p x
>   echo "x is ${x-unset}"
>   unset x
>   declare -p x
>   echo "x is ${x-unset}"
>   }
> 
>   x
>   % bash /tmp/x.bash
>   declare -- x="y"
>   x is y
>   declare -- x
>   x is unset

It looks like newer versions of bash retain *some* memory of the unset
local variable's name, but not its flags or prior contents.

hobbit:~$ f() { local -i x=0; declare -p x; unset x; declare -p x; }
hobbit:~$ f
declare -i x="0"
declare -- x

Lawrence is spot on with the semantics, though.  The unset variable
behaves exactly like a variable that was never set.

hobbit:~$ g() { local -i x=0; unset x; echo "plus:${x+plus} minus:${x-minus}";}
hobbit:~$ g
plus: minus:minus

So the question is why Werner's associate cares enough about the output
of declare -p to call this a bug, rather than a simple change of internal
implementation.

Is there some actual semantic difference in behavior between bash versions
that we need to be concerned about here?



Re: Env var feature request

2024-07-09 Thread Greg Wooledge
On Tue, Jul 09, 2024 at 20:14:27 +, Erik Keever wrote:
> A --debug-envvars flag which will, when passed to bash, catch every time an 
> environment variable is set and print the file/line that is setting it. To 
> restrict it, "--debug-envvars FOO,BAR" to catch only instances of FOO or BAR 
> being set.

It's not *exactly* what you're asking for, but you can get most of
this by invoking bash in xtrace mode with PS4 set to a custom value:

PS4='+ $BASH_SOURCE:$FUNCNAME:$LINENO:' bash -ilxc : 2>&1 | grep WHATEVER

That will show you where WHATEVER is being set during an interactive
shell login, for example.  Omit the "l" flag if you want to debug a
non-login shell instead.
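
The same trick works for debugging a script (a sketch; myscript and
FOO are placeholders):

PS4='+ $BASH_SOURCE:$FUNCNAME:$LINENO:' bash -x ./myscript 2>&1 | grep FOO=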

Note that if bash is being run as UID 0, it will ignore PS4 coming from
the environment, for security reasons.  So, this only works as a non-root
user.



Re: waiting for process substitutions

2024-07-08 Thread Greg Wooledge
On Mon, Jul 08, 2024 at 22:45:35 +0200, alex xmb sw ratchev wrote:
> On Mon, Jul 8, 2024, 22:15 Chet Ramey  wrote:
> 
> > On 7/8/24 4:02 PM, alex xmb sw ratchev wrote:
> >
> > > hi , one question about ..
> > > if a cmd contains more substitions like >( or <( , how to get all $!
> > > maybe make ${![]} , or is such already .. ?
> >
> > You can't. Process substitutions set $!, but you have to have a point
> > where you can capture that if you want to wait for more than one. That's
> > the whole purpose of this thread.
> >
> 
> so no ${![2]} or so ?
> else i see only half complex start_first stuff
> 
> anywa .. greets  = ))

Bash has nothing like that, and as far as I know, nobody is planning to
add it.

If you need to capture all the PIDs of all your background processes,
you'll have to launch them one at a time.  This may mean using FIFOs
(named pipes) instead of anonymous process substitutions, in some cases.
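
A FIFO version of a tee-style logging setup might look like this
(a sketch; the pathnames are placeholders):

mkfifo /tmp/log.fifo
tee /some/logfile < /tmp/log.fifo &
logpid=$!                   # captured immediately, one process at a time
exec > /tmp/log.fifo 2>&1

...

exec >&- 2>&-               # close both ends so tee sees EOF
wait "$logpid"
rm -f /tmp/log.fifo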



Re: proposed BASH_SOURCE_PATH

2024-07-07 Thread Greg Wooledge
On Sun, Jul 07, 2024 at 21:23:15 +0200, alex xmb sw ratchev wrote:
> hi ..
> i dont get the BASH_SOURCE[n] one
> the point of prefix $PWD/ infront of relative paths is a static part of
> fitting into the first lines of the script , assigning vars
> .. if u cd first then want the old relative path .. no go .. it must be
> done at early codes

By now, we've had many conflicting ideas proposed by different people,
each trying to solve a different problem.  I've long since lost track of
what all of the proposals and concepts were.

At this point, I'm just going to wait and see what gets implemented, and
then figure out how that affects scripts and interactive shells in the
future.



Re: waiting for process substitutions

2024-07-05 Thread Greg Wooledge
On Fri, Jul 05, 2024 at 15:16:31 -0400, Chet Ramey wrote:
> They're similar, but they're not jobs. They run in the background, but you
> can't use the same set of job control primitives to manipulate them.
> Their scope is expected to be the lifetime of the command they're a part
> of, not run in the background until they're wanted.

Some scripts use something like this:

#!/bin/bash
exec > >(tee /some/logfile) 2>&1
logpid=$!

...

exec >&-
wait "$logpid"

Your expectations might be different from those of bash's users.



Re: printf fails in version 5.2.026-3

2024-07-03 Thread Greg Wooledge
On Wed, Jul 03, 2024 at 22:24:43 +0200, szige...@delg0.elte.hu wrote:
> Description:
>   printf works in version 5.2.026-2, fails in version 5.2.026-3

What exactly are these version numbers?  Are they packages from some
Linux distribution?

> Repeat-By:
>   contents of tmp.sh begins 
>   #!/usr/bin/env bash
>   a=1
>   printf "%.2f\n" "$a"
>   contents of tmp.sh ends --
> 
>   5.2.026-2> ./tmp.sh
>   1.00
> 
>   5.2.026-3> ./tmp.hs
>   nan
> 
> Fix:
>   I don't understand what is happening, my current fix is downgrading to 
> 5.2.026-2.

You didn't show us the contents of the script that produces "nan" as
output.  You only showed the script that produces "1.00".

I can't reproduce this result using a self-compiled bash 5.2.26 on my
system:

hobbit:~$ bash-5.2.26 -c 'printf "%.2f\n" 1'
1.00
hobbit:~$ bash-5.2.26 -c 'echo "$BASH_VERSION"'
5.2.26(6)-release

If "5.2.026-3" is some Linux vendor's package version, then it sounds like
they introduced a patch which causes the change in behavior.  You should
contact them, via whatever bug reporting system they offer.



Re: feature suggestion: ability to expand a set of elements of an array or characters of a scalar, given their indices

2024-06-28 Thread Greg Wooledge
On Fri, Jun 28, 2024 at 08:50:50 -0400, Zachary Santer wrote:
> Is "${array[@]( "${indeces[@]}" )}" ugly? Does that matter? It seems
> like a good way to write what's happening. I still have to look up
> some of the less-commonly-used parameter expansions every time I use
> them. I think people would kind of "get" this more readily.

I'm still wondering when you'd ever use this in a shell script.

The first thing I can think of is "I'm presenting a menu to the user,
from which zero or more items may be selected.  The user's selection
indices are read into an array.  I want to map those selections to another
array to get the filenames-or-whatever-they-are."

In such a script, I would write a loop, retrieve the filenames one at a
time, and process them or append them to a list, depending on what the
script is supposed to do with them.
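
In other words, something like this (a sketch; the array names are
hypothetical):

chosen=()
for i in "${selections[@]}"; do
    chosen+=( "${filenames[i]}" )
done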

The amount of work it would take to support this new syntax seems like it
would exceed the value it adds to the quite rare script that would use it.



Re: feature suggestion: ability to expand a set of elements of an array or characters of a scalar, given their indices

2024-06-26 Thread Greg Wooledge
On Wed, Jun 26, 2024 at 14:09:13 -0400, Zachary Santer wrote:
> > > Imagine this functionality:
> > > $ array=( zero one two three four five six )
> > > $ printf '%s\n' "${array[@]( 1 5 )}"
> > > one
> > > five

> I did want to see if others would find this valuable, and no one spoke
> up, so it's whatever.

It's one of those features that you might expect in a higher-level
programming language, but don't necessarily need in a shell script.



Re: readarray leaves a NULL char embedded in each element

2024-06-24 Thread Greg Wooledge
On Mon, Jun 24, 2024 at 20:01:45 +0200, Davide Brini wrote:
> $ ./printarg "${X[0]}A"
> 65 32 65 0 83
> 
> That is, "A", a space, and "A" again (which is the result of the quoted
> expansion), 0 for the string terminator, and a random 83 which is
> whatever follows in memory (strangely, it seems to be 83 consistently
> though).

As a guess, it might be coming from envp[], the environment variables.
On my system at least, the first environment variable happens to be
SHELL.  And of course 83 is 'S'.

Obviously this isn't a safe thing to do.  The compiler would be within
its rights to arrange for a segfault when reading beyond the end of an
individual argv[] string.  It's just bad luck that it printed garbage
output instead of crashing.



Re: readarray leaves a NULL char embedded in each element

2024-06-24 Thread Greg Wooledge
On Mon, Jun 24, 2024 at 10:50:15 -0600, Rob Gardner wrote:
> Description:
> When using space or newline as a delimiter with readarray -d,
> 
> elements in the array have the delimiter replaced with NULL,
> 
> which is left embedded in each element of the array.

This isn't possible.  Bash doesn't allow the storing of NUL bytes in
variables, and further, Unix/Linux doesn't permit passing NUL bytes as
command-line arguments to programs.

> This
> causes incorrect behavior when using array elements as arguments to
> sub-processes.

(Bash cannot pass a NUL byte as an argument.)

> I first noticed the problem when trying to use an array element as
> part of an
> argument to sed:
> readarray -d ' ' x << "A B"
> sed -e s/X/${x[0]}/

First point, your readarray command is using the wrong redirection
operator.  I'm fairly sure you meant to write <<< instead of <<.  Using
the here-string operator <<<, we can see that the first array element
retains the space delimiter (because -t was not used), and the second
retains the newline character, which is added by <<<.

hobbit:~$ readarray -d ' ' x <<< "A B"
hobbit:~$ declare -p x
declare -a x=([0]="A " [1]=$'B\n')

Second point, your sed command is not using quotes.

> This caused sed to complain "unterminated `s' command".

The space at the end of x[0] causes word splitting to occur, due to the
lack of quotes. The s/X/A part becomes one argument, and the / part
becomes a second argument.

> Using "read -a" instead of readarray produces correct results.

That one uses IFS to separate and trim the input fields.  The default
IFS contains a space, so none of the array elements contains a space.
Therefore, your lack of quoting probably doesn't cause any additional
word splitting.

> With a simple C program to print out the characters in argv[1], one
> can see that a NULL character is left in the argument. Program:
> #include <stdio.h>
> #include <string.h>
> void main(int argc, char *argv[])
> {
> int i, n;
> if (argc > 1) {
> n = strlen(argv[1]);
> for (i=0; i<n+2; i++) printf("%d ", argv[1][i]);
> }
> }

I'm not at all clear on what this C program is doing.  You're putting a
single character/byte on the stack for printf to process using the %d
operator, which... expects an integer?  And therefore reads more than
one byte from the stack?

Sorry, it's been ages since I did C.

> $ readarray -d ' ' X <<< "A B C"
> $ read -d ' ' -a   Y <<< "A B C"
> $ readarray -td ' ' Z <<< "A B C"
> $ ./printarg ${X[0]}A
> 65 0 65 $

In this command, ${X[0]} is a capital A plus a space character.  You're
not using quotes, so ${X[0]}A becomes the two argument words "A" and "A".

hobbit:~$ readarray -d ' ' X <<< "A B C"
hobbit:~$ declare -p X
declare -a X=([0]="A " [1]="B " [2]=$'C\n')
hobbit:~$ printf '<%s> ' ${X[0]}A ; echo
<A> <A> 

Your C program appears to look only at the first argument word, "A",
and ignores the second word.  It takes strlen("A"), which is 1, and
adds 2 to it, getting 3.  Thus, it loops 3 times, and thus, we see
the three numbers it writes to stdout.

The argument words are stored internally as NUL-terminated strings, so
it's no surprise that the second loop iteration prints a 0.  The
third loop iteration is printing random garbage from beyond the end
of the argument string, unless I'm misreading the situation.

> $ ./printarg ${Y[0]}A
> 65 65 0 83 $

Here, Y[0] contains "A", so you're passing "AA" as your sole argument.
The argument's string length is 2, so you're looping 4 times.  The
numbers 65 65 0 are from the internal storage of the argument words, and
the 83 is garbage from beyond the end of the string.

> $ ./printarg ${Z[0]}A
> 65 65 0 83 $

Here, Z[0] is "A" instead of "A ", because you used -t to trim the space.
So you're passing "AA" as your argument, just like the previous call.

So, in a nutshell, this is what I believe you need to see:

 1) readarray without -t retains the delimiter, even if it's a space
or newline.  It does not convert the delimiter to a NUL byte.

 2) Unquoted ${X[0]} when X[0] ends with a space causes word splitting
to occur, so anything after the ${X[0]} will become a new word
(assuming IFS hasn't been modified).

 3) Arguments passed to a program via the Unix kernel are NUL-terminated
strings.  Therefore, the NUL byte can't be part of the argument
itself.  It's a signpost that the argument string has ended.



Re: bash crashes when splitcurl script try download non existent file

2024-06-21 Thread Greg Wooledge
On Fri, Jun 21, 2024 at 22:26:07 +0500, Mikhail Gavrilov wrote:
> On Fri, Jun 21, 2024 at 10:06 PM Chet Ramey  wrote:
> > Bash allows recursive trap handlers.
> 
> Ok. But it's very suspicious for me because the script ended without
> any issues on macOS.
> 
mikhail@MBP-Mikhail ~> ./splitcurl.sh "ftp://test.rebex.net/no-file" 10
> Download failed!
> mikhail@MBP-Mikhail ~>
> mikhail@MBP-Mikhail ~> bash --version
> GNU bash, version 3.2.57(1)-release (arm64-apple-darwin24)
> Copyright (C) 2007 Free Software Foundation, Inc.

That version of bash is 17 years old.

Recursive trap handling was added in version 4.3, if I'm reading this
CHANGES entry correctly:

  n. The shell no longer blocks receipt of signals while running trap handlers
 for those signals, and allows most trap handlers to be run recursively
 (running trap handlers while a trap handler is executing).

There are some newer entries than that for SIGINT specifically, but
that's not relevant in this case.



Re: Question that baffles AI (all of them)

2024-06-15 Thread Greg Wooledge
On Sat, Jun 15, 2024 at 05:30:17PM -0400, Saint Michael wrote:
> in this code:
> data="'1,2,3,4','5,6,7,8'"

> how can I get my (a) and (b) arguments right?
> The length of both strings is unpredictable.
> a="1,2,3,4" and b="5,6,7,8""

This is a parsing problem.  Bash is not a particularly good choice for
writing a custom parser like this.  It'll be slow as hell.

It *looks* like your "data" comes from a CSV file, or some variant of
a CSV file.  If this is the case, then you've brought us an X-Y problem.
There are CSV parsing libraries in several different languages.
You should consider switching to one of those languages, and writing
your program with appropriate tools.

For example, here's a version using Tcllib's csv package.  It assumes the
quote character is " so we have to change it to ' in the call to split:

hobbit:~$ cat foo
#!/usr/bin/tclsh8.6
package require csv

set data {'1,2,3,4','5,6,7'}
set list [csv::split $data , ']
puts "list item 0 is <[lindex $list 0]>"
puts "list item 1 is <[lindex $list 1]>"
hobbit:~$ ./foo
list item 0 is <1,2,3,4>
list item 1 is <5,6,7>

Similar packages probably exist in all of the major scripting languages
that are not shells.

I've Cc'ed help-bash for this; that's where the question belongs.



Re: set -a leads to truncated output from ps

2024-06-15 Thread Greg Wooledge
On Sat, Jun 15, 2024 at 07:48:42AM +0300, Oğuz wrote:
> Right now, if you're dealing with such a program while `set -a' is in
> effect, in order to suppress COLUMNS you need to unexport it before every
> command:

Or just turn off checkwinsize.



Re: set -a leads to truncated output from ps

2024-06-14 Thread Greg Wooledge
On Fri, Jun 14, 2024 at 11:36:19AM +, Alain BROSSARD wrote:
> ps axww   isn’t impacted, but the scripts use ‘ps ax’.

Oh.  Well then, that's the most obvious thing to fix.



Re: set -a leads to truncated output from ps

2024-06-14 Thread Greg Wooledge
On Fri, Jun 14, 2024 at 07:28:41AM +, Alain BROSSARD wrote:
> Honestly, I don't know where to go with this issue. Bash's behavior 
> should be consistent and easily understood in order to have reliable scripts. 
> Clearly this case doesn't respect this.
> I would suggest to put LINES and COLUMN out of scope of the behavior of 
> "set -a", or at the very least make all these interactions explicit within 
> the man page.  If it had been documented, I would have saved myself many 
> hours of work as I did read the documentation once I had pinned down that 
> "set -a" is the command which caused those script to fail. Though the real 
> culprit, in the end, might be checkwinsize default behavior if one wanted to 
> blame a single member of this trio.

The biggest surprise for me in all of this is that checkwinsize is
enabled in *scripts*.  Your demonstrations were all done with interactive
shells, and I'm not surprised by the behavior there.  But now you're
claiming this also happens in a script?  That surprises me.

Maybe the best solution would be to disable checkwinsize in noninteractive
shells by default.  Looking at the CHANGES file, checkwinsize became
enabled by default in bash 5.0.  That's relatively recent.  I'm not sure
what the rationale was, behind making that change.

Meanwhile, for your own scripts which use set -a and then call a broken
implementation of ps that doesn't support "ww" correctly... my best
suggestion is to do a "shopt -u checkwinsize" yourself, prior to the
set -a.  And/or complain to your OS vendor that their version of ps
needs to ignore the COLUMNS environment variable when using the "ww"
option.

The version in Debian 12 seems fine to me:

hobbit:~$ COLUMNS=60 ps ww -fp 192331
UID  PIDPPID  C STIME TTY  STAT   TIME CMD
greg  192331  192024  0 07:11 tty1 Sl 0:00 
/opt/google/chrome/chrome --type=renderer --crashpad-handler-pid=192013 
--enable-crash-reporter=CB81BA40-8F5C-190E-68BD-10B3F798FC39, 
--change-stack-guard-on-fork=enable --lang=en-US --num-raster-threads=4 
--enable-main-frame-before-activation --renderer-client-id=24 
--time-ticks-at-unix-epoch=-1717084382798067 --launch-time-ticks=1279127066717 
--shared-files=v8_context_snapshot_data:100 
--field-trial-handle=3,i,8617856633869464935,3483640836820097713,262144 
--variations-seed-version=20240613-180209.895000


Finally, one last bit of unsolicited advice, which may or may not cause
you to become angry: parsing the output of "ps" in scripts is a dirty
hack, and should not be your best method of solving whatever the
underlying issue is.  If you're trying to "see if it's already running
before you start it" then the preferred solution is to implement "it"
as a managed service (via systemd, runit, daemontools, or whatever
you use for services, preferably not sysv-rc, but even that should
have OS-specific hacks to allow some kind of approximation of service
management).

And yes, there are ugly situations where even the best service managers
in the world can't fully express a solution, and "ps" parsing *might*
have a role to play... but at least you should consider other options
if you haven't already.



Re: Poor messages when the '#!' file isn't found

2024-06-13 Thread Greg Wooledge
On Fri, Jun 14, 2024 at 02:13:37AM +0800, Dan Jacobson wrote:
> $ echo \#!/usr/bin/python > k
> $ chmod +x k
> $ ./k
> bash: ./k: cannot execute: required file not found
> 
> Bash should really mention what file it is talking about.

Bash doesn't KNOW what file is missing.  All it knows is that the
kernel said "No such file or directory", and bash used to report
exactly that.  The message was changed to be more informative (bash
realizes that the *command file* exists, but the kernel still said
ENOENT, so bash gives you this message instead of the generic one).

The only way to know what file is missing is to dive into the operating
system's internals, with system-specific knowledge and tools.  You,
as a human, can do that.  Bash should never be expected to.

(In case you weren't aware, you will get the same message, for the same
reasons, if /usr/bin/python exists but it's missing a shared library or
an architecture-specific ld.so type loader program.  Your current
example may look simplistic, but it's just a tip-of-the-iceberg type
thing.)
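
On Linux, for instance, the digging might look like this (a sketch):

$ head -1 ./k            # which interpreter does the script ask for?
#!/usr/bin/python
$ ls -l /usr/bin/python  # does that interpreter exist at all?
$ ldd /usr/bin/python    # if it exists, is it missing a shared library?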



Re: set -a leads to truncated output from ps

2024-06-13 Thread Greg Wooledge
On Thu, Jun 13, 2024 at 06:12:21PM +0200, Andreas Schwab wrote:
> Why do you think this is a bug in bash?  You are telling the shell to
> export any modified variable, and you get what you asked for.

I started writing something like this as well but deleted it.  I think
there's still a bit of subtlety to be worked out here with respect to
the COLUMNS variable.  If checkwinsize is enabled (which seems to be
the default), perhaps this is somehow causing COLUMNS to be added to
the set of modified variables and exported after running "set -a" in
an interactive shell?

In my own testing just now, running "set -a" in an interactive shell
doesn't cause COLUMNS to be exported *until* I resize the window.  If I
resize the window after "set -a", then COLUMNS does indeed get exported.
It's not clear to me whether that's intended or desired.

It would be useful to know exactly what steps the OP has performed to
produce the undesired outcome, including the moving and resizing of
terminal windows.

(Then again, running set -a in an interactive shell seems like a fairly
peculiar thing to do.  It might be helpful to know why that was done.  I
won't hold my breath waiting for explanations, though.)



Re: set -a leads to truncated output from ps

2024-06-13 Thread Greg Wooledge
On Thu, Jun 13, 2024 at 06:28:23PM +, Alain BROSSARD via Bug reports for 
the GNU Bourne Again SHell wrote:
>My conclusion is that the variable COLUMNS gets “reset” when there is a 
> pipe in a command and further gets EXPORTED if “set -a” was set before hand :
> 
> host:/$ COLUMNS=80
> host:/$ echo $COLUMNS
> 80
> host:/$ echo $COLUMNS
> 80
> host:/$ echo $COLUMNS | cat
> 80
> host:/$ echo $COLUMNS
> 102

OK, that's interesting.  I can confirm this in Debian's bash 5.2.15:

hobbit:~$ bash
hobbit:~$ set -a
hobbit:~$ env | grep COLUMN
hobbit:~$ true | false
hobbit:~$ env | grep COLUMN
COLUMNS=80
hobbit:~$ echo $BASH_VERSION
5.2.15(1)-release

I did not move or resize the window at any time during this test.

The same thing happens in a compiled upstream 5.2.26:

hobbit:~$ bash-5.2.26
hobbit:~$ set -a
hobbit:~$ env | grep COLUMN
hobbit:~$ true | false
hobbit:~$ env | grep COLUMN
COLUMNS=80

Again, I did not move or resize this window during this test.  It is an
80x24 rxvt-unicode terminal, about as plain as you can get.  I get the
same result in an xterm also.

I still have no idea why you're running set -a in an interactive shell.



Re: Echoing commands

2024-06-13 Thread Greg Wooledge
On Thu, Jun 13, 2024 at 11:51:13AM -0400, Dale R. Worley wrote:
> For instance, how should this be logged?
> 
> $ { echo foo ; echo bar ; } >/dev/null
> + echo foo
> + echo bar

I'm 99% sure I know what answer the OP of this thread will give:
"It should write '{ echo foo ; echo bar ; } >/dev/null', nothing
more and nothing less."

What they want is for the shell to keep track of the actual lines of
code that were read and parsed, and whenever a command is executed in
the future, look up the original line of the script and write that,
without any expansions.

I've seen this same request from several people over the years.

I have no idea how much work it would be to implement, and no idea what
kind of benefits the OP envisions this will give them.



Re: Echoing commands

2024-06-13 Thread Greg Wooledge
On Thu, Jun 13, 2024 at 10:01:16AM +0200, Angelo Borsotti wrote:
> @echo-on
> cat f1.txt f1.txt > f1.tmp
> @echo-off
> 
> I.e. the command is not entirely displayed.

Yeah.  This is what I mentioned originally: set -x does not show
redirections.  Ever.  There is no workaround for this currently.
A new feature would have to be implemented, or you'll just have to
live without it.
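
A minimal demonstration (any recent bash):

    bash -xc 'echo hello > /dev/null'
    + echo hello

The "> /dev/null" is simply absent from the trace.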

> P.S. we can replace "set -x" by "set +v":
> alias @echo-on='set -v'
> alias @echo-off='{ set +v; } 2>/dev/null'
> 
>  This shows properly the command, but also shows @echo-off. I.e.
> '{ set +v; } 2>/dev/null'  shows itself.
> 
> I have no idea how to suppress this.

The problem is deeper than that.  set -v doesn't show *commands* as
they are executed.  It shows *lines* as they are *read from the script*
by the shell.

If your script has any kind of compound commands, such as functions,
loops, if/then, or case statements, set -v shows the lines of the script
being read, but does *not* show each simple command within the compound
command as it's being executed.

#!/bin/sh 
set -v
for i in 1 2 3; do
  echo "This is command number $i"
done

Here, for example, set -v shows you the lines of the loop as they are
read, all at once, and then the loop is executed, during which time you
see only the output from echo, and nothing at all from set -v.

Here's another example, with a conditional branch:

#!/bin/sh 
set -v
if true; then
  : run one thing
else
  : run another thing
fi

When you run this, you'll see all the lines of the if statement written
by set -v, but you have no way to see which branch was actually taken.

One final example, with no explanations.  I'll let you try to figure out
what's happening here:


hobbit:~$ cat foo
#!/bin/sh 
f() {
  set -v
  echo v has been turned on
  set +v
}
f
hobbit:~$ ./foo
v has been turned on


For nontrivial scripts, set -v is almost never useful.



Re: Echoing commands

2024-06-12 Thread Greg Wooledge
On Wed, Jun 12, 2024 at 07:31:13PM +0200, Angelo Borsotti wrote:
> "set -x" makes the ensuing commands be printed, but prefixed
> with "+ ", which makes the result look ugly, not to mention that
> the following "set +x" is echoed too (there are hacks to suppress
> the "set +x" output, but they are just hacks).

If all you want is to remove the "+ ", you can simply set the PS4
variable to an empty string.
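For example:

    PS4=''
    set -x
    echo hello

traces as a bare "echo hello" instead of "+ echo hello".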

>   set -o log
>   cat tmp >tmp1# or any other command
>   set -o nolog
> 
> producing the output on the console
> 
>cat tmp >tmp1

If you want to retain redirections in the -x output, that's a whole
different issue.  There is currently no way to do that.  Chet would
have to implement something.



Re: REQUEST - bash floating point math support

2024-06-05 Thread Greg Wooledge
On Wed, Jun 05, 2024 at 01:31:20PM -0400, Saint Michael wrote:
> the most obvious use of floating variables would be to compare
> balances and to branch based on if a balance is lower than a certain
> value
> I use:
> t=$(python3 -c "import math;print($balance > 0)")
> and the
> if [ "$t" == "False" ];then
> echo "Result <= 0 [$t] Client $clname $clid Balance $balance"
> fi
> There must be a solution without Awk or Python or BC. Internal to bash

The example you show is just comparing to 0, which is trivial.  If
the $balance variable begins with "-" then it's negative.  If it's "0"
then it's zero.  Otherwise it's positive.

For comparing two arbitrary variables which contain strings representing
floating point numbers, you're correct -- awk or bc would be the minimal
solution.
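
For reference, the usual awk idiom is small (a sketch; "float_lt" is
my name for it, not a standard command):

    float_lt() { awk -v a="$1" -v b="$2" 'BEGIN {exit !(a < b)}'; }

    if float_lt "$balance" 0; then
        echo "balance is negative"
    fi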



Re: REQUEST - bash floating point math support

2024-06-05 Thread Greg Wooledge
On Wed, Jun 05, 2024 at 09:57:26PM +0700, Robert Elz wrote:
> Also note that to actually put floating support in the shell, more is
> needed than just arithmetic, you also need floating comparisons in test
> (or in bash, in [[ ) and a whole bunch more odds and ends that aren't
> obvious until you need them, and they're just not there (like a mechanism
> to convert floats back into integers again, controlling how rounding happens).

Ironically, that last one is the one we already *do* have.

hobbit:~$ printf '%.0f\n' 11.5 22.5 33.5
12
22
34

As long as you're OK with "banker's rounding", printf does it.



Re: sh vs. bash -xc 'a=b c=$a'

2024-05-22 Thread Greg Wooledge
On Thu, May 23, 2024 at 06:56:01AM +0800, Dan Jacobson wrote:
> It seems these should both make one line "+ a=b c=b" output,
> 
> for s in sh bash
> do $s -xc 'a=b c=$a'
> done
> 
> I mean they give the same results, but bash splits it into
> two lines, so the user reading the bash -x output cannot tell
> if one (correct) or two (incorrect) lines were used.
> They can tell with sh -x.

Does it actually matter?  What makes bash's output "incorrect", exactly?

> By the way, I looked up and down the man page,
> and wasn't sure if it says one should expect
> $c to end up as c= or c=b in fact!

I don't know where it's documented, but assignments and expansions are
always performed left to right.  In your example, a value is assigned
to variable a before $a is expanded.

> And I'm not sure the man page says to expect two lines or one of -x
> output either, when using sh vs. bash.

I don't see why it matters.  The purpose of the -x output is to show
you what the shell is doing, so that you can debug your script.  As
long as the output is *clear*, it's doing its job.

In bash's case,

hobbit:~$ bash -xc 'a=b c=$a'
+ a=b
+ c=b

you can very easily see the order in which the assignments happen, and
the values that are assigned.



Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-21 Thread Greg Wooledge
On Tue, May 21, 2024 at 10:12:55AM +, Matheus Afonso Martins Moreira wrote:
> > the schizophrenic nature of the feature
> 
> First the feature was "irritating" ... Now it's "schizophrenic" ?
> I must be mentally ill for trying to contribute this?
> 
> Yeah, I'm done.

I don't think "schizophrenic" was used as an insult.  Rather, it looks
like an attempt to describe the fact that everyone in the thread has a
different concept and/or experience regarding how 'source' and '.'
should work.



Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-20 Thread Greg Wooledge
On Mon, May 20, 2024 at 07:43:10PM +0200, Andreas Kähäri wrote:
> On Mon, May 20, 2024 at 05:31:05PM +, Matheus Afonso Martins Moreira 
> wrote:
> > >> Why not add a -p option to '.' to specify the path to search.
> > >> That is
> > >>. -p "${BASH_SEARCH_PATH-${PATH}}" file
> > >> would work if someone decided to use the
> > >> BASH_SOURCE_PATH var name to store the path to use
> > >> (defaulting to $PATH if that one isn't set).
> > 
> > > Believe it or not, I had thought of that as well.
> > > It sidesteps the whole BASH_SOURCE_PATH
> > > variable brouhaha altogether.
> > 
> > I think this is a really good solution. I hadn't thought of it.
> > Users can even make an alias to set a default for themselves.
> > 
> >   -- Matheus
> 
> Or even
> 
>   PATH=${BASH_SEARCH_PATH-$PATH} . file
> 
> without the need to add any options to . or to source.  But maybe that
> too pedestrian?

Are we going in circles yet?  This would clobber the value of PATH for
the duration of sourcing "file", which would potentially cause commands
in "file" to break.

hobbit:~$ cat bar
echo hi | cat
hobbit:~$ PATH=. source bar
bash: cat: command not found



Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-18 Thread Greg Wooledge
On Sat, May 18, 2024 at 08:39:57AM -0300, Matheus Afonso Martins Moreira wrote:
> > Setting the variable at all is opting in to new behavior, and you do
> > that at your own risk, after reading the documentation and deciding
> > that this is what you want.
> 
> As the user, it should be my prerogative to set the
> variable on my environment and/or rc so that I can
> organize my sourceable scripts however I want and
> have the scripts which enabled the isolated sourcing
> just work no matter where I chose to install them.
> 
> As the user, I should be able to run shell scripts
> regardless of whether they support this or not.
> I should be to set this variable on my .bashrc
> and enjoy the nice infrastructure it provides
> without worrying about whether or not
> some script takes it into account.

You've made contradictory statements here.

First you said you wanted to put it into your *environment*.  That would
cause shell scripts to see it and exhibit a change in behavior.

Next you said you would like to set it in your .bashrc file.  That's
totally different.  Setting it in .bashrc *without* putting it into the
environment (no export) is "safe".  It will only affect your interactive
shell, and not any scripts that you run.
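
The difference is easy to demonstrate (a sketch):

    myvar=hello; bash -c 'echo "${myvar-unset}"'          # prints: unset
    export myvar=hello; bash -c 'echo "${myvar-unset}"'   # prints: hello

Only the exported variable reaches the child script.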



Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-17 Thread Greg Wooledge
On Fri, May 17, 2024 at 03:32:23PM +, Matheus Afonso Martins Moreira wrote:
> > You don't have to export variables. I would recommend not exporting
> > such a variable to begin with unless you're sure of its effects.
> 
> It could be made safe so that it's possible to export it.
> Then it can always be set to some reasonable value.

"unset" is a very reasonable default value.  If BASH_SOURCE_PATH is
unset, then you get the historic bash behavior (or the POSIX behavior
if your shell is in POSIX mode).

Setting the variable at all is opting in to new behavior, and you do
that at your own risk, after reading the documentation and deciding
that this is what you want.

I do not foresee people setting BASH_SOURCE_PATH in their basic
interactive shell environments and login sessions.  Why would they
do that?  What purpose would it serve?

I could maybe see it being used in some sort of bash analogue of Python's
virtual environments.  Maybe you're building some bash project with
multiple resource files that get sourced from a subdirectory, and while
working in this project, you find it helpful to set BASH_SOURCE_PATH.
But you wouldn't want it to be set in your other windows.

What I'm imagining here is that the variable will be used almost
exclusively by scripts, to set the location of their resource files,
which they source.  These files may be "library code", or configuration
variables, or whatever.  They're not useful outside of the script which
sources them, so only that script needs to set BASH_SOURCE_PATH to find
them.

Effectively, it allows scripts to change from this:

#!/bin/bash
RSRCDIR=/opt/me/share
source "$RSRCDIR/foo"
source "$RSRCDIR/bar"

to this:

#!/bin/bash
BASH_SOURCE_PATH=/opt/me/share
source foo
source bar

Anything beyond that is up to you.



Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-17 Thread Greg Wooledge
On Fri, May 17, 2024 at 01:42:43PM +0700, Robert Elz wrote:
> [GLOBSORT]
> Possibly useful I guess - though I'm not really sure about a possible
> need to stat() every file from a glob result because that got set to
> one of the options that needs it.  "nosort" seems likely to be the most
> useful of the available options

My immediate first thought was the "mtime" option.  No more need to
use "ls --magic-options-here -t" and eval the output stream, to get
the 10 oldest/newest log files?  Yes please!

Other than that one niche case, I don't see an immediate use for it,
but it's a potentially powerful tool to have in the box.  I'm sure
someone will find clever uses for it.
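
A guess at what the log-file case would look like, based only on the
description in this thread (GLOBSORT is brand new, the value names and
whether a leading '-' reverses the order are my assumptions, and paths
are made up):

    GLOBSORT='-mtime'      # assuming '-' means newest first
    logs=(/var/log/myapp/*.log)
    newest=("${logs[@]:0:10}")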

> (that description didn't say what "-nosort"
> means though - backwards from the order in the directory?)

Heh.  I would imagine -nosort is the same as nosort, but hell, Chet
might surprise me on this one.



Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-16 Thread Greg Wooledge
On Fri, May 17, 2024 at 02:23:10AM +, Matheus Afonso Martins Moreira wrote:
> It's also important to consider the opposite situation:
> distributions and/or users setting BASH_SOURCE_PATH globally
> which then results in existing scripts _not_ getting the historical
> behavior which matches what their authors expected at the time.

They'll quickly learn not to do that.



Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-16 Thread Greg Wooledge
On Thu, May 16, 2024 at 11:31:55AM -0400, Chet Ramey wrote:
> On 5/15/24 11:31 AM, Koichi Murase wrote:
> > Maybe it was not clear to use `source name' and `source -i name' to
> > describe the idea. I meant I assumed the search domain being
> > 
> > * BASH_SOURCE_PATH + PATH + PWD without the option
> 
> It seems to me that this isn't useful. The whole reason to have and use
> BASH_SOURCE_PATH is not to use PATH; a fallback to PATH if something
> isn't found in BASH_SOURCE_PATH doesn't allow that. And if you're using
> BASH_SOURCE_PATH, you have to set it, and if you want `.' in there,
> add it.

Yes, I'm inclined to agree with this.  If you want it to fall back to
$PATH, you can append the contents of $PATH to BASH_SOURCE_PATH as well.
You get the most control if BASH_SOURCE_PATH is used exclusively when
it's set.



Re: proposed BASH_SOURCE_PATH

2024-05-15 Thread Greg Wooledge
On Thu, May 16, 2024 at 10:51:01AM +0700, Robert Elz wrote:
> Personally, I don't think there is anything common here at all, most
> scripts don't fetch bits from other places, everything simply goes in
> the script, and the need for the '.' command at run time is avoided.
> 
> In general, I suspect that almost all use of '.' is for people writing
> a bunch of scripts for personal use, which use a set of common functions
> that that suit the needs of the person writing the script

There is one extremely noteworthy example of '.' being used in production
shell scripts: System V rc scripts on Linux systems.

As just one example, the /etc/init.d/cron script shipped with Debian's
cron package dots in /lib/lsb/init-functions very early.  This model
of "helper functions commonly used by a bunch of related scripts" is
prevalent in many Linux distributions' sysv-rc scripts, where such
scripts still exist at all.

This same script also contains the following line:

[ -r /etc/default/cron ] && . /etc/default/cron

This is another common practice in Linux sysv-rc scripts.  The files in
/etc/default/ are dottable shell fragments used to set configuration
variables.  They're intended to be user editable.

You'll note that both of the files being dotted in by this script are
absolute pathnames.  I am not aware of *any* scripts that use relative
paths and the $PATH variable to search for files to be dotted in.  I
can't say *nobody* does it, but I don't recall seeing it.



Re: [PATCH v2 5/8] builtins/source: parse the -i option

2024-05-14 Thread Greg Wooledge
On Tue, May 14, 2024 at 03:51:10PM -0400, Chet Ramey wrote:
> On 5/13/24 6:37 AM, Matheus Afonso Martins Moreira wrote:
> > Passing the -i option to the source builtin
> > enables isolated sourcing mode which restricts
> > its search path to the directories defined by
> > the BASH_SOURCE_PATH variable. This also has
> > the added benefit of not touching PATH at all.
> 
> 
> What do folks think about forcing an option to enable using
> BASH_SOURCE_PATH? Should it be required? Is it necessary? (I personally
> think it is not.) We discussed this briefly last week but without any
> conclusion.

I think setting the BASH_SOURCE_PATH variable already expresses the
intent to use it.  There's no need for another command to enable the
variable to work.



Re: bug in bash

2024-05-12 Thread Greg Wooledge
On Sun, May 12, 2024 at 03:33:12PM +0200, Andreas Schwab wrote:
> > On Sun, May 12, 2024 at 03:55:21AM +0200, Quốc Trị Đỗ wrote:
> >> I found a bug when i tried with syntax <(cmd). this is an example
> >> cat <(wc -l) < bk

> Since the redirection fails and the cat command is never started, bash
> doesn't switch the terminal process group, and the background wc command
> goes on competing with bash for the terminal.

Ah... I assumed bk was an existing file.

hobbit:~$ cat <(wc -l) <.bashrc
wc: 'standard input': Input/output error
0
hobbit:~$ 



Re: bug in bash

2024-05-12 Thread Greg Wooledge
On Sun, May 12, 2024 at 03:55:21AM +0200, Quốc Trị Đỗ wrote:
> I found a bug when i tried with syntax <(cmd). this is an example
> cat <(wc -l) < bk

What is "wc -l" supposed to read from?  It counts lines of standard input,
until EOF is reached.  But its standard input is a terminal.  And you're
running it as a background process.

I would *expect* this command to fail with an error message of some
kind, because a background process shouldn't be allowed to read
input from a terminal.

> How to fix it? It doesn't last that long. After a while, it will show "wc:
> stdin: read: Input/output error". Or we can ctrl C.

Sounds appropriate.

What's the "bug in bash" supposed to be?  What did you think this command
would do?



Re: [sr #111058] Problem transmitting script arguments

2024-05-08 Thread Greg Wooledge
On Wed, May 08, 2024 at 02:07:55PM -0400, Dale R. Worley wrote:
> "Kerin Millar"  writes:
> > On Mon, 6 May 2024, at 7:01 PM, Dale R. Worley wrote:
> >> anonymous  writes:
> >>> [...]
> 
> > It's likely that your reply will never be seen by the anonymous
> > Savannah issue filer.
> 
> OK.  Now does that mean that there is no way for me to effectively
> suggest a solution (and so I shouldn't have bothered), or that I should
> have done so by some different method?

Any replies you make here will go into the list archives, where someone
might find them if they search for this topic.  Maybe even the original
bug submitter, though the chances of that seem low, given that they
submitted this bug rather than searching for any existing literature
or discussions.

So, you just need to decide whether the time you'll spend writing a
reply here is justified.



Re: [PATCH 0/4] Add import builtin

2024-05-05 Thread Greg Wooledge
On Sun, May 05, 2024 at 03:32:04PM -0400, Lawrence Velázquez wrote:
> Much like the periodic requests for XDG-organized startup files, a
> BASH_SOURCE_PATH might be convenient but is not groundbreaking and
> probably doesn't merit significant changes to the shell.  I'm not
> even convinced it merits a new "source" option.

I don't really have any strong opinions on this issue, but the proposed
patches seem a bit overkill-ish to me.

The idea to add a BASH_SOURCE_PATH variable that gets searched before, or
instead of, PATH when using the source builtin -- that sounds good.  I
don't really understand the concepts behind the rest of the discussion.



Re: [Help-bash] difference of $? and ${PIPESTATUS[0]}

2024-04-22 Thread Greg Wooledge
On Mon, Apr 22, 2024 at 08:13:16AM +0200, felix wrote:
> Then after some tests:
> 
>   if ls /wrong/path | wc | cat - /wrong/path | sed 'w/wrong/path' >/dev/null 
> ; then
>   echo Don't print this'
>   fi ; echo ${?@Q} ${PIPESTATUS[@]@A}  $(( $? ${PIPESTATUS[@]/#/+} ))
> 
>   ls: cannot access '/wrong/path': No such file or directory
>   cat: /wrong/path: No such file or directory
>   sed: couldn't open file /wrong/path: No such file or directory
>   '0' declare -a PIPESTATUS=([0]="2" [1]="0" [2]="1" [3]="4") 7
> 
> Where $PIPESTATUS[0]=>2 and $?=>0 !!
> 
> I could explain that '$?' is result of bash's if...then...fi group command
> executed correctly [...]

That is indeed the issue here.  $? contains the exit status of the "if"
command, not of the pipeline.

hobbit:~$ help if
[...]
The exit status of the entire construct is the exit status of the
last command executed, or zero if no condition tested true.
[...]

hobbit:~$ if (exit 42); then :; fi
hobbit:~$ echo $?
0

If you remove the 'if' from your example, you get a very different result:

hobbit:~$ /wrong/path | wc | cat - /wrong/path | sed 'w/wrong/path' >/dev/null
bash: /wrong/path: No such file or directory
sed: couldn't open file /wrong/path: No such file or directory
hobbit:~$ echo "<$?> <${PIPESTATUS[@]@A}>"
<4> <declare -a PIPESTATUS=([0]="127" [1]="0" [2]="1" [3]="4")>

Here, $? is the exit status of the last command in the pipeline, as
it should be.

I don't know where your notion that $? and PIPESTATUS[0] should be
equivalent came from, but it is never right, except in the case where the
"pipeline" is just a single command.

Introducing the 'if' separates the value of $? and the values of the
PIPESTATUS array entirely.  They no longer refer to the same pipeline
at all.  $? contains the exit status of 'if' itself, and PIPESTATUS
contains the exit statuses of the commands from the pipeline.  If
you want $? to contain the exit status of the pipeline instead, you
need to reference it from within the 'if' statement.

hobbit:~$ if (exit 42); then echo "yes $?"; else echo "no $?"; fi
no 42

Once the 'if' terminates, the exit status of the pipeline is no longer
reliably available through $?.
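
If you want all of the individual statuses, copy the array immediately
after the pipeline, before any other command can overwrite it (a sketch):

    ls /wrong/path | wc >/dev/null
    rc=("${PIPESTATUS[@]}")     # copy first; the next command clobbers it
    echo "ls exited ${rc[0]}, wc exited ${rc[1]}"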



Re: Erasing sensitive data from memory?

2024-04-21 Thread Greg Wooledge
On Sun, Apr 21, 2024 at 02:16:57PM -0400, Zachary Santer wrote:
> $ IFS='' read -e -r -s -p 'password: ' password
> password:

 seems to be relevant here.
I won't say that you have malicious intent here, but a script that
behaves in this way is just a step or two away from being a password
intercepter.



Re: [sr #111051] New commands: `-h`, `--help`

2024-04-18 Thread Greg Wooledge
On Thu, Apr 18, 2024 at 03:20:21PM +0700, Robert Elz wrote:
> ps: bash could probably lose the "be a login shell" '-' from argv[0][0]
> for error messages.   It isn't helpful.

It's a tiny bit helpful for someone who knows what it means, and harmless
for people who don't know.  I'd prefer it to be left as is.



Re: [sr #111051] New commands: `-h`, `--help`

2024-04-18 Thread Greg Wooledge
On Thu, Apr 18, 2024 at 03:03:32AM -0400, anonymous wrote:
> they gave me reply:
> 
> 'There isn't command `-h` on my Limux'
> 
> Therefore, after calling -h/--help, I suggest displaying a message like:

Adding a /usr/bin/-h command or whatever sounds like overkill to me.
I wouldn't want that to be present on every system in the world.

If you're maintaining a system that has extremely novice users on it,
you're going to be the one selecting their shell for them, so you can
just customize their shell.

E.g. add a -h() function in /etc/bash.bashrc (or whatever your Linux
distribution uses for that filename) which prints the message you'd
like your users to see.
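
Something like this, say (an untested sketch; bash accepts "-h" as a
function name outside POSIX mode):

    -h() { echo "There is no -h command here; try 'help' instead."; }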



Re: syntax error with lone > or < as string in [ ] tests with -a or -o operators

2024-04-15 Thread Greg Wooledge
On Mon, Apr 15, 2024 at 08:13:23PM +0200, Emanuel Attila Czirai wrote:
> On Mon, Apr 15, 2024 at 7:56 PM Greg Wooledge  wrote:
> > Sounds like you've found a nontrivial bug in FreeBSD (in the adduser
> > script, not in sh).  I hope this gets reported and fixed, and in any case,
> > good work and thank you.
> >
> It's nothing really,
> 
> there's code in adduser that does this:
> [ -z ">" -a -z ">" ] && continue
> which errors like:
> [: -a: unexpected operator
> but the > are $passwordvars

And that's a bug.  That code is wrong, and it should be written this way
instead:

[ -z "$var1" ] && [ -z "$var2" ] && continue



Re: syntax error with lone > or < as string in [ ] tests with -a or -o operators

2024-04-15 Thread Greg Wooledge
On Mon, Apr 15, 2024 at 07:04:23PM +0200, Emanuel Attila Czirai wrote:
> In my superficial report, I definitely didn't think of that. I even forgot
> to mention that it works when escaped like "\>"
> 
> I've encountered it in the "adduser" FreeBSD sh script that runs as root,
> while trying to set a one char password like ">", so I thought I'd mention
> it here as well in case it might be helpful since I saw it happens in bash
> as well.

Sounds like you've found a nontrivial bug in FreeBSD (in the adduser
script, not in sh).  I hope this gets reported and fixed, and in any case,
good work and thank you.



Re: syntax error with lone > or < as string in [ ] tests with -a or -o operators

2024-04-14 Thread Greg Wooledge
On Sun, Apr 14, 2024 at 11:16:27AM +0200, Emanuel Attila Czirai wrote:
> $ [ -n ">" -a -n "something" ] || echo hmm
> bash: [: syntax error: `-n' unexpected
> hmm

Don't do this.  You're in the land of unspecified behavior here.

Use multiple test or [ commands instead, or use bash's [[ command.

[ -n ">" ] && [ -n "something" ]

[[ -n ">" && -n "something" ]]

> Note, the issue is present also in (current latest) FreeBSD's '/bin/sh' and
> 'bash' and `/bin/[`.

That's because they all try to implement the POSIX rules.

> But doesn't happen on Gentoo's /usr/bin/[ which is from
> sys-apps/coreutils-9.5::gentoo

Who knows what that one does.



Re: echo test >& "quote'test"

2024-04-09 Thread Greg Wooledge
On Tue, Apr 09, 2024 at 01:37:08AM +, squeaky wrote:
> Bash Version: 5.2 Patch Level: 21 Release Status: release
> 
> Description:
> 
> Running
> echo test >& "quote'test"
> should create the file "quote'test", but it creates "quotetest" 
> instead.

I can confirm this all the way back to bash 2.05b, and only with the >&
redirection operator.  Not with &> or > or >> .



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Greg Wooledge
On Mon, Apr 08, 2024 at 02:23:18PM +0300, ad...@osrc.rip wrote:
> Btw wouldn't it be possible (and worth) temporarily revoking write access to
> the user while it's being executed as root, and restoring original rights
> after execution?

I think that would be a huge overreach.  It would also lead to a whole
lot of breakage.

Imagine that we implement this change.  It would have to be done in
the shell, since the kernel simply offloads script execution to the
interpreter.  So, your change would essentially add code to the shell
which causes it to change the permissions on a script that it's
reading, if that script is given as a command-line argument, and if
the shell's EUID is 0.  Presumably it would change the permissions
back to normal at exit.

Now imagine what happens if the shell is killed by a SIGKILL, or if
the system simply crashes during the script's execution.  The script
is left with altered permissions.



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Greg Wooledge
On Mon, Apr 08, 2024 at 12:40:55PM +0700, Robert Elz wrote:
> or perhaps better just:
> 
>   main() { ... } ; main "$@"

You'd want to add an "exit" as well, to protect against new lines of
code being appended to the script.
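
That is (a sketch):

    main() { ... ; }
    main "$@"
    exit    # anything appended below this line never runs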



Re: Potential Bash Script Vulnerability

2024-04-07 Thread Greg Wooledge
On Mon, Apr 08, 2024 at 12:23:38AM +0300, ad...@osrc.rip wrote:
> - Looks for list of PIDs started by the user, whether it's started in 
> terminal or command line, and saves them into $DotShProcessList

> - Takes $DotShProcessList and filters out those that don't have root access. 
> Those that do are saved into $UserScriptsRunningAsRoot

> - Searches for file names of $UserScriptsRunningAsRoot processes in 
> /home/$USER (aka ~) and save it to $ScriptFiles

So your "vulnerability" requires that the attacker has unprivileged
access to the system, and locates a shell script which is owned by a
second unprivileged user, and for some reason has world write access,
and is also currently being executed by root?

In that scenario I would say the real problem is that the second user
is leaving world-writable files sitting around.  If the attacker finds
such scripts, they can edit them ahead of time, and simply wait for
the second user to execute them via sudo.  There's no need to find the
script being executed in real time.



Re: Scope change in loops with "read" built-in

2024-04-05 Thread Greg Wooledge
On Thu, Apr 04, 2024 at 08:39:51PM -0400, Dale R. Worley wrote:
> To circumvent that, I've sometimes done things like
> 
> exec 3<( ... command to generate stuff ... )
> while read VAR <&3; do ... commands to process stuff ... ; done
> exec 3<-

Please note that the syntax for closing an FD is 3<&-

I may be a bit hypersensitive to this one, after
.
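
For reference, a corrected version of that pattern (note the space in
"3< <(...)"; process substitution needs it):

    exec 3< <( ... command to generate stuff ... )
    while read -r VAR <&3; do ... commands to process stuff ... ; done
    exec 3<&-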



Re: Scope change in loops with "read" built-in

2024-04-02 Thread Greg Wooledge
On Tue, Apr 02, 2024 at 08:08:57PM +, Linde, Evan wrote:
> In a loop constructed like `... | while read ...`, changes to 
> variables declared outside the loop only have a loop local
> scope, unlike other "while" or "for" loops.

https://mywiki.wooledge.org/BashFAQ/024
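
In brief, the two usual fixes described on that page (sketches):

    # process substitution keeps the loop in the main shell
    while read -r line; do ((count++)); done < <(some command)

    # or, in a script (job control off), the lastpipe option
    shopt -s lastpipe
    some command | while read -r line; do ((count++)); done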



bug-bash@gnu.org

2024-03-29 Thread Greg Wooledge
On Fri, Mar 29, 2024 at 09:02:12PM +1100, Reuben wrote:
> $ echo cat /dev/stderr > bug
> $ bash bug 2>&-
> cat /dev/stderr

I don't understand what you were trying to do here.

> calling bash script 2>&- on linux
> seems to make /dev/stderr refer to script,
> though &2 seems unaffected.
> using 2>&- inside script does not trigger this bug.
> i assume it is a bug and not 'historical compatibility'.
> the bug does not seem to appear in bash on openbsd.
> the bug does not seem to appear in dash or pdksh.

As a first guess, bash was given the name of a file to read as a script,
so it opened that file using the first available file descriptor.  Since
you had already closed FD 2 before invoking bash, FD 2 was the first
available, and therefore the script was opened as FD 2.

To me, this seems like a case of "Doctor, it hurts when I bend my arm
this way."  Maybe Chet will disagree.



Re: Docco

2024-03-27 Thread Greg Wooledge
On Wed, Mar 27, 2024 at 10:00:06AM +0100, Phi Debian wrote:
> $ man bash
> ...
> CONDITIONAL EXPRESSIONS
> ...
> 
>-a file
>   True if file exists.
>-e file
>   True if file exists.
> ...
> 
> 'May be' would be nice for newbies to precise which options are [ specific
> vs [[ specific for instance
> 
>-a file
>   True if file exists ([[ only, for [ see test builtin)
> 
> This to avoid things like
> 
> $ [   -a /tmp ] && echo ok || echo nok
> ok
> $ [ ! -a /tmp ] && echo ok || echo nok
> ok
> 
> I know it is obvious, unless this is intended to force a complete
> multi-pass man read...

I wouldn't say it's "obvious" what's happening here.  The problem is
that there are two different "-a" operators, one unary, and one binary.
"help test" documents both of them:

hobbit:~$ help test | grep -- -a
  -a FILETrue if file exists.
  EXPR1 -a EXPR2 True if both expr1 AND expr2 are true.

In your first example, you are using the unary -a operator on a file:

> $ [   -a /tmp ] && echo ok || echo nok
> ok

This one returns true because there are two arguments, and the first one
is not '!', and is a "unary primary" (POSIX wording).  Therefore it uses
the unary  -a and tests for existence of the second argument as a file.

In your second example, you are using the binary -a operator:

> $ [ ! -a /tmp ] && echo ok || echo nok
> ok

Here, you have three arguments, and argument 2 is a "binary primary"
(POSIX wording again), so it's treated as if you had written this:

[ ! ] && [ /tmp ] && echo ok || echo nok

This is simply performing two string length tests.  Both strings are
non-empty (the first is one character, and the second is four), so
the result is true.

The check for whether the first argument is '!' is not performed,
because the "$2 is a binary primary" check comes first.  This is how
POSIX documents it.

So... how do you work around this?  Well, the easiest way would be
to stop using -a entirely.  Both the unary *and* binary forms.  The
unary form can be replaced by -e, and then everything works as you
expect.  The binary form should be discarded along with "-o", and
never used.  You are much better off stringing together multiple
test or [ commands instead:

if [ -e "$logdir" ] && [ -e "$outputdir" ]; then ...

This removes all ambiguity, and is in fact the only supported way
to write this under POSIX restrictions (">4 arguments: The results
are unspecified.")



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-24 Thread Greg Wooledge
On Sun, Mar 24, 2024 at 03:54:10PM -0500, Dennis Williamson wrote:
> The @K transform outputs key value pairs for indexed arrays as well as
> associative arrays (you used the @k transform which does word splitting and
> loses the k-v sequence).

The @K (capital) transformation gives you quoted strings which need to
be eval'ed.  Very Bourne-shell-ish.

The @k (lowercase) transformation gives you a list of alternating raw
key/value strings, like what you'd expect from a Tcl command.

hobbit:~$ unset -v hash kvlist
hobbit:~$ declare -A hash=([key 1]='value 1' [key 2]='value 2')
hobbit:~$ kvlist=( "${hash[@]@k}" )
hobbit:~$ declare -p kvlist
declare -a kvlist=([0]="key 2" [1]="value 2" [2]="key 1" [3]="value 1")
hobbit:~$ kvlist2=( "${hash[@]@K}" )
hobbit:~$ declare -p kvlist2
declare -a kvlist2=([0]="\"key 2\" \"value 2\" \"key 1\" \"value 1\" ")
hobbit:~$ eval kvlist3=\("${hash[@]@K}"\)
hobbit:~$ declare -p kvlist3
declare -a kvlist3=([0]="key 2" [1]="value 2" [2]="key 1" [3]="value 1")

kvlist2 is an undesired result.  Don't do that.  kvlist and kvlist3 are
both usable.

> Thus the @K allows preserving indices in a sparse
> indexed array.

Both of them do that:

hobbit:~$ sparse=(a b [42]=z)
hobbit:~$ echo "${sparse[@]@K}"
0 "a" 1 "b" 42 "z"
hobbit:~$ echo "${sparse[@]@k}"
0 a 1 b 42 z



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-24 Thread Greg Wooledge
On Sun, Mar 24, 2024 at 07:46:46PM +0200, Oğuz wrote:
> On Sunday, March 24, 2024, Zachary Santer  wrote:
> >
> > Yeah, but what can you do with @k?
> 
> 
> It helps when reconstructing an associative array as a JSON object in JQ
> 
> $ declare -A a=([x]=1 [y]=2)
> $ jq --args -n '[$ARGS.positional | _nwise(2) | {(.[0]): .[1]}] | add'
> "${a[@]@k}"
> {
>   "y": "2",
>   "x": "1"
> }

Conceptually that looks great, but how do you avoid "Argument list
too long" with larger inputs?

I've got this example on hand, but it doesn't include a hash:

hobbit:~$ a=("an array" "of strings"); b="a string"; printf '%s\0' "${a[@]}" | 
jq -R --arg b "$b" '{list: split("\u0000"), string: $b}'
{
  "list": [
"an array",
"of strings"
  ],
  "string": "a string"
}



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-24 Thread Greg Wooledge
On Sun, Mar 24, 2024 at 01:04:38PM -0400, Zachary Santer wrote:
> On Fri, Mar 22, 2024 at 11:23 AM Chet Ramey  wrote:
> >
> > This is what you can do with @K.
> >
> > https://lists.gnu.org/archive/html/bug-bash/2021-08/msg00119.html
> >
> > Word splitting doesn't happen on the rhs of an assignment statement, so you
> > use eval. The @K quoting is eval-safe.
> 
> Yeah, but what can you do with @k?

You can write the keys and values to a stream/file.  That's pretty
much it -- but that's actually not too shabby.  Imagine sharing a bash
associative array with some other programming language and wanting to
import it into that other language's dictionary/hash data structure.
This is what you'd want.

> The difference in expansion behavior between indexed and associative
> array compound assignment statements doesn't make sense. As nice as it
> is to have expansions that expand to eval-safe expressions, needing
> eval less would be nicer.

It would be pretty reasonable to have a builtin that could take an array
name plus any number of additional argument pairs, and load those pairs
as keys/values into said array.  Then you could do something like this:

declare -A hash=(...)
kvlist=( "${hash[@]@k}" )

declare -A newhash
addtoarray newhash "${kvlist[@]}"

Some readers may observe that this looks a bit like Tcl's "array set"
command.  That's not a coincidence.  Whenever I see "a list of alternating
keys and values", that's where my mind goes.

I'm debating mentally whether this hypothetical new builtin would only
work with associative arrays, or also double as an "lappend" (append new
elements to the end of a list/array) if given an indexed array name.
I'm leaning toward having it be both.  It wouldn't be any *more* confusing
than the current situation already is.  The main difference would be that
with an indexed array, every argument after the array name becomes an
array element, instead of half of them being keys and the other half
being values.

Then again, I'm not likely to attempt to implement it, so anyone who
actually writes the code gets to make all the decisions.
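
For what it's worth, a rough approximation is already possible as a
shell function with namerefs (bash 4.3+); "addtoarray" here is just the
hypothetical name from above:

    addtoarray() {
        local -n _arr=$1; shift
        if [[ $(declare -p "${!_arr}" 2>/dev/null) == 'declare -A'* ]]; then
            # associative: treat remaining args as key/value pairs
            while (( $# >= 2 )); do _arr[$1]=$2; shift 2; done
        else
            _arr+=("$@")    # indexed: append, like Tcl's lappend
        fi
    }

(With the usual nameref caveat: it breaks if the caller's array is
itself named _arr.)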



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-22 Thread Greg Wooledge
On Fri, Mar 22, 2024 at 02:09:23PM -0400, Lawrence Velázquez wrote:
> On Fri, Mar 22, 2024, at 12:54 PM, Greg Wooledge wrote:
> > Also, I don't see the lower-case k transformation in the man page.
> 
> It's at the end of the list:
> 
> https://git.savannah.gnu.org/cgit/bash.git/tree/doc/bash.1?id=f3b6bd1#n3525

Ah... for some reason I had a bash.1 page from bash 5.1 in my
/usr/local/share/man/man1 directory, which was being pulled up instead
of Debian's bash 5.2 page.  My fault.



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-22 Thread Greg Wooledge
On Fri, Mar 22, 2024 at 11:23:35AM -0400, Chet Ramey wrote:
> This is what you can do with @K.
> 
> https://lists.gnu.org/archive/html/bug-bash/2021-08/msg00119.html
> 
> Word splitting doesn't happen on the rhs of an assignment statement, so you
> use eval. The @K quoting is eval-safe.

It would be good to include that information in the manual.  I only
see this in the man page:

  K  Produces a possibly-quoted version of the value of param‐
 eter, except that it prints the values of indexed and as‐
 sociative arrays as a sequence of quoted key-value  pairs
 (see Arrays above).

The reader is left wondering whether it's eval-safe.

Also, I don't see the lower-case k transformation in the man page.



Re: ${var@A}; hypothetical, related parameter transformations

2024-03-20 Thread Greg Wooledge
On Wed, Mar 20, 2024 at 03:05:56PM -0400, Zachary Santer wrote:
> I want a declare command, no matter what ${var@A} gives me. I have to
> write a function for that: generate_declare_command (). That function
> can work a couple of reasonable ways:

This seems rather theoretical to me.  If the associative array has
nothing in it, does it really matter whether it's nonexistent, only
declared as a scope placeholder (I think that's a thing...), or fully
declared but empty?  The second script which receives the serialized data
doesn't really care how the first script stored it, does it?  All you
really need is to have the keys and values replicated faithfully in the
second script's associative array.  If that gives you an empty hash,
then that's what you use.

I simply struggle to figure out what the real-world application is for
some of these tangents.



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-20 Thread Greg Wooledge
On Wed, Mar 20, 2024 at 12:53:07PM +0100, alex xmb sw ratchev wrote:
> On Wed, Mar 20, 2024, 12:49 Greg Wooledge  wrote:
> 
> > On Wed, Mar 20, 2024 at 07:11:34AM -0400, Zachary Santer wrote:
> > > On Wed, Mar 20, 2024 at 12:29 AM Lawrence Velázquez 
> > wrote:
> > > > A couple of previous discussions:
> > > >   - https://lists.gnu.org/archive/html/bug-bash/2020-12/msg00066.html
> > > >   - https://lists.gnu.org/archive/html/bug-bash/2023-06/msg00128.html
> > >
> > > There I go, reporting a bug that isn't a bug again.
> > >
> > > One would think that enabling this behavior would be the entire
> > > purpose of the alternate ( key value ) syntax. If it doesn't do that,
> > > what benefit does it give over the standard ( [key]=value ) syntax?
> > > Maybe it's easier to use eval with?
> >
> > I believe the "X" in this X-Y problem is "I want to serialize an
> > associative array to a string, send the string to another bash script,
> > and de-serialize it back into an associative array there."
> >
> 
> as to that , simply declare -p and run that back

That will work in some cases, yes.  The problem is that it locks you
in to having the same array name and the same attributes (if any) as
the original array had.  It also runs "declare" which can create issues
with scope, if it's done inside a function.  Perhaps you wanted to
populate an array that was previously declared at some scope other
than your own, and other than global (which means adding -g wouldn't
work either).

I suppose you might argue that you could run declare -p, and then
edit out the "declare" and the attributes, and then while you're at it,
also edit the array name.  I guess that's a possible solution, but it
really seems ugly.



Re: "${assoc[@]@k}" doesn't get expanded to separate words within compound assignment syntax

2024-03-20 Thread Greg Wooledge
On Wed, Mar 20, 2024 at 07:11:34AM -0400, Zachary Santer wrote:
> On Wed, Mar 20, 2024 at 12:29 AM Lawrence Velázquez  wrote:
> > A couple of previous discussions:
> >   - https://lists.gnu.org/archive/html/bug-bash/2020-12/msg00066.html
> >   - https://lists.gnu.org/archive/html/bug-bash/2023-06/msg00128.html
> 
> There I go, reporting a bug that isn't a bug again.
> 
> One would think that enabling this behavior would be the entire
> purpose of the alternate ( key value ) syntax. If it doesn't do that,
> what benefit does it give over the standard ( [key]=value ) syntax?
> Maybe it's easier to use eval with?

I believe the "X" in this X-Y problem is "I want to serialize an
associative array to a string, send the string to another bash script,
and de-serialize it back into an associative array there."

So, the first thing we observe is the behavior of the @k and @K expansions:

hobbit:~$ declare -A aa=('key 1' 'value 1' 'key 2' 'value 2')
hobbit:~$ declare -p aa
declare -A aa=(["key 2"]="value 2" ["key 1"]="value 1" )
hobbit:~$ printf '<%s> ' "${aa[@]@k}"; echo
<key 2> <value 2> <key 1> <value 1> 
hobbit:~$ printf '<%s> ' "${aa[@]@K}"; echo
<"key 2" "value 2" "key 1" "value 1" > 

Essentially, @k serializes the associative array to a list of alternating
keys/values, while @K serializes it to a string.

For the purposes of sending the serialization to another script, the
list is not suitable unless we NUL-terminate each element.  We could
pursue that, but it's going to involve more work, I suspect.

Given the string serialization we get from @K, it makes sense to start
there, and try to determine whether it's eval-safe.

hobbit:~$ declare -A mess=('*' star '?' qmark $'\n' nl '$(date)' cmdsub '`id`' 
backticks)
hobbit:~$ declare -p mess
declare -A mess=([$'\n']="nl" ["?"]="qmark" ["*"]="star" ["\$(date)"]="cmdsub" 
["\`id\`"]="backticks" )

That's not comprehensive, but it's a start.

hobbit:~$ printf '<%s> ' "${mess[@]@K}"; echo
<$'\n' "nl" "?" "qmark" "*" "star" "\$(date)" "cmdsub" "\`id\`" "backticks" > 
hobbit:~$ serial="${mess[@]@K}"
hobbit:~$ printf '<%s>\n' "$serial"
<$'\n' "nl" "?" "qmark" "*" "star" "\$(date)" "cmdsub" "\`id\`" "backticks" >

There's our serialization to a string.  Now we'll try to de-serialize:

hobbit:~$ eval "declare -A copy=($serial)"
hobbit:~$ declare -p copy
declare -A copy=([$'\n']="nl" ["?"]="qmark" ["*"]="star" ["\$(date)"]="cmdsub" 
["\`id\`"]="backticks" )

Looks OK.  Let's verify another way:

hobbit:~$ for i in "${!copy[@]}"; do printf '<%s>: <%s>\n' "$i" "${copy[$i]}"; 
done
<
>: <nl>
<?>: <qmark>
<*>: <star>
<$(date)>: <cmdsub>
<`id`>: <backticks>

Unless someone else comes up with a key or value that breaks the eval,
this looks like the simplest way.

By comparison, the NUL-terminated @k list can't even be stored in a
variable, so you'd need to transmit it to the other script in some way
that's equivalent to using a temp file.  Thus:

hobbit:~$ printf '%s\0' "${mess[@]@k}" > tmpfile
hobbit:~$ unset -v copy; declare -A copy; while IFS= read -rd '' key && IFS= 
read -rd '' value; do copy[$key]=$value; done < tmpfile
hobbit:~$ declare -p copy
declare -A copy=([$'\n']="nl" ["?"]="qmark" ["*"]="star" ["\$(date)"]="cmdsub" 
["\`id\`"]="backticks" )

I can't think of any way to restore an @k serialization other than a
IFS= read -rd '' loop.  The only advantage I can see here is that it
doesn't use eval, and therefore is visibly safe.  With the eval @K
solution, we're still left wondering whether the script is a ticking
time bomb.



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Greg Wooledge
On Thu, Mar 14, 2024 at 10:15:35AM -0400, Zachary Santer wrote:
> > $ cat ./nameref-what
> > #!/usr/bin/env bash
> >
> > func_1 () {
> >   local var_3='EGG'
> >   func_2
> >   printf '%s\n' "func_1:"
> >   local -p var_3
> > }
> >
> > func_2 () {
> >   local -n nameref_3='var_3'
> >   nameref_3='soufflé'
> >   local var_4='GROUND BEEF'
> >   local -n nameref_4='var_4'
> >   local -l nameref_4
> >   printf '%s\n' "func_2:"
> >   local -p nameref_3
> >   local -p var_3
> >   local -p nameref_4
> >   local -p var_4
> > }
> >
> > func_1
> >
> > $ ./nameref-what
> > func_2:
> > declare -n nameref_3="var_3"
> > ./nameref-what: line 31: local: var_3: not found
> > declare -n nameref_4="var_4"
> > declare -l var_4="GROUND BEEF"
> > func_1:
> > declare -- var_3="soufflé"
> 
> Not on a machine with bash right now. 'declare -p var_3' in func_2 ()
> said var_3 was not found, despite it having just been set by assigning
> a value to the nameref variable nameref_3.

I can't seem to duplicate this.  This is with bash 5.2:

hobbit:~$ cat foo
#!/bin/bash

outer() {
local var3=EGG
inner
}

inner() {
local -n ref3=var3
ref3=BANANA
declare -p ref3
declare -p var3
}

outer
hobbit:~$ ./foo
declare -n ref3="var3"
declare -- var3="BANANA"

And I get the same result in older versions as well:

hobbit:~$ bash-5.1 foo
declare -n ref3="var3"
declare -- var3="BANANA"
hobbit:~$ bash-5.0 foo
declare -n ref3="var3"
declare -- var3="BANANA"
hobbit:~$ bash-4.4 foo
declare -n ref3="var3"
declare -- var3="BANANA"
hobbit:~$ bash-4.3 foo
declare -n ref3="var3"
declare -- var3="BANANA"



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Greg Wooledge
On Thu, Mar 14, 2024 at 08:29:47AM -0400, Zachary Santer wrote:
> Alright, that's all fair. But this?
> 
> On Sun, Mar 10, 2024 at 7:29 PM Zachary Santer  wrote:
> >
> > Additionally, a nameref variable referencing a variable declared in a 
> > calling function hides that variable in the scope of the function where the 
> > nameref variable is declared.
> 

I don't quite understand what this is saying.  Do the variables have
different names, or the same name?  If they have different names, then
the nameref shouldn't "hide" the other variable.  But they can't have
the same name, because a nameref pointing to itself is a circular
reference and won't ever work under any circumstance.

hobbit:~$ f() { local -n var=var; var=1; }; f
bash: local: warning: var: circular name reference
bash: warning: var: circular name reference
bash: warning: var: circular name reference

You don't even need an outer calling function to see this.



Re: nameref and referenced variable scope, setting other attributes (was "local -g" declaration references local var in enclosing scope)

2024-03-14 Thread Greg Wooledge
On Thu, Mar 14, 2024 at 08:27:33AM +0100, alex xmb sw ratchev wrote:
> how to unset a nameref

Look at "help unset".  Specifically the -n option.
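
In short (a sketch):

    declare -n ref=target
    unset -n ref    # removes the nameref itself
    unset ref       # without -n, this unsets *target* through the reference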



Re: multi-threaded compiling

2024-03-12 Thread Greg Wooledge
On Tue, Mar 12, 2024 at 09:42:22AM +0100, Mischa Baars wrote:
> Here's the script and the Makefile using "printf '<%s>'":

Sadly, your mail user agent chose to attach "Makefile" with content-type
application/octet-stream, which my MUA (mutt) refuses to show inline,
or to include in a reply as quoted text.

Here's the top part of it, inline:


STR[0]="one and two"
STR[1]=one\ and\ two

CFLAGS[0]=-D__STRINGIZED__=0 -D__STRING__=${STR[0]}
CFLAGS[1]=-D__STRINGIZED__=0 -D__STRING__=${STR[1]}
CFLAGS[2]=-D__STRINGIZED__=1 -D__STRING__=${STR[0]}
CFLAGS[3]=-D__STRINGIZED__=1 -D__STRING__=${STR[1]}


And here's what Martin said about it:

> On Tue, Mar 12, 2024 at 9:32 AM Martin D Kealey 
> wrote:
> > In section two, the problem is that quote removal is done BEFORE variables
> > are expanded, even though it prevents word splitting from being done AFTER
> > variable expansion. Therefore writing VAR=" \"string 1\" \"string 2\" "
> > absolutely cannot do what you might expect; the embedded quote marks will
> > be used literally, and then (because ${CFLAGS[0]} is not quoted) the
> > resulting string will be split on any embedded whitespace..


As someone said yesterday, you will need eval for this.


hobbit:/tmp/x$ cat Makefile
FOO=-D__x="one two three"

all:
@bash -c 'eval '\''CFLAGS=${FOO}'\''; declare -p CFLAGS'
hobbit:/tmp/x$ make
declare -- CFLAGS="-D__x=one two three"


Of course, using eval presents its own set of challenges, so proceed
with extreme caution.

I'd still like to hear why you aren't simply using "make -j".



Re: multi-threaded compiling

2024-03-12 Thread Greg Wooledge
On Tue, Mar 12, 2024 at 02:15:38PM +0700, Robert Elz wrote:
> Date:Mon, 11 Mar 2024 17:25:57 -0400
> From:Chet Ramey 
> Message-ID:  <322e10a6-3808-49be-aa9d-a1d367a90...@case.edu>
> 
>   | OK, here's the longer answer. When the shell is interactive, and job
>   | control is enabled, the shell prints job completion notifications to
>   | stdout at command boundaries.
> 
> which is, IMO, yet another bogus bash misfeature.  That should
> happen, with the effect described, but only when PS1 is about
> to be printed - precisely so commands like the one described
> will work.   Sigh.
> 
> There aren't many complaints about this misfeature of bash,
> as almost no-one writes interactive command lines where it makes
> a difference.   That doesn't mean they should not be able to.

Yeah, it appears you're right.  In a script, this works as expected:

hobbit:~$ cat foo
#!/bin/bash
for i in {0..3}; do sleep 1 & done
for i in {0..3}; do
wait -n -p pid; e=$?
printf 'pid %s status %s\n' "$pid" "$e"
done
hobbit:~$ ./foo
pid 530359 status 0
pid 530360 status 0
pid 530361 status 0
pid 530362 status 0


But interactively, *even with bash -c and set +m*, it just fails:

hobbit:~$ bash -c 'set +m; for i in {0..3}; do sleep 1 & done; for i in {0..3}; 
do wait -n -p pid; e=$?; printf "pid %s status %s\n" "$pid" "$e"; done'
pid 530407 status 0
pid 530410 status 0
pid  status 127
pid  status 127

Looks like a race condition, where some of the children get reaped and
tossed away before "wait -n -p pid" has a chance to grab their status.

If I stick a "sleep 2" in between the two loops, then it's even worse:

hobbit:~$ bash -c 'set +m; for i in {0..3}; do sleep 1 & done; sleep 2; for i 
in {0..3}; do wait -n -p pid; e=$?; printf "pid %s status %s\n" "$pid" "$e"; 
done'
pid  status 127
pid  status 127
pid  status 127
pid  status 127

ALL of the children are discarded before the second loop has a chance to
catch a single one of them.  This is clearly not working as expected.

Using "bash +i -c" doesn't change anything, either.  Or "bash +i +m -c".
Whatever magic it is that you get by putting the code in an actual
script, I can't figure out how to replicate it from a prompt.



Re: multi-threaded compiling

2024-03-12 Thread Greg Wooledge
On Tue, Mar 12, 2024 at 10:33:36AM +0100, Mischa Baars wrote:
> bash -c 'set +m; seconds=1; for (( i=0;i<32;i++ )); do exit ${i} & done;
> sleep ${seconds}; for (( i=0;i<32;i++ )); do wait -p pid; e=${?}; echo
> "$(printf %3u ${i}) pid ${pid} exit ${e}"; done;'

"wait -p pid" is not correct here.  In that command, pid is an output
variable, and you have not specified any process IDs to wait for -- so
wait is going to wait for *all* of the children to finish, and you'll
get zero as the exit status:

$ help wait
[...]

Waits for each process identified by an ID, which may be a process ID or a
job specification, and reports its termination status.  If ID is not
given, waits for all currently active child processes, and the return
status is zero.  If ID is a job specification, waits for all processes
in that job's pipeline.



Re: multi-threaded compiling

2024-03-11 Thread Greg Wooledge
On Mon, Mar 11, 2024 at 10:19:26PM +0100, Mischa Baars wrote:
> On Mon, 11 Mar 2024, 21:08 Kerin Millar,  wrote:
> > The pid can be obtained with the -p option, as of 5.1. Below is a
> > synthetic example of how it might be put into practice.

I'd forgotten about that one.  A recent addition, and one I've never used
yet.

> > #!/bin/bash
> >
> > declare -A job_by status_by
> > max_jobs=4
> > jobs=0
> >
> > wait_next() {
> > local pid
> > wait -n -p pid
> > status_by[$pid]=$?
> >
> 
> How exactly is this indexing implemented internally? Does a first number on
> index n take m bits (using a linked list) or does it take n * m bits (using
> realloc(3))?

"declare -A" makes it an associative array (hash table).

Without that declare -A, it would be a sparsely indexed array.

In either case, it doesn't just blindly allocate millions of bytes of
memory.  It uses something slimmer.



Re: multi-threaded compiling

2024-03-11 Thread Greg Wooledge
> On Mon, Mar 11, 2024, 20:13 Mischa Baars 
> wrote:
> 
> > Also I don't think that gives you an exit status for each 'exit $i'
> > started. I need that exit status.

"wait -n" without a PID won't help you, then.  You don't get the PID or
job ID that terminated, and you don't get the exit status.  It's only
of interest if you're trying to do something like "run these 100 jobs,
5 at a time" without storing their exit statuses.

If you need to reap each individual job and store its exit status indexed
by its PID, then you're probably going to need something like:


#!/bin/bash
i=0 pid=() status=()
for job in ...; do
longrunner "$job" & pid[i++]=$!
done

for ((i=0; i < ${#pid[@]}; i++)); do
wait "${pid[i]}"; status[i]=$#
done


You won't be able to take advantage of "wait -n"'s ability to react
to the first job that finishes.  You'll end up reaping each job in the
order they started, not the order they finished.



Re: multi-threaded compiling

2024-03-11 Thread Greg Wooledge
On Mon, Mar 11, 2024 at 07:22:39PM +0100, Mischa Baars wrote:
> On Mon, Mar 11, 2024 at 6:22 PM alex xmb sw ratchev 
> wrote:
> 
> > i also completly dont get ur issue
> >
> > f=( a.c b.c .. ) threads=$( nproc ) i=-1 r=
> >
> >   while [[ -v f[++i] ]] ; do
> >  (( ++r > threads )) &&
> > wait -n
> > gcc -c "${f[i]}" &
> >   done
> >
> 
> How nice!
> 
> wait -n exit 1 & echo $?
> 
> You got me the solution :) Except that wait expects a pid after -n.

No, wait -n without a PID waits for any one job to complete.

hobbit:~$ sleep 123 & sleep 234 & sleep 3 & wait -n
[1] 513337
[2] 513338
[3] 513339
[3]+  Donesleep 3
hobbit:~$ ps
PID TTY  TIME CMD
   1138 pts/000:00:00 bash
 513337 pts/000:00:00 sleep
 513338 pts/000:00:00 sleep
 513341 pts/000:00:00 ps
hobbit:~$ 



Re: multi-threaded compiling

2024-03-11 Thread Greg Wooledge
On Mon, Mar 11, 2024 at 06:51:54PM +0100, Mischa Baars wrote:
> SECONDS=5; for (( i=0;i<32;i++ )); do { exit ${i}; } & pid[${i}]=${!}; done; 
> sleep ${SECONDS}; for (( i=0;i<32;i++ )); do wait -n ${pid[${i}]}; e=${?}; 
> echo "$(printf %3u ${i}) pid ${pid[${i}]} exit ${e}"; done;
> /bin/bash: line 1: wait: 1747087: no such job
>   0 pid 1747087 exit 127
> /bin/bash: line 1: wait: 1747088: no such job
>   1 pid 1747088 exit 127

Without analyzing this in depth, one thing struck me immediately:
you're using the reserved variable SECONDS, which has special semantics
in bash.  ${SECONDS} is going to expand to an ever-increasing number,
beginning with 5 (since that's the value you assigned), and going up
by 1 per second that the script runs.  I'm assuming that was not your
intention.

In general, any variable whose name is all capital letters *may* have
special meaning to the shell and should not be used for general storage
purposes in scripts.
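
For example:

    SECONDS=5
    sleep 2
    echo "$SECONDS"    # prints 7, not 5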



Re: "local -g" declaration references local var in enclosing scope

2024-03-10 Thread Greg Wooledge
On Sun, Mar 10, 2024 at 06:39:19PM -0400, Lawrence Velázquez wrote:
> On Sun, Mar 10, 2024, at 5:36 PM, Greg Wooledge wrote:
> > Here it is in action.  "local -g" (or "declare -g") without an assignment
> > in the same command definitely does things.
> >
> > hobbit:~$ f() { declare -g var; var=in_f; }
> > hobbit:~$ unset -v var; f; declare -p var
> > declare -- var="in_f"
> 
> This example appears to work the same without "declare -g":
> 
>   $ f() { var=in_f; }
>   $ unset -v var; f; declare -p var
>   declare -- var="in_f"

The impetus for adding it was that you *have* to declare associative
arrays.  If you want to create a global associative array from within a
function, you need the -g option.

For regular variables, it's mostly redundant, yes.
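
For instance ("map" is just an illustrative name):

hobbit:~$ f() { declare -gA map; map[color]=red; }
hobbit:~$ f; declare -p map
declare -A map=([color]="red" )

Without the -g, map would disappear when f returns; without the
declare, bash would have no way of knowing that map is associative.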



Re: "local -g" declaration references local var in enclosing scope

2024-03-10 Thread Greg Wooledge
On Sun, Mar 10, 2024 at 04:01:10PM -0400, Lawrence Velázquez wrote:
> Basically, without an assignment, "local -g" does nothing.

Well, the original purpose of -g was to create variables, especially
associative arrays, at the global scope from inside a function.

I think this thread has been asking about a completely different
application, namely to operate upon a global scope variable from
within a function where a non-global scope variable is shadowing the
global one.

(As far as I know, there isn't any sensible way to do that in bash.)


Here it is in action.  "local -g" (or "declare -g") without an assignment
in the same command definitely does things.

hobbit:~$ f() { declare -g var; var=in_f; }
hobbit:~$ unset -v var; f; declare -p var
declare -- var="in_f"


I think the key to understanding is that while "local -g var" creates
a variable at the global scope, any references to "var" within the
function still use the standard dynamic scoping rules.  They won't
necessarily *see* the global variable, if there's another one at a
more localized scope.
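
A small demonstration of that shadowing (the names are illustrative):

hobbit:~$ inner() { declare -g var; var=from_inner; }
hobbit:~$ outer() { local var=from_outer; inner; echo "outer: $var"; }
hobbit:~$ var=global; outer; echo "global: $var"
outer: from_inner
global: global

The assignment inside inner lands on outer's local var, not on the
global var that declare -g created.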



Re: Unable to close a file descriptor using exec command

2024-03-03 Thread Greg Wooledge
On Sun, Mar 03, 2024 at 10:29:17PM +, Venkat Raman via Bug reports for the 
GNU Bourne Again SHell wrote:
> Repeat-By:
> exec 2>&test1 commands freezes the terminal and unable to close the 
> fd.
> 
> Fix:
> [Is there a sane  way to close the fd?]

You're using the wrong syntax.

To open a file (for writing + truncation):  exec 2>file

To open a file (for appending):  exec 2>>file

To close a file descriptor:  exec 2>&-


The command that you ran (exec 2>&test1) redirects stderr, which is
normally opened to the terminal.  In order to get it back, you would
need to do something like  exec 2>&1  (assuming stdout has not been
altered yet).

The shell prompt is normally written to stderr, so you won't see the
prompt until stderr is restored.
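
A common pattern, if you only want stderr pointed at a file
temporarily, is to save it on a spare descriptor first (err.log and
some_command are placeholders):

exec 3>&2           # save stderr by duplicating it onto fd 3
exec 2>err.log      # stderr now goes to the file
some_command        # its errors land in err.log
exec 2>&3 3>&-      # restore stderr, then close the saved copy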



Re: possible bash bug bringing job. to foreground

2024-02-17 Thread Greg Wooledge
On Sat, Feb 17, 2024 at 07:41:43PM +, John Larew wrote:
> After further examination, the examples with "fg $$" and "fg $!" clearly do 
> not bring the subshell into the foreground, as they are evaluated prior to 
> the subshells background execution.
> I'm trying to bring the subshell to the foreground to perform an exit, after 
> a delay.
> Ultimately, it will be used as part of a terminal emulator inactivity timeout.

Bash already has a TMOUT variable which will cause an interactive shell
to exit after a specified length of inactivity.  Is that sufficient?
If not, how does your desired solution need to differ from TMOUT?
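
For reference, it's just a variable assignment, typically made in a
startup file:

TMOUT=600    # exit an idle interactive shell after 10 minutes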



Re: possible bash bug bringing job. to foreground

2024-02-17 Thread Greg Wooledge
On Sat, Feb 17, 2024 at 01:30:00PM +, John Larew wrote:
> Repeat-By:  1: (sleep 15s; set -m; fg %%; exit ) & 2: (sleep 15s; set -m; fg 
> %+; exit ) & 

You're using %% or %+ inside a shell where there have NOT been any
background jobs created yet.  The sleep 15s runs in the foreground,
because it doesn't have a & after it.

> Fix: (sleep 15s; set -m; kill $PPID) &     Not a preferred solution; I prefer 
> a smaller hammer.

It's not clear to me what you're trying to do in the first examples.
Did you think that the subshell would somehow "take over" the terminal
and become the interactive shell, after the 15 second delay?  And what
would that achieve, if it worked?

In the "Fix" example, you're killing the main shell's parent, which
is most likely a terminal emulator.  Did you want to kill the main
shell instead of the terminal?  If so, you can get its PID with
the $$ variable.  Even inside a subshell, $$ always refers to the main
shell's PID.  If you want the subshell's PID, use BASHPID instead.
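
To illustrate the difference (the PIDs shown are only examples):

hobbit:~$ echo "main: $$"; ( echo "subshell: $$ $BASHPID" )
main: 1138
subshell: 1138 514202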

What exactly is it about the "Fix" example that you don't prefer?
What goal are you trying to achieve?


