Re: printf -u "$fd"?

2024-05-19 Thread Kerin Millar
On Sun, 19 May 2024, at 5:08 PM, alex xmb sw ratchev wrote:
> On Sat, May 18, 2024, 04:54 Zachary Santer  wrote:
>
>> Was «difference between read -u fd and read <&"$fd"» on help-b...@gnu.org
>>
>> On Thu, May 16, 2024 at 12:51 AM Kerin Millar  wrote:
>> >
>> > On Thu, 16 May 2024, at 3:25 AM, Peng Yu wrote:
>> > > Hi,
>> > >
>> > > It appears to me that read -u fd and read <&"$fd" achieve the same
>> > > result. But I may miss corner cases when they may be different.
>> > >
>> > > Is it true that they are exactly the same?
>> >
>> > They are not exactly the same. To write read -u fd is to instruct the
>> read builtin to read directly from the specified file descriptor. To write
>> read <&"$fd" entails one invocation of the dup2 syscall to duplicate the
>> specified file descriptor to file descriptor #0 and another invocation to
>> restore it once read has concluded. That's measurably slower where looping
>> over read.
>>
>> So here's another tangent, but has it been considered to add an option
>> to the printf builtin to print to a given file descriptor, rather than
>> stdout? If printing to a number of different file descriptors in
>> succession, such an option would appear to have all the same benefits
>> as read's -u option.
>>
>
> id like a printf -u
>
> Zack

I understand that you have pledged to use broken email software for the rest of 
time but could you at least refrain from posting emails that look as though 
they were signed off by someone other than yourself? It's not cricket.

-- 
Kerin Millar



Re: printf -u "$fd"?

2024-05-19 Thread Kerin Millar
On Sat, 18 May 2024, at 3:53 AM, Zachary Santer wrote:
> Was «difference between read -u fd and read <&"$fd"» on help-b...@gnu.org
>
> On Thu, May 16, 2024 at 12:51 AM Kerin Millar  wrote:
>>
>> On Thu, 16 May 2024, at 3:25 AM, Peng Yu wrote:
>> > Hi,
>> >
>> > It appears to me that read -u fd and read <&"$fd" achieve the same
>> > result. But I may miss corner cases when they may be different.
>> >
>> > Is it true that they are exactly the same?
>>
>> They are not exactly the same. To write read -u fd is to instruct the read 
>> builtin to read directly from the specified file descriptor. To write read 
>> <&"$fd" entails one invocation of the dup2 syscall to duplicate the 
>> specified file descriptor to file descriptor #0 and another invocation to 
>> restore it once read has concluded. That's measurably slower where looping 
>> over read.
>
> So here's another tangent, but has it been considered to add an option
> to the printf builtin to print to a given file descriptor, rather than
> stdout? If printing to a number of different file descriptors in
> succession, such an option would appear to have all the same benefits
> as read's -u option.

Until now, not that I am aware of. I normally apply a redirection to a group 
command in such cases, though it still results in exactly two dup2 calls.

-- 
Kerin Millar



Re: bug in bash

2024-05-12 Thread Kerin Millar
On Sun, 12 May 2024 03:55:21 +0200
Quốc Trị Đỗ  wrote:

> Hi,
> 
> I found a bug when i tried with syntax <(cmd). this is an example
> cat <(wc -l) < bk
> 
> You can follow this command, then you have bugs. I think because you
> created a fork getting inside in the parenthesis and the main process do <
> bk. They work parallel. Because of that,  yeah.
> At least, I tried this here on this campus, at 42Heilbronn - Germany.
> 

Let's replace wc with ps for a moment.

$ cat <(ps -j)
  PID  PGID   SID TTY          TIME CMD
 7840  7840  7840 pts/0    00:00:00 bash
 8492  7840  7840 pts/0    00:00:00 ps
 8493  8493  7840 pts/0    00:00:00 cat

There, cat ends up being the sole member of the foreground process group (whose 
PGID happened to be 8493 at the time). It can be seen that ps is not a member 
of that same process group. Were ps to try to read from the standard input - as 
wc does - it would also fail. The reason is that only processes belonging to 
the foreground process group are allowed to read from the controlling terminal.

> How to fix it? It doesn't last that long. After a while, it will show "wc:
> stdin: read: Input/output error". Or we can ctrl C.
> Version 5.2

Though already asked, what were you intending for it to do?

-- 
Kerin Millar



Re: [sr #111058] Problem transmitting script arguments

2024-05-08 Thread Kerin Millar
On Wed, 8 May 2024, at 7:07 PM, Dale R. Worley wrote:
> "Kerin Millar"  writes:
>> On Mon, 6 May 2024, at 7:01 PM, Dale R. Worley wrote:
>>> anonymous  writes:
>>>> [...]
>
>> It's likely that your reply will never be seen by the anonymous
>> Savannah issue filer.
>
> OK.  Now does that mean that there is no way for me to effectively
> suggest a solution (and so I shouldn't have bothered), or that I should
> have done so by some different method?

As it stands, the best that can be done is to click through to the applicable 
Savannah issue and comment there directly, in the hope that the person who 
opened the issue later returns to check for any new comments. No account is 
required to do so. In this particular case, they did, and were able to resolve 
their issue.

https://savannah.gnu.org/support/?111058

-- 
Kerin Millar



Re: [PATCH 0/9] Add library mode to source builtin

2024-05-07 Thread Kerin Millar
On Tue, 7 May 2024, at 7:14 PM, Chet Ramey wrote:
> On 5/7/24 1:42 PM, Kerin Millar wrote:
>> On Tue, 7 May 2024, at 3:27 PM, Chet Ramey wrote:
>>> On 5/5/24 3:39 PM, Kerin Millar wrote:
>>>
>>>> Such is the extent to which I concur that I find even -l to be irritating.
>>>
>>> The option character isn't important. Is it useful to have an additional
>> 
>> If it were of no importance at all, one might then choose the character 
>> entirely at random. That's not usually what happens.
>
> The issue is whether one is needed at all, not whether or not one character
> irritates you.

In the course of quoting me, you stated that the character isn't important, then 
proceeded to pose an open question as to whether having an option is 
worthwhile, so I engaged on both counts. I don't require anyone's agreement to 
think that naming matters. I don't believe that anyone who writes software 
genuinely believes it doesn't. I didn't say what the character ought to be. I 
simply didn't like the one that was presented and still don't. If you meant 
something to the effect of "in the case that I decide that having an option is 
worthwhile, the choice of character is hardly the foremost concern in all of 
this", then by all means.

>>> option to `source' that forces it to use $BASH_SOURCE_PATH, or should that
>>> behave like other builtins ($CDPATH, $BASH_LOADABLES_PATH)?
>> 
>> If BASH_SOURCE_PATH is to be able to take its value from the environment, it 
>> might be useful.
>
> That's the standard behavior, with only a few exceptions.
>
>
>> That is, to have a means of opting in may be sensible from a backward-
>> compatibility standpoint, whether it be in the form of an option character, 
>> a 
>> shell option or something else. Otherwise, probably not. I'm thinking of the 
>> theoretical case in which a user exports BASH_SOURCE_PATH then runs existing 
>> scripts - not necessarily of their own design - that weren't counting on 
>> this 
>> feature to ever exist.
>
> You can export CDPATH and get the same effect with `cd', or export PATH and
> modify command search order.

Yes (albeit standard behaviours). I find myself mildly inclined towards the 
position of not adding more options unless it is considered truly necessary. I 
have nothing further to add, so will take my leave of this thread.

-- 
Kerin Millar



Re: [PATCH 0/9] Add library mode to source builtin

2024-05-07 Thread Kerin Millar
On Tue, 7 May 2024, at 3:27 PM, Chet Ramey wrote:
> On 5/5/24 3:39 PM, Kerin Millar wrote:
>
>> Such is the extent to which I concur that I find even -l to be irritating.
>
> The option character isn't important. Is it useful to have an additional

If it were of no importance at all, one might then choose the character 
entirely at random. That's not usually what happens.

> option to `source' that forces it to use $BASH_SOURCE_PATH, or should that
> behave like other builtins ($CDPATH, $BASH_LOADABLES_PATH)?

If BASH_SOURCE_PATH is to be able to take its value from the environment, it 
might be useful. That is, to have a means of opting in may be sensible from a 
backward-compatibility standpoint, whether it be in the form of an option 
character, a shell option or something else. Otherwise, probably not. I'm 
thinking of the theoretical case in which a user exports BASH_SOURCE_PATH then 
runs existing scripts - not necessarily of their own design - that weren't 
counting on this feature to ever exist. On the other hand, bash already has a 
cornucopia of options and I'm uncertain as to how realistic this concern is. 
There are various ways for the environment to influence the behaviour of a bash 
script as it stands. I would be interested to know what others think.

-- 
Kerin Millar



Re: [PATCH 0/4] Add import builtin

2024-05-07 Thread Kerin Millar
On Tue, 7 May 2024, at 5:24 AM, Phi Debian wrote:
> On Mon, May 6, 2024 at 7:51 PM Kerin Millar  wrote:
>> 
>> 
>> I'll put it in a little more detail, though no less plainly. I find the 
>> terminology of "libraries" and "modules" to be specious in the context of a 
>> language that has no support for namespaces, awkward scoping rules, a 
>> problematic implementation of name references, and so on. These foundational 
>> defects are hardly going to be addressed by a marginally more flexible 
>> source builtin. Indeed, it is unclear that they can be - or ever will be - 
>> addressed. Presently, bash is what it is: a messy, slow, useful 
>> implementation of the Shell Command Language with an increasing number of 
>> accoutrements, some of which are fine and others of which are less so (and 
>> virtually impossible to get rid of). As curmudgeonly as it may be to gripe 
>> over variable and option names, this is why the import of library, as a 
>> word, does not rest at all well in these quarters. That aside, I do not find 
>> the premise of the patch series to be a convincing one but would have little 
>> else to say about its prospective inclusion, provided that the behaviour of 
>> the posix mode were to be left unchanged in all respects.
>> 
>> -- 
>> Kerin Millar
>> 
>
> Thanx @Kerin, I got an intuitive reluctance with the patch series, but 
> could not formalize it that way, that is exactly it (specially the 
> nameref to me :-))  
>
> That brings up some questioning about the bash dev workflow. I 
> personally only monitor bash-bug (not the others bash-*), specially to 
> be aware of new incoming patches.
>
> Generally, the few patch that shows up here are patches that fix a 
> specific standing bug, or on some occasion, the declaration of a bug 
> along with a patch to fix it, they are generally reply with 'thanx for 
> the report and patch!'
>
> I rarely see patches about "Hey guys I got this great idea what do you 
> think", so I wonder for this kind of request does git{lab,hub,whatever} 
> be more appropriate? or may be I should say convenient, a public git 
> project (clone) generally have all the infrastructure for discussions, 
> enhancement and fix.

I don't think that there is anything wrong with the workflow that has recently 
been in evidence. In case you're not fully aware, it goes something like this:

1) git clone https://git.savannah.gnu.org/git/bash.git
2) enter the local working copy and checkout the "devel" branch
3) optionally create a new branch to contain the intended changes
4) hack on the code and apply some commits
5) use git-format-patch(1) to generate a series of patches based on those 
commits
6) post the patches to the list, either manually or by using git-send-email(1)
7) allow the patches to undergo discussion and review
8) return to step 4 if necessary

Let's say that Chet is prepared to accept the most recently posted patch 
series. In that case, all he need do is use git-am(1) to apply the commits from 
the posted messages directly to his own working copy of the devel branch. 
Similarly, anyone that would like to try out the patches themselves can do the 
very same thing and (re)compile bash. Perhaps even iterate on them.

There are a few characteristics of this workflow that are worth noting. There 
is no requirement for a specialised platform or service to be involved. Only 
the git software is required, along with some medium by which to convey the 
am-formatted patches to whomsoever is to review them. Of course, it is sensible 
to gauge whether there is any potential interest in a proposed feature prior to 
submitting patches. Matheus tried to do exactly that on the 21st April. I may 
think that the resulting patches have been oversold but I would not fault his 
process.

I don't wish to take this thread out into the weeds but would add one more 
thing. Not everyone is content with the encroachment of proprietary services 
such as GitHub. Increasingly, it is becoming the bar that must be cleared for 
open-source participation of any kind. Do we really want for a future in which 
one cannot even send a simple patch without agreeing to GitHub's terms and 
conditions, signing up, forking a project then opening a pull request? Or where 
maintainers won't even stoop to communicate with would-be contributors unless 
opening an issue on such a platform?

>
> The bash code owner can then pull the great idea when the demand for it 
> start roaring?

For more ambitious feature branches, it might be more convenient. Ultimately, 
if Chet is content to review patches in the present fashion then that's all 
that really matters.

>
> Just questioning, I really have no idea what is the bash dev team way 
> of receiving enhancement req

Re: [sr #111058] Problem transmitting script arguments

2024-05-06 Thread Kerin Millar
On Mon, 6 May 2024, at 7:01 PM, Dale R. Worley wrote:
> anonymous  writes:
>> URL:
>>   <https://savannah.gnu.org/support/?111058>
>>
>>  Summary: Problem transmitting script arguments
>
>> Date: Sat 04 May 2024 10:08:41 AM UTC By: Anonymous
>> I have the following problem with transmitting arguments to a bash script
>> onward to an inside program call.
>>
>> Lets name the inside program 'Z'.
>> An open number of arguments have to be transmitted from the script 
>> environment
>> to Z's environment. If an argument aa is given enclosed in double-quotes to
>> the script (because there are blanks within the value) these double-quotes 
>> are
>> removed when bash gets hold of it. When I transmit aa by use of $x, $* or $@,
>> the double-quotes are not resurrected by bash, which I think is a tragic
>> mistake because the call of Z obviously suffers a semantic error.
>>
>> So far I could not solve the problem. As this kind of problem cannot be new,
>> is there any recommended way to solve it?
>
> Providing a detailed example would make your requirements clearer.
>
> But if I understand correctly, you want to provide all of the arguments
> that the Bash script receives as arguments to another program, "Z".  The
> standard way to do this is:
>
> Z "$@"
>
> Indeed, it appears that $@ was created with special behavior precisely
> to handle this situation.  From the manual page:
>
>@  Expands  to  the  positional  parameters, starting from one.  In
>   contexts where word splitting is performed,  this  expands  each
>   positional  parameter  to  a separate word; if not within double
>   quotes, these words are subject to word splitting.  In  contexts
>   where  word splitting is not performed, this expands to a single
>   word with each positional parameter separated by a space.   When
>   the  expansion  occurs  within double quotes, each parameter ex‐
>   pands to a separate word.  That is, "$@" is equivalent  to  "$1"
>   "$2"  ...   If the double-quoted expansion occurs within a word,
>   the expansion of the first parameter is joined with  the  begin‐
>   ning  part  of  the original word, and the expansion of the last
>   parameter is joined with the last part  of  the  original  word.
>   When  there  are no positional parameters, "$@" and $@ expand to
>   nothing (i.e., they are removed).
>
> Dale

It's likely that your reply will never be seen by the anonymous Savannah issue 
filer.

-- 
Kerin Millar



Re: [PATCH 0/4] Add import builtin

2024-05-06 Thread Kerin Millar
ve.
> After some discussion in the help-bash list,
> I began to think that a builtin was justified.
> Then after discussing it with you I changed
> my mind and rewrote the patch so that it's an
> option for the source builtin instead of an
> entirely new builtin. You made excellent points
> and you convinced me that a new builtin was
> the wrong approach.
>
> I understand that it's not exactly what was discussed.
> However, I did have a reason for doing it that way.
> I think the logic behind the way source uses PATH
> is somewhat confusing and adding a new variable
> on top of all that would only confuse it further.
> For that reason, I added an option instead.
> If you pass the option, it uses the library variable.
> There are no nested conditionals to think about.
>
> I think that is easier for users to understand.
> It certainly is easier for me to understand.
>
>> However, as far as I understand, no one has requested you
>> to work on this. You've voluntarily appeared on the help-bash and
>> bug-bash lists and submitted patches.
>
> Was this inappropriate?
>
>> It's unfair to request 100% merge
>
> I did not request that. I asked for consideration.
> If it's rejected, I will accept that. I'm sure the maintainer
> will have good reasons for rejecting it. I made it a point
> to organize the commits so that valuable contributions
> could still be cherry picked even in the event of rejection.
>
>> Do we need to merge every patch unconditionally after discussions? I
>> don't think so.
>
> That's not what I meant. I'm not demanding acceptance.
> I just don't want to be seen as a guy who just shows up
> out of nowhere and demands that others do free work
> for him on the features he personally thinks are important.
>
> I cloned the repository and made bash do what I wanted.
> Then I shared the results of that effort with everyone here.
> What happens after that is up to the community.
>
>> The code change seems too large considering the amount of the feature
>> change, but I haven't yet checked the code. I'll later check them.
>
> Perhaps the number of patches gave everyone that impression.
> Many are just rather small incremental changes that lay out the
> groundwork for the feature. Changes such as extracting some
> logic into a parameterized helper function and converting
> hard coded strings into function parameters.
>
> Patches 3 and 6-9 implement the feature proper.
> The others could be cherry picked independently.
>  
>> You should be prepared to receive negative comments in the review
>> process. The reviews are not always 100% positive. This is normal.
>
> I can handle criticism of my code. It wasn't really my intention to be a
> source of irritation to the community though. That made me think
> I should probably just leave instead of continuing these arguments.

I'll put it in a little more detail, though no less plainly. I find the 
terminology of "libraries" and "modules" to be specious in the context of a 
language that has no support for namespaces, awkward scoping rules, a 
problematic implementation of name references, and so on. These foundational 
defects are hardly going to be addressed by a marginally more flexible source 
builtin. Indeed, it is unclear that they can be - or ever will be - addressed. 
Presently, bash is what it is: a messy, slow, useful implementation of the 
Shell Command Language with an increasing number of accoutrements, some of 
which are fine and others of which are less so (and virtually impossible to get 
rid of). As curmudgeonly as it may be to gripe over variable and option names, 
this is why the import of library, as a word, does not rest at all well in 
these quarters. That aside, I do not find the premise of the patch series to be 
a convincing one but would have little else to say about its prospective 
inclusion, provided that the behaviour of the posix mode were to be left 
unchanged in all respects.

-- 
Kerin Millar



Re: [PATCH 0/9] Add library mode to source builtin

2024-05-05 Thread Kerin Millar
On Sun, 5 May 2024, at 8:11 PM, Lawrence Velázquez wrote:
> On Sun, May 5, 2024, at 5:54 AM, Matheus Afonso Martins Moreira wrote:
>> This patch set adds a special operating mode to the existing source
>> builtin to make it behave in the desired way. When source is passed
>> the options --library or -l, it will search for files in the
>> directories given by the BASH_LIBRARIES_PATH environment variable,
>
> I think every single use of the term "library" in this whole endeavor
> is misleading and misguided.  It implies something unique about the
> files being sourced, when in fact there is nothing special about
> them at all -- they can still run arbitrary code, modify the global
> namespace, etc. etc. etc. etc.

Such is the extent to which I concur that I find even -l to be irritating.

-- 
Kerin Millar



Re: bash: ":?xxx" filename broken on autocomplete

2024-04-27 Thread Kerin Millar
On Sat, 27 Apr 2024 23:56:28 +0200
Andreas Schwab  wrote:

> On Apr 27 2024, Kerin Millar wrote:
> 
> > In the course of trying this in bash-5.3-alpha, I noticed something else. 
> > If ':?aa' is not the only entry in the current working directory, readline 
> > behaves as if : is an ambiguous completion. That is:
> >
> > # mkdir ':?aa'
> > # touch 'something-else'
> > # rmdir :
> >
> > ... produces nothing until pressing the tab key a second time, after which 
> > both entries are listed while the content of readline's input buffer 
> > remains unchanged.
> 
> ':' is in $COMP_WORDBREAKS.

Ah, I see.

-- 
Kerin Millar



Re: bash: ":?xxx" filename broken on autocomplete

2024-04-27 Thread Kerin Millar
On Sat, 27 Apr 2024 23:28:49 +0200
Gioele Barabucci  wrote:

> Control: found -1 5.2.21-2
> 
> On Tue, 27 Aug 2019 16:36:03 +0200 Philipp Marek  
> wrote:
> > the autocompletion is broken on filenames or directories with ":?" at the 
> > beginning.
> > 
> > # mkdir ':?aa'
> > # rmdir :
> > 
> > gives me
> > 
> > # rmdir :\:\?
> > 
> > which doesn't match the filename; I can finish completion by entering "aa", 
> > but then "rm" rejects this name.
> 
> In bash 5.2.21(1) the filename is now fully completed, but the stray ":" 
> at the beginning is still produced:
> 
>  $ mkdir ':?aa'
>  $ rmdir :
>  $ rmdir :\:\?aa/

In the course of trying this in bash-5.3-alpha, I noticed something else. If 
':?aa' is not the only entry in the current working directory, readline behaves 
as if : is an ambiguous completion. That is:

# mkdir ':?aa'
# touch 'something-else'
# rmdir :

... produces nothing until pressing the tab key a second time, after which both 
entries are listed while the content of readline's input buffer remains 
unchanged.

-- 
Kerin Millar



Re: Linux reports memfd_create() being called without MFD_EXEC or MFD_NOEXEC_SEAL set

2024-04-27 Thread Kerin Millar
On Sat, 27 Apr 2024 14:09:29 +0200
Andreas Schwab  wrote:

> On Apr 27 2024, Kerin Millar wrote:
> 
> > At some point after upgrading to bash-5.3-alpha, the following message 
> > appeared in my kernel ring buffer.
> >
> > [700406.870502] bash[3089019]: memfd_create() called without MFD_EXEC or 
> > MFD_NOEXEC_SEAL set
> 
> This warning has been tuned down in later kernels, but nevertheless,
> bash should pass MFD_NOEXEC_SEAL (if defined) when it calls
> memfd_create.

Thanks. For now, I have applied the attached modification to my working copy.

-- 
Kerin Millar


noexec-seal.patch
Description: Binary data


Linux reports memfd_create() being called without MFD_EXEC or MFD_NOEXEC_SEAL set

2024-04-27 Thread Kerin Millar
Hi,

At some point after upgrading to bash-5.3-alpha, the following message appeared 
in my kernel ring buffer.

[700406.870502] bash[3089019]: memfd_create() called without MFD_EXEC or 
MFD_NOEXEC_SEAL set

Unfortunately, it took me a while to notice the presence of this message. 
Therefore, I am uncertain as to what bash was being tasked with at the time 
that it was logged. The implication of the message seems clear, however. The 
presently running kernel is 6.6.28.

-- 
Kerin Millar



Re: 5.3-alpha: less readable output when set -x

2024-04-24 Thread Kerin Millar
On Wed, 24 Apr 2024, at 4:34 PM, baldu...@units.it wrote:
> hello
>
> Apologies if I am missing some blatant point here
>
> I have noticed a difference in behavior of bash-5.2.26 and
> bash-5.3-alpha which isn't a problem of correctness, but may be wasn't
> intentional(?)
>
> Given the scriptlett:
>
> 8<
> #!/bin/sh
> set -x
>
> show () {
> cat <<EOF
> $1
> EOF
> return 0
> }
>
> show "
> 1
> 2
> 3
> "
> exit 0
> 8<
>
> for me the output is different for the 2 versions:
>
> bash-5.2.26:
> 8<
> ##> ./scriptlett.sh
> + show '
> 1
> 2
> 3
> '
> + cat
>
> 1
> 2
> 3
>
> + return 0
> + exit 0
> >8
>
> bash-5.3-alpha:
> 8<
> ##> ./scriptlett.sh
> + show $'\n1\n2\n3\n'
> + cat
>
> 1
> 2
> 3
>
> + return 0
> + exit 0
> >8
>
> Note the difference in how the argument to the function is
> output.  In the case of bash-5.3-alpha the syntax of the argument is
> correct (ie if I call the show function with $'\n1\n2\n3\n' everything
> works as expected), but is less readable (and this is more so if the
> argument is a long stretch of lines)
>
> For what I seem to understand, this might be related to:
>
>   8<
>   b. Bash does a better job of preserving user-supplied quotes around a word
>  completion, instead of requoting it.
>   >8
> ?

I don't think so. That appears to be referring to the behaviour of readline 
completion in an interactive shell.

>
> Of course, if the "new" behavior is intentional, I guess there will be
> good reasons for it and apologize for the noise

It's an interesting observation. I have noticed lately that bash has started to 
become more consistent in its quoting strategies. For instance, 5.2 changed the 
behaviour of declare -p, such that it sometimes employs a quoting strategy like 
that of the ${param@Q} form of expansion.

$ var1=$'foo\nbar' var2=$'foo\rbar'
$ declare -p BASH_VERSION var1 var2
declare -- BASH_VERSION="5.2.26(1)-release"
declare -- var1=$'foo\nbar'
declare -- var2=$'foo\rbar'

$ var1=$'foo\nbar' var2=$'foo\rbar'
$ declare -p BASH_VERSION var1 var2
declare -- BASH_VERSION="5.1.16(1)-release"
declare -- var1="foo
bar"
bar"are -- var2="foo

In my opinion, that demonstrates that the new approach is obviously superior. 
That is, the output of 5.2 there is vastly more legible to me; to make sense of 
the output of 5.1, I might have to rely on a utility such as od or hexdump. Put 
another way, this style of quoting is tremendously helpful for conveying 
strings that do not exclusively consist of graphemes.

Anyway, it looks as though the xtrace mode has been similarly adjusted.

-- 
Kerin Millar



Re: [Help-bash] difference of $? and ${PIPESTATUS[0]}

2024-04-22 Thread Kerin Millar
On Mon, 22 Apr 2024, at 8:56 AM, Oğuz wrote:
> On Mon, Apr 22, 2024 at 10:24 AM Kerin Millar  wrote:
>> I cannot find anything in the manual that concretely explains why bash 
>> behaves as it does in this instance.
>
> Me neither, but the current behavior is useful. Take `while false |

Very much so. The clarity of the documentation is my only concern.

-- 
Kerin Millar



Re: [Help-bash] difference of $? and ${PIPESTATUS[0]}

2024-04-22 Thread Kerin Millar
On Mon, 22 Apr 2024, at 7:44 AM, Kerin Millar wrote:
> On Mon, 22 Apr 2024, at 7:13 AM, felix wrote:
>> Hi,
>>
>> Comming on this very old thread:
>>
>> On Wed, 4 Dec 2013 14:40:11 -0500, Greg Wooledge wrote:
>>> 
>>> The most obvious difference is that $? is shorter.
>>> 
>>> $? is also POSIX standard (older than POSIX in fact), so it works in sh
>>> scripts as well.  PIPESTATUS is a Bash extension.
>>> 
>>> Finally, note that if you execute a pipeline, $? will contain the exit
>>> status of the last command in the pipeline, not the first command,
>>> which is what ${PIPESTATUS[0]} would contain.  (If you execute a simple
>>> command instead of a pipeline, then they would both have the same value.)
>>
>> Some asked on StackOverflow.com:
>>   Why does a Bash while loop result in PIPESTATUS "1" instead of "0"?
>>   https://stackoverflow.com/q/78351657/1765658
>>
>> Then after some tests:
>>
>>   if ls /wrong/path | wc | cat - /wrong/path | sed 'w/wrong/path' 
>> >/dev/null ; then
>>   echo Don't print this'
>>   fi ; echo ${?@Q} ${PIPESTATUS[@]@A}  $(( $? ${PIPESTATUS[@]/#/+} ))
>>
>>   ls: cannot access '/wrong/path': No such file or directory
>>   cat: /wrong/path: No such file or directory
>>   sed: couldn't open file /wrong/path: No such file or directory
>>   '0' declare -a PIPESTATUS=([0]="2" [1]="0" [2]="1" [3]="4") 7
>>
>> Where $PIPESTATUS[0]=>2 and $?=>0 !!

I just looked at the Stack Overflow thread. It's not explained by forking, at 
least. Here's a simpler test case, using the true and false builtins.

$ if false; then true; fi
$ echo "$?" "${PIPESTATUS[@]@Q}"
0 '1'

Clearly, $? shows the exit status of if, whereas PIPESTATUS shows the exit 
status of false, which counts as a foreground pipeline in its own right. I 
presume that this is by design but I must agree that it is surprising. I cannot 
find anything in the manual that concretely explains why bash behaves as it 
does in this instance.

Here is another case in which the pipeline comprises two compound commands, 
with the values of $? and PIPESTATUS being exactly as one would expect.

$ if false; then true; fi | if true; then false; fi; echo "$?" 
"${PIPESTATUS[@]@Q}"
1 '0' '1'

-- 
Kerin Millar



Re: [Help-bash] difference of $? and ${PIPESTATUS[0]}

2024-04-22 Thread Kerin Millar
On Mon, 22 Apr 2024, at 7:13 AM, felix wrote:
> Hi,
>
> Comming on this very old thread:
>
> On Wed, 4 Dec 2013 14:40:11 -0500, Greg Wooledge wrote:
>> 
>> The most obvious difference is that $? is shorter.
>> 
>> $? is also POSIX standard (older than POSIX in fact), so it works in sh
>> scripts as well.  PIPESTATUS is a Bash extension.
>> 
>> Finally, note that if you execute a pipeline, $? will contain the exit
>> status of the last command in the pipeline, not the first command,
>> which is what ${PIPESTATUS[0]} would contain.  (If you execute a simple
>> command instead of a pipeline, then they would both have the same value.)
>
> Some asked on StackOverflow.com:
>   Why does a Bash while loop result in PIPESTATUS "1" instead of "0"?
>   https://stackoverflow.com/q/78351657/1765658
>
> Then after some tests:
>
>   if ls /wrong/path | wc | cat - /wrong/path | sed 'w/wrong/path' 
> >/dev/null ; then
>   echo Don't print this'
>   fi ; echo ${?@Q} ${PIPESTATUS[@]@A}  $(( $? ${PIPESTATUS[@]/#/+} ))
>
>   ls: cannot access '/wrong/path': No such file or directory
>   cat: /wrong/path: No such file or directory
>   sed: couldn't open file /wrong/path: No such file or directory
>   '0' declare -a PIPESTATUS=([0]="2" [1]="0" [2]="1" [3]="4") 7
>
> Where $PIPESTATUS[0]=>2 and $?=>0 !!
>
> I could explain that '$?' is result of bash's if...then...fi group command
> executed correctly and PIPESTATUS hold result of "most-recently-executed
> foreground pipeline", but man page say:
>
> PIPESTATUS
>  An  array  variable (see Arrays below) containing a list of exit
>  status values from the processes in  the  most-recently-executed
>  foreground pipeline (which may contain only a single command).
>
>  ?   Expands  to  the exit status of the most recently executed fore‐
>  ground pipeline.
>
> If so, "$?" have to be equivalent to "${PIPESTATUS[0]}", I think.

No. That would only be true in the event that the pipeline comprises a single 
command. The present documentation is correct.

>
> I suggest that the man page be modified to replace "foreground pipeline"
> with "command" under the "?" paragraph.

It's worth reading the section of the manual that concerns "Pipelines". Not 
least, to concretely understand what they are in grammatical terms, but also to 
understand that the exit status of a pipeline is "the exit status of the last 
command, unless the pipefail option is enabled". That is, the last command that 
constitutes the pipeline; there need not be more than one.
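To illustrate with a contrived pipeline (a sketch run in a child bash for clarity; the commands are arbitrary):

```shell
# The exit status of the pipeline is that of its last command (true),
# while PIPESTATUS records the status of every constituent command.
bash -c 'false | true; echo "status=$? pipestatus=${PIPESTATUS[*]}"'
# prints: status=0 pipestatus=1 0
```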

-- 
Kerin Millar



Re: Exporting functions does not expand aliases in subshells

2024-04-11 Thread Kerin Millar
On Thu, 11 Apr 2024, at 4:57 PM, Oğuz wrote:
> On Thursday, April 11, 2024, Kerin Millar  wrote: 
> Notwithstanding, I tried declaring the same function in an interactive 
> instance of dash and found that the alias within the command 
> substitution does end up being expanded, which is in stark contrast to 
> the behaviour of bash.
>
> Bash's behavior matches that of dash in POSIX mode. This is documented 
> here in the 8th item https://tiswww.case.edu/php/chet/bash/POSIX

Thank you. I thought that I had tested the posix mode properly before posting 
but I had not.

-- 
Kerin Millar



Re: Exporting functions does not expand aliases in subshells

2024-04-11 Thread Kerin Millar
On Thu, 11 Apr 2024, at 10:05 AM, Philipp Lengauer wrote:
> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -g -O2 -flto=auto -ffat-lto-objects -flto=auto
> -ffat-lto-objects -fstack-protector-strong -Wformat -Werror=format-security
> -Wall
> uname output: Linux TAG009442498805 5.15.0-102-generic #112-Ubuntu SMP Tue
> Mar 5 16:50:32 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
>
> Bash Version: 5.1
> Patch Level: 16
> Release Status: release
>
> Description:
>
> When defining aliases and then exporting a function that uses these aliases, the
> exported function body has the aliases expanded. This makes sense because
> we cannot be sure the same aliases exist in the child process where the
> exported function will eventually be used. However, when using subshells in
> the child process, the aliases are not expanded. This is unexpected
> behavior and potentially breaks the function.
>
> Repeat-By:
>
> # this is a minimal example showing where it works and where it doesn't work
> alias echo='echo PREFIX'
> echo hello world
>  # prints "PREFIX hello world" => OK

There, the alias is expanded because you are in an interactive shell - where 
the expand_aliases shell option is already enabled - and because it is the 
"first word of a simple command".

> foo() { echo "hello world"; }

Likewise. Further, the alias is expanded at the time of the function's 
declaration.

$ declare -f foo
foo ()
{
echo PREFIX "hello world"
}

> export -f foo
> bash -c 'foo'
> # prints "PREFIX hello world" => OK
>
> foo() { output="$(echo "hello world")"; printf '%s\n' "$output"; }
> export -f 'foo'
> # prints "hello world" => NOT OK (PREFIX missing)

There, the alias does not end up being expanded ...

$ declare -f foo
foo ()
{
output="$(echo "hello world")";
printf '%s\n' "$output"
}

Given that aliases cannot be exported by way of the environment as functions 
can be, it ends up not working as you had anticipated. Even were it the case 
that they could be, you would still have needed to enable expand_aliases in the 
non-interactive shell.
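By way of a sketch, this is roughly what enabling it non-interactively looks like, bearing in mind that an alias only takes effect for commands read after the line that defines it:

```shell
bash -c '
shopt -s expand_aliases
alias echo="echo PREFIX"
# The alias is expanded at the time that the function is parsed, as it
# was defined on a preceding line.
foo() { echo "hello world"; }
foo
'
# prints: PREFIX hello world
```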

Notwithstanding, I tried declaring the same function in an interactive instance 
of dash and found that the alias within the command substitution does end up 
being expanded, which is in stark contrast to the behaviour of bash.

$ ps -o comm= -p "$$"
dash
$ alias echo='echo PREFIX'
$ foo() { output="$(echo "hello world")"; printf '%s\n' "$output"; }
$ unalias echo
$ echo ok
ok
$ foo
PREFIX hello world

The behaviour of dash seems more logical to me, though I am uncertain as to 
which shell is in the right.

-- 
Kerin Millar



Re: Parsing regression with for loop in case statement

2024-04-10 Thread Kerin Millar
On Thu, 11 Apr 2024 15:07:14 +1200
Martin D Kealey  wrote:

> I can confirm that this changed between 4.4.23(49)-release and
> 5.0.0(1)-beta, which coincides with the parser being largely rewritten.
> 
> On Thu, 11 Apr 2024 at 12:51,  wrote:
> 
> > The POSIX shell grammar specifies that a newline may optionally appear
> > before the in keyword of a for loop.
> 
> 
> I don't see that at §2.9.4 "The for Loop" (
> https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_04_03)
> and I've never seen it in the wild.
> 
> But ... oh look, it's mentioned in §2.10.2 (
> https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_10_02
> ).
> 
> I wonder when that was added, and why?

I checked as far back as Issue 4 Version 2 (1994), which supports it. 
Specifically, it specifies the following two forms:

- for name linebreak do_group
- for name linebreak in wordlist sequential_sep do_group

Issue 6 additionally specifies the following form:

- for name linebreak in sequential_sep do_group

As a consequence of https://austingroupbugs.net/view.php?id=581, Issue 7 
additionally specifies the following form:

- for name sequential_sep do_group

Note that "linebreak" implies either a "newline_list" or nothing at all. With 
that in mind, here are some examples.

for var do :; done
for var in 1; do :; done
for var in; do :; done
for var; do :; done

-- 
Kerin Millar



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Kerin Millar
On Tue, 9 Apr 2024 10:42:58 +1200
Martin D Kealey  wrote:

> On Mon, 8 Apr 2024 at 01:49, Kerin Millar  wrote:
> 
> > the method by which vim amends files is similar to that of sed -i.
> >
> 
> I was about to write "nonsense, vim **never** does that for me", but then I
> remembered that using ":w!" instead of ":w" (or ":wq!" instead of ":wq")
> will write the file as normal, but if that fails, it will attempt to remove
> it and create a new one. Ironically, that's precisely one of the cases
> where using "sed -i" is a bad idea, but at least with vim you've already
> tried ":w" and noticed that it failed, and made a considered decision to
> use ":w!" instead.
> 
> Except that nowadays many folk always type ":wq!" to exit vim, and never
> put any thought into this undesirable side effect.
> 
> I put that in the same bucket as using "kill -9" to terminate daemons, or
> liberally using "-f" or "--force" in lots of other places. Those  are bad
> habits, since they override useful safety checks, and I recommend making a
> strenuous effort to unlearn such patterns. Then you can use these stronger
> versions only when (1) the soft versions fail, and (2) you understand the
> collateral damage, and (3) you've thought about it and decided that it's
> acceptable in the particular circumstances.
> 
> -Martin
> 
> PS: I've never understood the preference for ":wq" over "ZZ" (or ":x"); I
> want to leave the modification time unchanged if I don't edit the file.

Alright. In that case, I don't know why I wasn't able to 'inject' a replacement 
command with it. I'll give it another try and see whether I can determine what 
happened.

-- 
Kerin Millar



Re: Potential Bash Script Vulnerability

2024-04-08 Thread Kerin Millar
On Mon, 8 Apr 2024, at 5:29 AM, John Passaro wrote:
> if you wanted this for your script - read all then start semantics, as 
> opposed to read-as-you-execute - would it work to rewrite yourself 
> inside a function?
>
> function main() { ... } ; main

Mostly, yes. My initial post in this thread spoke of it. It isn't a panacea 
because a sufficiently large compound command can cause bash to run out of 
stack space. In that case, all one can do is to break the script down further 
into additional, smaller, functions.

-- 
Kerin Millar



Re: Potential Bash Script Vulnerability

2024-04-07 Thread Kerin Millar
On Mon, 08 Apr 2024 00:23:38 +0300
ad...@osrc.rip wrote:

> On 2024-04-07 16:49, Kerin Millar wrote:
> > On Sun, 7 Apr 2024, at 5:17 AM, ad...@osrc.rip wrote:
> >> Hello everyone!
> >> 
> >> I've attached a minimal script which shows the issue, and my 
> >> recommended
> >> solution.
> >> 
> >> Affected for sure:
> >> System1: 64 bit Ubuntu 22.04.4 LTS - Bash: 5.1.16(1)-release - 
> >> Hardware:
> >> HP Pavilion 14-ec0013nq (Ryzen 5 5500u, 32GB RAM, Radeon graphics, nvme
> >> SSD.)
> >> System2: 64 bit Ubuntu 20.10 (No longer supported.) - Bash:
> >> 5.0.17(1)-release - Hardware: DIY (AMD A10-5800k, 32GB RAM, Radeon
> >> graphics, several SATA drives)
> >> and probably a lot more...
> >> 
> >> Not sure whether or not this is a known issue, truth be told I 
> >> discovered
> >> it years ago (back around 2016) as I was learning bash scripting, and
> >> accidentally appended a command to the running script, which got
> >> executed immediately after the script but back then I didn't find it
> >> important to report since I considered myself a noob. I figured 
> >> someone
> >> more experienced will probably find and fix it, or there must be a
> >> reason for it. I forgot it. Now watching a video about clever use of
> >> shell in XZ stuff I remembered, tested it again and found it still
> >> unpatched. :S So now I'm reporting it and hope it helps!
> > 
> > It is a known pitfall, though perhaps not as widely known as it ought 
> > to be. The reason that your usage of (GNU) sed fails as a 
> > self-modification technique is that sed -i behaves as follows.
> > 
> > 1) it creates a temporary file
> > 2) it sends its output to the temporary file
> > 3) it renames the temporary file over the original file from which it 
> > read
> > 
> > The consequence of the third step is that the original file is 
> > unlinked. In its place will be a new hard link, bearing the same name, 
> > but otherwise quite distinct from the original. Such can be easily 
> > demonstrated:
> > 
> > $ touch file
> > $ stat -c %i file
> > 1548822
> > $ strace -erename sed -i -e '' file
> > rename("./sedP2oQ5I", "file")   = 0
> > +++ exited with 0 +++
> > $ stat -c %i file
> > 1548823
> > 
> > See how the revised file has an entirely new inode number? It proves 
> > that sed does not perform 'in-place' editing at all. For more 
> > information regarding that particular topic, take a look at 
> > https://backreference.org/2011/01/29/in-place-editing-of-files/index.html.
> > 
> > Now, at the point that the original file is unlinked, its contents will 
> > remain available until such time as its reference count drops to 0. 
> > This is a characteristic of unix and unix-like operating systems in 
> > general. Let's assume that the file in question is a bash script, that 
> > bash had the file open and that it was still reading from it. Bash will 
> > not yet 'see' your modifications. However, once bash closes the file 
> > and exits, should you then instruct bash to execute the script again, 
> > it will follow the new hard link and thereby read the new file. 
> > Further, assuming that no other processes also had the original file 
> > open at the time of bash exiting, its reference count will drop to 0, 
> > and the backing filesystem will free its associated data.
> > 
> > From this, we may reason that the pitfall you stumbled upon applies 
> > where the file is modified in such a way that its inode number does not 
> > change e.g. by truncating and re-writing the file. One way to 
> > demonstrate this distinction is to apply your edit with an editor that 
> > behaves in this way, such as nano. Consider the following script.
> > 
> > #!/bin/bash
> > echo begin
> > sleep 10
> > : do nothing
> > echo end
> > 
> > You can try opening this script with nano before executing it. While 
> > the sleep command is still running, replace ": do nothing" with a 
> > command of your choosing, then instruct nano to save the amended 
> > script. You will find that the replacement command ends up being 
> > executed. Repeat the experiment with vim and you will find that the 
> > outcome is different. That's because the method by which vim amends 
> > files is similar to that of sed -i.
> > 
> > You propose a method by which bash might implicitly work around this 
> > pitfall but it would not suffice. If you p

Re: Potential Bash Script Vulnerability

2024-04-07 Thread Kerin Millar
On Sun, 7 Apr 2024, at 5:17 AM, ad...@osrc.rip wrote:
> Hello everyone!
>
> I've attached a minimal script which shows the issue, and my recommended 
> solution.
>
> Affected for sure:
> System1: 64 bit Ubuntu 22.04.4 LTS - Bash: 5.1.16(1)-release - Hardware: 
> HP Pavilion 14-ec0013nq (Ryzen 5 5500u, 32GB RAM, Radeon graphics, nvme 
> SSD.)
> System2: 64 bit Ubuntu 20.10 (No longer supported.) - Bash: 
> 5.0.17(1)-release - Hardware: DIY (AMD A10-5800k, 32GB RAM, Radeon 
> graphics, several SATA drives)
> and probably a lot more...
>
> Not sure whether or not this is a known issue, truth be told I discovered 
> it years ago (back around 2016) as I was learning bash scripting, and 
> accidentally appended a command to the running script, which got 
> executed immediately after the script but back then I didn't find it 
> important to report since I considered myself a noob. I figured someone 
> more experienced will probably find and fix it, or there must be a 
> reason for it. I forgot it. Now watching a video about clever use of 
> shell in XZ stuff I remembered, tested it again and found it still 
> unpatched. :S So now I'm reporting it and hope it helps!

It is a known pitfall, though perhaps not as widely known as it ought to be. 
The reason that your usage of (GNU) sed fails as a self-modification technique 
is that sed -i behaves as follows.

1) it creates a temporary file
2) it sends its output to the temporary file
3) it renames the temporary file over the original file from which it read

The consequence of the third step is that the original file is unlinked. In its 
place will be a new hard link, bearing the same name, but otherwise quite 
distinct from the original. Such can be easily demonstrated:

$ touch file
$ stat -c %i file
1548822
$ strace -erename sed -i -e '' file
rename("./sedP2oQ5I", "file")   = 0
+++ exited with 0 +++
$ stat -c %i file
1548823

See how the revised file has an entirely new inode number? It proves that sed 
does not perform 'in-place' editing at all. For more information regarding that 
particular topic, take a look at 
https://backreference.org/2011/01/29/in-place-editing-of-files/index.html.

Now, at the point that the original file is unlinked, its contents will remain 
available until such time as its reference count drops to 0. This is a 
characteristic of unix and unix-like operating systems in general. Let's assume 
that the file in question is a bash script, that bash had the file open and 
that it was still reading from it. Bash will not yet 'see' your modifications. 
However, once bash closes the file and exits, should you then instruct bash to 
execute the script again, it will follow the new hard link and thereby read the 
new file. Further, assuming that no other processes also had the original file 
open at the time of bash exiting, its reference count will drop to 0, and the 
backing filesystem will free its associated data.

From this, we may reason that the pitfall you stumbled upon applies where the 
file is modified in such a way that its inode number does not change e.g. by 
truncating and re-writing the file. One way to demonstrate this distinction is 
to apply your edit with an editor that behaves in this way, such as nano. 
Consider the following script.

#!/bin/bash
echo begin
sleep 10
: do nothing
echo end

You can try opening this script with nano before executing it. While the sleep 
command is still running, replace ": do nothing" with a command of your 
choosing, then instruct nano to save the amended script. You will find that the 
replacement command ends up being executed. Repeat the experiment with vim and 
you will find that the outcome is different. That's because the method by which 
vim amends files is similar to that of sed -i.

You propose a method by which bash might implicitly work around this pitfall 
but it would not suffice. If you perform an in-place edit upon any portion of a 
script that bash has not yet read and/or buffered - while bash is still 
executing said script - then the behaviour of the script will be affected. If 
you consider this to be a genuine nuisance, a potential defence is to compose 
your scripts using compound commands. For example:

#!/bin/bash
{
   : various commands here
   exit
}

Alternatively, use functions - which are really just compound commands attached 
to names:

#!/bin/bash
main() {
   : various commands here.
   exit
}
main "$@"

Doing so helps somewhat because bash is compelled to read all the way to the 
end of a compound command at the point that it encounters one, prior to its 
contents being executed.

Ultimately, the best defence against the potentially adverse consequences of 
performing an in-place edit is to refrain entirely from performing in-place 
edits.
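Should one nevertheless wish to rewrite a file without changing its inode - with all of the caveats discussed above - the following is a rough sketch, assuming GNU stat and a temporary file as scratch space. The final redirection truncates and rewrites the original file rather than replacing it, which is what preserves the inode:

```shell
dir=$(mktemp -d) && cd "$dir" || exit
printf '%s\n' 'x' > file
before=$(stat -c %i file)
tmp=$(mktemp) || exit
# Write sed's output to a temporary file, then truncate and rewrite the
# original file in place; its inode number is left unchanged.
sed -e 's/x/y/' file > "$tmp" && cat -- "$tmp" > file
after=$(stat -c %i file)
[ "$before" = "$after" ] && echo "inode preserved: $(cat file)"
# prints: inode preserved: y
```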

-- 
Kerin Millar



Re: Scope change in loops with "read" built-in

2024-04-05 Thread Kerin Millar
On Thu, 04 Apr 2024 20:39:51 -0400
"Dale R. Worley"  wrote:

> "Linde, Evan"  writes:
> > In a loop constructed like `... | while read ...`, changes to 
> > variables declared outside the loop only have a loop local
> > scope, unlike other "while" or "for" loops.
> 
> Yeah, that's a gotcha.  But it's a general feature of *pipelines*,
> documented in
> 
>Each command in a pipeline is executed as a separate process (i.e.,  in
>a  subshell).  See COMMAND EXECUTION ENVIRONMENT for a description of a
>subshell environment.  If the lastpipe  option  is  enabled  using  the
>shopt builtin (see the description of shopt below), the last element of
>a pipeline may be run by the shell process.
> 
> To circumvent that, I've sometimes done things like
> 
> exec 3<( ... command to generate stuff ... )
> while read VAR <&3; do ... commands to process stuff ... ; done
> exec 3<-
> 
> You may be able to condense that to
> 
> {
> while read VAR <&3; do ... commands to process stuff ... ; done
> } <( ... command to generate stuff ... )
> 

Owing to while being a compound command, it need not be solely enclosed by 
another. Optionally, read's -u option may be used to avoid a dup(2) syscall for 
each line read. More importantly, the necessary redirection is missing. As 
such, your example could be amended as:

while read -r -u3 var; do ... processing commands ...; done 3< <(... generating commands ...)

In the event that the processing commands are known not to attempt to read from 
STDIN, that may be further reduced to:

while read -r var; do ... processing commands ...; done < <(... generating commands ...)
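Alternatively, as the quoted excerpt from the manual alludes to, the lastpipe shell option can be enabled so that the last element of the pipeline runs in the current shell, permitting variable assignments made within the loop to persist. A sketch follows, noting that lastpipe only takes effect where job control is inactive, as is the case in non-interactive shells:

```shell
bash -c '
shopt -s lastpipe
count=0
# The while loop runs in the current shell, so the final value of count
# survives the pipeline.
printf "%s\n" a b c | while read -r line; do count=$(( count + 1 )); done
echo "count: $count"
'
# prints: count: 3
```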

-- 
Kerin Millar



Re: Bash 5.2: Missing ‘shopt’ option ‘syslog_history’ in doc/bashref.texi

2024-03-17 Thread Kerin Millar
On Mon, 18 Mar 2024 06:30:49 +0200
Oğuz  wrote:

> On Sunday, March 17, 2024, tpeplt  wrote:
> >
> >The texinfo version of the bash manual is missing a description of
> > the ‘shopt’ option ‘syslog_history’.
> >
> 
>  That must be a vendor extension, bash doesn't have such an option.

It has such an option in the case that SYSLOG_HISTORY and SYSLOG_SHOPT are 
defined at compile time.

-- 
Kerin Millar



Re: multi-threaded compiling

2024-03-11 Thread Kerin Millar
On Mon, 11 Mar 2024 15:36:48 -0400
Greg Wooledge  wrote:

> > On Mon, Mar 11, 2024, 20:13 Mischa Baars 
> > wrote:
> > 
> > > Also I don't think that gives you an exit status for each 'exit $i'
> > > started. I need that exit status.
> 
> "wait -n" without a PID won't help you, then.  You don't get the PID or
> job ID that terminated, and you don't get the exit status.  It's only

It does convey the exit status.

> of interest if you're trying to do something like "run these 100 jobs,
> 5 at a time" without storing their exit statuses.

The pid can be obtained with the -p option, as of 5.1. Below is a synthetic 
example of how it might be put into practice.

#!/bin/bash

declare -A job_by status_by
max_jobs=4
jobs=0

wait_next() {
local pid
wait -n -p pid
status_by[$pid]=$?
unset -v 'job_by[$pid]'
}

worker() {
sleep "$(( RANDOM % 5 ))"
exit "$(( RANDOM % 2 ))"
}

for (( i = 0; i < 16; ++i )); do
(( jobs++ < max_jobs )) || wait_next
worker & job_by[$!]=
done

while (( ${#job_by[@]} )); do
wait_next
done

declare -p status_by

-- 
Kerin Millar



Re: "local -g" declaration references local var in enclosing scope

2024-03-11 Thread Kerin Millar
On Mon, 11 Mar 2024 11:45:17 -0400
Chet Ramey  wrote:

> On 3/11/24 12:08 AM, Kerin Millar wrote:
> 
> > Speaking of which, to do both of these things has some interesting effects 
> > ...
> > 
> > $ z() { local -g a; unset -v a; a=123; echo "innermost: $a"; }; unset -v a; 
> > x; declare -p a
> > innermost: 123
> > inner: 123
> > outer: 123
> > declare -- a
> > 
> > $ z() { local -g a; unset -v a; unset -v a; a=123; echo "innermost: $a"; }; 
> > unset -v a; x; declare -p a
> > innermost: 123
> > inner: 123
> > outer: 123
> > declare -- a="123"
> 
> These show the normal effects of unset combined with dynamic scoping. This
> ability to unset local variables in previous function scopes has been the
> subject of, um, spirited discussion.

Of course, you are right. I had confused myself into thinking that there was 
some curious interaction between -g and the scope peeling technique on display 
there but I just tested again and could observe no adverse interaction. I'll 
attribute that to only being half-awake at the time.

> 
> > $ x() { local a; y; local +g a; a=456; echo "outer: $a"; }; unset -v a; x; 
> > declare -p a
> > innermost: 123
> > inner: 123
> > outer: 456
> > declare -- a="123"
> 
> Let's assume the previous function definitions remain in effect here. In
> that case, z continues to unset the local variable definitions in previous
> scopes, but the `local +g a' is basically a no-op -- it implies not using
> the global scope for a, which is the same as not using the option at all.

Thanks. To confirm this, I replaced "local +g a" with "local a" and saw that it 
continued to have the same effect of preventing the assignment of 456 from 
happening in the global scope. It came as a minor surprise but does make sense, 
given the purpose of local.

-- 
Kerin Millar



Re: "local -g" declaration references local var in enclosing scope

2024-03-10 Thread Kerin Millar
On Sun, 10 Mar 2024 17:36:13 -0400
Greg Wooledge  wrote:

> On Sun, Mar 10, 2024 at 04:01:10PM -0400, Lawrence Velázquez wrote:
> > Basically, without an assignment, "local -g" does nothing.
> 
> Well, the original purpose of -g was to create variables, especially
> associative arrays, at the global scope from inside a function.
> 
> I think this thread has been asking about a completely different
> application, namely to operate upon a global scope variable from
> within a function where a non-global scope variable is shadowing the
> global one.
> 
> (As far as I know, there isn't any sensible way to do that in bash.)

I agree. Given that bash does not support static scoping, it seems only 
sensible to suggest that one simply refrains from trying to do that.

> 
> 
> Here it is in action.  "local -g" (or "declare -g") without an assignment
> in the same command definitely does things.
> 
> hobbit:~$ f() { declare -g var; var=in_f; }
> hobbit:~$ unset -v var; f; declare -p var
> declare -- var="in_f"
> 
> 
> I think the key to understanding is that while "local -g var" creates
> a variable at the global scope, any references to "var" within the
> function still use the standard dynamic scoping rules.  They won't
> necessarily *see* the global variable, if there's another one at a
> more localized scope.

-- 
Kerin Millar



Re: "local -g" declaration references local var in enclosing scope

2024-03-10 Thread Kerin Millar
On Sun, 10 Mar 2024 16:01:10 -0400
Lawrence Velázquez  wrote:

> On Sun, Mar 10, 2024, at 1:51 PM, Kerin Millar wrote:
> > Dynamic scoping can be tremendously confusing. The following examples 
> > should help to clarify the present state of affairs.
> >
> > $ x() { local a; y; echo "outer: $a"; }
> > $ y() { local a; a=123; echo "inner: $a"; }
> > $ x; echo "outermost: $a"
> > inner: 123
> > outer:
> > outermost:
> >
> > This is likely as you would expect.
> >
> > $ y() { local -g a; a=123; echo "inner: $a"; }
> > $ x; echo "outermost: $a"
> > inner: 123
> > outer: 123
> > outermost:
> >
> > This may not be. There, the effect of the -g option effectively ends at 
> > the outermost scope in which the variable, a, was declared. Namely, 
> > that of the x function.
> 
> This doesn't seem to be accurate; the assignment is performed at
> the *innermost* declared scope (other than the "local -g" one):
> 
>   $ x() { local a; y; echo "outer: $a"; }
>   $ y() { local a; z; echo "inner: $a"; }
>   $ z() { local -g a; a=123; echo "innermost: $a"; }
>   $ x; echo "outermost: $a"
>   innermost: 123
>   inner: 123
>   outer:
>   outermost:
> 
> Basically, without an assignment, "local -g" does nothing.

It might be tempting to think that the criteria for being "ignored" are 
fulfilled but it is not the case. Below is something that I should also have 
tried before initially posting in this thread.

$ z() { a=123; echo "innermost: $a"; }; unset -v a; x; declare -p a
innermost: 123
inner: 123
outer:
bash: declare: a: not found

$ z() { local -g a; a=123; echo "innermost: $a"; }; unset -v a; x; declare -p a
innermost: 123
inner: 123
outer:
declare -- a

$ z() { local -g a=456; a=123; echo "innermost: $a"; }; unset -v a; x; declare 
-p a
innermost: 123
inner: 123
outer:
declare -- a="456"

I think that Greg has it right. The use of the -g option, alone, is sufficient 
to reach into the global scope, though dynamic scoping behaviour otherwise 
remains in effect. That is, one would otherwise still need to pop scopes - so 
to speak - to reach the outermost/bottommost scope.

Speaking of which, to do both of these things has some interesting effects ...

$ z() { local -g a; unset -v a; a=123; echo "innermost: $a"; }; unset -v a; x; 
declare -p a
innermost: 123
inner: 123
outer: 123
declare -- a

$ z() { local -g a; unset -v a; unset -v a; a=123; echo "innermost: $a"; }; 
unset -v a; x; declare -p a
innermost: 123
inner: 123
outer: 123
declare -- a="123"

$ x() { local a; y; local +g a; a=456; echo "outer: $a"; }; unset -v a; x; 
declare -p a
innermost: 123
inner: 123
outer: 456
declare -- a="123"

I remain somewhat uncertain that the manual conveys enough information to be 
able to reason perfectly about all of this.

-- 
Kerin Millar



Re: "local -g" declaration references local var in enclosing scope

2024-03-10 Thread Kerin Millar
On Sun, 10 Mar 2024 23:09:37 +0800
Adrian Ho  wrote:

> [Apologies, an earlier edition of this bug report was sent from the address
> a...@lex.03s.net, which can only be replied to from my internal network.
> Please ignore that report. Thanks much!]
> 
> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc-11
> Compilation CFLAGS: -DSSH_SOURCE_BASHRC
> uname output: Linux lex 6.5.0-25-generic #25-Ubuntu SMP PREEMPT_DYNAMIC Wed
> Feb  7 14:58:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
> 
> Bash Version: 5.2
> Patch Level: 26
> Release Status: release
> 
> Description:
> "local -g var" incorrectly references a local variable in an
> enclosing scope, while "local -g var=val" correctly references and modifies
> the global variable.
> 
> Repeat-By:
> Test script:
> 
> #!/usr/bin/env bash
> echo "Bash version: $BASH_VERSION"
> 
> b() {
>   local -g a=2
>   c
> }
> c() {
>   local -g a
>   a=3
> }
> 
> y() {
>   local z=1
>   x
> }
> x() {
>   local -g z=2
>   w
> }
> w() {
>   local -g z
>   echo "w: z=$z"
>   z=3
> }
> 
> b; y
> echo "a=$a z=$z"
> 
> Expected output:
> 
> Bash version: 5.2.26(1)-release
> w: z=2
> a=3 z=3
> 
> Actual output:
> 
> Bash version: 5.2.26(1)-release
> w: z=1
> a=3 z=2

Dynamic scoping can be tremendously confusing. The following examples should 
help to clarify the present state of affairs.

$ x() { local a; y; echo "outer: $a"; }
$ y() { local a; a=123; echo "inner: $a"; }
$ x; echo "outermost: $a"
inner: 123
outer:
outermost:

This is likely as you would expect.

$ y() { local -g a; a=123; echo "inner: $a"; }
$ x; echo "outermost: $a"
inner: 123
outer: 123
outermost:

This may not be. There, the effect of the -g option effectively ends at the 
outermost scope in which the variable, a, was declared. Namely, that of the x 
function.

$ y() { local -g a=123; echo "inner: $a"; }
$ x; echo "outermost: $a"
inner:
outer:
outermost: 123

There, by combining the -g option with assignment syntax, the outermost scope - 
that which most would consider as being implied by the term, global - is duly 
reached.

The manual states that the "-g option forces variables to be created or 
modified at the global scope, even when declare is executed in a shell 
function" and that "it is ignored in all other cases". I would consider this 
wording insufficient for a user to be able to effectively reason about the 
difference between the second case and the other two cases presented.

-- 
Kerin Millar



Re: [PATCH] tests/array.tests: using grep -E instead of egrep

2024-02-19 Thread Kerin Millar
On Mon, 19 Feb 2024, at 1:29 PM, Oğuz wrote:
> On Monday, February 19, 2024, Lawrence Velázquez  wrote:
>>
>> On what system does this happen?  The proposed changes might break the
>> test suite on some older systems.
>>
>
> Agreed. Those egrep invocations can be replaced with `grep -v -e
> BASH_VERSINFO -e PIPESTATUS -e GROUPS' though.

Indeed. This would be a perfectly sensible solution. For anything more 
involved, there is always awk.
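By way of a sketch of the suggested replacement, with a made-up input stream standing in for the test suite's actual output:

```shell
# Filter out the special array variables without resorting to egrep or
# grep -E; each -e option supplies an additional pattern to exclude.
printf '%s\n' BASH_VERSINFO PIPESTATUS GROUPS keep_this also_this |
grep -v -e BASH_VERSINFO -e PIPESTATUS -e GROUPS
# prints: keep_this
#         also_this
```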

-- 
Kerin Millar



Inconsistent treatment of left-hand side of conditional expression where IFS is not its default value

2024-02-18 Thread Kerin Millar
Hi,

This report stems from the discussion at 
https://lists.gnu.org/archive/html/help-bash/2024-02/msg00085.html.

Consider the following two cases.

$ ( set a -- b; f=+ IFS=$f; [[ $f$*$f == *"$f--$f"* ]]; echo $? )
0

$ ( set a -- b; f=$'\1' IFS=$f; [[ $f$*$f == *"$f--$f"* ]]; echo $? )
1

It does not make sense that the exit status value differs between these 
cases, especially since SOH is not a whitespace character (in the sense of 
field splitting). I think that the second case should also yield 0. Regardless 
of what the intended behaviour is, I would also expect for the manual to 
describe it.

Note that quoting the left-hand side fixes it for SOH. In the absence of 
quotes, xtrace output suggests that all of the SOH characters are stripped from 
the expansion of $f$*$f.

$ ( set a -- b; f=$'\1' IFS=$f; [[ "$f$*$f" == *"$f--$f"* ]]; echo $? )
0

-- 
Kerin Millar



Re: declare -A +A

2024-02-12 Thread Kerin Millar
On Tue, 13 Feb 2024 00:07:47 +0100
alex xmb sw ratchev  wrote:

> On Mon, Feb 12, 2024, 23:25 Koichi Murase  wrote:
> 
> > 2024年2月13日(火) 6:13 Chet Ramey :
> > > Only for indexed and associative arrays, since those are the attributes
> > > that cause changes in the underlying value storage format. That's
> > different
> > > than turning the integer attribute on and off, for instance.
> > >
> > > Should it be an actual error, or should the shell just cancel out the
> > > attribute change requests and go on? What do folks think?
> >
> > I think it can be a usage error; it doesn't do anything on the
> > variable (i.e. doesn't make it an associative array) and outputs an
> > error message.
> >
> 
> isn't it the GNU style that, except in defined cases, the last one used wins?

That's not quite accurate. GNU does not directly mention any such style.

  
https://www.gnu.org/prep/standards/html_node/Command_002dLine-Interfaces.html#Command_002dLine-Interfaces

Still, GNU begins by recommending that the "POSIX guidelines" be followed (in 
typical GNU fashion, not clearly referencing the source material). As such, 
here are the current "Utility Conventions".

  https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html

There, the 11th guideline states:

"The order of different options relative to one another should not matter, 
unless the options are documented as mutually-exclusive and such an option is 
documented to override any incompatible options preceding it. If an option that 
has option-arguments is repeated, the option and option-argument combinations 
should be interpreted in the order specified on the command line."

So, it expresses a preference for the last specified, mutually exclusive option 
winning, _provided_ that it is documented to that effect. For that matter, +A 
does not qualify as an option in their parlance.

In any case, it is a guideline, not an edict. It doesn't seem to me to be a 
compelling argument against having the declare builtin treat an illogical 
request as a usage error.

-- 
Kerin Millar



Re: [PATCH] printf6.sub: set LC_ALL

2024-02-07 Thread Kerin Millar
On Wed, 7 Feb 2024 13:59:47 -0500
Grisha Levit  wrote:

> The tests in printf6.sub fail if `make check' is executed in the C locale.
> 
> diff --git a/tests/printf6.sub b/tests/printf6.sub
> index fbacd4d5..382943c7 100644
> --- a/tests/printf6.sub
> +++ b/tests/printf6.sub
> @@ -11,6 +11,8 @@
>  #   You should have received a copy of the GNU General Public License
>  #   along with this program.  If not, see <http://www.gnu.org/licenses/>.
>  #
> +#LC_ALL=en_US.UTF-8
> +
>  # this should echo nothing
>  printf '%ls'
>  # this should echo a null byte
> 

Is this not merely adding a comment?

-- 
Kerin Millar



Re: set-e and command expansion

2024-02-04 Thread Kerin Millar
On Sun, 04 Feb 2024 20:27:56 +0300
Van de Bugger  wrote:

> Hi,
> 
> bash version 5.2.21(1)-release (x86_64-redhat-linux-gnu)  
> Fedora Linux 38  
> bash is installed from the Fedora repo, compiler I guess is gcc 13, probably
> 13.2 or 13.2.1
> 
> All the test scripts follow the same scheme:
> 
> #!/bin/bash  
> set -e  
> echo before  
> COMMAND  
> echo after
> 
> Let's consider a few COMMAND variations and script behavior.
> 
> Case 1: false
> 
> $ cat ./test
> #!/bin/bash
> set -e
> echo before
> false
> echo after
> 
> $ ./test  
> before
> 
> "false" exits with status 1, so "set -e" terminates the script before "echo
> after". That's expected behavior.
> 
> Case 2: var=$(false); echo $var
> 
> $ cat ./test
> #!/bin/bash
> set -e
> echo before
> var=$(false); echo $var
> echo after
> 
> $ ./test  
> before
> 
> The script is still terminated before "echo after" (actually, even before 
> "echo
> $var"). I didn't find in the bash manual how bash should behave in such case,
> but it exits and I think this is ok.

This behaviour pertains to the use of Simple Commands.

Below is an excerpt from 
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_01.

"If there is a command name, execution shall continue as described in Command 
Search and Execution. If there is no command name, but the command contained a 
command substitution, the command shall complete with the exit status of the 
last command substitution performed."

Below is an excerpt from 
https://www.gnu.org/software/bash/manual/html_node/Simple-Command-Expansion.html.

"If there is a command name left after expansion, execution proceeds as 
described below. Otherwise, the command exits. If one of the expansions 
contained a command substitution, the exit status of the command is the exit 
status of the last command substitution performed."
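The rule can be confirmed directly, without set -e in the way (the || and && 
guards also keep the demonstration errexit-safe):

```shell
# A simple command with no command name takes the exit status of the
# last command substitution performed during its expansion
var=$(false) || echo "assignment carried status $?"
var=$(false; true) && echo "assignment carried status 0"
```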

> 
> Case 3: echo $(false)
> 
> $ cat ./test  
> #!/bin/bash  
> set -e  
> echo before  
> echo $(false)  
> echo after
> 
> $ ./test  
> before
> 
> after
> 
> Oops, in this case the script is NOT terminated before "echo after", but
> continues to the end. I would say this is a bug, but interaction between "set 
> -

It isn't a bug. In this particular case, the exit status of echo was 0, while 
the exit status of the false builtin was immaterial.

The behaviour of errexit can be confusing in practice. You may find 
https://mywiki.wooledge.org/BashFAQ/105 to be an interesting read.
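The discarding of the comsub's status can be confirmed directly:

```shell
# Once there is a command name, the comsub's status is discarded;
# only the status of echo itself remains
echo "$(false)"
echo "$?"   # 0
```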

-- 
Kerin Millar



Re: cd"": Shouldn't it fail?

2024-01-30 Thread Kerin Millar
On Tue, 30 Jan 2024 10:34:28 -0500
Chet Ramey  wrote:

> On 1/30/24 10:29 AM, Kerin Millar wrote:
> 
> >>> I'm not sure that this is accurate. In my testing, bash does not even 
> >>> perform this canonicalization step, which is optional to begin with. That 
> >>> is, it states that "an implementation may" canonicalize in the manner 
> >>> described, and prescribes a course of action to take if - and only if - 
> >>> such canonicalization is performed.
> >>
> >> It does. Try `cd /usr/bin' and see what ends up in $PWD.
> > 
> > Thanks. I doubted myself on this point upon reading your response to my 
> > last bug report (in which you spoke of canonicalization). I was certain 
> > that it was not material to the behaviour under discussion in this thread, 
> > though.
> 
> It's not, really, since the operand is "" and $PWD won't contain multiple
> slashes.

I said "it was not".

-- 
Kerin Millar



Re: cd"": Shouldn't it fail?

2024-01-30 Thread Kerin Millar
On Tue, 30 Jan 2024 10:13:37 -0500
Chet Ramey  wrote:

> > POSIX Programmer's Manual (cd(1p) manpage) says this:
> 
> I don't know what this is.

Some operating systems helpfully offer packages containing POSIX material in 
the form of man pages. The pages covering the shell and standard utilities have 
a section of "1p".

https://repology.org/project/man-pages-posix/versions

-- 
Kerin Millar



Re: cd"": Shouldn't it fail?

2024-01-30 Thread Kerin Millar
On Tue, 30 Jan 2024 10:22:18 -0500
Chet Ramey  wrote:

> On 1/28/24 10:34 AM, Kerin Millar wrote:
> > On Sun, 28 Jan 2024 18:09:24 +0300
> > Oğuz  wrote:
> > 
> >> On Sun, Jan 28, 2024 at 5:10 PM  wrote:
> >>> POSIX Programmer's Manual (cd(1p) manpage) says this:
> >>>
> >>>[9 unrelated special cases]
> >>>
> >>>10. The cd utility shall then perform actions equivalent to the
> >>> chdir() function called with curpath as the path argument. If
> >>> these actions fail for any reason, the cd utility shall
> >>> display an appropriate error message and the remainder of
> >>> this step shall not be executed. [...]
> >>
> >> Right before that, it says in 8.c:
> >>> If, as a result of this canonicalization, the curpath variable is null, 
> >>> no further steps shall be taken.
> >>
> >> which is what happens in bash, busybox sh, dash, mksh, pdksh, oksh,
> >> yash, zsh, and NetBSD sh.
> >>
> > 
> > I'm not sure that this is accurate. In my testing, bash does not even 
> > perform this canonicalization step, which is optional to begin with. That 
> > is, it states that "an implementation may" canonicalize in the manner 
> > described, and prescribes a course of action to take if - and only if - 
> > such canonicalization is performed.
> 
> It does. Try `cd /usr/bin' and see what ends up in $PWD.

Thanks. I doubted myself on this point upon reading your response to my last 
bug report (in which you spoke of canonicalization). I was certain that it was 
not material to the behaviour under discussion in this thread, though.

-- 
Kerin Millar



Re: [bash-devel] Attempting to cd to the empty directory operand where ./ does not exist aborts

2024-01-29 Thread Kerin Millar
On Mon, 29 Jan 2024 10:30:43 -0500
Chet Ramey  wrote:

> On 1/29/24 5:51 AM, Kerin Millar wrote:
> 
> > $ bash -c 'declare -p BASH_VERSION; cd ""'
> > shell-init: error retrieving current directory: getcwd: cannot access 
> > parent directories: No such file or directory
> > declare -- BASH_VERSION="5.3.0(4)-devel"
> > chdir: error retrieving current directory: getcwd: cannot access parent 
> > directories: No such file or directory
> > malloc: ./cd.def:619: assertion botched
> > malloc: 0x56be197137b0: allocated: last allocated from pathcanon.c:109
> > free: start and end chunk sizes differ
> > Aborting...Aborted (core dumped)
> 
> Thanks for the report. `cd' should not try to canonicalize empty pathnames;
> it should just see what chdir(2) returns and go from there.
> 
> > And, again, with bash 5.2.
> 
> I suspect your version of bash-5.2 is built without the bash malloc for
> some reason, since I can reproduce this with bash-5.2.

You are quite right.

https://gitlab.archlinux.org/archlinux/packaging/packages/bash/-/blob/c5dfc21dfe74524ca5766af83924cc8c3e3f1a0a/PKGBUILD#L60

-- 
Kerin Millar



[bash-devel] Attempting to cd to the empty directory operand where ./ does not exist aborts

2024-01-29 Thread Kerin Millar
Hi,

This is with commit 138f3cc3591163d18ee4b6390ecd6894d5d16977 running on Linux 
6.7.2 and glibc-2.38.

$ mkdir -p ~/dir && cd ~/dir && rmdir ~/dir
$ cd ""
bash: cd: : No such file or directory

So far, so good. Now let's try to cd from a non-interactive instance.

$ bash -c 'declare -p BASH_VERSION; cd ""'
shell-init: error retrieving current directory: getcwd: cannot access parent 
directories: No such file or directory
declare -- BASH_VERSION="5.3.0(4)-devel"
chdir: error retrieving current directory: getcwd: cannot access parent 
directories: No such file or directory
malloc: ./cd.def:619: assertion botched
malloc: 0x56be197137b0: allocated: last allocated from pathcanon.c:109
free: start and end chunk sizes differ
Aborting...Aborted (core dumped)

And, again, with bash 5.2.

$ /bin/bash -c 'declare -p BASH_VERSION; cd ""'
shell-init: error retrieving current directory: getcwd: cannot access parent 
directories: No such file or directory
declare -- BASH_VERSION="5.2.26(1)-release"
chdir: error retrieving current directory: getcwd: cannot access parent 
directories: No such file or directory

-- 
Kerin Millar



Re: cd"": Shouldn't it fail?

2024-01-28 Thread Kerin Millar
On Sun, 28 Jan 2024 18:09:24 +0300
Oğuz  wrote:

> On Sun, Jan 28, 2024 at 5:10 PM  wrote:
> > POSIX Programmer's Manual (cd(1p) manpage) says this:
> >
> >   [9 unrelated special cases]
> >
> >   10. The cd utility shall then perform actions equivalent to the
> >chdir() function called with curpath as the path argument. If
> >these actions fail for any reason, the cd utility shall
> >display an appropriate error message and the remainder of
> >this step shall not be executed. [...]
> 
> Right before that, it says in 8.c:
> > If, as a result of this canonicalization, the curpath variable is null, no 
> > further steps shall be taken.
> 
> which is what happens in bash, busybox sh, dash, mksh, pdksh, oksh,
> yash, zsh, and NetBSD sh.
> 

I'm not sure that this is accurate. In my testing, bash does not even perform 
this canonicalization step, which is optional to begin with. That is, it states 
that "an implementation may" canonicalize in the manner described, and 
prescribes a course of action to take if - and only if - such canonicalization 
is performed.

I think that step #5 is the relevant one: given a null CDPATH value, the 
concatenation of ,  and the empty operand (usually) names an 
existing directory. This matter is also mentioned by 
https://austingroupbugs.net/view.php?id=1047.

To summarise, the behaviour of bash appears to conform with the wording of 
Issue 7 but it may have to change for Issue 8 or a later revision.

-- 
Kerin Millar



Re: cd"": Shouldn't it fail?

2024-01-28 Thread Kerin Millar
On Sun, 28 Jan 2024 04:30:48 +0100
j...@jwo.cz wrote:

> My opinion on this is that the ``cd ""'' command should fail with an
> error in this case. The zsh is the only POSIX-compatible or
> POSIX-shell-like shell that was available on machines to which I have
> access that exhibits this behavior.
> 
> I think that the empty argument can be relatively frequently caused by
> substitution with some undefined variable -- it's better to warn the
> user than to perform actions in the wrong directory.
> 
> We risk that this will break some badly written scripts.
> 
> I propose to make Bash report an error when the user attempts to cd into
> empty string, possibly with a shopt option that allows changing
> behaviour of the cd back to the current one -- silently ignoring the
> command.
> 
>   Jiri Wolker.
> 

Related: https://austingroupbugs.net/view.php?id=1047

An associated issue is that there presently appears to be no way of suppressing 
the processing of CDPATH in bash.

-- 
Kerin Millar



Re: inconsistent handling of closing brace inside no-fork command substitution

2024-01-03 Thread Kerin Millar
On Wed, 3 Jan 2024 22:36:34 +0100
Martin Schulte  wrote:

> Hello Oğuz!
> 
> > See:
> > 
> > $ ${ case } in }) echo uname; esac }
> > Linux
> > $ ${ case }x in }x) echo uname; esac }
> > bash: command substitution: line 25: syntax error near unexpected token 
> > `x'
> > bash: command substitution: line 25: `}x)'
> > $ ${ case }x in \}x) echo uname; esac }
> > Linux
> 
> I couldn't reproduce this with neither 5.1.4 nor 5.2.15 - in both cases

Neither of those versions support the non-forking command substitution syntax. 
You would need to build bash from the devel branch to reproduce it.

-- 
Kerin Millar



Re: A possible bug?

2023-12-26 Thread Kerin Millar
On Tue, 26 Dec 2023, at 6:06 PM, George R Goffe wrote:
> Hi,
>
> I'm building bash with the bash at "git clone 
> https://git.savannah.gnu.org/git/bash.git" and am seeing the following 
> messages:
>
> make depend
> bash ./support/mkdep -c gcc --  -DPROGRAM='"bash"' 
> -DCONF_HOSTTYPE='"x86_64"' -DCONF_OSTYPE='"linux-gnu"' 
> -DCONF_MACHTYPE='"x86_64-pc-linux-gnu"' -DCONF_VENDOR='"pc"' 
> -DLOCALEDIR='"/usr/lsd/Linux/share/locale"' -DPACKAGE='"bash"' -DSHELL 
> -DHAVE_CONFIG_H   -I.  -I. -I./include -I./lib -I./lib/intl 
> -I/tools/bash/bash/lib/intl   -g -O2 -Wno-parentheses 
> -Wno-format-security shell.c eval.c parse.y general.c make_cmd.c 
> print_cmd.c y.tab.c dispose_cmd.c execute_cmd.c variables.c  version.c 
> expr.c copy_cmd.c flags.c subst.c hashcmd.c hashlib.c mailcheck.c 
> test.c trap.c alias.c jobs.c nojobs.c  braces.c input.c bashhist.c 
> array.c arrayfunc.c assoc.c sig.c pathexp.c unwind_prot.c siglist.c 
> bashline.c bracecomp.c error.c list.c stringlib.c locale.c findcmd.c 
> redir.c pcomplete.c pcomplib.c syntax.c xmalloc.c
> bash: ./support/mkdep: No such file or directory
> make: *** [Makefile:978: depends] Error 127
>
>
>
> I do have a full log if you need/want it. I don't see mkdep anywhere in 
> the build tree. Is this a "bug" or a mistake on my part?

As far as I can tell, it exists only in the devel branch. However, mkdep should 
not be used - at least, not for GNU make. Which commands were issued on your 
part?

-- 
Kerin Millar



Re: issue with debug trap

2023-12-15 Thread Kerin Millar
On Sat, 16 Dec 2023 00:09:10 +
Kerin Millar  wrote:

> At this point, the value of $? is 1, prior to executing true - a simple 
> command. Just as for any other simple command, the trap code shall be 
> executed beforehand. Consequently, your test observes that $? is 
> arithmetically false and acts accordingly. Keep in mind that this is the only 
> part of your script in which an "else" clause is actually reached.

Of course, I meant to write "arithmetically true" there.

-- 
Kerin Millar



Re: issue with debug trap

2023-12-15 Thread Kerin Millar
On Fri, 15 Dec 2023 17:21:23 -0400
Giacomo Comes  wrote:

> Hi,
> I have stumbled upon a bug or something I don't understand while
> using the debug trap.
> Please run the script at the end.
> When debug is turned on, during its execution the program
> prints the line number and the line content which returned a non
> zero value (error).
> 
> If you look at the script, the only line which should cause
> a non zero return value is:
>   ! :
> However the script shows also a non zero return value
> before executing the 'true' command.
> I can only imagine that the sequence
>   if ((0)); then
> before the 'else' is the one causing a non zero
> return value, however the previous:
>   if ((0)); then
> :
>   fi
> (without the else clause) does not cause a non zero return value.
> Is this the expected behavior (and if yes why)?

Yes, it is to be expected.

$ if false; then true; else echo "$?"; fi
1

> Or is it a bug?
> Seen in bash 4.4 and 5.2.
> 
> Giacomo Comes
> 
> #!/bin/bash
> debugon () {
> trap 'if (($?)); then echo "$((LINENO-1)): $(sed -n "$((LINENO-1))p" 
> "$0")" ; fi' DEBUG
> }

The trap here ought to report LINENO without deducting 1. Otherwise, it is a 
recipe for confusion.

> debugoff () {
> trap '' DEBUG
> }
> debugon
> 
> :
> ! :
> if ((1)); then
>   :
> fi
> if ((0)); then
>   :
> fi
> if ((1)); then
>   :
> else
>   :
> fi
> if ((0)); then
>   :
> else

At this point, the value of $? is 1, prior to executing true - a simple 
command. Just as for any other simple command, the trap code shall be executed 
beforehand. Consequently, your test observes that $? is arithmetically false 
and acts accordingly. Keep in mind that this is the only part of your script in 
which an "else" clause is actually reached.

>   true
> fi
> 
> debugoff
> 
> 
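A reduced sketch of the mechanism (bash-specific, as DEBUG traps are an 
extension; `! true` stands in for the report's `! :`):

```shell
# The DEBUG trap runs before each simple command and observes the exit
# status of whatever executed previously
log=''
trap 'log="$log[$?]"' DEBUG
! true        # exit status 1
true          # the trap fires first here and records [1]
trap - DEBUG
echo "$log"
```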

-- 
Kerin Millar



Re: funsub questions

2023-12-13 Thread Kerin Millar
On Wed, 13 Dec 2023 23:16:11 -0500
Zachary Santer  wrote:

> On Wed, Dec 13, 2023 at 11:06 PM Greg Wooledge  wrote:
> > Is that on a system that lacks a process manager?  Something like
> > "systemctl reload ssh" or "service ssh reload" would be preferred from
> > a system admin POV, on systems that have process managers.
> I am not super knowledgeable in this kind of stuff, but would that not
> cause you to lose your SSH connection?

It would not. Nor would even a restart, owing to the way privilege separation 
is implemented in sshd(8).

-- 
Kerin Millar



Re: funsub questions

2023-12-13 Thread Kerin Millar
On Wed, 13 Dec 2023 21:17:05 -0500
Greg Wooledge  wrote:

> On Wed, Dec 13, 2023 at 08:50:48PM -0500, Zachary Santer wrote:
> > Would there be a purpose in implementing ${< *file*; } to be the equivalent
> > of $(< *file* )? Does $(< *file* ) itself actually fork a subshell?
> 
> $(< file) does indeed fork.  The only difference between $(< file) and
> $(cat file) is the latter also does an exec (but it's portable).

This stopped being the case with the release of 5.2 or thereabouts.
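A minimal sketch of the construct (the temporary file is illustrative):

```shell
# $(< file) substitutes the file's contents; as of bash 5.2 it is
# handled internally rather than by forking a subshell
tmp=$(mktemp)
printf 'hello\n' > "$tmp"
content=$(< "$tmp")
echo "$content"
rm -f "$tmp"
```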

-- 
Kerin Millar



Re: funsub questions

2023-12-13 Thread Kerin Millar
On Wed, 13 Dec 2023 20:50:48 -0500
Zachary Santer  wrote:

> Would there be a purpose in implementing ${< *file*; } to be the equivalent
> of $(< *file* )? Does $(< *file* ) itself actually fork a subshell?

No, $(< file) does not fork.

> 
> Would using funsubs to capture the stdout of external commands be
> appreciably faster than using comsubs for the same?

In the case of a script that would otherwise fork many times, frequently, the 
difference is appreciable and can be easily measured. However, scripts of that 
nature sometimes benefit from being written in a way that does not involve 
comsubs. Therefore, I would place a greater value on the elimination of 
gratuitous comsubs, where possible, than to merely replace all of them with 
funsubs (notwithstanding that 5.3 has yet to be released).
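As a sketch of what such elimination looks like in practice (the path and 
variable names are illustrative), parameter expansion can replace two of the 
most common comsubs:

```shell
# Parameter expansion avoids the fork entirely; note these are
# simplifications, not full replacements for basename(1)/dirname(1)
p=/usr/local/bin/tool
base=${p##*/}   # instead of $(basename "$p")
dir=${p%/*}     # instead of $(dirname "$p")
echo "$base $dir"
```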

-- 
Kerin Millar



Re: TAB completion bug

2023-12-05 Thread Kerin Millar
On Wed, 6 Dec 2023 05:43:43 +
Kerin Millar  wrote:

> On Tue, 5 Dec 2023 23:46:51 +
> Ole Tange via Bug reports for the GNU Bourne Again SHell  
> wrote:
> 
> > Configuration Information [Automatically generated, do not change]:
> > Machine: x86_64
> > OS: linux-gnu
> > Compiler: gcc
> > Compilation CFLAGS: -g -O2
> > uname output: Linux aspire 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
> > 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
> > Machine Type: x86_64-pc-linux-gnu
> > 
> > Bash Version: 5.2
> > Patch Level: 21
> > Release Status: release
> > 
> > Description:
> > Tested on git (2023-12-06).
> > 
> > For the (admittedly weirdly named) dirs below TAB completion does not 
> > work correctly.
> > 
> > Repeat-By:
> > #!/bin/bash
> > 
> > # TAB works
> > # $ ls -l ta
> > # 
> > # Tab completes but is escaped wrongly:
> > # $ ls -l ta
> > # 
> 
> I can confirm this for both 5.2.21 and the development branch. The backticks 
> are not quoted as they ought to be, resulting in a command substitution.
> 
> > 
> > mkdir -p 'tab/
> > `/tmp/trip`>/tmp/tripwire;
> > '"'"'@ > 
> > # These give the same
> > # $ ls -l tw
> > # $ ls -l tw
> > # But the last should include tmp
> 
> I was not able to reproduce this, however.

Apologies. I had overlooked the presence of the second mkdir command entirely. 
Indeed, it does not complete.

-- 
Kerin Millar



Re: TAB completion bug

2023-12-05 Thread Kerin Millar
On Tue, 5 Dec 2023 23:46:51 +
Ole Tange via Bug reports for the GNU Bourne Again SHell  
wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -g -O2
> uname output: Linux aspire 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
> 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
> 
> Bash Version: 5.2
> Patch Level: 21
> Release Status: release
> 
> Description:
> Tested on git (2023-12-06).
> 
> For the (admittedly weirdly named) dirs below TAB completion does not work 
> correctly.
> 
> Repeat-By:
> #!/bin/bash
> 
> # TAB works
> # $ ls -l ta
> # 
> # Tab completes but is escaped wrongly:
> # $ ls -l ta
> # 

I can confirm this for both 5.2.21 and the development branch. The backticks 
are not quoted as they ought to be, resulting in a command substitution.

> 
> mkdir -p 'tab/
> `/tmp/trip`>/tmp/tripwire;
> '"'"'@ 
> # These give the same
> # $ ls -l tw
> # $ ls -l tw
> # But the last should include tmp

I was not able to reproduce this, however.

$ cd ta
$ echo "${PWD@Q}"
$'/home/kerin/tange/tab/\n`/tmp/trip`>'
$ cd ../../../..
$ cd ta
$ echo "${PWD@Q}"
$'/home/kerin/tange/tab/\n`/tmp/trip`>/tmp'

That was with programmable completion disabled (shopt -u progcomp) and the 
following directory structure in place.

$ LC_ALL=C find . -mindepth 1 -exec ls -1d --quoting-style=c {} +
"./tab"
"./tab/\n`"
"./tab/\n`/tmp"
"./tab/\n`/tmp/trip`>"
"./tab/\n`/tmp/trip`>/tmp"
"./tab/\n`/tmp/trip`>/tmp/tripwire;\n'@/tmp/tripwire;\n'@

Re: Fwd: Strange results

2023-10-27 Thread Kerin Millar
On Fri, 27 Oct 2023 19:28:15 +0700
Victor Pasko  wrote:

> Let me ask more questions presented in the sent bug2.bash
> Look at the following lines:
> 
> *printf -v a_int '%d' "'a" *# very strange syntax here to use char as
> integer
> echo "echo4 $a_int"   # should be 97 at the end

All implementations of the standard printf utility are required to support 
this, as expressed by the third-from-last paragraph of its EXTENDED 
DESCRIPTION, and the bullet points that follow it.

https://pubs.opengroup.org/onlinepubs/9699919799/utilities/printf.html
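For instance (portable to any conformant printf implementation):

```shell
# A leading single quote makes a numeric conversion yield the ordinal
# value of the character that follows it
ord=$(printf '%d' "'a")
echo "$ord"   # 97 in ASCII-compatible locales
```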

>
> 
> echo "echo5 *${ASCII_SET:$((a_int-12)):1}" *# you can see letter u at the
> end as at 97-12=85 place
> echo "echo5.1 ${ASCII_SET:a_int-12:1}"# you can see letter u at the end
> as at 97-12=85 place
> 
> 1) Could you please suggest elegant way to cast string symbol to
> integer instead of printf for  * "'a"*

Your request implies that printf is ill-suited to the task but I do not see 
why. It allows you to convert a character to its ordinal value according to 
the system's character type (LC_CTYPE). What is your objection to using it? If 
it is simply that you don't consider it to be elegant, there are many other 
languages to choose from.

> 2) Here are some unsuccessful examples to use bitwise operations with
> symbols:
> 
> % echo $(("'a" >> 4))
> -bash: 'a >> 4: syntax error: operand expected (error token is "'a >> 4")

You are in an arithmetic context there, and must abide by its rules of syntax 
(which are mostly those of ANSI C, not the printf utility). Obtain the integer 
value first.

$ char=a; printf -v ord %d "'${char}"; echo $(( ord >> 4 ))
6
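Should the reverse mapping ever be needed, a hypothetical helper pair (the 
names ord and chr are my own) can round-trip a character through its ordinal:

```shell
# ord: character to decimal ordinal; chr: ordinal back to character,
# via printf's octal escape in the format string
ord() { printf '%d' "'$1"; }
chr() { printf "\\$(printf '%03o' "$1")"; }
c=$(chr "$(ord A)")
echo "$c"   # A
```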

-- 
Kerin Millar



Re: Fwd: Strange results

2023-10-27 Thread Kerin Millar
On Fri, 27 Oct 2023 15:34:16 +0100
Kerin Millar  wrote:



> Keep in mind that the Shell Command Language specification requires that 
> "Default Value" parameter expansion be implemented in the way that it is, and 
> that there are countless scripts that depend on the status quo. However, the 
> Shell Command Language also doesn't suffer from this ambiguity because it 
> doesn't specify any means of performing "Substring Expansion" to begin with 
> (it's a bash extension). Since there is no way for bash to otherwise know 
> that "-10:1" wasn't intended as the word in ${parameter:-word}, you'll have 
> to choose a preferred workaround for negative offsets and live with it, as 
> irksome as it may be.

Of course, I meant to write "10:1" there, not "-10:1".

-- 
Kerin Millar



Re: Fwd: Strange results

2023-10-27 Thread Kerin Millar
On Fri, 27 Oct 2023 19:28:15 +0700
Victor Pasko  wrote:

> See my comments below inline
> 
> On Fri, Oct 27, 2023 at 2:50 AM Kerin Millar  wrote:
> 
> > On Fri, 27 Oct 2023 02:00:01 +0700
> > Victor Pasko  wrote:
> >
> > > -- Forwarded message -
> > > From: Victor Pasko 
> > > Date: Fri, Oct 27, 2023 at 1:57 AM
> > > Subject: Re: Strange results
> > > To: Dennis Williamson 
> > >
> > >
> > >
> > > Also
> > >
> > > echo10 ${ASCII_SET:$((-10)):1}
> >
> > This is the "Substring Expansion" kind of parameter expansion.
> > >
> > > and
> > >
> > > echo11 ${ASCII_SET:-10:1}
> >
> > This is the "Use Default Values" kind of parameter expansion.
> >
> > >
> > > have different behaviour:(
> >
> > Substring expansions already imply a numeric context. A single pair of
> > enclosing brackets is enough to avoid this pitfall.
> >
> > ${ASCII_SET:(-10):1}
> >
> > Another method is to have a leading space.
> >
> > ${ASCII_SET: -10:1}
> >
> 
>  Well, it's kind of a workaround to use brackets or extra space, but how to
> recognize such expectations according to string operation with -10:1 ?

That much is easy to explain.

${ASCII_SET:-
^ At this point, it looks like a "Default Value" parameter expansion

Therefore, it will be treated as one. If your intention is to use a negative 
offset with a Substring Expansion then you must write your code in such a way 
that it can be disambiguated by the parser. Here are some other ways of going 
about it.

echo "${ASCII_SET:0-10:1}"
i=-10; echo "${ASCII_SET:i:1}"

Keep in mind that the Shell Command Language specification requires that 
"Default Value" parameter expansion be implemented in the way that it is, and 
that there are countless scripts that depend on the status quo. However, the 
Shell Command Language also doesn't suffer from this ambiguity because it 
doesn't specify any means of performing "Substring Expansion" to begin with 
(it's a bash extension). Since there is no way for bash to otherwise know that 
"-10:1" wasn't intended as the word in ${parameter:-word}, you'll have to 
choose a preferred workaround for negative offsets and live with it, as irksome 
as it may be.
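A compact way to see both parses side by side (the string s is illustrative):

```shell
s=abcdefghij
echo "${s:0-3:1}"   # h -- a Substring Expansion with a negative offset
echo "${s:-3:1}"    # abcdefghij -- parsed as a Default Value expansion
```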

As concerns your other questions, I shall respond to them in a separate message.

-- 
Kerin Millar



Re: Fwd: Strange results

2023-10-26 Thread Kerin Millar
On Fri, 27 Oct 2023 02:00:01 +0700
Victor Pasko  wrote:

> -- Forwarded message -
> From: Victor Pasko 
> Date: Fri, Oct 27, 2023 at 1:57 AM
> Subject: Re: Strange results
> To: Dennis Williamson 
> 
> 
> 
> Also
> 
> echo10 ${ASCII_SET:$((-10)):1}

This is the "Substring Expansion" kind of parameter expansion.
> 
> and
> 
> echo11 ${ASCII_SET:-10:1}

This is the "Use Default Values" kind of parameter expansion.

> 
> have different behaviour:(

Substring expansions already imply a numeric context. A single pair of 
enclosing brackets is enough to avoid this pitfall.

${ASCII_SET:(-10):1}

Another method is to have a leading space.

${ASCII_SET: -10:1}

> 
> Both of these say "output the character that's 10th from the end" which is
> > "u". What did you expect it to output?
> >
> > echo "echo11 ${ASCII_SET:-10:1}"
> >
> 
> Firstly, expected the only one symbol from  ASCII_SET string
> 
> This says, according to the man page:
> >
> >${parameter:-word}
> >   Use Default Values.  If parameter is unset or null, the
> > expansion of word is substituted.  Otherwise, the value of parameter is
> > substituted
> >
> > which means output "10:1" if ASCII_SET is unset or null. Since it isn't,
> > the contents of that variable are output giving you a long sequence of
> > ASCII characters.
> >
> 
> But ASCII_SET is not unset so -word must not be used

It behaves precisely as the manual states. The parameter, ASCII_SET, is neither 
unset nor null (empty). Therefore, the value of the parameter is substituted, 
rather than the given word of "10:1".
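A minimal illustration of the three cases (v and the word are illustrative):

```shell
# ${v:-word} substitutes word only when v is unset or null
unset v
echo "${v:-fallback}"   # fallback (unset)
v=''
echo "${v:-fallback}"   # fallback (null)
v=value
echo "${v:-fallback}"   # value
```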

-- 
Kerin Millar



Re: Simple use of HOME with ${} instead of single $ sometimes doen't work... Ubuntu 22.04.3 LTS

2023-10-20 Thread Kerin Millar
On Fri, 20 Oct 2023 09:41:26 + (UTC)
Etienne Lorrain via Bug reports for the GNU Bourne Again SHell 
 wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -g -O2 -flto=auto -ffat-lto-objects -flto=auto 
> -ffat-lto-objects -fstack-protector-strong -Wformat -Werror=format-security 
> -Wall
> uname output: Linux etienne-7950x 6.2.0-34-generic #34~22.04.1-Ubuntu SMP 
> PREEMPT_DYNAMIC Thu Sep  7 13:12:03 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
> 
> Bash Version: 5.1
> Patch Level: 16
> Release Status: release
> 
> Description:
> simple unmodified cut of a session:
> 
> etienne@etienne-7950x:~$ /bin/bash --version
> GNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu)
> Copyright (C) 2020 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> 
> This is free software; you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> etienne@etienne-7950x:~$ /bin/bash
> etienne@etienne-7950x:~$ echo 
> ${​​​HOME}​​​:/home/${​​​USER}​​​
> bash: ${​​​HOME}​​​:/home/${​​​USER}​​​: bad substitution
> etienne@etienne-7950x:~$ echo ${​​​HOME}​​​
> bash: ${​​​HOME}​​​: bad substitution

The above commands are chock full of ZERO-WIDTH SPACE characters, encoded as 
UTF-8.

00000000  65 63 68 6f 20 24 7b e2  80 8b e2 80 8b e2 80 8b  |echo ${.........|
00000010  e2 80 8b e2 80 8b e2 80  8b e2 80 8b 48 4f 4d 45  |............HOME|
00000020  7d e2 80 8b e2 80 8b e2  80 8b e2 80 8b e2 80 8b  |}...............|
00000030  e2 80 8b e2 80 8b 0a                              |.......|
00000037

Bash prints these non-printing characters within the diagnostic message exactly 
as they are, making the problem harder to diagnose. Nevertheless, it is quite 
correct in pointing out that it is a bad substitution.
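A quick way to check for such characters is to hex-dump the suspect line; the 
octal escapes below merely fabricate a U+200B (e2 80 8b in UTF-8) for 
demonstration:

```shell
# Piping the line through od exposes invisible bytes that the terminal
# renders as nothing at all
printf 'echo ${\342\200\213HOME}' | od -An -tx1
```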

-- 
Kerin Millar



Re: variable set in exec'ing shell cannot be unset by child shell

2023-10-13 Thread Kerin Millar
On Fri, 13 Oct 2023 13:02:30 -0400
Ti Strga  wrote:

> First off, I have a feeling that GMail is going to garble the line
> wrapping in this message; I cannot get it to stop being "helpful".
> Apologies if that happens.
> 
> I've encountered some behavior that I cannot find described anywhere in
> the man page, and I'm hoping to learn whether it's a bug (it seems like
> unintended behavior) or just a quirk for hysterical raisins.  If it's
> the latter then I'm also hoping there's a BASH_COMPAT level that might
> adjust the behavior, although I'll state right now that I have no idea
> whether previous versions behaved any differently.
> 
> The summary is that if a parameter is set specifically for a '.'/'source'
> command, and the source'd file calls 'exec' to run another script, then
> that exec'd script cannot unset the parameter; if we want that parameter
> to not be present in the exec'd script, then the source'd file must do
> the unsetting prior to exec.
> 
> We're running this...
> 
> $ declare -p BASH_VERSINFO
> declare -ar BASH_VERSINFO=([0]="5" [1]="2" [2]="15" [3]="3"
> [4]="release" [5]="x86_64-pc-cygwin")
> 
> ...although the platform [5] doesn't seem to matter; the same behavior
> was reported to me on Linux as well as what I'm observing on Cygwin.  I
> did not have a chance to verify the Linux behavior firsthand.
> 
> 
> === Background (or, I Promise This Isn't Just Code Golf)
> 
> The example reproduction here is a calling script "outer" sourcing
> "inner.sh".  The real world situation is that "inner.sh" is a small
> library of shell functions and environment variable setup for our workflow,
> and multiple top-level scripts each '.' that library.
> 
> The games here with exec are to support scripts that might be running for
> a long time.  For those we want the script to make a temporary copy of
> itself and exec the temp copy, so that potential updates to the installed
> scripts don't hose up the long-running shell when it suddenly reads from
> a different point in the script.[*]  The way it's implemented, the author
> of the top-level script can simply set a parameter when sourcing the
> library; the library makes the copy and performs the exec.  When the copy
> sets the same parameter and sources the library, the library detects the
> cloning and will not keep doing it.  (The library also fixes up what gets
> reported as "name of current script" for error messages and whatnot, but
> none of that is shown here as it doesn't change the weird behavior.)
> 
> [*] Alternatively, there's the trick about putting the entire script
> contents inside a compound statement to force the parser to read it all,
> but that just makes the script harder for a human to read.  Copy-and-exec
> makes the top-level scripts cleaner IMHO.
> 
> The kicker is that the parameters that trigger all this behavior must be
> unset before going on with the remainder of the library and back to the
> calling script.  If not, then anytime a "cloned" script might call any
> other script, that will be cloned as well even if its author did not write
> anything saying to do that.  (And in a couple cases, the scripts actually
> start an interactive subshell; if the parameters get exported to there,
> then "CLONE ALL THE THINGS" behavior just keeps propagating through the
> scripts.  Hilarity ensues.)
> 
> 
> === Reproduction

Bash employs dynamic scoping and the behaviour of unset can be confusing in 
some cases (with local variables in particular). However, given the code that 
you presented, I am unable to reproduce the claimed behaviour in any of 5.1.16, 
5.2.15 and the devel branch. In the absence of a minimal reproducible example, 
it is difficult to comment. Incidentally, echo is not ideal as a tool for 
determining the state of a given variable. Consider "declare -p INSIDE OUTSIDE" 
instead.
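The reason is that echo cannot distinguish an unset variable from one that is set to the empty string, whereas declare -p reports the true state. A minimal illustration, assuming bash:

```shell
#!/usr/bin/env bash
unset -v var
echo "$var"        # prints an empty line, exactly as it would if var were empty
var=""
declare -p var     # declare -- var="" (set, and empty)
unset -v var
declare -p var || echo 'var is unset'   # declare -p fails: var not found
```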

-- 
Kerin Millar



Re: Some minor notes on manual chapter 4 "Shell Builtin Commands"

2023-10-09 Thread Kerin Millar
On Mon, 9 Oct 2023 14:56:24 -0400
Chet Ramey  wrote:

> On 10/9/23 1:57 AM, Kerin Millar wrote:
> 
> > Just to add that, while POSIX does not specify the behaviour of the exit 
> > builtin where its operand "is not an unsigned decimal integer or is greater 
> > than 255", it does forbid all special built-in utilities (including exit) 
> > from causing an interactive shell to exit on account of an error. 
> 
> Again, it's a mixed bag. The ash-derived shells and mksh exit, others

Very mixed indeed.

> (bash, yash) do not. I can see not exiting if the interactive shell is
> in posix mode.

I don't mean to nitpick over a minor issue but bash does also report that a 
numeric argument is "required", implying (at least, to my mind) that the 
built-in requires a valid argument to be able to act at all. Should bash remain 
able to exit in the interactive context, it might make sense to avoid that 
particular word.

-- 
Kerin Millar



Re: Some minor notes on manual chapter 4 "Shell Builtin Commands"

2023-10-09 Thread Kerin Millar
On Mon, 9 Oct 2023 10:35:20 -0400
Chet Ramey  wrote:

> On 10/8/23 7:16 PM, Martin Schulte wrote:
> 
> > The following has been tested with bash 5.2.15:
> > 
> > - 3.7.5 Exit Status says: "All builtins return an exit status of 2 to 
> > indicate incorrect usage, generally invalid options or missing arguments." 
> > but cd with two or more non-optional arguments returns an exit status of 1.
> 
> There is surprising variance in behavior here, from a status of 2 to 1
> to 0 (dash), plus the silly ksh "substitute old for new in $PWD," which
> the NetBSD sh (!) also performs. I agree that a status of 2 is reasonable.
> 
> The historical sh behavior is to ignore additional arguments.
> 
> > - The same is true if exit is called with two or more argument where the 
> > first is numeric. This exit doesn't terminate bash.
> 
> More varying behavior. ash-based shells (dash, BSD sh, etc.) ignore any
> additional arguments -- the historical sh behavior. bash and yash treat
> it as a non-fatal error. mksh treats it as a fatal error, which I suppose
> it can because `exit' is a posix special builtin. Posix makes it all
> explicitly unspecified, even whether the return status is non-zero.
> 
> > - When exit is invoked with a non-numeric first argument it terminates 
> > bash. That seems to be inconsistent with the behaviour described before, 
> > while the exit status of the shell is 2 and consistent in some way.
> 
> Everyone does this (including the exit status of 2) except ksh93, which
> simply ignores any error and exits with a 0 status. Posix makes the
> behavior unspecified.

Although, not everyone does it in the case that the shell is interactive. There 
is scarcely any opportunity to read the ensuing diagnostic message before the 
terminal potentially closes. Here is how it looks in Apple's Terminal, which 
defaults to "Don't close the window".

$ exit foo
logout
-bash: exit: foo: numeric argument required

Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.

That is, bash chooses to treat it as an error (which is perfectly sensible) but 
exits the interactive instance as a consequence. I think that the latter 
behaviour goes against the spirit of section 2.8.1 and that it is neither 
helpful nor useful. Instead, I think that it should continue to print the 
diagnostic message and set $? to 2, but not exit the interactive instance.
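For a non-interactive instance, the present behaviour is easy to confirm: the shell terminates with a status of 2 and never reaches the commands that follow. A quick check, assuming bash 5.x:

```shell
# exit with a non-numeric operand terminates a non-interactive bash;
# the diagnostic is printed and the shell's exit status is 2.
bash -c 'exit foo; echo "not reached"' 2>/dev/null
echo "status: $?"   # status: 2
```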

-- 
Kerin Millar



Re: Some minor notes on manual chapter 4 "Shell Builtin Commands"

2023-10-08 Thread Kerin Millar
On Mon, 9 Oct 2023 01:16:41 +0200
Martin Schulte  wrote:

> Hello,
> 
> I took a closer look on the online manual chapter 4 "Shell Builtin Commands" 
> and found some inconsistencies:
> 
> - true and false seem to be missing (in 4.1 Bourne Shell Builtins).
> 
> The following has been tested with bash 5.2.15:
> 
> - 3.7.5 Exit Status says: "All builtins return an exit status of 2 to 
> indicate incorrect usage, generally invalid options or missing arguments." 
> but cd with two or more non-optional arguments returns an exit status of 1.
> 
> - The same is true if exit is called with two or more argument where the 
> first is numeric. This exit doesn't terminate bash.
> 
> - When exit is invoked with a non-numeric first argument it terminates bash. 
> That seems to be inconsistent with the behaviour described before, while the 
> exit status of the shell is 2 and consistent in some way.

Just to add that, while POSIX does not specify the behaviour of the exit 
builtin where its operand "is not an unsigned decimal integer or is greater 
than 255", it does forbid all special built-in utilities (including exit) from 
causing an interactive shell to exit on account of an error. So, for an 
interactive instance of bash to raise a "numeric argument required" error then 
proceed to exit seems contrary to the specification.

Taking dash as a counter-example, its behaviour fulfils the requirements laid 
out by 2.8.1 Consequences of Shell Errors for both interactive and 
non-interactive instances.

$ exec dash
$ exit foo
dash: 1: exit: Illegal number: foo
$ echo $?
2
$ printf %s\\n 'exit foo' 'echo done' | dash
dash: 1: exit: Illegal number: foo
$ echo $?
2

-- 
Kerin Millar



Re: error message lacks useful debugging information

2023-10-04 Thread Kerin Millar
On Wed, 4 Oct 2023 20:05:41 -0400
Dave Cigna via Bug reports for the GNU Bourne Again SHell  
wrote:

> Description:
> 
> Attempting to run an executable file (not a bash script) with the following
> command:
> 
> ./Candle
> 
> the following error message is reported by bash:
> 
> bash: ./Candle: cannot execute: required file not found
> 
> The executable file 'Candle' does exist in the current directory;
> if it didn't then bash would report a different error.
> 
> The problem may be a missing dependency. However, the BUG is in bash in that

That is, indeed, the problem.

https://lists.gnu.org/archive/html/bug-bash/2022-05/msg00048.html
https://lists.gnu.org/archive/html/bug-bash/2022-05/msg00050.html

> it doesn't offer any useful debugging information. Debugging the issue could
> go from "nearly hopeless" to "I think I can handle this" if bash simply
> reported what required file was missing. i.e. path and filename.
> 
> Repeat-By:
> 
> Here's how I encountered the problem. You might not be able to reproduce
> it on your machine, but that doesn't mean that it's not a bug with bash:

While it can be considered a matter of usability, it is not a bug in bash. 
The real problem (in so far as there is one) is that the kernel reports the 
error in a manner that is coarse enough to confuse users as to the matter of 
exactly which file is missing. That is, there is no distinct error code that 
exists to indicate that the missing file just happens to be one that the 
dynamic loader implemented by your OS was looking for. Moreover, bash is only 
the messenger. It is not realistic to expect for bash to have any genuine 
insight into (precisely) why a given kernel and userland - for a given 
distribution of a given OS - decides to raise ENOENT after being charged with 
executing a binary. The information that would be necessary to print any 
accurate debugging information is simply beyond the purview of the shell.

Ironically, bash does already supply its own diagnostic message for ENOENT (the 
one you saw and reported), rather than print the native error string which 
would simply have been "No such file or directory". In principle, you could 
propose yet another improvement in its wording. In my view, the value in 
replacing or decorating native error messages is dubious. There is only so much 
that can be said without making too many assumptions about the underlying 
platform or, worse, implementing dodgy heuristics. Although, there is a 
heuristic that tries to determine whether ENOENT was caused by a script 
containing a shebang specifying an invalid interpreter, which does not help in 
this case.

-- 
Kerin Millar



Re: math operations with base#prefix

2023-09-19 Thread Kerin Millar
On Tue, 19 Sep 2023 22:08:25 +0200
alex xmb ratchev  wrote:

> so u mean a $ sign requirement ?

For dereferencing variable identifiers in base#n notation, yes.

> i didnt get the base values , i tried simple one
> i faced the ' without $ it doesnt work '

I don't fully understand this sentence. Anyway, it works as described in the 
manual.

> 
> This is what matters. The implied complaint is that the $ symbols have to
> > be there and that they should somehow be optional. In other words, Victor
> > wants for "res = 10#res1 + 10#res2" to be able to consider res1 and res2
> 
> 
> optional as in make default values in if empty ?

No. Victor wants for, say, "10#res1" to treat res1 as an identifier rather than 
a number. I don't know how it could be stated any more clearly.

-- 
Kerin Millar



Re: math operations with base#prefix

2023-09-19 Thread Kerin Millar
On Tue, 19 Sep 2023 21:29:32 +0200
alex xmb ratchev  wrote:

> On Tue, Sep 19, 2023, 21:14 Kerin Millar  wrote:
> 
> > On Wed, 20 Sep 2023 01:41:30 +0700
> > Robert Elz  wrote:
> >
> > > Date:Tue, 19 Sep 2023 18:09:13 +0100
> > > From:Kerin Millar 
> > > Message-ID:  <20230919180913.bd90c16b908ab7966888f...@plushkava.net>
> > >
> > >   | >   | On Tue, 19 Sep 2023, at 8:40 AM, Victor Pasko wrote:
> > >   | >   | > in let "<>" and $((<>)) constructs all variables should be
> > >   | >   | > evaluated
> > >
> > >   | This assertion would be more compelling had you explained at some
> > point
> > >   | during the ensuing treatise how to potentially square the request
> > being
> > >   | made with the base#n notation, as presently implemented by bash.
> > >
> > > I didn't even consider that case plausible, or what was intended, but now
> > > I can see that maybe it was - but that could never work.  Not (quite) for
> > > the reason that you gave (or even Chet's explanation, though if he had
> > > explained why it is like it is, the explanation might have been like I
> > > am about to give), but because the syntax simply doesn't work out like
> > that.
> > >
> > > Given a token of x#y that's not a variable, variables have no #'s in
> > their
> > > names, so one cannot be expecting that (this would mean something
> > entirely
> > > different if actually written) ${x#y} to be evaluated in that case.
> > >
> > > So, the only way to get variables out of that would be to split it into
> > > two (or three) tokens, x and #y or x # and y.   One might parse it like
> > > that, and then evaluate x and y as variables, but if that were done, now
> > > we'd have 3 tokens, not the one (representing a number in some other
> > base)
> > > to deal with, say 11 # 97 (where the 11 and 97 are now integers, not
> > strings).
> > >
> > > That's not what was desired, which was 11#97 as one token (106 decimal,
> > if
> > > my mental arithmetic is correct), and the only way to get it back would
> > be
> > > to invent a new (very high priority, must be higher than unary '-' for
> > > example) # operator, which takes a base as its left operand, and a value
> > > as its right, and somehow reinterprets the value in that base - but
> > that's
> > > essentially impossible, as we now have binary 97, which might have
> > originally
> > > been 0141 or 0x61 -   11#0141 is an entirely different thing from 11#97
> > > and at this stage we simply wouldn't know which was intended.
> > >
> > > So that method can't work either.
> > >
> > > The $x#$y form works, as that (everything in $(( )) or other similar
> > > contexts) is being treated just like inside a double quoted string.
> > > Those get expanded first before being used, in this case as 11#97 (just
> > > as strings, variable expansion has no idea of the context, nor does it
> > > generally care what characters it produces) as a char sequence in the
> > > effectively double quoted string.   The arith parser can then parse that,
> > > and see it has a specific base, and value - if y had been 0141 it would
> > have
> > > been parsing 11#0141 instead, unlike a simple reference of 'y' in the
> > > expression, where all of 0x61 97 and 0141 turn into the binary value "97"
> > > for arithmetic to operate on).
> > >
> > > That's why I never even considered that might have been what was being
> > > requested, it can't work as hoped.
> >
> > It is exactly the nature of the request. I don't know whether you looked
> > at Victor's "bug.bash" script. To recap, it contains a number of arithmetic
> > expressions, beginning with "res = res1 + res2 * 3" (clearly understanding
> > it to be fine). Ultimately, the script solicited a response concerning two
> > particular situations. Firstly, this.
> >
> >   let "res = base10#$res1 + base10#$res2 * 3"
> >
> 
> me to get into the mails topic ..
> .. what are res1 and res2 values

It really doesn't matter (see below).

> 
> Rightly dismissed as invalid syntax so there is nothing more to be said for
> > that.
> >
> > Secondly, this.
> >
> >   # without $-signs before both res
> >   let "res = 10#res1 + 3 * 10#res2"  # let: res = 10#res1: value too great
> > for base (error token is "10#res1")

This is what matters. The implied complaint is that the $ symbols have to be 
there and that they should somehow be optional. In other words, Victor wants 
for "res = 10#res1 + 10#res2" to be able to consider res1 and res2 as 
(variable) identifiers instead of integer constants. Both "res1" and "res2" are 
perfectly valid integer constants for bases between 29 and 64.

$ echo $(( 29#res1 )) $(( 29#res2 ))
671090 671091

That is why bash correctly complains that the value is too great for the base 
of 10. It doesn't matter whether res1 or res2 exist as variables, whether they 
are set or what their values are. The n in base#n notation is always taken as a 
number, so the only way to have n be the value of a variable is to expand it.
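Both behaviours can be confirmed directly, assuming bash:

```shell
#!/usr/bin/env bash
res1=42
echo $(( 10#$res1 ))   # 42     -- $res1 expands first; the parser sees 10#42
echo $(( 29#res1 ))    # 671090 -- res1 is taken as digits in base 29, not a variable
```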

-- 
Kerin Millar



Re: math operations with base#prefix

2023-09-19 Thread Kerin Millar
On Wed, 20 Sep 2023 01:41:30 +0700
Robert Elz  wrote:

> Date:Tue, 19 Sep 2023 18:09:13 +0100
> From:    Kerin Millar 
> Message-ID:  <20230919180913.bd90c16b908ab7966888f...@plushkava.net>
> 
>   | >   | On Tue, 19 Sep 2023, at 8:40 AM, Victor Pasko wrote:
>   | >   | > in let "<>" and $((<>)) constructs all variables should be
>   | >   | > evaluated
> 
>   | This assertion would be more compelling had you explained at some point
>   | during the ensuing treatise how to potentially square the request being
>   | made with the base#n notation, as presently implemented by bash.
> 
> I didn't even consider that case plausible, or what was intended, but now
> I can see that maybe it was - but that could never work.  Not (quite) for
> the reason that you gave (or even Chet's explanation, though if he had
> explained why it is like it is, the explanation might have been like I
> am about to give), but because the syntax simply doesn't work out like that.
> 
> Given a token of x#y that's not a variable, variables have no #'s in their
> names, so one cannot be expecting that (this would mean something entirely
> different if actually written) ${x#y} to be evaluated in that case.
> 
> So, the only way to get variables out of that would be to split it into
> two (or three) tokens, x and #y or x # and y.   One might parse it like
> that, and then evaluate x and y as variables, but if that were done, now
> we'd have 3 tokens, not the one (representing a number in some other base)
> to deal with, say 11 # 97 (where the 11 and 97 are now integers, not strings).
> 
> That's not what was desired, which was 11#97 as one token (106 decimal, if
> my mental arithmetic is correct), and the only way to get it back would be
> to invent a new (very high priority, must be higher than unary '-' for
> example) # operator, which takes a base as its left operand, and a value
> as its right, and somehow reinterprets the value in that base - but that's
> essentially impossible, as we now have binary 97, which might have originally
> been 0141 or 0x61 -   11#0141 is an entirely different thing from 11#97
> and at this stage we simply wouldn't know which was intended.
> 
> So that method can't work either.
> 
> The $x#$y form works, as that (everything in $(( )) or other similar
> contexts) is being treated just like inside a double quoted string.
> Those get expanded first before being used, in this case as 11#97 (just
> as strings, variable expansion has no idea of the context, nor does it
> generally care what characters it produces) as a char sequence in the
> effectively double quoted string.   The arith parser can then parse that,
> and see it has a specific base, and value - if y had been 0141 it would have
> been parsing 11#0141 instead, unlike a simple reference of 'y' in the
> expression, where all of 0x61 97 and 0141 turn into the binary value "97"
> for arithmetic to operate on).
> 
> That's why I never even considered that might have been what was being
> requested, it can't work as hoped.

It is exactly the nature of the request. I don't know whether you looked at 
Victor's "bug.bash" script. To recap, it contains a number of arithmetic 
expressions, beginning with "res = res1 + res2 * 3" (clearly understanding it 
to be fine). Ultimately, the script solicited a response concerning two 
particular situations. Firstly, this.

  let "res = base10#$res1 + base10#$res2 * 3"

Rightly dismissed as invalid syntax so there is nothing more to be said for 
that.

Secondly, this.

  # without $-signs before both res
  let "res = 10#res1 + 3 * 10#res2"  # let: res = 10#res1: value too great for 
base (error token is "10#res1")

I explained in my initial post that the sigils are required because "res1" and 
"res2" will otherwise be treated as integer values that (in this case) happen 
not to be within the scope of the specified base. Victor did not respond to 
this post but responded to a subsequent post by Chet, opining that $ should be 
"optional" regardless. That is the point at which I concluded that the matter 
had not been thought through.

Anyway, thank you for the additional commentary.

-- 
Kerin Millar



Re: math operations with base#prefix

2023-09-19 Thread Kerin Millar
On Tue, 19 Sep 2023 20:00:13 +0700
Robert Elz  wrote:

> Date:Tue, 19 Sep 2023 09:52:21 +0100
> From:    "Kerin Millar" 
> Message-ID:  <4c2e3d39-0392-41ae-b73c-3e17296a9...@app.fastmail.com>
> 
>   | On Tue, 19 Sep 2023, at 8:40 AM, Victor Pasko wrote:
>   | > Thanks for your response.
>   | > In my opinion, in let "<>" and $((<>)) constructs all variables should 
> be
>   | > evaluated, so that $-sign for them is to be  just optional
>   |
>   | You haven't thought this through.
> 
> You didn't think that through.

This assertion would be more compelling had you explained at some point during 
the ensuing treatise how to potentially square the request being made with the 
base#n notation, as presently implemented by bash.

> 
>   | It would amount to an egregious break of backward compatibility,
> 
> That's correct, but
> 
>   | How would you expect for digits above 9 to be expressed for bases above 
> 10?
> 
> that's completely missing the point Victor was making.
> 
> But Victor, they are all evaluated (in many shells, just once), if you
> have
> 
>   a=10
> 
> then
>   $(( a + 5 )) and $(( $a + 5 ))
> 
> achieve exactly the same thing.To be postable however if we also
> have
> 
>   b=a
> 
> then
>   $(( b + 5 )) is quite likely to fail, whereas $(( $b + 5 )) will
> work fine (after expansions it becomes $(( a + 5 )) which is evaluated as
> above.
> 
> However, where the expansions are really needed is with
> 
>   o1=+
>   o2=*
> 
> used in
> 
>   $(( 10 $o1 5 $o2 6 ))
> 
> If we were to write it instead as
> 
>   $(( 10 o1 5 o2 6 ))
> 
> then the expression parser would have no idea at all how to
> interpret that, as var names are not evaluated until needed in
> the expression - what's there is just meaningless nonsense.  But
> when the variables have the $ ahead of them, the expression parser
> sees
>   $(( 10 + 5 * 6 ))
> 
> and can build the correct evaluation tree to generate 40 as the answer.
> (The "10" could have been any of 'a' '$a' or '$b' given the values assigned
> earlier, that would make no difference, the ' chars there are just for
> clarity in this e-mail, not to be included).
> 
> Much more complex expressions, which rely upon vars being expanded before
> the expression is parsed, can also be created, but not really needed here.

So, how would you accommodate Victor's request, theoretically speaking?

Permitting n as an identifier for bases between 11 and 64 seems out of the 
question; there would be no reasonable way to disambiguate them from legitimate 
integer constants. Could they be tolerated for bases between 2 and 10? Perhaps. 
At that point they would no longer be integer constants and the interface 
fundamentally changes. To my mind, it would make for yet another exception to 
the rule to be borne in mind, and to potentially confound the expectations of 
those acquainted with the notation from having used it in other shells. However 
one might go about it, it requires for more thought to be invested than to 
simply state that $ be "optional" for the let builtin and for arithmetic 
expansion.

-- 
Kerin Millar



Re: math operations with base#prefix

2023-09-19 Thread Kerin Millar
On Tue, 19 Sep 2023, at 8:40 AM, Victor Pasko wrote:
> Thanks for your response.
> In my opinion, in let "<>" and $((<>)) constructs all variables should be
> evaluated, so that $-sign for them is to be  just optional

You haven't thought this through. It would amount to an egregious break of 
backward compatibility, even by the standards of bash. How would you expect for 
digits above 9 to be expressed for bases above 10?

You might, perhaps, argue that the n in "base#n" could be quoted.

$ ff='fe'
$ echo $(( 16#ff ))   # imaginarily prints 254 (instead of 255)
$ echo $(( 16#'ff' )) # imaginarily prints 255 (instead of being an error of 
syntax)

Keep in mind that this would render bash less consistent - and harder to reason 
with - than it already is. For example, $(( 'ff' )) is already an error of 
syntax in the Shell Command Language. Not to mention that it would further 
distance bash from other shells, such as ksh93 and zsh.

Additionally, you imply that let and $(( should be treated specially. At least, 
it seems that way because you speak of those two arithmetic contexts, yet not 
of the others [1]. Why?

[1] ((, for, array subscripts, string-slicing parameter expansions etc

-- 
Kerin Millar 



Re: math operations with base#prefix

2023-09-17 Thread Kerin Millar
On Mon, 18 Sep 2023 04:56:18 +0200
alex xmb ratchev  wrote:

> On Mon, Sep 18, 2023, 04:03 Kerin Millar  wrote:
> 
> > Hi Victor,
> >
> > On Sun, 17 Sep 2023, at 8:59 PM, Victor Pasko wrote:
> > > Hi,
> > >
> > > Could you please take a look at attached bug.bash.
> > >
> > > Maybe, not all math combinations were presented there or the test has
> > > duplications somehow.
> > > Here are results of several runs with test# as argument
> > >
> > >
> > > *% bash --version*GNU bash, version 5.2.15(3)-release (x86_64-pc-cygwin)
> > >
> > > Good test without argument but others with errors :(
> > > *% ./bug.bash*
> > >
> > > res1=010 good 010 base8
> > > res2=03 good 03 base8
> > > res=17 good result 17 base10 (res1+3*res2)
> > > base10-res=19 good result 19 base10 (res1+3*res2)
> > > base10-res=19 good result 19 base10 (res1+3*res2)
> > > base10-res=19 good result 19 base10 (res1+3*res2)
> > > res1=8 good result 8 base10
> > > res1=10 good result 10
> > > res1=10 good result 10
> > > res1=010 good result 010 base8
> > > base10-res1=10 good result 10
> > > res1=16 good result 16
> > >
> > >
> > > *% ./bug.bash 1*
> > > TESTCASE=1
> > > res1=010 good 010 base8
> > > res2=03 good 03 base8
> > > res=17 good result 17 base10 (res1+3*res2)
> > > base10-res=19 good result 19 base10 (res1+3*res2)
> > > ./bug.bash: line 29: let: res = base10#010 + base10#03 * 3: syntax error:
> > > invalid arithmetic operator (error token is "#010 + base10#03 * 3")
> >
> > This seems like a misinterpretation of the manual. The manual states that
> > numbers "take the form [base#]n, where the optional base is a decimal
> > number between 2 and 64 representing the arithmetic base". As such,
> > "base10" is not a decimal number between 2 and 64, whereas "10" would be.
> >
> > > base10-res=19 good result 19 base10 (res1+3*res2)
> > > base10-res=19 good result 19 base10 (res1+3*res2)
> > > res1=8 good result 8 base10
> > > res1=10 good result 10
> > > res1=10 good result 10
> > > res1=010 good result 010 base8
> > > base10-res1=10 good result 10
> > > res1=16 good result 16
> > >
> > >
> > > *% ./bug.bash 2*
> > > TESTCASE=2
> > > res1=010 good 010 base8
> > > res2=03 good 03 base8
> > > res=17 good result 17 base10 (res1+3*res2)
> > > base10-res=19 good result 19 base10 (res1+3*res2)
> > > base10-res=19 good result 19 base10 (res1+3*res2)
> > > ./bug.bash: line 35: let: res = 10#res1: value too great for base (error
> > > token is "10#res1")
> >
> > For numbers in the form "[base#]n", it isn't practically possible for n to
> > be specified using a variable without prefixing it with a sigil (so that it
> > is treated as a parameter expansion and injected). There is a very good
> > reason for this: numbers in a base higher than 10 can require alphabetical
> > letters to be expressed. Consider the following example.
> >
> > $ echo $(( 16#ff ))
> > 255
> >
> 
> some were prefixed with 0 which make those also not work

You're probably thinking of the 'September' issue but it isn't true. Consider 
the declarations of res1 and res2.

res1=010 # base8 number
res2=03 # base8 number

The comments make it clear that Victor is expecting for those to be treated as 
base 8 (octal). At least, without qualifying the base. There is no issue there; 
both are perfectly valid octal numbers.

$ echo $(( 010 )) $(( 03 ))
8 3

On the other hand, the base number in "base#n" notation may not have leading 
zeroes.

$ echo $(( 064#1 ))
bash: 064#1: invalid number (error token is "064#1")

However, that particular mistake is never made in the program.

-- 
Kerin Millar



Re: math operations with base#prefix

2023-09-17 Thread Kerin Millar
Hi Victor,

On Sun, 17 Sep 2023, at 8:59 PM, Victor Pasko wrote:
> Hi,
>
> Could you please take a look at attached bug.bash.
>
> Maybe, not all math combinations were presented there or the test has
> duplications somehow.
> Here are results of several runs with test# as argument
>
>
> *% bash --version*GNU bash, version 5.2.15(3)-release (x86_64-pc-cygwin)
>
> Good test without argument but others with errors :(
> *% ./bug.bash*
>
> res1=010 good 010 base8
> res2=03 good 03 base8
> res=17 good result 17 base10 (res1+3*res2)
> base10-res=19 good result 19 base10 (res1+3*res2)
> base10-res=19 good result 19 base10 (res1+3*res2)
> base10-res=19 good result 19 base10 (res1+3*res2)
> res1=8 good result 8 base10
> res1=10 good result 10
> res1=10 good result 10
> res1=010 good result 010 base8
> base10-res1=10 good result 10
> res1=16 good result 16
>
>
> *% ./bug.bash 1*
> TESTCASE=1
> res1=010 good 010 base8
> res2=03 good 03 base8
> res=17 good result 17 base10 (res1+3*res2)
> base10-res=19 good result 19 base10 (res1+3*res2)
> ./bug.bash: line 29: let: res = base10#010 + base10#03 * 3: syntax error:
> invalid arithmetic operator (error token is "#010 + base10#03 * 3")

This seems like a misinterpretation of the manual. The manual states that 
numbers "take the form [base#]n, where the optional base is a decimal number 
between 2 and 64 representing the arithmetic base". As such, "base10" is not a 
decimal number between 2 and 64, whereas "10" would be.

> base10-res=19 good result 19 base10 (res1+3*res2)
> base10-res=19 good result 19 base10 (res1+3*res2)
> res1=8 good result 8 base10
> res1=10 good result 10
> res1=10 good result 10
> res1=010 good result 010 base8
> base10-res1=10 good result 10
> res1=16 good result 16
>
>
> *% ./bug.bash 2*
> TESTCASE=2
> res1=010 good 010 base8
> res2=03 good 03 base8
> res=17 good result 17 base10 (res1+3*res2)
> base10-res=19 good result 19 base10 (res1+3*res2)
> base10-res=19 good result 19 base10 (res1+3*res2)
> ./bug.bash: line 35: let: res = 10#res1: value too great for base (error
> token is "10#res1")

For numbers in the form "[base#]n", it isn't practically possible for n to be 
specified using a variable without prefixing it with a sigil (so that it is 
treated as a parameter expansion and injected). There is a very good reason for 
this: numbers in a base higher than 10 can require alphabetical letters to be 
expressed. Consider the following example.

$ echo $(( 16#ff ))
255

This is the appropriate outcome. It would be undesirable for "ff" to be treated 
as a variable name identifier there.

In your case, the error is that the letters "r", "e" and "s" have ordinal 
values that are too high to be valid for base 10, but they could have been 
valid for a higher base.

$ echo $(( 29#res1 ))
671090

-- 
Kerin Millar 



Re: Prompt messed up if PS1 contains ANSI escape sequences

2023-09-07 Thread Kerin Millar
On Thu, 7 Sep 2023 17:33:45 +0200
alex xmb ratchev  wrote:

> On Thu, Sep 7, 2023, 16:51 Kerin Millar  wrote:
> 
> > On Thu, 7 Sep 2023 15:53:03 +0200
> > alex xmb ratchev  wrote:
> >
> > > On Thu, Sep 7, 2023, 15:46 Gioele Barabucci  wrote:
> > >
> > > > On 07/09/23 15:00, alex xmb ratchev wrote:
> > > > > u have to \[ esc-seq \]
> > > > > eg inside \[ and \]
> > > > >
> > > > > PS1=$'\u\[\e[1m\]\h\[\e[0m- '
> > > > >
> > > > > should display hostname bold
> > > >
> > > > Thanks for the suggestion, but adding \] does not really fix the
> > > > problem, it just masks it in many cases (better than nothing).
> > > >
> > > > Try:
> > > >
> > > > $ long_name="$(printf 'abcdef0123456789/%.0s' {0..20})"
> > > > $ mkdir -p /tmp/$long_name
> > > > $ cd /tmp/$long_name
> > > > $ PS1='\n\[\e[1m\]\w\[\e[m\] \$ '
> > > >
> > >
> > > foo=$' .. '
> > > not
> > > foo=' .. '
> >
> > $'' quoting is not required for that particular definition of PS1.
> >
> 
> o cause \e gets expanded .. ? didnt know ..

Yes. \n also.
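The decoding can be observed outside of an interactive prompt with the ${parameter@P} expansion operator, which expands a string as though it were a prompt (a sketch; @P requires bash >= 4.4):

```shell
# PS1 itself decodes backslash escapes such as \e when the prompt is
# displayed, which is why $'' quoting is unnecessary there.
s='\e[1m'
decoded=${s@P}
[[ $decoded == $'\e[1m' ]] && echo "escape decoded"
```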

-- 
Kerin Millar



Re: command substitution when timing grouped commands fails

2023-09-07 Thread Kerin Millar
On Thu, 07 Sep 2023 05:50:49 -0700
hacke...@member.fsf.org wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: linux-gnu
> Compiler: gcc
> Compilation CFLAGS: -g -O2 -fstack-protector-strong -Wformat 
> -Werror=format-security -Wall
> uname output: Linux abyssal 6.4.0-3-amd64 #1 SMP PREEMPT_DYNAMIC Debian 
> 6.4.11-1 (2023-08-17) x86_64 GNU/Linux
> Machine Type: x86_64-pc-linux-gnu
> 
> Bash Version: 5.2
> Patch Level: 15
> Release Status: release
> 
> Description:
> 
>   Bash gives a syntax error when using the $(...) form of
>   command substitution and timing grouped commands.
> 
>   However, Bash works correctly when using the `...` form of
>   command substitution.
> 
> 
> Repeat-By:
> 
>   The 'time' built-in command can measure a group of commands
>   run in a subshell, for example:
> 
>   $ time (date; sleep 1)
>   Thu Sep  7 05:19:21 AM PDT 2023
> 
>   real0m1.005s
>   user0m0.003s
>   sys 0m0.001s
> 
>   Attempting to save the output of time to a variable fails when
>   using $(...) command substitution. For example,
> 
>   $ x=$( time ( date; sleep 1 ) 2>&1 )
>   -bash: syntax error near unexpected token `date'
> 
>   However, old versions of bash (~2016) used to work correctly.
>   And, indeed, even the current version of bash works if one
>   uses backticks for command substitution.
> 
>   $ x=` time ( date; sleep 1 ) 2>&1 `
>   $   # no error
> 
>   There should be no difference between $(...) and `...`.

This issue, which affects 5.2, was previously reported here:

https://lists.gnu.org/archive/html/bug-bash/2023-08/msg00124.html

It was fixed in the devel branch in the fashion described by:

https://lists.gnu.org/archive/html/bug-bash/2023-09/msg00013.html

The issue has not yet been addressed by any available 5.2 patchlevel. Should 
you wish to patch 5.2 yourself - as I did - apply the above-mentioned change 
while ignoring the addition of "case DOLBRACE:".

-- 
Kerin Millar



Re: Prompt messed up if PS1 contains ANSI escape sequences

2023-09-07 Thread Kerin Millar
On Thu, 7 Sep 2023 15:53:03 +0200
alex xmb ratchev  wrote:

> On Thu, Sep 7, 2023, 15:46 Gioele Barabucci  wrote:
> 
> > On 07/09/23 15:00, alex xmb ratchev wrote:
> > > u have to \[ esc-seq \]
> > > eg inside \[ and \]
> > >
> > > PS1=$'\u\[\e[1m\]\h\[\e[0m- '
> > >
> > > should display hostname bold
> >
> > Thanks for the suggestion, but adding \] does not really fix the
> > problem, it just masks it in many cases (better than nothing).
> >
> > Try:
> >
> > $ long_name="$(printf 'abcdef0123456789/%.0s' {0..20})"
> > $ mkdir -p /tmp/$long_name
> > $ cd /tmp/$long_name
> > $ PS1='\n\[\e[1m\]\w\[\e[m\] \$ '
> >
> 
> foo=$' .. '
> not
> foo=' .. '

$'' quoting is not required for that particular definition of PS1.

-- 
Kerin Millar



Re: Warn upon "declare -ax"

2023-09-05 Thread Kerin Millar
On Tue, 5 Sep 2023 16:04:50 +0200
alex xmb ratchev  wrote:

> On Mon, Sep 4, 2023, 15:19 Kerin Millar  wrote:
> 
> > On Mon, 4 Sep 2023 14:46:08 +0200
> > Léa Gris  wrote:
> >
> > > Le 04/09/2023 à 14:18, Dan Jacobson écrivait :
> > > > Shouldn't "declare -ax" print a warning that it is useless?
> > >
> > > There doesn't seem to be any warning system in Bash or other shells. As
> > > long as it is not a fatal error condition and errexit is not set,
> > > execution continues.
> > >
> > > There are static analysis tools like Shellcheck which might be expanded
> > > to warn of such incompatible flags but that's it.
> >
> > Curiously, ARRAY_EXPORT can be defined in config-top.h. It's probably safe
> > to say that nobody uses it (nor should anybody wish to upon realising how
> > it works).
> >
> 
> does it make too big copies or wha ..

My pet name for it is arrayshock.

$ arr=(foo bar baz)
$ export arr
$ env | grep ^BASH_ARRAY_
BASH_ARRAY_arr%%=([0]="foo" [1]="bar" [2]="baz")
$ ./bash -c 'declare -p arr'
declare -ax arr=([0]="foo" [1]="bar" [2]="baz")

It's not particularly reliable. The following is to be expected because the 
prospective environment ends up being too large.

$ arr=({1..10}); /bin/true
bash: /bin/true: Argument list too long

However, emptying the array does not remedy the situation (unsetting does).

$ arr=(); /bin/true
bash: /bin/true: Argument list too long

-- 
Kerin Millar



Re: Warn upon "declare -ax"

2023-09-04 Thread Kerin Millar
On Mon, 4 Sep 2023 14:46:08 +0200
Léa Gris  wrote:

> Le 04/09/2023 à 14:18, Dan Jacobson écrivait :
> > Shouldn't "declare -ax" print a warning that it is useless?
> 
> There doesn't seem to be any warning system in Bash or other shells. As 
> long as it is not a fatal error condition and errexit is not set, 
> execution continues.
> 
> There are static analysis tools like Shellcheck which might be expanded 
> to warn of such incompatible flags but that's it.

Curiously, ARRAY_EXPORT can be defined in config-top.h. It's probably safe to 
say that nobody uses it (nor should anybody wish to upon realising how it 
works).

-- 
Kerin Millar



Re: Inner Command Lists fail in Bash 5.2.15

2023-09-01 Thread Kerin Millar
On Fri, 1 Sep 2023 12:52:14 -0400
Dima Korobskiy  wrote:

> Kerin,
> 
> thanks for the workaround.
> 
> Just to clarify, is the issue specific to the `time` command?

I think that you would have to get an answer from Chet because yacc/bison 
grammar is a little over my head. To that end, I have taken the liberty of 
copying the bug-bash list back into this discussion.

That being said, it appears so. After reading your report, I tried to break the 
parser in other, similar, ways but was not able to do so. I suppose that the 
time keyword is special in so far as it expects to be given a pipeline.

-- 
Kerin Millar



Re: Inner Command Groups fail in Bash 5.2

2023-09-01 Thread Kerin Millar
On Fri, 1 Sep 2023 10:29:29 -0400
Chet Ramey  wrote:

> On 9/1/23 10:27 AM, Kerin Millar wrote:
> 
> > Would you mind supplying a diff for 5.2.15? For that version, I get:
> > 
> > ./parse.y: In function ‘time_command_acceptable’:
> > ./parse.y:3139:14: error: ‘DOLBRACE’ undeclared (first use in this 
> > function); did you mean ‘Q_DOLBRACE’
> 
> Remove `DOLBRACE'. It's for nofork comsubs.

Thanks.

-- 
Kerin Millar



Re: Inner Command Groups fail in Bash 5.2

2023-09-01 Thread Kerin Millar
On Fri, 1 Sep 2023 09:52:17 -0400
Chet Ramey  wrote:

> On 8/31/23 1:21 PM, Dima Korobskiy wrote:
> 
> > Bash Version: 5.2
> > Patch Level: 15
> > Release Status: release
> > 
> > Description:
> >      One of my Bash scripts started to fail all of a sudden and I was 
> > able to narrow down this problem to Bash upgrade itself (via Homebrew on a 
> > Mac): from 5.1.16 to 5.2.15.
> > It looks to me pretty serious: a lot of scripts might be potentially 
> > affected.
> 
> Thanks for the report. It's an easy fix:
> 
> 
> *** ../bash-20230818/parse.y  Wed Aug 23 09:56:19 2023
> --- parse.y   Fri Sep  1 09:29:27 2023
> ***
> *** 3224,3227 
> --- 3224,3229 
>case TIMEOPT:  /* time -p time pipeline */
>case TIMEIGN:  /* time -p -- ... */
> + case DOLPAREN:
> + case DOLBRACE:
>  return 1;
>default:

Would you mind supplying a diff for 5.2.15? For that version, I get:

./parse.y: In function ‘time_command_acceptable’:
./parse.y:3139:14: error: ‘DOLBRACE’ undeclared (first use in this function); 
did you mean ‘Q_DOLBRACE’

-- 
Kerin Millar



Re: Fwd: Some incorrect behaviour for BASH arrays

2023-09-01 Thread Kerin Millar
On Fri, 1 Sep 2023 14:44:49 +0700
Victor Pasko  wrote:

> Just forward my response to all who was involved in discussion of my request
> 
> -- Forwarded message -
> From: Victor Pasko 
> Date: Fri, Sep 1, 2023 at 2:23 PM
> Subject: Re: Some incorrect behaviour for BASH arrays
> To: Kerin Millar 
> 
> 
> Thanks for the detailed explanations of *declare *.
> 
> As to the idea behind of my request:
> 1) I need local variable RESULT as empty string (not array)
> 2) this RESULT should collect symbols taken from other strings using
> ${STRING:i:1} for one symbol or ${STRING:i:k} for k-symbols
> So, the main question is: how to save such sub-strings in RESULT at needed
> j-place?
> With another words - I need RESULT as C-char-array to use it something like
> this
> 
> RESULT[j]="${STRING:i:1}"

You would have to reassemble RESULT from the appropriate substrings.

$ RESULT=cat; STRING=moo; i=1 j=1; 
RESULT=${RESULT:0:j}${STRING:i:1}${RESULT:j+1}; declare -p RESULT
declare -- RESULT="cot"

Alternatively, use an array then finish by joining its elements, using the [*] 
subscript.

f() {
local RESULT=(c a t)
local STRING=moo
local IFS=
local i=1 j=1
RESULT[j]=${STRING:i:1}
printf '%s\n' "${RESULT[*]}" # joins elements by first char of IFS (empty string)
}
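A generalised sketch of the same substring-reassembly approach (the splice name and the use of a nameref are assumptions of mine; namerefs require bash >= 4.3):

```shell
# splice VAR POS STR: overwrite VAR's contents at offset POS with STR,
# emulating C-style in-place character assignment
splice() {
  local -n _v=$1
  _v=${_v:0:$2}$3${_v:$2+${#3}}
}

s=moomoo
splice s 1 ab
echo "$s"   # prints mabmoo
```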

-- 
Kerin Millar



Re: Inner Command Lists fail in Bash 5.2.15

2023-08-31 Thread Kerin Millar
On Thu, 31 Aug 2023 12:05:21 -0400 (EDT)
dkro...@gmail.com wrote:

> Configuration Information [Automatically generated, do not change]:
> Machine: x86_64
> OS: darwin20.6.0
> Compiler: clang
> Compilation CFLAGS: -DSSH_SOURCE_BASHRC
> uname output: Darwin San-Francisco-iMac.local 20.6.0 Darwin Kernel Version 
> 20.6.0: Thu Jul  6 22:12:47 PDT 2023; 
> root:xnu-7195.141.49.702.12~1/RELEASE_X86_64 x86_64
> Machine Type: x86_64-apple-darwin20.6.0
> 
> Bash Version: 5.2
> Patch Level: 15
> Release Status: release
> 
> Description:
>   I've run into a regression in one of my scripts after upgrading Bash 
> from 5.1.16 to 5.2.15 via HomeBrew on a Mac.
> I was able to narrow it down to the Bash upgrade itself. The problem seems to 
> be a pretty serious one that can affect a lot of scripts.
> 
> Repeat-By:
> 
> # Bash 5.1.16: success
> # Bash 5.2.15: syntax error near unexpected token `}'
> var2=$(time { echo foo; echo bar; })

This looks like a casualty of the changes made for 5.2 that have it validate 
syntax recursively. The error message suggests that "time" is not considered to 
be a keyword there, even though it is. In any case, here is a temporary 
workaround for this regression.

$ declare -- BASH_VERSION="5.2.15(1)-release"
$ var2=$(:; time { echo foo; echo bar; })

real0m0.000s
user0m0.000s
sys 0m0.000s

-- 
Kerin Millar



Re: Some incorrect behaviour for BASH arrays

2023-08-31 Thread Kerin Millar
On Thu, 31 Aug 2023 01:12:48 +0700
Victor Pasko  wrote:

> Hi,
> 
> On my Win-7 64-bit using Cygwin
> 
> 
> % bash --version
> GNU bash, version 5.2.15(3)-release (x86_64-pc-cygwin)
> Copyright (C) 2022 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> 
> This is free software; you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> 
> 
> % echo LANG=$LANG; echo LC_ALL=$LC_ALL
> LANG=
> LC_ALL=
> 
> Attached please find bug.bash to reproduce incorrect BASH behaviour for
> BASH arrays
> and look at ways to run it

This script is a mess because its functions begin by defining RESULT as an 
ordinary, non-array variable (if not yet declared).

$ RESULT='string'
$ declare -p RESULT # not yet an array variable
declare -- RESULT='string'

It then proceeds to operate on the variable in such a way that it will be 
transformed to an array.

$ RESULT[1]='second element'
$ declare -p RESULT
declare -a RESULT=([0]="string" [1]="second element")

Now, would RESULT='' empty this array? No, it would not.

$ RESULT='' # no different from RESULT[0]=''
$ declare -p RESULT
declare -a RESULT=([0]="" [1]="second element")

A correct way to initialise an empty array variable is RESULT=().

$ RESULT=()
$ declare -p RESULT # now an empty array
declare -a RESULT=()

You might also consider using the "local" builtin to declare the variable with 
a function-local scope.

$ f() { local RESULT=(); RESULT+=("append me"); declare -p RESULT; }; unset -v 
RESULT; f; declare -p RESULT
declare -a RESULT=([0]="append me")
bash: declare: RESULT: not found

Note that the second declare command correctly raises an error because RESULT 
is not visible in its scope. Used judiciously, local can help to avoid the 
writing of needless bugs.

-- 
Kerin Millar



Re: Some incorrect behaviour for BASH arrays

2023-08-31 Thread Kerin Millar
On Thu, 31 Aug 2023 19:39:09 +0700
Victor Pasko  wrote:

> Thanks for prompt response. But see my comments below.
> 
> Well, instead of the following line
> 
> RESULT=''
> 
> done
> 
> declare -p RESULT
> 
> without initialization, because otherwise I got the following error "-bash:
> declare: RESULT=: not found".

declare -p RESULT='' translates as declare -p RESULT=, which instructs bash to 
attempt to dump the contents of a variable named RESULT=. There is no variable 
named RESULT=, so it prints an error to that effect. Put another way, you're 
not supposed to be using declare -p as a substitute for assignment syntax. 
While declare can also be used to set variables, that's not what the -p option 
is for.

> 
> And after deleting/commenting the following fragment in badrev
> 
> #if [ $i -ge $QSTRING ] ; then
> #echo "XXX-break i=$i result=$RESULT size=${#RESULT}"
> #break
> #fi
> 
> and correcting end echo-line to
> 
> echo " result=${RESULT[0]} size=${#RESULT[0]}"

${RESULT[0]} will expand as the first element of the array. ${#RESULT[0]} will 
expand as the length of the first element of the array (not the length of the 
array itself).
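The distinction can be seen with a minimal sketch (the first element is borrowed from the output in the report):

```shell
arr=(987654321 B 1)
echo "${#arr[0]}"   # 9 - the length of the first element
echo "${#arr[@]}"   # 3 - the number of elements in the array
```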

> 
> i can see strange output for badrev:
> declare -a RESULT=([0]="987654321" [1]="B" [2]="1" [3]="2" [4]="3" [5]="\\"
> [6]="-" [7]="4" [8]="5" [9]="6" [10]="7" [11]="\\" [12]="-" [13]="8")
> result=9 size=1
> Still incorrect output and size:( How to get correct output? ${RESULT[0]}
> ???

What would be correct to you?

> 
> result=${RESULT[0]} size=${#RESULT[0]} ### does not work :(

What would it be doing for you to be satisfied that it works?

Your script contains no explanation as to what it is intended to do. Comments 
to the effect of "not working" do not help to discern your intent.

> 
> BTW, in the presented link to the manual with  *6.7 Arrays*
> there is no explanation for *declare -p *:(

It's a builtin command, so it's listed within the SHELL BUILTIN COMMANDS 
section. Also, you may use the help builtin to display some (less detailed) 
documentation.

$ help declare | grep -- -p
  -p    display the attributes and value of each NAME

-- 
Kerin Millar



Re: test -v difference between bash 5.1 and 5.2

2023-08-29 Thread Kerin Millar
On Tue, 29 Aug 2023 11:44:13 -0400
Chet Ramey  wrote:

> On 8/29/23 11:38 AM, Kerin Millar wrote:
> > On Tue, 29 Aug 2023 11:24:43 -0400
> > Chet Ramey  wrote:
> > 
> >> If you want to check whether an array variable is set, you can check
> >> whether it has any set elements:
> >>
> >> (( ${#assoc[@]} > 0 ))
> > 
> > This doesn't check whether an "array variable is set".
> 
> It checks whether there are any set elements. You have to assign a value
> to set a variable.

I conflated the property of being set with that of being declared. Sorry about 
that. So, what I really meant to say was that the existing test does not prove 
that it's declared. I initially thought that Christian might be concerned with 
that distinction, but do hope it's not the case.

> 
> > Not only that, but the test will be true in the case that assoc has been 
> > defined as a variable that is not an array.
> 
> One hopes that the shell programmer knows what variable types he's
> using, and uses the appropriate constructs.

Some elect to source shell code masquerading as configuration data (or are 
using programs that elect to do so). Otherwise, yes, definitely.

-- 
Kerin Millar



Re: test -v difference between bash 5.1 and 5.2

2023-08-29 Thread Kerin Millar
On Tue, 29 Aug 2023 11:34:21 -0400
Chet Ramey  wrote:

> On 8/29/23 11:30 AM, Kerin Millar wrote:
> > Hi,
> > 
> > On Tue, 29 Aug 2023 16:32:36 +0200
> > Christian Schneider  wrote:
> > 
> >> Hi all,
> >>
> >> not sure if this intended or not, but in bash 5.2-p15 one of our scripts
> >> is broken. it is related to test -v, that checks, if a variable is set
> >> together with arrays.
> >>
> >> I condensed it to following example:
> >> #!/bin/bash
> >>
> >> declare -A foo
> >> foo=(["a"]="b" ["c"]="d")
> >> declare -a bar
> >> bar=("a" "b" "c")
> >> declare -a baz
> >> baz=("foo" "bar")
> >> for i in "${baz[@]}" ; do
> >>   echo $i
> >>   if [ ! -v "$i"[@] ] ; then
> >>   echo "$i not set"
> >>   fi
> >> done
> >> 
> >>
> >> with bash 5.2-p15 the output of this script is
> >> foo
> >> foo not set
> >> bar
> > 
> > It pertains to the following change.
> > 
> > j. Associative array assignment and certain instances of referencing (e.g.,
> > `test -v' now allow `@' and `*' to be used as keys.
> > 
> > For now, you have the option of setting the compatibility level to 51.
> > 
> > Incidentally, your code is defective to begin with. That is, it doesn't 
> > actually prove that an array variable is set, even with 5.1.
> > 
> > $ declare -p BASH_VERSION
> > declare -- BASH_VERSION="5.1.16(1)-release"
> > $ declare -A map; [[ -v 'map[@]' ]]; echo $?
> > 1
> 
> That isn't set; you have to assign a value to set a variable. For instance,

Fair enough.

> `export foo' does not result in `foo' being set. What he's really
> interested in is whether the array has any set elements.

Yes, hopefully.

-- 
Kerin Millar



Re: test -v difference between bash 5.1 and 5.2

2023-08-29 Thread Kerin Millar
On Tue, 29 Aug 2023 11:24:43 -0400
Chet Ramey  wrote:

> If you want to check whether an array variable is set, you can check
> whether it has any set elements:
> 
> (( ${#assoc[@]} > 0 ))

This doesn't check whether an "array variable is set".

Not only that, but the test will be true in the case that assoc has been 
defined as a variable that is not an array.

$ unset -v assoc; assoc=; (( ${#assoc[@]} > 0 )); echo $?
0

-- 
Kerin Millar



Re: test -v difference between bash 5.1 and 5.2

2023-08-29 Thread Kerin Millar
Hi,

On Tue, 29 Aug 2023 16:32:36 +0200
Christian Schneider  wrote:

> Hi all,
> 
> not sure if this intended or not, but in bash 5.2-p15 one of our scripts 
> is broken. it is related to test -v, that checks, if a variable is set 
> together with arrays.
> 
> I condensed it to following example:
> #!/bin/bash
> 
> declare -A foo
> foo=(["a"]="b" ["c"]="d")
> declare -a bar
> bar=("a" "b" "c")
> declare -a baz
> baz=("foo" "bar")
> for i in "${baz[@]}" ; do
>  echo $i
>  if [ ! -v "$i"[@] ] ; then
>  echo "$i not set"
>  fi
> done
> 
> 
> with bash 5.2-p15 the output of this script is
> foo
> foo not set
> bar

It pertains to the following change.

j. Associative array assignment and certain instances of referencing (e.g.,
   `test -v' now allow `@' and `*' to be used as keys.

For now, you have the option of setting the compatibility level to 51.

Incidentally, your code is defective to begin with. That is, it doesn't 
actually prove that an array variable is set, even with 5.1.

$ declare -p BASH_VERSION
declare -- BASH_VERSION="5.1.16(1)-release"
$ declare -A map; [[ -v 'map[@]' ]]; echo $?
1

Frankly, the only interface that I would trust for this is declare -p, which is 
a wasteful one; there is no way to instruct the declare builtin to refrain from 
writing out the elements of an array.
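A sketch of such a declare -p based check (the is_assoc name is my own invention; it assumes that bash prints the -A attribute at the front of the flag cluster, which holds for current releases):

```shell
# is_assoc NAME: succeed only if NAME is declared as an associative array
is_assoc() {
  [[ $(declare -p "$1" 2>/dev/null) == "declare -A"* ]]
}

declare -A map=([k]=v)
str=value
is_assoc map && echo "map is associative"
is_assoc str || echo "str is not associative"
```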

-- 
Kerin Millar



Re: String replacement drops leading '-e' if replacing char is a space

2023-08-13 Thread Kerin Millar
On Mon, 14 Aug 2023 02:11:27 +
pphick via Bug reports for the GNU Bourne Again SHell  wrote:

> If a string starts with '-e' the replacement operators ${x//,/ } and ${x/, /} 
> drop the '-e'.
> The behaviour seems to be very specific: the string must start with '-e' and 
> the replacing character has to be a space.
> 
> Repeat-By:
> 
> x='-e,b,c'
> echo ${x//,/ }
> b c
> echo ${x/,/ }
> b,c

This is to be expected. Given that you haven't quoted the expansion, word 
splitting occurs, after which echo is run with three arguments. The first of 
these arguments is -e, which is treated as an option.
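Using printf sidesteps the issue, because its format string is a separate argument and the expansion can be safely quoted (a minimal sketch):

```shell
x='-e,b,c'
printf '%s\n' "${x//,/ }"   # prints -e b c
```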

-- 
Kerin Millar



Re: suggestion: shell option for echo to not interpret any argument as an option

2023-07-26 Thread Kerin Millar
On Wed, 26 Jul 2023, at 1:42 PM, Zachary Santer wrote:
> bash's echo command is broken - YouTube
> <https://www.youtube.com/watch?v=lq98MM2ogBk>
>
> To restate what's in the video, you can't safely use echo to print the
> contents of a variable that could be arbitrary, because the variable could
> consist entirely of '-n', '-e', or '-E', and '--' is not interpreted as the
> end of options, but rather, something to print.
>
> I recognized this and replaced all of my calls to echo with printf some
> time ago.
>
> If POSIX mandates that '--' not be taken as the end of options, then the
> safe thing would be to simply not have echo take any options. Obviously,
> that would break backwards compatibility, so you'd want this to be optional
> behavior that the shell programmer can enable if desired.

echo() { local IFS=' '; printf '%s\n' "$*"; }

-- 
Kerin Millar



Re: Enable compgen even when programmable completions are not available?

2023-06-30 Thread Kerin Millar
On Sat, 01 Jul 2023 02:25:33 +0700
Robert Elz  wrote:

> Date: Fri, 30 Jun 2023 18:35:34 +0100
> From: Kerin Millar 
> Message-ID:  <20230630183534.85da7986a24855126bfea...@plushkava.net>
> 
>   | This can be trivially foiled.
> 
> You mean it doesn't give you all the variable names?   Why not?
> Does bash have a bug in this area that I am unaware of?
> 
> Or do you mean that it will sometimes (due to newlines in the values)
> be able to be persuaded to give you more than just the var names?
> 
> If the latter, why do you care?  The processing can check for each variable

Well, I don't in particular. However, it is also the case that:

- every single solution posted in this thread has been broken in its initial 
form, save for that which I had devised even before Eli posted
- I have to yet to see one working solution that provides any tangible 
advantage over mine
- I am not soliciting solutions in any case; they have been nothing but a 
distraction
- I am tired of being told things that I already know and am frankly now tired 
of this entire thread; nothing is going to come of it

-- 
Kerin Millar



Re: Enable compgen even when programmable completions are not available?

2023-06-30 Thread Kerin Millar
On Sat, 01 Jul 2023 00:19:41 +0700
Robert Elz  wrote:

> Date: Thu, 29 Jun 2023 23:05:38 +0100
> From: Kerin Millar 
> Message-ID:  <20230629230538.cbef14a75694143ccf034...@plushkava.net>
> 
>   | The thing is that portage also has a legitimate stake in needing
>   | to enumerate all variable names.
> 
> Why isn't "set" good enough for that?
> 
>   (set -o posix; set) | sed 's/=.*$//' | whatever_processes_them

This can be trivially foiled.
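A sketch of the kind of value that foils the set-based enumeration (the variable names are illustrative):

```shell
# bash's set output places literal newlines inside single quotes, so a
# crafted value smuggles a bogus "name" past the =-stripping sed filter
VAR=$'x\nNONEXISTENT=y'
(set -o posix; set) | sed 's/=.*$//' | grep -cx NONEXISTENT
```
The pipeline reports a match for NONEXISTENT even though no such variable exists.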

-- 
Kerin Millar



Re: Enable compgen even when programmable completions are not available?

2023-06-29 Thread Kerin Millar
On Thu, 29 Jun 2023 16:39:52 -0400
Chet Ramey  wrote:

> On 6/25/23 2:38 PM, Eli Schwartz wrote:
> > compgen is a useful builtin for inspecting information about the shell
> > context e.g. in scripts -- a good example of this is compgen -A function
> > or compgen -A variable.
> > 
> > But it's not always available depending on how bash is built, which
> > results in people lacking confidence that it can / should be used in
> > scripts. See e.g. https://bugs.gentoo.org/909148
> 
> It's dependent on programmable completion and readline, which are features
> that are enabled by default. Who builds a version of bash with those turned
> off? What's the rationale for doing that?

This is discussed in the referenced bug.

To begin with, readline is disabled during gentoo's bootstrapping stage. I'm 
not one of its release engineers, but I presume that there is a cromulent 
reason for doing so. Sam might be able to chime in on this. Meanwhile, portage 
tries to run "compgen -A function" when the time comes to save the "ebuild 
environment", which is always stashed in the "VDB" for an installed package 
(under /var/db/pkg). Naturally, these two things do not go well together. As 
such, the bug was filed with the intent of eliminating this (sole) use of 
compgen.

Now for my part in this. Without going into the details of portage's internals, 
some of the code concerning the processing of the ebuild environment is ... 
well, let me just put it politely by saying that it is far from ideal. I have 
been working on cleaning much of this up. Consequently, I have come to rely 
upon compgen quite heavily, duly compelling me to pose the same question: what 
is the reason for disabling readline support in the first place? In fact, not 
only is it unavailable during bootstrapping but users do also have the option 
to disable it. Admittedly, they never do in practice; it's an option that is 
made deliberately difficult to take advantage of. In any case, comments #7 and 
#8 were the catalyst for bringing the matter up here on the list, if you're 
interested.

As it happens, I have submitted a patch that replaces the use of "compgen -A 
function" with a "declare -F" parser, which is all well and good. The thing is 
that portage also has a legitimate stake in needing to enumerate all variable 
names. To that end, I had been using "compgen -A variable" in my fork. The 
implication of the bug is that I must refrain from using it, at which point the 
prospect of parsing "declare -p" is a tremendously unappealing one, to say the 
least.

Ultimately, depending on whether you entertain any of this or not, the outcome 
will be that I rely upon my eval-using hack - which I had thought of by comment 
#10 - in all perpetuity or look forward to a cleaner way of going about it that 
doesn't necessarily depend upon compgen being available. One possibility would 
be to extend declare in such a manner that it is able to print only variable 
names, just as declare -F prints only function names. Another might be to 
extend the "${!prefix*}" and "${!prefix@}" syntax in such a way that it is 
possible to match any name. To be clear, I am not as concerned with the matter 
as I was at the time that the bug was initially filed but would still welcome 
any potential improvement, if it is at all feasible.
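For reference, the existing "${!prefix@}" form can only enumerate names sharing a literal prefix (a minimal sketch; MYVAR_ is an illustrative prefix):

```shell
# Each matching name expands as a separate word when quoted
MYVAR_A=1 MYVAR_B=2
printf '%s\n' "${!MYVAR_@}"   # prints MYVAR_A and MYVAR_B
```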

-- 
Kerin Millar



Re: Bash silently exits where attempting to assign an array to certain built-in variables using declare

2023-06-29 Thread Kerin Millar
On Thu, 29 Jun 2023 12:10:37 -0400
Chet Ramey  wrote:

> On 6/29/23 8:51 AM, Chet Ramey wrote:
> 
> > It should be consistent, at least. I think the string assignment behavior
> > is the most reasonable: assignments return 1 but there's no assignment
> > error. I'll look at how compound assignments are different.
> 
> I have reconsidered this. Making assignments that are supposed to be
> ignored set $? to 1 means that a shell with errexit enabled will exit.
> I don't think that's desirable. I don't think attempted assignments to
> noassign variables should change $?.

That seems reasonable to me. It would also make it less likely that the 
evaluation of the prior output of declare -p - be it in whole or in part - 
affects the value of $?, which seems like a win.

-- 
Kerin Millar



Re: maybe a bug in bash?

2023-06-29 Thread Kerin Millar
On Thu, 29 Jun 2023 11:55:12 +0200
Sebastian Luhnburg  wrote:

> #!/usr/bin/env bash
> 
> initial_password="\$abc"
> echo "initial password: " $initial_password
> printf -v password '%q' $initial_password
> echo "initial password with escaped characters: " $password
> bash << EOF
> echo "password in here document: " ${password@Q}
> /bin/bash -c "echo 'password in subshell in here document: ' ${password@Q}"
> EOF

While Dominique has already responded adequately, I have several things to add. 
One is that you should quote the expansion of initial_password in your printf 
statement to impede word splitting and pathname expansion. Another is that you 
should not requote the string with %q then proceed to do so a second time by 
using the ${param@Q} form of expansion. Instead, use one or the other.
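Each mechanism already produces a fully quoted string, so combining them quotes twice (a minimal sketch):

```shell
s='$abc'
printf -v q '%q' "$s"
echo "$q"       # \$abc   - quoted once by %q
echo "${s@Q}"   # '$abc'  - quoted once by @Q
echo "${q@Q}"   # '\$abc' - quoted twice; this is the mistake
```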

Traversing multiple quoting layers is hard and I would suggest simply not doing 
it, if it can be helped. That being said, you could get the results that you 
expect by conveying password as a discrete argument to bash (which is not a 
subshell, by the way). Below is an example.

#!/bin/bash
initial_password='$abc'
printf -v password %q "$initial_password"
bash <

Re: Bash silently exits where attempting to assign an array to certain built-in variables using declare

2023-06-29 Thread Kerin Millar
On Thu, 29 Jun 2023 08:51:58 -0400
Chet Ramey  wrote:

> On 6/28/23 1:14 PM, Kerin Millar wrote:
> > This report is based on an observation made within the depths of this 
> > thread: https://lists.gnu.org/archive/html/bug-bash/2023-06/msg00094.html.
> > 
> > Attempting to assign an array to any of the following variables with the 
> > declare builtin causes bash to immediately exit with no diagnostic message 
> > being issued.
> > 
> >BASH_ARGC
> >BASH_ARGV
> >BASH_LINENO
> >BASH_SOURCE
> >GROUPS
> 
> These are all `noassign' variables; assignments to them are ignored.
> The bash debugger variables cannot be unset either. Other noassign
> variables can be unset; that's why they're not readonly.
> 
> (Before you ask, noassign variables have been in bash since 1996.)

Thanks for this information. While I had no expectation of them being 
assignable, some of this might have a valid place in the manual.

> 
> In this case, assignment to the noassign variable is being treated like an
> assignment error, which aborts the current command (a compound command in
> your examples) but does not exit the shell. If you were to separate the
> commands with a newline, you'd see the difference.

Ah. I thought that I had tried it at some point. Evidently, my manner of 
testing was faulty.

> 
> This came up in 2021 in the GROUPS case:
> 
> https://lists.gnu.org/archive/html/bug-bash/2021-08/msg00013.html
> 
> The other variables in that list inherited their noassign property from
> changes to support the bash debugger.
> 
> > It does not happen if trying to assign a string.
> 
> It should be consistent, at least. I think the string assignment behavior
> is the most reasonable: assignments return 1 but there's no assignment
> error. I'll look at how compound assignments are different.

Thanks. I would just add that 'noassign' variables are not consistently 
indicated in the manual. For example, it is said for GROUPS that "Assignments 
to GROUPS have no effect" but there is no such wording for the others that I 
mentioned. That might be worth addressing, even if only by inserting similarly 
worded sentences where appropriate.

> 
> > 
> >$ bash -c 'declare BASH_ARGC=1; echo FIN'
> >FIN
> > 
> > There are various other variables bearing the readonly attribute for which 
> > this also happens. In the following case, bash does, at least, complain 
> > that the variable is readonly.
> > 
> >$ bash -c 'declare BASHOPTS=(); echo FIN'
> >bash: line 1: BASHOPTS: readonly variable
> 
> Attempted assignment to readonly variables is an assignment error.
> 
> > This seems rather inconsistent. Also, it is confusing for bash to quit 
> > without indicating why it did so.
> 
> Technically, it exited because it hit EOF after aborting the compound
> command.

I see.

-- 
Kerin Millar



Bash silently exits when attempting to assign an array to certain built-in variables using declare

2023-06-28 Thread Kerin Millar
This report is based on an observation made within the depths of this thread: 
https://lists.gnu.org/archive/html/bug-bash/2023-06/msg00094.html.

Attempting to assign an array to any of the following variables with the 
declare builtin causes bash to immediately exit with no diagnostic message 
being issued.

  BASH_ARGC
  BASH_ARGV
  BASH_LINENO
  BASH_SOURCE
  GROUPS

Here is an example.

  $ bash -c 'declare -p BASH_VERSION; declare BASH_ARGC=(); echo FIN'; echo $?
  declare -- BASH_VERSION="5.2.15(1)-release"

It does not happen if trying to assign a string.

  $ bash -c 'declare BASH_ARGC=1; echo FIN'
  FIN

There are various other variables bearing the readonly attribute for which this 
also happens. In the following case, bash does, at least, complain that the 
variable is readonly.

  $ bash -c 'declare BASHOPTS=(); echo FIN'
  bash: line 1: BASHOPTS: readonly variable

However:

  $ bash -c 'declare BASHOPTS=1; echo FIN'
  bash: line 1: BASHOPTS: readonly variable
  FIN

This seems rather inconsistent. Also, it is confusing for bash to quit without 
indicating why it did so.

-- 
Kerin Millar



Re: Enable compgen even when programmable completions are not available?

2023-06-28 Thread Kerin Millar
On Wed, 28 Jun 2023 12:42:16 +0200
Fabien Orjollet  wrote:

> On 28/06/2023 00:40, Kerin Millar wrote:
> > On Tue, 27 Jun 2023 21:52:53 +0200
> > of1  wrote:
> > 
> >> On 27/06/2023 21:05, Kerin Millar wrote:
> >>> It doesn't work at all for >=5.2. The reason for this is interesting and 
> >>> I may make a separate post about it.
> >>>
> >>> Prior to 5.2, it can easily be tricked into printing names that do not 
> >>> exist.
> >>>
> >>> $ VAR=$'\nNONEXISTENT=' ./declare-P | grep ^NONEXISTENT
> >>> NONEXISTENT
> >>>
> >> Thank you. I was just reading the discussion in Gentoo forum and
> >> realizing that I've been too quick: it doesn't pass the
> >> FOO=$'\nBAR BAZ QUUX=' test. But what about this? (I also quickly
> >> tested with Bash 5.2.15).
> >>
> >>
> >> FOO=$'\nBAR BAZ QUUX='
> >> VAR=$'\nNONEXISTENT='
> >>
> >> declare-P() {
> >>  local curVar
> >>  declare -a curVars
> >>
> >>  readarray -t curVars <<<"$1"
> >>  curVars=( "${curVars[@]%%=*}" )
> >>  curVars=( "${curVars[@]##* }" )
> >>  for curVar in "${curVars[@]}"; do
> >> ### we can use [[ -v "$curVar" ]] at some point!
> >> [[ "${curVar//[a-zA-Z0-9_]}" || \
> >>"${curVar:0:1}" == [0-9] || \
> >>! -v "$curVar" || \
> >>! "$curVar" =~ $2 ]] || printf '%s\n' "$curVar"
> >>  done
> >> }
> >>
> >> declare-P "$(declare -p)"
> >> echo "##"
> >> declare-P "$(declare -p)" "QU|^NON|VAR"
> >>
> > 
> > This use of test -v is probably sufficient to address that particular 
> > issue. It is just as well that the test occurs after determining that the 
> > string is a valid identifier because test -v has the power to facilitate 
> > arbitrary code execution.
> > 
> > My assertion that your code is broken in 5.2 was incorrect. Rather, the 
> > problem seems to be that regcomp(3) behaviour differs across platforms, 
> > when given an empty expression to compile. To address that, you could 
> > determine whether a second positional parameter was given and is non-empty 
> > before proceeding to use it.
> 
> Thank you, good to know. I think that ${2:-.} should be correct.
> 
> > 
> > Curiously, repeated invocation may lead to obvious issues of resource 
> > consumption.
> > 
> > # This is excruciatingly expensive
> > time for i in {1..100}; do
> > declare-P "$(declare -p)"   
> > done
> > 
> > # This is not so expensive (!)
> > time for i in {1..100}; do
> > declare-P "$(declare -p)"
> > :
> > done
> 
> I made some testing by adding
> echo -n "$(( ++COUNT ))>$(wc -c <<<"$1") "
> echo "$1" >"/tmp/declare-P$COUNT"
> to the function:
> 
> 1>4651 2>9785 3>15283 4>21504 5>29173 6>39739 7>56095 8>84036
> 9>135145 10>232592 11>422710 12>798174 13>1544326 14>3031854
> 15>6002134 16>11937918 ^C
> 
> ls -rtnh /tmp/declare-P*
> -rw-r--r-- 1 1000 1000 4,6K 28 juin  11:09 /tmp/declare-P1
> -rw-r--r-- 1 1000 1000 9,6K 28 juin  11:09 /tmp/declare-P2
> -rw-r--r-- 1 1000 1000  15K 28 juin  11:09 /tmp/declare-P3
> -rw-r--r-- 1 1000 1000  21K 28 juin  11:09 /tmp/declare-P4
> -rw-r--r-- 1 1000 1000  29K 28 juin  11:09 /tmp/declare-P5
> -rw-r--r-- 1 1000 1000  39K 28 juin  11:09 /tmp/declare-P6
> -rw-r--r-- 1 1000 1000  55K 28 juin  11:09 /tmp/declare-P7
> -rw-r--r-- 1 1000 1000  83K 28 juin  11:09 /tmp/declare-P8
> -rw-r--r-- 1 1000 1000 132K 28 juin  11:09 /tmp/declare-P9
> -rw-r--r-- 1 1000 1000 228K 28 juin  11:09 /tmp/declare-P10
> -rw-r--r-- 1 1000 1000 413K 28 juin  11:09 /tmp/declare-P11
> -rw-r--r-- 1 1000 1000 780K 28 juin  11:09 /tmp/declare-P12
> -rw-r--r-- 1 1000 1000 1,5M 28 juin  11:09 /tmp/declare-P13
> -rw-r--r-- 1 1000 1000 2,9M 28 juin  11:09 /tmp/declare-P14
> -rw-r--r-- 1 1000 1000 5,8M 28 juin  11:09 /tmp/declare-P15
> -rw-r--r-- 1 1000 1000  12M 28 juin  11:09 /tmp/declare-P16
> 
> But by adding set -p to the subshell:

Your best option would probably be to run declare -p within the function while 
avoiding the use of command substitution altogether, so as not to 'pollute' the 
_ variable.
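A sketch of that suggestion (the scanning logic here is mine, not Fabien's): feeding declare -p to the function through process substitution means no command substitution result ever lands in _ or in an ever-growing argument list.

```shell
#!/usr/bin/env bash
# Print the names of all set variables by scanning declare -p line by
# line, read via process substitution rather than "$(declare -p)".
declare-P() {
    local line name re='^declare -[-a-zA-Z]+ ([A-Za-z_][A-Za-z0-9_]*)='
    while IFS= read -r line; do
        # Only consider lines shaped like: declare -<flags> NAME=...
        [[ $line =~ $re ]] || continue
        name=${BASH_REMATCH[1]}
        # Guard against multi-line values that merely resemble declare
        # output (safe here because name was validated as an identifier).
        if [[ -v $name ]]; then
            printf '%s\n' "$name"
        fi
    done < <(declare -p)
}

declare-P
```

Declared-but-unset variables (printed without an `=`) are skipped by the pattern; that is a deliberate simplification in this sketch.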

-- 
Kerin Millar



Re: Enable compgen even when programmable completions are not available?

2023-06-27 Thread Kerin Millar
On Tue, 27 Jun 2023 21:52:53 +0200
of1  wrote:

> On 27/06/2023 21:05, Kerin Millar wrote:
> > It doesn't work at all for >=5.2. The reason for this is interesting and I 
> > may make a separate post about it.
> > 
> > Prior to 5.2, it can easily be tricked into printing names that do not 
> > exist.
> > 
> > $ VAR=$'\nNONEXISTENT=' ./declare-P | grep ^NONEXISTENT
> > NONEXISTENT
> > 
> Thank you. I was just reading the discussion in Gentoo forum and
> realizing that I've been too quick: it doesn't pass the
> FOO=$'\nBAR BAZ QUUX=' test. But what about this? (I also quickly
> tested with Bash 5.2.15).
> 
> 
> FOO=$'\nBAR BAZ QUUX='
> VAR=$'\nNONEXISTENT='
> 
> declare-P() {
> local curVar
> declare -a curVars
> 
> readarray -t curVars <<<"$1"
> curVars=( "${curVars[@]%%=*}" )
> curVars=( "${curVars[@]##* }" )
> for curVar in "${curVars[@]}"; do
>### we can use [[ -v "$curVar" ]] at some point!
>[[ "${curVar//[a-zA-Z0-9_]}" || \
>   "${curVar:0:1}" == [0-9] || \
>   ! -v "$curVar" || \
>   ! "$curVar" =~ $2 ]] || printf '%s\n' "$curVar"
> done
> }
> 
> declare-P "$(declare -p)"
> echo "##"
> declare-P "$(declare -p)" "QU|^NON|VAR"
> 

This use of test -v is probably sufficient to address that particular issue. It 
is just as well that the test occurs after determining that the string is a 
valid identifier because test -v has the power to facilitate arbitrary code 
execution.

My assertion that your code is broken in 5.2 was incorrect. Rather, the problem 
seems to be that regcomp(3) behaviour differs across platforms, when given an 
empty expression to compile. To address that, you could determine whether a 
second positional parameter was given and is non-empty before proceeding to use 
it.

Curiously, repeated invocation may lead to obvious issues of resource 
consumption.

# This is excruciatingly expensive
time for i in {1..100}; do
declare-P "$(declare -p)"   
done

# This is not so expensive (!)
time for i in {1..100}; do
declare-P "$(declare -p)"
:
done

At present, I do not have a concrete explanation as to why this is, though it 
may well have something to do with the value of the _ parameter. Suffice it to say 
that I'll be sticking to multiple ${!prefix*} expansions until such time as 
bash offers a better way of going about it that doesn't also require readline.
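For reference, a sketch of that approach (the function name and the validation are mine): prefix expansion asks the shell itself for the names, so there is no output to parse at all.

```shell
#!/usr/bin/env bash
# List the names of all set variables beginning with a given prefix by
# way of a ${!prefix@} expansion.  The prefix must appear literally in
# the expansion, hence the (validated) use of eval.
list_prefixed() {
    local prefix=$1
    # Refuse anything that is not a valid identifier prefix, which also
    # makes the subsequent eval safe.
    [[ $prefix =~ ^[A-Za-z_][A-Za-z0-9_]*$ ]] || return 1
    eval "printf '%s\n' \"\${!${prefix}@}\""
}

MYAPP_HOST=db1 MYAPP_PORT=5432
list_prefixed MYAPP
# MYAPP_HOST
# MYAPP_PORT
```

The drawback, as noted above, is that covering unrelated names requires multiple such expansions, one per prefix.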

-- 
Kerin Millar



Re: Enable compgen even when programmable completions are not available?

2023-06-27 Thread Kerin Millar
On Tue, 27 Jun 2023 18:37:37 +0200
Fabien Orjollet  wrote:

> I'm far from having the skills of the people here. However, I found the
> problem interesting. I think I've come up with a reasonable solution
> (you tell me). Although it's not as short as Kerin Millar's, I think it
> offers some improvements. I hope there are no particular weaknesses.
> If it's of any use to anyone.
> 
> 
> declare-P() {
> local curVar
> declare -a curVars
> 
> readarray -t curVars <<<"$1"
> curVars=( "${curVars[@]%%=*}" )
> curVars=( "${curVars[@]##* }" )
> 
> for curVar in "${curVars[@]}"; do
>### unfortunately, we cannot use [[ -v "$curVar" ]]
>[[ "${curVar//[a-zA-Z0-9_]}" || \
>   "${curVar:0:1}" == [0-9] || \
>   ! "$curVar" =~ $2 ]] || printf '%s\n' "$curVar"
> done
> }
> 
> declare-P "$(declare -p)"
> echo "##"
> declare-P "$(declare -p)" "TERM"
> echo "##"
> declare-P "$(declare -p)" "^BASH|^SHELL"
>

It doesn't work at all for >=5.2. The reason for this is interesting and I may 
make a separate post about it.

Prior to 5.2, it can easily be tricked into printing names that do not exist.

$ VAR=$'\nNONEXISTENT=' ./declare-P | grep ^NONEXISTENT
NONEXISTENT

All of this lends further credence to Eli's post. Parsing declare -F is a minor 
nuisance, whereas parsing declare -p is broken by design. While the format of 
declare -p improved for 5.2, there is no guarantee of forward-compatibility.
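For contrast, where bash has been built with programmable completion (the default configuration), the compgen builtin produces the same listing with no parsing at all - which is precisely what the subject of this thread asks to have made available unconditionally:

```shell
#!/usr/bin/env bash
# List variable names via the compgen builtin (requires a bash built
# with programmable completion, which is the default).

compgen -v            # all variable names, one per line
compgen -v BASH       # only names beginning with BASH
```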

-- 
Kerin Millar



Re: Enable compgen even when programmable completions are not available?

2023-06-26 Thread Kerin Millar
On Tue, 27 Jun 2023 02:23:23 +0700
Robert Elz  wrote:

> Date:Mon, 26 Jun 2023 10:32:19 +0100
> From:    Kerin Millar 
> Message-ID:  <20230626103219.0f74c089c616248cee6ab...@plushkava.net>
> 
> 
>   | Further, declare is granted special treatment, even after having been
>   | defined as a function (which might be a bug).
> 
> That will be because of the way that "declaration" utilities (which get
> special syntax rules for parsing their args) are detected.   POSIX allows
> (and I think most shells which implement that absurd idea, for this there
> is very little choice) those to be detected merely by examining the
> characters that make up the command word, well before any actual
> interpretation of them is attempted.
> 
> In bash, "declare" (the built-in) is a declaration utility, so its
> args always get special treatment by the parser, regardless of whether
> or not the declare that will be run is the declaration utility, or
> something else (eg: a function).
> 
> In POSIX, this is not a problem, as the declaration utilities are all
> special built-in commands, and those are recognised before functions
> (ie: it is not meaningfully possible to define a function to replace
> a special built-in).   Bash, at least in non-posix mode, doesn't have
> that restriction.
> 
> Avoiding the declaration utility nonsense arg parsing semantics if
> desired is easy, all that's needed is something like
> 
>   E=
>   $E declare ...
> 
> and then since "$E" is in the command word position, and is not a
> declaration utility regardless of what E is set to. An alternative
> that does the same thing is:
> 
>   D=declare
>   $D ...
> 
> as "$D" is not a declaration utility either - before it is expanded.
> 
> Both of them run "declare" with the args processed in the same way
> that would be used for any other utility (like echo, printf, or awk).
> 
> Unfortunately, it can be (fractionally) simpler to write code where
> the magic declaration utility parsing rules are used, though as best
> I can tell it is never the only way - to avoid it, one simply needs
> to separate the declaration from the assignment, putting one before
> the other (which order depends upon the declaration utility, for export
> it doesn't matter, either order works, for readonly the assignment
> must appear before the readonly, for declare the declaration probably
> usually needs to precede the assignment).   Simple variable assignments
> (as distinct from things which appear to be variable assignments but
> which are actually just args to a utility, regardless of what that
> utility does with them) are always processed with the special arg
> processing syntax rules.
> 
> Of course, none of this is relevant to finding a solution to the
> original problem - but is to deciding whether or not the way that
> bash gives special treatment to "declare" is a bug or not.   My
> impression is that it is not, and that if an application (ie: you)
> decide to define a function with the same name as a declaration
> utility, you need to understand that it is still going to get
> declaration utility arg parsing applied to it, rather than what
> happens to everything else.
> 
> Of course, Chet might decide to change that - shells aren't
> required to only use the chars that make up the command word
> position (before anything is expanded) in order to determine
> the expansion rules - another way (much more complex to code,
> and probably slower to execute) would be the expand the words
> in a simple command, one by one, finding the (expanded) word
> which will actually be the command that is run, and then
> parsing and expanding (in POSIX it is only expanding that
> matters - the parsing rules do not alter, as posix has no arrays
> and hence '(' is always an operator, never just a part of a
> variable value) the remainder of the command line according to
> the rules appropriate for the command that will actually be run.
> Of course, if that were done, the trivial mechanisms to avoid
> declaration utility semantics being applied given above ($E or $D)
> no longer work.   But separating the declaration from the
> assignment should always be possible - if it isn't, something is
> going badly wrong (it might sometimes mean 3 commands need to be
> executed instead of one - one declaration before the assignment,
> and another after, but that's OK for more rational and understandable
> syntax).

I thank you for your insight. I am by no means certain that it should be 
considered a bug, though I did find it surprising (this behaviour does not 
appear to be documented). What was more surprising was the eventual realisation 
that bash silently exits upon any attempt to assign to any of a particular 
subset of its variables. That certainly complicated the matter of debugging 
Martin's code.
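A minimal illustration of the parsing difference kre describes (the names v, x, y and D are mine):

```shell
#!/usr/bin/env bash
# With 'declare' literally in the command word position, its arguments
# get declaration-utility treatment: word splitting is suppressed in
# anything that looks like an assignment.
v='a b'
declare x=$v          # x gets the whole string: 'a b'

# With the command word produced by an expansion, ordinary utility
# parsing applies first, so $v is split and declare receives the two
# words 'y=a' and 'b'.
D=declare
$D y=$v               # y gets only 'a'; 'b' is declared as a bare name

printf '%s|%s\n' "$x" "$y"
# a b|a
```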

-- 
Kerin Millar



Re: Enable compgen even when programmable completions are not available?

2023-06-26 Thread Kerin Millar
On Mon, 26 Jun 2023 12:03:47 +0200
alex xmb ratchev  wrote:

> On Mon, Jun 26, 2023, 12:01 Kerin Millar  wrote:
> 
> > On Mon, 26 Jun 2023 11:51:58 +0200
> > alex xmb ratchev  wrote:
> >
> > > On Mon, Jun 26, 2023, 11:33 Kerin Millar  wrote:
> > >
> > > > On Mon, 26 Jun 2023 17:09:47 +1000
> > > > Martin D Kealey  wrote:
> > > >
> > > > > Hi Eli
> > > > >
> > > > > How about using the shell itself to parse the output of "typeset"
> > > > > (an alias for "declare"), but redefining "declare" to do something
> > > > > different. This is a bit verbose but it works cleanly:
> > > > >
> > > > > ```
> > > > > (
> > > > >   function declare {
> > > > > while [[ $1 = -* ]] ; do shift ; done
> > > > > printf %s\\n "${@%%=*}"
> > > > >   }
> > > > >   eval "$( typeset -p )"
> > > > > )
> > > > > ```
> > > >
> > > > Unfortunately, this is defective.
> > > >
> > > > $ bash -c 'declare() { shift; printf %s\\n "${1%%=*}"; }; eval "declare -a BASH_ARGC=()"'; echo $?
> > > > 1
> > > >
> > > > In fact, bash cannot successfully execute the output of declare -p
> > > > in full.
> > > >
> > > > $ declare -p | grep BASH_ARGC
> > > > declare -a BASH_ARGC=([0]="0")
> > > > $ declare -a BASH_ARGC=([0]="0"); echo $? # echo is never reached
> > > >
> > > > While it is understandable that an attempt to assign to certain shell
> > > > variables would be treated as an error, the combination of not printing
> > > > a diagnostic message and inducing a non-interactive shell to exit is
> > > > rather confusing. Further, declare is granted special treatment, even
> > > > after having been defined as a function (which might be a bug).
> > > >
> > > > $ bash -c 'declare() { shift; printf %s\\n "${1%%=*}"; }; eval "declare -a BASH_ARGC=()"'; echo $?
> > > > 1
> > > >
> > > > $ bash -c 'declare() { shift; printf %s\\n "${1%%=*}"; }; eval "declare -a BASH_ARG=()"'; echo $?
> > > > BASH_ARG
> > > > 0
> > > >
> > > > $ bash -c 'f() { shift; printf %s\\n "${1%%=*}"; }; eval "f -a BASH_ARGC=()"'; echo $?
> > > > bash: eval: line 1: syntax error near unexpected token `('
> > > > bash: eval: line 1: `f -a BASH_ARGC=()'
> > > > 2
> > > >
> > > > $ bash -c 'f() { shift; printf %s\\n "${1%%=*}"; }; eval "f -a BASH_ARG=()"'; echo $?
> > > > bash: eval: line 1: syntax error near unexpected token `('
> > > > bash: eval: line 1: `f -a BASH_ARG=()'
> > > > 2
> > > >
> > >
> > > You forgot: with cmd foo bar=(), you still need to escape ( and ),
> > > as always.
> >
> > I didn't forget anything. Martin's proposal was intended to work by
> > evaluating the unmodified output of typeset -p. That ( and ) normally need
> > to be escaped simply demonstrates further that it is untenable as a
> > solution.
> 
> 
> 1. Making bash accept func foo=() has never worked without escaping
> ( and ); it gives the same error message.
> 2. Can you paste me the specific message/URL for that declare -p output?
> I'm a parser pro.

With all due respect, you are taking this thread way out into the weeds. The 
given solution doesn't work, and may have inadvertently revealed a bug in bash. 
I have absolutely nothing more to add that hasn't already been stated in my 
reply to Martin. Everything you need to know is in that post.

If you are interested in a solution that is clumsy - but that actually works - 
see https://bugs.gentoo.org/909148#c10.

-- 
Kerin Millar


