Re: Scope change in loops with "read" built-in
"Linde, Evan" writes: > In a loop constructed like `... | while read ...`, changes to > variables declared outside the loop only have a loop local > scope, unlike other "while" or "for" loops. Yeah, that's a gotcha. But it's a general feature of *pipelines*, documented in Each command in a pipeline is executed as a separate process (i.e., in a subshell). See COMMAND EXECUTION ENVIRONMENT for a description of a subshell environment. If the lastpipe option is enabled using the shopt builtin (see the description of shopt below), the last element of a pipeline may be run by the shell process. To circumvent that, I've sometimes done things like exec 3<( ... command to generate stuff ... ) while read VAR <&3; do ... commands to process stuff ... ; done exec 3<- You may be able to condense that to { while read VAR <&3; do ... commands to process stuff ... ; done } <( ... command to generate stuff ... ) Changing {...} to (...) won't work, because (...) again executes things in a subshell. (But don't trust that code without checking it!) Dale
Re: feat: exit 1 "file not found"
teknopaul writes:
> Hi not sure if this is the correct forum...

I believe this is the correct forum.

> exit builtin accepts one and only one arg currently.
>
> Would it be backwards compatible and generally useful to support
> echoing a reason for exiting?
>
> test -f file || exit 2 "file not found"

Well, you can get a similar effect tersely via

    test -f file || { echo "file not found"; exit 2; }

but you may want to say

    test -f file || { echo >&2 "file not found"; exit 2; }

And I think that may be the difficulty with making this an additional
function of "exit": there may well be too many variants in common use
to allow a simple extension to "exit" to cover the bulk of all uses.
And again, the present alternative has few disadvantages.

Dale
Re: human-friendly ulimit values?
Christian Convey writes:
> When setting memory-size limits via "ulimits", users have to manually
> convert from their intuitive units.
>
> E.g., for limiting virtual memory to 8 gigabytes, the invocation is
> "ulimit -v 8388608", rather than something like "ulimit -v 8gb".
>
> If I were to submit a patch for this, is there any chance of it
> getting accepted?

I can see value in this, but you'd need to show that you had carefully
designed the change.  Most importantly, has any other shell
implemented syntax like that, and are you compatible with that?  Which
values do you provide alternative syntax for and why?  What position
does it take on the "GB" vs. "GiB" business?

Dale
Re: Slow history load with some values of HISTSIZE
Casey Johnson writes:
> In a clean shell, execute:
>     HISTFILE=alt-history.txt
>     HISTSIZE=15
>     history -r
> and then observe how long the last command runs before returning.

Though I expect that when you exit bash, the history file gets trimmed
to 150,000 lines, and then the "over 10 seconds to load" doesn't
happen again.

Dale
Re: wait -n misses signaled subprocess
Chet Ramey writes:
>> echo "wait -n $pid return code $? @${SECONDS} (BUG)"
>
> The job isn't in the jobs table because you've already been notified
> about it and it's not `new', you get the unknown job error status.

The man page gives a lot of details and I'm trying to digest them into
a structure.  It looks like the underlying meaning of "-n" is to only
pay attention to *new* job completions, and anything "in the past"
(already notified and moved to the table of terminated background
jobs) is ignored.  The underlying meaning of providing one or more ids
is that "wait" is to only be concerned with those jobs.

The man page doesn't make clear that if you don't specify "-n" and do
supply ids and one of them has already terminated, you'll get its
status (from the terminated table); the wording suggests that "wait"
will always *wait for* a termination.  There's also an interaction in
that "wait" will only look at the terminated table if "-n" is not
specified *and* ids are specified.

Am I understanding this correctly?

Dale
Re: wait -n misses signaled subprocess
Steven Pelley writes:
> wait -n fails to return for processes that terminate due to a signal
> prior to calling wait -n.  Instead, it returns 127 with an error that
> the process id cannot be found.  Calling wait (without -n) then
> returns its exit code (e.g., 143).

My understanding is that this is how "wait" is expected to work, or at
least known to work, but mostly because that's how the *kernel* works.

"wait" without -n makes a system call which means "give me information
about a terminated subprocess".  The termination (or perhaps
change-of-state) reports from subprocesses are queued up in the kernel
until the process retrieves them through "wait" system calls.  OTOH,
"wait" with -n makes a system call which means "give me information
about my subprocess N".

In the first case, if the subprocess N has terminated, its report is
still queued and "wait" retrieves it.  In the second case, if the
subprocess N has terminated, it doesn't exist and as the manual page
says "If id specifies a non-existent process or job, the return status
is 127."

What you're pointing out is that that creates a race condition when
the subprocess ends before the "wait".  And it seems that the kernel
has enough information to tell "wait -n N", "process N doesn't exist,
but you do have a queued termination report for it".  But it's not
clear that there's a way to ask the kernel for that information
without reading all the queued termination reports (and losing the
ability to return them for other "wait" calls).

Then again, I might be wrong.

Dale
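A minimal script (mine, not from the report) showing the part of this everyone agrees on: plain `wait` retrieves the queued status of an already-terminated child, 128 + 15 = 143 for SIGTERM:

```shell
#!/usr/bin/env bash

# Start a child, kill it, and give it time to die *before* we wait.
sleep 30 &
pid=$!
kill -TERM "$pid"
sleep 1                           # ensure the child is gone already

# The termination report is queued, so wait still retrieves it:
# 128 + 15 (SIGTERM) = 143.
wait "$pid"
status=$?
echo "wait $pid returned $status"   # 143
```

Whether `wait -n "$pid"` in the same situation returns 143 or the 127 reported above depends on the bash version, which is the subject of the thread.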
Re: completion very slow with gigantic list
Eric Wong writes:
> Hi, I noticed bash struggles with gigantic completion lists
> (100k items of ~70 chars each)

A priori, it isn't surprising.  But the question becomes "What
algorithmic improvement to bash would make this work faster?" and then
"Who will write this code?"

Dale
Re: Unexpected Quick Substitution in string literals
Chet Ramey writes:
>> While declaring a string literal across multiple lines, a line
>> starting with the ^ character is resulting in some sort of quick
>> substitution processing.
>
> This is a standard form of history expansion, described in the man
> page.

I just checked.  Certainly, the use of ^ is mentioned in the man page,
but it isn't very clearly described, unfortunately.  In particular, ^
is only active at the beginning of a line, which doesn't seem to be
described at all.  The only relevant passages seem to be:

       histchars
              The two or three characters which control history
              expansion and tokenization (see HISTORY EXPANSION
              below).  The first character is the history expansion
              character, the character which signals the start of a
              history expansion, normally `!'.  The second character
              is the quick substitution character, which is used as
              shorthand for re-running the previous command entered,
              substituting one string for another in the command.  The
              default is `^'.  [...]

Note that this doesn't mention that ^ has to appear at the beginning
of a line.  The other passage is:

   Event Designators
       An event designator is a reference to a command line entry in
       the history list.  Unless the reference is absolute, events are
       relative to the current position in the history list.

       !      Start a history substitution, except when followed by a
              blank, newline, carriage return, = or ( (when the
              extglob shell option is enabled using the shopt
              builtin).
       !n     Refer to command line n.
       !-n    Refer to the current command minus n.
       !!     Refer to the previous command.  This is a synonym for
              `!-1'.
       !string
              Refer to the most recent command preceding the current
              position in the history list starting with string.
       !?string[?]
              Refer to the most recent command preceding the current
              position in the history list containing string.  The
              trailing ? may be omitted if string is followed
              immediately by a newline.  If string is missing, the
              string from the most recent search is used; it is an
              error if there is no previous search string.
       ^string1^string2^
              Quick substitution.  Repeat the previous command,
              replacing string1 with string2.  Equivalent to
              ``!!:s^string1^string2^'' (see Modifiers below).
       !#     The entire command line typed so far.

A quick read suggests that "^string1^string2^" is an event designator
which presumably is "the thing that follows !", leading one to guess
that the syntax is "!^string1^string2^".  A more careful reading
reveals that the list of event designators all include the initial !,
which suggests that "^string1^string2^" alone is the correct syntax.
That's correct, but it doesn't reveal that it must be at the beginning
of a line.  Indeed, the ^ form really should be removed from that list
and mentioned as a separate thing.

Dale
Re: Regex: A case where the longest match isn't being found
"Dan Bornstein" writes: > I found a case where the regex evaluator doesn't seem to be finding > the longest possible match for a given expression. The expression > works as expected on an older version of Bash (3.2.57(1)-release > (arm64-apple-darwin22)). > > Here's the regex: ^(\$\'([^\']|\\\')*\')(.*)$ This would be *much* easier to understand if you'd rewritten it into a test case where none of the characters that the regex was consuming were regex metacharacters, e.g. letters. That would make everything far easier to read. > (FWIW, this is meant to capture a string that looks like an ANSI-style > literal string, plus a "rest" for further processing.) *If* I read the regex correctly, it must match the entire string, ^...$. Then it matches in two parenthesized strings. The second string is .*, as you san "the rest for further processing". The first string is $, followed by a single ', followed by any number of: - one character that is not ' - the character \ followed by ' followed by a single ' Note that this regex does not give the matcher any leeway; it either matches a string or not, and if it matches, it matches in only one way. This isn't a question of whether it is choosing "the longest match". *If* I read the subject string correctly, it is $'foo\' x' bar (with two internal spaces and none at the beginning or end). So it does seem to say that the first parenthesized string should match $'foo\' x' > For example, run this: > > [[ $'$\'foo\\\' x\' bar' =~ ^(\$\'([^\']|\\\')*\')(.*)$ ]] && echo > "${BASH_REMATCH[1]}" > > On v5.2, this prints: $'foo\' > On v3.2.57, this prints: $'foo\' x' Executing this suggests that the subject string is being interpreted as intended: $ echo -E $'$\'foo\\\' x\' bar' $'foo\' x' bar $ OK, here's the problem. 
Compare these executions, which have an additional \\ inserted in the character class specification [...]: $ [[ $'$\'foo\\\' x\' bar' =~ ^(\$\'([^\']|\\\')*\')(.*)$ ]] && echo -E "${BASH_REMATCH[1]}" $'foo\' $ [[ $'$\'foo\\\' x\' bar' =~ ^(\$\'([^\']|\\\')*\')(.*)$ ]] && echo -E "${BASH_REMATCH[3]}" x' bar $ [[ $'$\'foo\\\' x\' bar' =~ ^(\$\'([^\\\']|\\\')*\')(.*)$ ]] && echo -E "${BASH_REMATCH[1]}" $'foo\' x' $ [[ $'$\'foo\\\' x\' bar' =~ ^(\$\'([^\\\']|\\\')*\')(.*)$ ]] && echo -E "${BASH_REMATCH[3]}" bar $ What you wrote was [^\']. The backslash is consumed while the regexp is being read and is interpreted as causing the succeeding ' to be non-magic. Of course, that ' isn't magic in that context. But it means that the character class includes all non-newline characters other than '. Specifically, that *includes* backslash. So the regexp matcher sees the first alternative of the | as matching an isolated backslash. That means that when the matcher is processing the iterator *, at that iteration, it will first match the character class, which matches, then attempt to iterate further (which will fail, exiting the iterator), match the closing ' (which will succeed), match the (.*) (which will succeed), match the $ (which will succeed), then return that match. Now, if the part of the pattern following the iterator failed, the matcher would backtrack until it attempted the *second* branch of the alternative in the final iteration, which would *also* match, but would leave the matcher lookng at " x'...", which would allow it to continue iterating. The complication is that if you'd written [^\\\'], the matcher would have only one way to match any subject string, and the matching process could be ignored. But with [^\'], there are multiple ways to match, and you get the first one the matcher finds. 
Specifically, ?, +, and * are "greedy", they start by matching as many copies of their sub-pattern as they are allowed, and if backtracked into, reduce the number of iterations one at a time as far as they're allowed. Alternations, though, start by attempting the first alternative, then the second, etc., and if backtracked into after succeeding with one alternative, try the next one. Ugh, that's a rough outline; it's not quite that simple. I suspect the difference between the versions is how the regexp is unquoted while it is being read, with version 3 interpreting [^\'] as "character class excluding newline, backslash, and quote" and version 5 interpreting it as "character class excluding newline and quote". Dale
Handling of SIGHUP
The basic explanation of how Bash handles SIGHUP is (in 5.1):

       The shell exits by default upon receipt of a SIGHUP.  Before
       exiting, an interactive shell resends the SIGHUP to all jobs,
       running or stopped.  Stopped jobs are sent SIGCONT to ensure
       that they receive the SIGHUP.

Bash doesn't execute .bash_logout in this situation, although that
isn't made entirely clear:

       When an interactive login shell exits, or a non-interactive
       login shell executes the exit builtin command, bash reads and
       executes commands from the files ~/.bash_logout and
       /etc/bash.bash_logout, if the files exist.

However, if I set a trap on SIGHUP, Bash's behavior seems to change.
In particular, if I set a no-op trap:

    trap ':' SIGHUP

then .bash_logout *is* run during the processing of the HUP.

I have not tested to see when SIGHUP is resent to child jobs.  The
documentation suggests that SIGHUP is resent if Bash exits on SIGHUP,
but if Bash exits normally, SIGHUP is sent only if huponexit is set,
and huponexit seems to be unset by default.  But it's possible that
behavior interacts with trap as well.

It seems to me that the documentation needs to be tweaked to explain
how trap of SIGHUP interacts with the default SIGHUP processing.

Dale
Re: variable set in exec'ing shell cannot be unset by child shell
Ti Strga writes:
> The summary is that if a parameter is set specifically for a
> '.'/'source' command, and the source'd file calls 'exec' to run
> another script, then that exec'd script cannot unset the parameter;
> if we want that parameter to not be present in the exec'd script,
> then the source'd file must do the unsetting prior to exec.

I was too lazy to chew through your example, but I coded an instance
of your description (above), and it does not show the dubious behavior
that you report.  Specifically,

    $ bash -version
    GNU bash, version 5.1.0(1)-release (x86_64-redhat-linux-gnu)
    ...
    $ ls -l
    total 20
    -rwxr-xr-x. 1 worley worley 112 Oct 13 14:14 inner
    -rw-r--r--. 1 worley worley 113 Oct 13 14:13 middle
    -rw-r--r--. 1 worley worley 123 Oct 13 14:12 outer
    $ cat outer
    echo "Value in ./outer at the beginning: $OUTSIDE"
    OUTSIDE=xyzzy . ./middle
    echo "Value in ./outer at the end: $OUTSIDE"
    $ cat middle
    echo "Value in ./middle at the beginning: $OUTSIDE"
    exec ./inner
    echo "Value in ./middle at the end: $OUTSIDE"
    $ cat inner
    echo "Value in ./inner at the beginning: $OUTSIDE"
    unset OUTSIDE
    echo "Value in ./inner at the end: $OUTSIDE"
    $ bash ./outer
    Value in ./outer at the beginning:
    Value in ./middle at the beginning: xyzzy
    Value in ./inner at the beginning: xyzzy
    Value in ./inner at the end:
    $

Here, when I run the command "bash ./outer", the subshell executes the
critical command "OUTSIDE=xyzzy . ./middle".  The source'd script
middle then exec's inner.  But inner seems to be able to unset
$OUTSIDE as one expects.  (Of course, when inner finishes, the command
I typed exits, as executing inner replaced executing outer.)

Dale
Re: using exec to close a fd in a var crashes bash
Is there any way to write a redirection "Redirect the fd whose number is in $FOO to file /foo/bar?" OK, you can write 'bash -c "..."' and assemble a command string however you want. But is there a direct way to write it? The "{var}>..." mechanism *assigns* to $var, rather than taking its existing value. Dale
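One indirect answer is to build the redirection with eval; a sketch (my own, using a temporary file as a stand-in for /foo/bar).  Note that closing is the one case where the {varname} form *reads* the variable: per the man page, `{varname}>&-` closes the fd whose number is stored in varname.

```shell
#!/usr/bin/env bash

tmpfile=$(mktemp)                 # stand-in for /foo/bar
FOO=9

# No direct "$FOO>file" syntax exists, but eval can assemble it:
eval "exec $FOO>\"\$tmpfile\""    # i.e., exec 9>"$tmpfile"

# The fd number in a redirection *can* come from an expansion here:
echo "hello" >&$FOO

# {varname} with >&- closes the fd whose number is in $FOO:
exec {FOO}>&-

cat "$tmpfile"                    # hello
rm -f "$tmpfile"
```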
Re: ! history expansion occurs within arithmetic substitutions
Andreas Schwab writes:
>> More troublesome, I think, are several variable substitutions which
>> include "!" followed by a name.  But I doubt they're used much in
>> interactive mode.
>
> The history expansion is smart enough to not interfere with ${!var}.

Yes...  Also, the same magic seems to apply to $!, even if it isn't at
the end of a word.  The manual page should probably mention the
criteria for suppressing history expansion there, whatever they are.
Perhaps "history expansion is not triggered by the history expansion
character when it is part of a variable reference" is unambiguous
enough; quoting (anti-recognition) of ! and $ have the same rules, I
think.

I was checking the manual page again, and I think it would be clearer
(certainly, would be to me) if a paragraph break was inserted as
follows:

       History expansion is performed immediately after a complete
       line is read, before the shell breaks it into words, and is
       performed on each line individually without taking quoting on
       previous lines into account.  It takes place in two parts.  The
       first is to determine which line from the history list to use
       during substitution.  The second is to select portions of that
       line for inclusion into the current one.  The line selected
       from the history is the event, and the portions of that line
       that are acted upon are words.  Various modifiers are available
       to manipulate the selected words.

       The line is broken into words in the same fashion as when
       reading input, so that several metacharacter-separated words
       surrounded by quotes are considered one word.  History
       expansions are introduced by the appearance of the history
       expansion character, which is ! by default.  Only backslash (\)
       and single quotes can quote the history expansion character,
       but the history expansion character is also treated as quoted
       if it immediately precedes the closing double quote in a
       double-quoted string.

The first part talks mostly about what history substitutions *do* and
the second talks about how they are triggered, which isn't a
continuation of the first part.  Indeed, I think that it would be
clearer to rearrange them into:

       History expansion is performed immediately after a complete
       line is read, before the shell breaks it into words, and is
       performed on each line individually without taking quoting on
       previous lines into account.  The line is broken into words in
       the same fashion as when reading input, so that several
       metacharacter-separated words surrounded by quotes are
       considered one word.  History expansions are introduced by the
       appearance of the history expansion character, which is ! by
       default.  Only backslash (\) and single quotes can quote the
       history expansion character, but the history expansion
       character is also treated as quoted if it immediately precedes
       the closing double quote in a double-quoted string.

       History expansion takes place in two parts.  The first is to
       determine which line from the history list to use during
       substitution.  The second is to select portions of that line
       for inclusion into the current one.  The line selected from the
       history is the event, and the portions of that line that are
       acted upon are words (as determined above).  Various modifiers
       are available to manipulate the selected words.

Dale
Re: ! history expansion occurs within arithmetic substitutions
Zachary Santer writes:
> When messing around in interactive mode to test things I may want to
> do in a script, it's nice if it actually behaves the same.  There are
> probably some other set and possibly shopt things I should have
> somewhere if that's what I want.

I checked, and it doesn't seem that there's an explicit way to start
bash noninteractively, but running "cat | bash" seems to work.  Of
course, it's noninteractive, so you get e.g. empty prompt strings!

My understanding is that "interactive" directly affects only two
things, the startup file(s) read and the initial values of various
shell options.  You can fix the startup files with explicit options on
invocation, and you can fix the shell options that matter to you with
commands.

Dale
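A quick way to check which mode a shell is in (a standard test, not specific to this thread) is to look for `i` among the option flags in `$-`:

```shell
#!/usr/bin/env bash

# $- holds the current option flags; interactive shells include "i".
case $- in
    *i*) echo "interactive" ;;
    *)   echo "non-interactive" ;;
esac

# Run as a script, or via "cat | bash", this prints "non-interactive";
# typed at a prompt it prints "interactive".
```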
Re: !; is interpreted as an history expansion that can never match anything
Emanuele Torre writes:
> ! followed by a ; or another terminator is interpreted as an history
> expansion with no pattern that can never match anything.
>
>     $ !; echo hi
>     bash: !: event not found
>     $ !&& echo hi
>     bash: !: event not found

IMHO it is more to the point that in the manual page it says

       !string
              Refer to the most recent command preceding the current
              position in the history list starting with string.

without defining "string".  It looks like the actual definition is
"everything from the ! to the end of the word", taking into account

       The line is broken into words in the same fashion as when
       reading input, so that several metacharacter-separated words
       surrounded by quotes are considered one word.  History
       expansions are introduced by the appearance of the history
       expansion character, which is ! by default.

With the significant detail that the ! need not be the first character
of the word.  So I think the manual page could be improved by adding
*...*:

       !string
              Refer to the most recent command preceding the current
              position in the history list starting with string.  *All
              characters until the start of the word designator or end
              of the word are part of string.*

       The line is broken into words in the same fashion as when
       reading input, so that several metacharacter-separated words
       surrounded by quotes are considered one word.  History
       expansions are introduced by the appearance of the history
       expansion character, which is ! by default*, within a word*.

Dale
Re: ! history expansion occurs within arithmetic substitutions
Zachary Santer writes:
> Description:
> Similarly, history expansion occurs within arithmetic substitutions.
> This will never, ever be what the user wants.  And now I know how to
> make it not happen.
>
> Repeat-By:
> $ set +o histexpand

Well, yes, if you turn off history expansion, then it won't occur
within arithmetic substitutions.  But I would not take it as given
that nobody would ever want to use history expansion within an
arithmetic substitution.  Let me concoct an example:

    $ i=4
    $ echo (( i+1 ))
    bash: syntax error near unexpected token `('
    $ echo $(( !:3 ))
    echo $(( i+1 ))
    5
    $

(This would be more likely if "i+1" was a much more complicated
expression.)

But it seems to me that the way to avoid this is to always put a space
after "!" in arithmetic expressions.  "!" followed by space is never
subject to history expansion.

More troublesome, I think, are several variable substitutions which
include "!" followed by a name.  But I doubt they're used much in
interactive mode.

Dale
Re: comments inside command subst are handled inconsistently
Denys Vlasenko writes:
> Try these two commands:
>
> $ echo "Date: `date #comment`"
> Date: Thu Jul 27 10:28:13 CEST 2023
>
> $ echo "Date: $(date #comment)"
> > )"
> Date: Thu Jul 27 10:27:58 CEST 2023
>
> As you see, #comment is handled differently in `` and $().
> I think the handling in `` makes more sense.

Or more exactly, the handling of a ")" after a "#" is different from
the handling of a "`" after a "#".

I suspect the parsing is done differently.  Likely `...` is treated as
a quoting operation, and "date #comment" is first extracted as a
string and then parsed.  Whereas it looks like the parser, upon seeing
"$(", does not first scan for ")" but instead adjusts its context and
continues parsing characters.  With that method, the "#" turns all of
"comment)" into comment.

Dale
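The difference can be checked non-interactively as well (my sketch): the closing backtick ends the substitution even inside a comment, while in the `$()` form the `#` swallows the `)`, so the one-line version never parses:

```shell
#!/usr/bin/env bash

# Backticks: the comment only runs to the closing `, so this parses
# and the substitution executes "echo hi".
echo "got: `echo hi #comment`"          # prints: got: hi

# $(...): the ) becomes comment text, the substitution is never
# closed, and bash reports a syntax error for the one-line form.
bash -c 'echo "got: $(echo hi #comment)"' 2>/dev/null \
    || echo "one-line \$() form fails to parse"
```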
Re: git amend commit with backticks on cli causes tty to crash
Wiley Young writes:
> I'm seeing this behavior on Fedora 38.  Possibly it's just some user
> error again, since i'm figuring out how to use vim and git at a
> slightly above novice level. :-)

Well, I think I can get an idea of what you're doing.  I'm not sure;
you really want to create a "minimal example" without all the
surrounding bric-a-brac.  Particularly, there seems to be some sort of
framework, perhaps "LiveUsb1", through which both the commands and
responses are filtered, which makes it really hard to track what is
doing what.  Also, why is there the stuff about vim?

Also, you don't state explicitly what you think the error is.  I
assume you mean that "Press [ctrl-d] again and tab (tty 5) crashes."
should not be happening.

But it looks to me like you're sending ctrl-d into a shell.  That, of
course, is EOF, and usually tells the shell to exit.  If it's a login
shell, the whole login session ends.  Now I don't know how pts's are
managed, but they are "pseudo ttys" and I wouldn't be surprised if,
when a login session on a pts ends, the pts of that session is
deleted.  My Fedora 34 seems to create and delete pts's as needed.

Dale
Re: nofork command substitution
Chet Ramey writes:
> Bash allows the close brace to be joined to the remaining
> characters in the word without being followed by a shell
> metacharacter as a reserved word would usually require.

I had to read this a couple of times to figure out what it means.  In
particular "the word" isn't well-bound here.  Perhaps better is

> The characters immediately following the close brace continue the
> word that the command substitution is part of; the close brace need
> not be followed by a shell metacharacter as a reserved word would
> usually require.

This text is clear in its description of the behavior but entirely
unclear regarding what the point is:

> If the first character is a '|', the construct expands to the value
> of the 'REPLY' shell variable after COMMAND executes, without
> removing any trailing newlines, and the standard output of COMMAND
> remains the same as in the calling shell.  Bash creates 'REPLY' as an
> initially-unset local variable when COMMAND executes, and restores
> 'REPLY' to the value it had before the command substitution after
> COMMAND completes, as with any local variable.

My guess is that the intention is for COMMAND to be "read" with no
names:

    ${| read; }

If so, can you make this an example for the "${|" form?

Also, can you clarify the semantics on this point:  The text above
says "the construct expands to the value of the 'REPLY' shell variable
after COMMAND executes, without removing any trailing newlines".
However, the documentation of "read" says: "If no names are supplied,
the line read, without the ending delimiter but otherwise unmodified,
is assigned to the variable REPLY."  So the sequence of processing is:
read inputs a line; read deletes the ending newline and assigns the
remainder to REPLY; command substitution obtains the value of REPLY
(which does not contain a newline); command substitution returns that
value "without removing any trailing newlines".  Is that what you
intend the code to do/what the code does?

Dale
Re: nofork command substitution
Ángel writes:
> I suggest:
>
>> There is an alternate form of command substitution:
>>
>> ${ COMMAND; }
>
> and clarify later the other variants in addition to a space:
>
>> The character following the open brace must be a space, tab,
>> newline, '(', or '|', and the close brace must be in a position...

I support either that change, or reordering the text to specify "C" at
the beginning, when the syntax is being described, before the
semantics are described:

> There is an alternate form of command substitution:
>
> ${C COMMAND; }
>
> where the character C following the open brace is a space, tab,
> newline, '(', or '|'.  This form executes COMMAND in the current
> execution environment.  This means that side effects of COMMAND take
> effect immediately in the current execution environment and persist
> in the current environment after the command completes (e.g., the
> 'exit' builtin will exit the shell).
>
> The close brace must be in a position where a reserved word may
> appear (i.e., preceded by a command terminator such as semicolon).
> ...

Dale
Re: How difficult would it be to add a timeout to "wait"?
Chet Ramey writes:
> On 4/20/23 1:53 PM, Dale R. Worley wrote:
>> How difficult would it be to add an optional timeout argument to the
>> "wait" builtin?
>
> Try a variant of this.
> ...

My interest here isn't "Can I accomplish this task with Bash?" but
quite literally, Can I make this a *feature* of Bash so I don't have
to set up the mechanics?  (I already know of a couple of other ways to
do it with Bash.)

An interesting point, which you may or may not have intended, was that
this Bash code can be turned into C code fairly easily:  Fork a
subprocess to wait for the timeout and when it fires, send USR1 to
Bash.  Then make sure Bash is waiting for the timeout subprocess as
well as whatever other children it should be waiting for.

My *reflex* is that this is a really heavyweight implementation, but
on consideration, I think I'm wrong:  fork() is cheap in Un*x; indeed,
that may have been its central design innovation, make processes cheap
and use them freely.

So then fleshing this out, let me ask:  Is it reasonable to add an
optional timeout to the "wait" builtin using this mechanism?

Dale
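The timer-subprocess mechanism described above can be sketched in shell today (my own sketch of the pattern, not code from the thread): a trapped signal makes the `wait` builtin return immediately with a status greater than 128, so a timer subprocess that sends SIGUSR1 acts as the timeout:

```shell
#!/usr/bin/env bash

timed_out=0
trap 'timed_out=1' USR1

sleep 60 &                    # stands in for the real job
job=$!

( sleep 1; kill -USR1 $$ ) &  # timer subprocess: fires after 1 second
timer=$!

# A trapped signal interrupts the wait builtin, which then returns
# with a status greater than 128.
wait "$job"
status=$?

if [ "$timed_out" -eq 1 ]; then
    echo "wait timed out (wait returned $status)"
    kill "$job" 2>/dev/null   # give up on the job
fi
kill "$timer" 2>/dev/null     # tidy up if the timer is still running
```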
How difficult would it be to add a timeout to "wait"?
How difficult would it be to add an optional timeout argument to the
"wait" builtin?  I occasionally run into situations where this
functionality is desirable, more often than I expect, really.  And
though the last thing Bash needs is an additional feature, naively it
seems like it wouldn't be too difficult.  E.g. "read" already has a
timeout feature.

But I looked into the code, and all the variations and implementations
of "wait" map reasonably directly into one or another of the Posix
"wait" calls.  I'm no expert there, but I tried looking and there
doesn't seem to be any version of those that allows the program to
specify a timeout time.  I considered setting an alarm() before
calling wait() but a quick look on Stack Overflow said that is a good
way to lose the status information from child jobs.

Dale
Re: Document that here strings don't support brace expansion.
Chet Ramey writes:
> If they're linked, why wouldn't saying filename generation isn't
> performed be enough to imply that brace expansion isn't performed
> either?

Hmm, yes, good point.

Dale
Re: Document that here strings don't support brace expansion.
Alex Bochannek writes:
> "The WORD undergoes tilde expansion, parameter and variable
> expansion, command substitution, arithmetic expansion, and quote
> removal.  Filename expansion and word splitting are not performed."
>
> It is missing brace expansion, which is not supported:

Interesting ...  I would recommend adding brace expansion to the list
of things-not-done because I think it's a common cognitive error to
include brace expansion as part of filename expansion -- it's one of
those things you do on the command line to generate a list of the
files you want.  In particular, *I* am subject to that cognitive
error; whenever I don't think carefully about it, I don't distinguish
the two.

Dale
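The difference is easy to see side by side (a minimal check of the behavior the man-page list should mention):

```shell
#!/usr/bin/env bash

# On a command line, brace expansion produces two words:
echo {a,b}          # prints: a b

# As the WORD of a here string, brace expansion is not performed, so
# the braces arrive literally on cat's stdin:
cat <<< {a,b}       # prints: {a,b}
```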
Re: The memory occupied by bash has been increasing due to the fork bomb
zju <21625...@zju.edu.cn> writes:
>> Interactive shells always ignore SIGTERM.
>
> I confirmed that the fork bomb through bash would cause the system
> oom!  This indicates that anybody can use this flaw to crash the
> system.  It is quite dangerous.
>
> If you think the behavior of ignoring the SIGTERM is reasonable.
> Maybe the only way to solve the problem is to deal with the
> increasing of the memory?

The Un*x convention has always been that SIGTERM kills the process but
the process can override that, and SIGKILL kills the process and the
process cannot override that.  So if systemd isn't protecting the
system adequately with its current operation, it should instead send
SIGKILL.

In regard to OOM, if the goal is to prevent fork bombs, the system
administrator would need to set a hard limit on "ulimit -u", "The
maximum number of processes available to a single user", as well as
"ulimit -d", "The maximum size of a process's data segment".  Changing
the behavior of bash alone could not prevent an attacker from forcing
OOM; it would just require the attacker to be more sophisticated.

Dale
Re: Having an alias and a function with the same name leads to some sort of recursion
Chet Ramey writes: > On 2/14/23 2:58 PM, Dale R. Worley wrote: >>>> Looking at the manual page, it says >>>> >>>> ALIASES >>>> Aliases allow a string to be substituted for a word when it is >>>> used as >>>> the first word of a simple command. >> >> Martin suggested (but IIUC didn't send to this list): >>> "Beginning of a simple command" should probably be replaced by something >>> more along the lines of "beginning of any command that does not start with >>> a keyword (such as "while", "if", "case", etc) or assignment. >> >> Though I think by "keyword" he means "reserved word". > I think the issue is that he's applying a grammar interpretation (simple > command) to something that is a completely lexical operation. Alias > expansion happens in the lexical analyzer, and it happens when a token > can potentially be the first word of a simple command. Well, (1) I'm looking at it based on the *documentation*, which says "simple command". And my essential point is that the documentation should be adjusted to handle this specific case, viz. alias-izing the name of a function that one wants to define without a "function" reserved word. Let me reiterate that, for a lot of these odd points, I'm much more fussy that the documentation describes what Bash does than that I particularly prefer the choice the Bash implementation makes. All of that is qualified by (2) The actual workings of aliases are complicated, and as you note, happen in the lexing rather than the parsing. But contrary to my point (1), I'm willing to tell anyone who uses aliases to modify things that are grammatically salient (rather than command names) that they are in "here be dragons" territory, it's their problem if the documentation doesn't clearly delineate what will happen, and they need to test examples to tell. So then, (3) What is a practical change to the manual page?
The first three sentences in version 5.1.0(1) are: ALIASES Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. The shell maintains a list of aliases that may be set and unset with the alias and unalias builtin commands (see SHELL BUILTIN COMMANDS below). The first word of each simple command, if unquoted, is checked to see if it has an alias. I think this change covers the case we're talking about, clarifies the second sentence a bit, and seems to be well-aligned with the more detailed truth: Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. The alias and unalias builtin commands (see SHELL BUILTIN COMMANDS below) set and unset aliases. Reserved words may not be aliased, but all other tokens may. A word in a position which could start a simple command, if unquoted, is checked to see if it has an alias. (A side effect is that the function name in a function definition that does not start with the "function" keyword is checked for alias expansion.) Dale
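The side effect described in the last sentence is easy to reproduce in a script (expand_aliases is needed there, since non-interactive shells don't expand aliases by default):

```shell
shopt -s expand_aliases
alias cmd='echo'
# "cmd" is in a position that could start a simple command, so it is
# alias-expanded before the "()" is seen: this defines a function
# named "echo", not one named "cmd".
cmd() { builtin echo "wrapped"; }
declare -F echo     # prints: echo
```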
Re: Having an alias and a function with the same name leads to some sort of recursion
>> Looking at the manual page, it says >> >> ALIASES >> Aliases allow a string to be substituted for a word when it is used >> as >> the first word of a simple command. Martin suggested (but IIUC didn't send to this list): > "Beginning of a simple command" should probably be replaced by something > more along the lines of "beginning of any command that does not start with > a keyword (such as "while", "if", "case", etc) or assignment. Though I think by "keyword" he means "reserved word". Dale
Re: number of bugs
There is a bug tracker at https://savannah.gnu.org/support/?group=bash. Dale
Re: "builtin jobs" does not output to stdout.
Chet Ramey writes: > Yes. As described in later messages, there is a special case that allows > the output of `jobs' to be piped. This came in at some point between > bash-1.05 (March 1990) and bash-1.08 (May 1991). I never extended this > special case to `builtin' or `command'. Is this (are these) special cases documented? I just looked through my (admittedly obsolete) bash man page and couldn't find any mention of special behavior of the "jobs" builtin. But presumably careful reading of the documentation of "command", "builtin", and "jobs" would show which cases work in which way. Dale
Re: Having an alias and a function with the same name leads to some sort of recursion
Robert Elz writes: > | Aliases are not used in bash scripts, unless bash is invoked in POSIX > | compatibility mode, or the "expand_aliases" shopt is turned on. > > I think that's what must have happened ... the infinite loop of > echo commands suggests that the function definition > > cmd() { echo "$@" ; } > > was converted by the alias into > > echo() { echo "$@" ; } > > and when you see that, it is obvious why cmd a b c (which becomes echo a b c) > just runs echo which runs echo which runs echo which ... Heh -- but OTOH, if you use function cmd() { echo "$@" ; } you *don't* get that behavior. Looking at the manual page, it says ALIASES Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. That makes it clear why the second case behaves as it does. But my reading of the definition of "simple commands" implies that function definitions are not simple commands, and alias substitution should not be done on them (that is, on the initial part) in any case. Dale
Re: [PATCH] local_builtin: do not overwrite previously saved options with local -
This is tricky. The documentation is If NAME is '-', the set of shell options is made local to the function in which 'local' is invoked: shell options changed using the 'set' builtin inside the function are restored to their original values when the function returns. It does seem to mean that executing 'local -' causes the "original" values to be restored. I could argue that "original" in this context could mean either just before the "set" is executed or just as the function is entered, but both have the same result. The tricky part is that this reading is *retrospective*: the options may have already been changed in this function invocation. And bash 5.1.0 doesn't do that: $ function a () { set -C ; local -; } $ echo $- bhimBHs $ a $ echo $- bhimBCHs $ The behavior of bash appears to be "future changes in shell options using the 'set' builtin inside the current function invocation are restored to their prior values when the function returns". Dale
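The prospective case, where "local -" precedes the "set", does behave as documented; a minimal check (assuming noclobber is off to start with, as it is in a fresh script):

```shell
f() {
  local -       # option changes made below are local to f
  set -C        # enable noclobber inside f
  case $- in *C*) echo "inside f: noclobber set";; esac
}
f
# On return from f, the option is restored:
case $- in *C*) echo "after f: still set";; *) echo "after f: restored";; esac
```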
Re: unset does not remove functions like a[b] unless -f is specified
Greg Wooledge writes: > I'd be totally OK with restricting the function namespace a bit more. > Function names should not be allowed to contain backticks or less-than > or greater-than signs (in my opinion). I'm still undecided about > parentheses, but I'm leaning toward "denied". I'd be perfectly fine if function names had to be "names", and my memory is that old versions of Bash enforced that. But the manual page hints that was changed, and probably for a reason. So proceed with caution. When in posix mode, fname must be a valid shell name and may not be the name of one of the POSIX special builtins. In default mode, a function name can be any unquoted shell word that does not contain $. Dale
Re: unset does not remove functions like a[b] unless -f is specified
Emanuele Torre writes: > bash-5.1$ a[0] () { echo;} > bash-5.1$ unset 'a[0]' Admittedly, I'm running Bash 5.1.0, but the manual page says: fname () compound-command [redirection] function fname [()] compound-command [redirection] ...in posix mode, fname must be a valid shell name and may not be the name of one of the POSIX special builtins. In default mode, a function name can be any unquoted shell word that does not contain $. ... So it does seem that a function named "a[0]" is valid in default mode. (And as long as there is no file named "a0" and nullglob is not enabled (the default), you don't have to quote the name to have it be the word that is the argument of the command. So "unquoted" is true.) unset [-fv] [-n] [name ...] ... If no options are supplied, each name refers to a variable; if there is no variable by that name, a function with that name, if any, is unset. So taking that text strictly, "unset a[0]" should attempt to remove the variable a[0], and if it does not exist, attempt to remove the function with that name. Whether that change would make Bash more useful is debatable. It's possible that changing the documentation to match the code would be more useful. Dale
Re: More convenient tracing
Greg Wooledge writes: > On Wed, Jan 25, 2023 at 03:00:27PM -0500, Dale R. Worley wrote: >> >> Tracing with -x prints a lot of (usually) useless lines. >> >> $ bash -x ./tt >> [300+ lines of Bash initializations] >> + echo 'Now in tt.' >> Now in tt. > > Why does it do this? Have you got BASH_ENV set to something in your > environment? I do have BASH_ENV set, to ~/.bashrc. I need that so that my scripts can use my .bashrc customizations. Though I do know that I haven't gotten the Bash startup scripts set correctly for the effects I want; fixing that would cut most but by no means all of the "Bash initializations" listed above. Dale
More convenient tracing
Some time ago I proposed a new option to Bash to cause it to trace commands (in the manner of -x) but not within the Bash initialization scripts. People advised me that this could be accomplished without a new option. I also picked up various suggestions for how to design it. This is my latest version. Comments are requested. The feature is configured by adding the following lines to the end of ~/.bashrc. Setting the environment variable BASH_XTRACE causes Bash to trace the body of the script which it is currently executing. The value of BASH_XTRACE is the file name into which to write the trace. Similarly BASH_XTRACE_ALL causes Bash to trace the body of the current script and any scripts which are invoked as subprocesses. The aliases XTRACE and XTRACE_ALL cause tracing into stderr, which is the common case. $ cat ~/.bashrc [various customizations] # Insert at the end of ~/.bashrc. # "BASH_XTRACE=file shell-script" causes the body of shell-script to be executed # with xtrace and xtrace output sent to "file". # "BASH_XTRACE_ALL=file command" causes the bodies of all descendant # shell scripts to be executed with xtrace and xtrace output sent to # "file". # Aliases to make using these features in the standard way easy. alias XTRACE='BASH_XTRACE=/dev/stderr ' alias XTRACE_ALL='BASH_XTRACE_ALL=/dev/stderr ' if [[ -n "$BASH_XTRACE" ]] || [[ -n "$BASH_XTRACE_ALL" ]] then exec {BASH_XTRACEFD}>${BASH_XTRACE:-$BASH_XTRACE_ALL} unset BASH_XTRACE set -x fi Two test scripts; uu invokes tt. $ cat ./tt #! /bin/bash echo 'Now in tt.' $ cat ./uu #! /bin/bash echo 'Now in uu.' ./tt Tracing with -x prints a lot of (usually) useless lines. $ bash -x ./tt [300+ lines of Bash initializations] + echo 'Now in tt.' Now in tt. XTRACE traces only the commands in the script itself. $ XTRACE bash ./tt + echo 'Now in tt.' Now in tt. XTRACE of uu shows the invocation of tt but not the commands within tt. $ XTRACE bash ./uu + echo 'Now in uu.' Now in uu. + ./tt Now in tt. 
XTRACE_ALL of uu also shows the commands within tt. $ XTRACE_ALL bash ./uu + echo 'Now in uu.' Now in uu. + ./tt + echo 'Now in tt.' Now in tt. $ Dale
Re: Possible bug in bash
Greg Wooledge writes: > On Fri, May 13, 2022 at 10:36:56PM -0400, Dale R. Worley wrote: >> Reading your message, I believe that the rule can be stated as follows, >> and I'd thank you to check it: && and || have the same precedence, and >> they both "associate left". So for example >> x && yy || zz >> is equivalent (as a control structure) to >> { x && yy ;} || zz > > Not really. Let's say you have a bunch of commands strung together like > this: > > a && b || c && d || e && f || g [ most of the exposition snipped ] > And so on, until the entire line has been processed. Each simple command > in the line is either executed, or not, depending on the current value > of $? and the operator which precedes it. > > That's why this has no equivalence to a regular "if/then/else" command. > The implementation is just entirely different. Uh, I didn't say it was equivalent to a 'regular "if/then/else"' command, I said it was equivalent to { { { { { a && b ;} || c ;} && d ;} || e ;} && f ;} || g which indeed has the same effect as you described in your message. Dale
Re: `declare -f "a="' fails unnecessarily
Andreas Schwab writes: >> In default mode, you actually can do >> $ function a=b { printf hi\\n; } >> though you can't execute it: >> $ a=b foo >> bash: foo: command not found > > You just have to quote any part of the function name upto the equal sign > to stop if from being interpreted as an assignment. > > $ \a=b foo > hi Oh, wow! I guess that makes sense but I'd never imagine that one would want it to make sense! Dale
Re: `declare -f "a="' fails unnecessarily
Emanuele Torre writes: > `declare -f "something="' fails with the following error: > > $ declare -f 'a=x' > bash: declare: cannot use `-f' to make functions > That error is not very useful. Bash makes `declare -f' fail with that > error when an argument looks like an assignment. It's an interesting mess. Looking at the definition of "declare", the "=" is used to separate the name from the value being assigned to the name: declare [-aAfFgiIlnrtux] [-p] [name[=value] ...] So the statement above is an attempt to declare the name "a" as a function, with the value somehow being "x". There's a difficulty because recent Bashes have allowed function names that are not "names" in the Bash sense: fname () compound-command [redirection] function fname [()] compound-command [redirection] This defines a function named fname. [...] When in posix mode, fname must be a valid shell name and may not be the name of one of the POSIX special builtins. In default mode, a function name can be any unquoted shell word that does not contain $. name A word consisting only of alphanumeric characters and underscores, and beginning with an alphabetic character or an underscore. Also referred to as an identifier. In default mode, you actually can do $ function a=b { printf hi\\n; } though you can't execute it: $ a=b foo bash: foo: command not found You say the error is not very useful, but it seems to me that the error is doing exactly what is intended; you *shouldn't* have an argument that looks like an assignment. IMO the fact that you can use "function" to declare a function with "=" in its name is a mis-feature. Dale
Re: declare XXX=$(false);echo $?
Chet Ramey writes: > On 12/2/22 5:28 AM, Ulrich Windl wrote: >> Surprisingly "declare XXX=$(false);echo $?" outputs "0" (not "1") >> There is no indication in the manual page that "declare" ignores the >exit code of commands being executed to set values. > > Why do you think it should? `declare' has a well-defined return status. There it is, end of "SIMPLE COMMAND EXPANSION": If there is a command name left after expansion, execution proceeds as described below. Otherwise, the command exits. If one of the expansions contained a command substitution, the exit status of the command is the exit status of the last command substitution performed. If there were no command substitutions, the command exits with a status of zero. and: declare [-aAfFgiIlnrtux] [-p] [name[=value] ...] typeset [-aAfFgiIlnrtux] [-p] [name[=value] ...] [...] The return value is 0 unless an invalid option is encountered, an attempt is made to define a function using ``-f foo=bar'', an attempt is made to assign a value to a readonly variable, an attempt is made to assign a value to an array variable without using the compound assignment syntax (see Arrays above), one of the names is not a valid shell variable name, an attempt is made to turn off readonly status for a readonly variable, an attempt is made to turn off array status for an array variable, or an attempt is made to display a non-existent function with -f. If you input "XXX=$(false)", there isn't a command name, it's a sequence of assignments, and "the exit status of the command is the exit status of the last command substitution performed". But if you input "declare XXX=$(false)", you're executing the "declare" command, and the exit status doesn't depend on the command substitution. Dale
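The two cases are easy to compare side by side:

```shell
XXX=$(false)
echo "plain assignment: $?"   # 1: the status of the command substitution
declare YYY=$(false)
echo "with declare:     $?"   # 0: declare's own return status
```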
Re: Bad leaks file fd to child processes
Greg Wooledge writes: > The fact that pvs *complains* about this instead of just ignoring it makes > it fairly unique. I don't know why the authors of pvs chose to do this. > Perhaps they were working around some known or suspected bug in some other > program that was commonly used as pvs's parent. I've always assumed that there was some code inside pvdisplay and other LVM programs that verified that all the fd's opened by the process were properly closed before it terminated, to give feedback to the developers. And I assumed that the fact that pvdisplay (in the older version) always printed such a message showed that there was such a bug in its code. But it's possible it was due to some aspect of my environment. Dale
Re: Bad leaks file fd to child processes
Alexey via Bug reports for the GNU Bourne Again SHell writes: > Same behavior was in bash 4.4 (as well as now in bash 5.2): > > # echo $BASH_VERSION > 4.4.0(1)-release > # exec 66< /etc/hosts > # pvs > File descriptor 66 (/etc/hosts) leaked on pvs invocation. Parent PID > 1057606: ./bash > > But we use the fact that bash doesn't close FD for example to preliminary > open log file for utility that we will `exec' later. > Unfortunately bash doesn't provide any fcntl() mechanism to control FD > flags. > It'll be great to have ability to manage CLOEXEC or any other FD flags > from the script. Interesting! I had misunderstood the complaint, which is that the fd leaks *from* bash *to* the program being executed (pvs in the example). One aspect is that this is the specified behavior of bash. It's fairly straightforward to prevent a particular fd from leaking to a single command by explicitly closing it in a redirection on the command: # echo $BASH_VERSION 5.1.0(1)-release # exec 66< /etc/hosts # pvs 66<&-
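For the single-command case, the pattern looks like this (Linux is assumed for the /proc check; /dev/null stands in for the log file):

```shell
exec 66< /dev/null    # open fd 66 in this shell, as for a log file
# A child process inherits it...
bash -c 'test -e /proc/self/fd/66 && echo inherited || echo closed'
# ...unless a redirection closes it for that one command:
bash -c 'test -e /proc/self/fd/66 && echo inherited || echo closed' 66<&-
exec 66<&-            # close it in the parent again
```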
Re: Bad leaks file fd to child processes
A user via Bug reports for the GNU Bourne Again SHell writes: > I find that the file descriptor leaks when I execute the command pvs > in bash 5.2, The abnormal scenario is similar to the bug which > reported by > http://lists.gnu.org/archive/html/bug-bash/2017-01/msg00026.html > > When I execute pvs in the terminal opened through xshell, it's ok > > PV VG Fmt Attr PSize PFree > /dev/sda2 euleros lvm2 a-- <126.00g 0 My memory is that older versions of pvs (which were called pvdisplay) would always report that a descriptor was leaked. It wasn't clear why they reported it, since pvdisplay must have been noticing that *it* wasn't closing the descriptor(s) properly, but since the process terminated within seconds, the descriptors were closed by process termination anyway. This report seems likely to be the same thing, the descriptor leakage is in the pvs process, and will be cleaned up by process termination. A simple way to verify whether bash is leaking fd's is: # # List all the fd's that Bash has open. # ls -l /proc/$$/fd total 0 lrwx------. 1 root root 64 Nov 27 22:12 0 -> /dev/pts/2 lrwx------. 1 root root 64 Nov 27 22:12 1 -> /dev/pts/2 lrwx------. 1 root root 64 Nov 27 22:12 2 -> /dev/pts/2 lrwx------. 1 root root 64 Nov 27 22:13 255 -> /dev/pts/2 lr-x------. 1 root root 64 Nov 27 22:12 3 -> /var/lib/sss/mc/passwd lrwx------. 1 root root 64 Nov 27 22:13 4 -> 'socket:[1390116]' # # Run some program. # pvs PV VG Fmt Attr PSize PFree /dev/sda3 Hobgoblin01 lvm2 a-- <300.00g <40.00g /dev/sda5 Hobgoblin01 lvm2 a-- <630.02g <580.02g /dev/sdb3 Hobgoblin01 lvm2 a-- <1.82t 1.77t # # Again list all the fd's that Bash has open. # ls -l /proc/$$/fd total 0 lrwx------. 1 root root 64 Nov 27 22:12 0 -> /dev/pts/2 lrwx------. 1 root root 64 Nov 27 22:12 1 -> /dev/pts/2 lrwx------. 1 root root 64 Nov 27 22:12 2 -> /dev/pts/2 lrwx------. 1 root root 64 Nov 27 22:13 255 -> /dev/pts/2 lr-x------. 1 root root 64 Nov 27 22:12 3 -> /var/lib/sss/mc/passwd lrwx------. 1 root root 64 Nov 27 22:13 4 -> 'socket:[1390116]' # Dale
Re: feature request: new builtin `defer`, scope delayed eval
The Go programming language has a "defer" statement which is used considerably for exactly this purpose. So we know that it's useful in practice. The question remains what is a good way to introduce it into Bash. As others have noted, there is already a "trap" with similar functionality. I'm not familiar with it, but it seems that it does not have "signal" choices that cover the situation(s) we want "defer" to be triggered by. However, that seems to be a straightforward extension. More important is "safely appending to a trap can be filled with holes". Why don't we allow a trap to be a sequence of strings, and define a new option to "trap" that prepends the argument string to the sequence? (Go specifies that "defer" actions are executed in reverse order from the execution of the "defer" statements that established them.) This approach seems to have the functionality we want while being a small extension of an existing, similar mechanism. Dale
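A sketch of that idea in today's bash, keeping the sequence in a shell array rather than packing everything into one trap string (the function names here are mine, not an existing feature):

```shell
declare -a _DEFERRED=()
defer() { _DEFERRED+=("$*"); }
run_deferred() {
  local i
  # Run in reverse order of registration, as Go's defer does.
  for (( i = ${#_DEFERRED[@]} - 1; i >= 0; i-- )); do
    eval "${_DEFERRED[i]}"
  done
}
trap run_deferred EXIT
defer 'echo cleanup-1'
defer 'echo cleanup-2'
# at exit: cleanup-2, then cleanup-1
```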
Re: Found Bug and generated report using Bashbug
Chet Ramey writes: > On 9/5/22 6:13 PM, Aryan Bansal wrote: > >While testing how bash handles foreground and background processes > >using "&", I found out that the behaviour is sometimes undefined. The > >prompt sometimes prints just as the background process finishes executing > >resulting in a glitch that makes the prompt unable to be used until Enter > >is pressed. > > This is not a bug. Since these are different processes, scheduled > independently, bash doesn't have control of when the background processes > print their output. In addition, the characters you input are handled asynchronously to both the process you background and foreground execution. You can "use the prompt" by typing whenever you want. The characters you type are queued up by the kernel and get read by whichever process tries to read the input first. In your case, only the foreground process will listen for input. Dale
Re: test or [ does not handle parentheses as stated in manpage
Julian Gilbey writes: > Upgrading to bash 5.2.0(1)-rc2 did not help, neither did using \( > instead of (, and neither did putting spaces around the parentheses. It's ugly. The first point is that ( and ) are special characters and if unquoted are isolated tokens that have special syntax. So in order to get [ to see them as arguments, you have to quote ( and ). But the quoted characters are ordinary characters and to make them be separate arguments to [, you have to separate them from the adjacent arguments with spaces. So this version works: if [ \( "$1" = "yes" -o "$1" = "YES" \) -a \( "$2" = "red" -o "$2" = "RED" \) ] then echo "Yes, it's red" else echo "No, it's not red" fi Dale
Re: Light weight support for JSON
Greg Wooledge writes: > The standard idiom for this sort of thing is > > eval "$(external-tool)" > > This means you need to *trust* the external-tool to produce safe code. True. And I use that idiom with ssh-agent routinely. But it still strikes me as unnatural. Dale
Re: Light weight support for JSON
The "obvious" way to support Json in Bash would be a utility that parses Json and produces e.g. a Bash associative array, and conversely a utility that reads a Bash associative array and produces Json. The real limitation is that it's difficult to have a subprocess set Bash's variables. As far as I know, there's no good idiom for that. Dale
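The closest thing I know of is reading the parser's output in the current shell via process substitution, which dodges the pipeline-subshell problem; a hedged sketch, where "emit_pairs" is a stand-in for a real JSON-flattening tool:

```shell
# Stand-in for a utility that flattens a JSON object to key<TAB>value lines.
emit_pairs() { printf '%s\t%s\n' name bash type shell; }

declare -A obj=()
# Process substitution keeps the while loop in the current shell,
# so the assignments to obj survive the loop.
while IFS=$'\t' read -r key value; do
  obj[$key]=$value
done < <(emit_pairs)

echo "${obj[name]} is a ${obj[type]}"   # prints: bash is a shell
```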
Re: add custom environment variable in bash source code
Sam writes: > You probably want to edit /etc/ld.so.conf or /etc/ld.so.conf.d/* instead. The overall concept is that you almost certainly don't want to modify the bash source code (and thus executable) to do this. In general, if you want to have a particular environment variable set "all the time", you insert "export X=..." in one of your startup files. If you want it visible to everybody, you insert that in one of the system startup files. Exactly which file to modify depends on which processes you want affected and how your OS handles these things. However, the original question is > I want to automatically add LD_PRELOAD before starting bash to make > this dynamic library work I see looking at the ld.so manual page: /etc/ld.so.preload File containing a whitespace-separated list of ELF shared libraries to be loaded before the program. So if what you want is for all processes to preload a particular library, add its name to this file. Or rather, check how your particular OS provides this facility. Dale
Re: local -r issue in conjunction with trap
Robert Stoll writes: > test1 # works as exit happens outside of test1 > test2 # fails with ./src/test.sh: line 6: local: readonlyVar: readonly > variable Beware that you haven't specified what you mean by "works" and "fails". I assume from the context that "fails" means "produces an error message", but it's much harder to guess what "works" means. In general, when reporting a problem, always explicitly answer "What do I expect to happen?" and "What happens?". Dale
Re: Gettings LINES and COLUMNS from stderr instead of /dev/tty
Martin Schulte writes: > I'm just wondering that bash (reproduced with 5.2-rc1 under Debian 11) > seems to determine LINES and COLUMNS from stderr. It's not clear to me that the manual page says where the LINES and COLUMNS values are obtained from. Dale
Re: Arithmetic expression: interest in unsigned right shift?
Steffen Nurpmeso writes: > I realized there is no unsigned right shift in bash arithmetic > expression, and thought maybe there is interest. This would be difficult to define cleanly. Currently, arithmetic values are considered to be signed, and >> operates on them as such. So $ echo $(( 1 >> 1 )) 0 $ echo $(( 2 >> 1 )) 1 $ echo $(( 3 >> 1 )) 1 $ echo $(( (-1) >> 1 )) -1 $ echo $(( (-2) >> 1 )) -1 $ echo $(( (-3) >> 1 )) -2 $ echo $(( (-4) >> 1 )) -2 $ For positive values, unsigned right shift would be the same as >>. But for negative numbers, the value has to be cast into an unsigned value, which is then right-shifted (equivalently, divided by a power of 2), and the resulting value then has to be cast back into a signed value. But that will depend on (reveal) the word length of Bash arithmetic computation: (-1) >>> 1 will be equal to 2#0..., which prints as a positive number. In contrast the current Bash arithmetic model is "word-length agnostic as long as you don't overflow", it acts as if the values are mathematical integers. Dale
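For what it's worth, the masking trick makes the word-length dependence explicit; a sketch assuming the usual 64-bit bash arithmetic (the function name is mine):

```shell
# Unsigned right shift by N (1 <= N <= 63), assuming 64-bit arithmetic:
# the first shift is masked to clear the sign bit, after which the value
# is non-negative and the remaining shifts behave "unsigned" anyway.
ushr() {
  local x=$1 n=$2
  echo $(( ( (x >> 1) & 0x7fffffffffffffff ) >> (n - 1) ))
}
ushr -1 1    # 9223372036854775807, i.e. 2^63 - 1
ushr -4 1    # 9223372036854775806
```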
Re: Parallelization of shell scripts for 'configure' etc.
Paul Eggert writes: > In many Gnu projects, the 'configure' script is the biggest barrier to > building because it takes so long to run. Is there some way that we > could improve its performance without completely reengineering it, by > improving Bash so that it can parallelize 'configure' scripts? It seems to me that bash provides the needed tools -- "( ... ) &" lets you run things in parallel. Similarly, if you've got a lot of small tasks with a complex web of dependencies, you can encode that in a "makefile". It seems to me that the heavy work is rebuilding how "configure" scripts are constructed based on which items can be run in parallel. I've never seen any "metadocumentation" that laid out how all that worked. Dale
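As a trivial sketch of the pattern (the "checks" are placeholders; a real configure would need the dependency analysis discussed above):

```shell
tmp=$(mktemp -d)
# Run independent checks in parallel subshells, each writing its result:
( echo 'checking for cc... ok' > "$tmp/cc" ) &
( echo 'checking for ld... ok' > "$tmp/ld" ) &
wait                      # block until both finish
cat "$tmp/cc" "$tmp/ld"   # collect the results in a fixed order
rm -rf "$tmp"
```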
Re: Unfortunate error message for invalid executable
AA via Bug reports for the GNU Bourne Again SHell writes: > I.e., something like "I'm not sure what's going on, but your path > definitely exists, yet the kernel says otherwise." > > ... something like fprintf(STDERR,"No such file or directory while > attempting to execute %s (it exists, but cannot be executed)",path); Historically, the way to get something like this to happen is to design and code the modification that does it. That has the advantage that you have to bite the bullet and instead of just describing the general idea, decide on a concrete implementation. That sounds obvious, but there is a long history of ideas in software that *sound good* but for which there is no implementation that sucks less than the problem the idea seeks to solve. Dale
Re: Unfortunate error message for invalid executable
Chet Ramey writes: > On 5/26/22 2:27 PM, AA via Bug reports for the GNU Bourne Again SHell wrote: >> When a user attempts to execute an executable that is not >> recognized as an executable by the system, the generated error is "No such >> file or directory" > > In this case, it's the errno value returned from execve(2), and it's > exactly correct, at least from the kernel's perspective. > > It's not that the executable isn't recognized or in an invalid format, in > which case execve would return ENOEXEC. It's that the ELF header specifies > a particular interpreter to run on the file (e.g., ld.so), and that file is > the one that is not found (ENOENT). This parallels the annoying Unixism that if you attempt to execute a file that is marked executable that has a #! interpreter specification, but the specified interpreter does not exist, the generated error is "No such file or directory". It would be nice if the kernel generated a separate errno for "a supporting executable for this executable file does not exist" but nobody's bothered to do that. Dale
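The case is easy to reproduce (the exact wording and exit status of the error vary between bash versions, so neither is asserted here):

```shell
script=$(mktemp)
printf '#!/no/such/interpreter\n' > "$script"
chmod +x "$script"
"$script" 2>/dev/null          # exec fails: the *interpreter* is missing
status=$?
echo "exists and executable: $([ -x "$script" ] && echo yes || echo no); exec status: $status"
rm -f "$script"
```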
Re: Possible bug in bash
Robert Elz writes: > Note particularly that there is no operator precedence between > && and || - they are the same (unlike in C for example) Reading your message, I believe that the rule can be stated as follows, and I'd thank you to check it: && and || have the same precedence, and they both "associate left". So for example x && yy || zz is equivalent (as a control structure) to { x && yy ;} || zz Dale
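That left-associating reading predicts, for example:

```shell
false && echo A || echo B   # B: the || tests the failed "false && echo A" group
true || echo C && echo D    # D: the && tests the successful "true || echo C" group
```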
Re: bash 5.1 heredoc pipes problematic, shopt needed
Sam Liddicott writes: > Listed in the changes: > c. Here documents and here strings now use pipes for the expanded >document if it's smaller than the pipe buffer size, reverting >to temporary files if it's larger. > > This causes problems with many programs suffering from the TOCTOU > bug of checking whether or not the input is actually a file > instead of just using it as one. Ugh! Because of course, the bash manual does not specify that a here document input will be a file, just that it will be an FD from which the text can be read. Dale
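A script can see the difference directly; what the probe below prints depends on the bash version and the document's size, which is exactly the compatibility hazard (Linux /dev/stdin assumed):

```shell
# Is the child's stdin a pipe or a regular file?
probe() { [ -p /dev/stdin ] && echo pipe || echo file; }
probe <<< 'short'                           # bash >= 5.1: typically "pipe"
probe <<< "$(printf 'x%.0s' {1..70000})"    # larger than the pipe buffer: "file"
```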
Re: Bash regexp parsing would benefit from safe recursion limit
willi1337 bald writes: > A deeply nested and incorrect regex expression can cause exhaustion of > stack resources, which crashes the bash process. Further, you could construct a deeply nested regex that is correct but would still crash the process. It's hard to define what should happen in a way that is implementable -- there are innumerable programs that are theoretically correct but exhaust the stack if you try to execute them. More or less what you want is some sort of "checkpoint" of the status of the overall computation (shell process) that you would return to. (a continuation!) Your suggestion is effectively that the checkpoint is "when bash prompts for the next command". But in a sense, that's what crashing the process is, too -- you return to the checkpoint "before you started the shell". If you had to worry about this in practice, you'd turn $ command1 $ command2 $ command3 into $ bash -c command1 $ bash -c command2 $ bash -c command3 Dale
Re: Sus behaviour when cmd string ends with single backslash
vzvz...@gmail.com writes: > The mentioned bug is indeed fixed by this change. However, in case of > another edge case following new behaviour is observable: > > $ bash -c 'echo \' > \ > $ # backslash appears on output It's an interesting case, since the command that Bash is executing is e-c-h-o-space-backslash with no character at all after the backslash. The manual page says A non-quoted backslash (\) is the escape character. It preserves the literal value of the next character that follows, with the exception of <newline>. If a \<newline> pair appears, and the backslash is not itself quoted, the \<newline> is treated as a line continuation (that is, it is removed from the input stream and effectively ignored). Which doesn't seem to consider this case at all. The two a priori plausible behaviors are for the backslash to be taken literally (which is what happens) or for it to vanish as some sort of incomplete escape construct. So you could plausibly say that the behavior of a backslash before "end of file" isn't defined. Dale
Re: Zero-length indexed arrays
Lawrence Velázquez writes:
> Did you mean to say that ${#FOO[*]} causes an error? Because
> ${FOO[*]} does not, a la $*:

The case that matters for me is the Bash that ships with "Oracle Linux". Which turns out to be version 4.2.46(2) from 2011, which is a lot older than I would expect. But it *does* cause an error in that version:

    $ ( set -u ; FOO=() ; echo "${FOO[@]}" )
    bash: FOO[@]: unbound variable
    $ bash -uc ': "${FOO[*]}"'
    bash: FOO[*]: unbound variable
    $

> Like ${FOO[*]}, ${FOO[@]} and $@ are exempt from ''set -u''.

It looks like that's a change since 4.2.46. Is there text in the manual page about that? Dale
Zero-length indexed arrays
A bit ago I was debugging a failing script at work. It turns out that when you say

    FOO=(x y z)

then the variable FOO is an array and is defined. But when you say

    FOO=()

then the variable FOO is an array (because ${#FOO[*]} substitutes an integer, viz. 0) but it is *not* defined (because ${FOO[*]} generates an error when "set -u" is in effect). I really don't like this. But it is according to the manual (because FOO has no index that has a value), and no doubt there are scripts out there that depend subtly on this case. It turns out that this can matter, if you do something like this:

    set -u   # for safety
    ...
    if ...
    then FOO=(...)
    else FOO=()
    fi
    ...
    FOO_PLUS=("${FOO[@]}" x y z w)

Dale
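[Editorial note: not from the original post, but for scripts that must stay friendly to old Bashes under "set -u", the `${parameter+word}` alternate-value expansion is a common guard; a minimal sketch:]

```shell
set -u
FOO=()
echo "${#FOO[*]}"                          # the length expands fine: 0
# Under bash 4.2 and earlier, "${FOO[@]}" here would die with
# "unbound variable"; the ${parameter+word} form expands to nothing
# when the array has no set element, so this works across versions:
FOO_PLUS=( ${FOO[@]+"${FOO[@]}"} x y z w )
echo "${#FOO_PLUS[*]}"                     # 4
```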
Re: some.. bug .. ?
Alex fxmbsw7 Ratchev writes: > printf 'e=. ; (( $# > 1 )) && $e afile "${@:1:$# -1}"' >afile ; . afile 1 2 > 3 > > bash: : command not found This looks like another instance where you've constructed a command whose first word is the empty word. Try running with "set -x" and see what the command lines are when they're expanded. Dale
Re: read -t0 may report that input is available where none is possible
More to the essence: $ read -t0
Re: coproc does not work with a lower case variable name
g...@as.lan writes:
> Description:
> coproc gives a syntax error if I try to use it with a lower case
> variable name and a compound command:
> bash: syntax error near unexpected token `}'
> bash: syntax error near unexpected token `('
>
> Repeat-By:
> coproc bc { bc; }
> coproc bc (bc)

What happens if you give the coproc a name that is *not* a command you're using? I suspect that "bc" is an alias, and given that you can omit the NAME of a coproc, bash is expanding the first use of "bc" as an alias, because it might be the first word of a simple command. Dale
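[Editorial note: for reference, a coproc whose name does not collide with a command works as expected; a sketch (the name UPPER and the `stdbuf -oL` line-buffering workaround are my additions, not from the report):]

```shell
# stdbuf -oL forces line buffering so the reply arrives promptly
coproc UPPER { stdbuf -oL tr a-z A-Z; }
echo hello >&"${UPPER[1]}"      # write to the coproc's stdin
read -r line <&"${UPPER[0]}"    # read its reply
echo "$line"                    # HELLO
```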
Re: bash conditional expressions
Mischa Baars writes: > Using Fedora 32 (bash 5.0.17) this returns a true, while on Fedora 35 (bash > 5.1.8) this returns a false: > touch test; if [[ -N test ]]; then echo true; else echo false; fi; Well, looking at the manual page, I see -N file True if file exists and has been modified since it was last read. so it's clear what "-N" tests, in terms of the access and modification times of the file: mod time > access time One thing you have to worry about is how "touch" behaves, and whether *that* has changed between Fedora versions. I've run a few test uses of "touch" (in Fedora 34) and examined the consequences with "stat", and it's not clear to me exactly how "touch" behaves. In any case, to report an error against bash, you need to show what the modification and access times of the file are, and that -N does not behave as specified with regard to the file. In addition, as others have noted, the conventional build-control program "make" compares the modification time of the "output" file against the modification time of the "input" file, and uses that to control whether to reexecute a build step. It never looks at access times. This suggests you might want to reconsider whether "-N" is performing the best possible test for your application. Dale
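[Editorial note: a rough way to watch "-N" in action (my sketch; whether reading a file updates its access time depends on mount options such as relatime or noatime, so the second check is not guaranteed everywhere):]

```shell
f=$(mktemp)
sleep 1                  # let the clock move on coarse-timestamp filesystems
echo data > "$f"         # writing pushes mtime past atime
[[ -N $f ]] && echo "modified since last read"
cat "$f" > /dev/null     # reading updates atime (on relatime mounts)
[[ -N $f ]] || echo "read since last modification"
rm -f "$f"
```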
Re: cmd completion.. normal that it doesnt complete dirs when cmds are available ?
Alex fxmbsw7 Ratchev writes:
> i have a dir with some xbl*/ dirs and few xbl* commands ( via $PATH )
> i tried there xbl,tabtab but it only completed cmds
> is this desired ?

Naively, I would expect that when doing completion in the first position in a command, the completion process will only show you commands. In this case, do any of the directories contain executable files? If they do not, then there would be no use to show you the directories when attempting to complete the command name. Dale
Re: Why should `break' and `continue' in functions not break loops running outside of the function?
Oğuz writes:
>> It's a violation of scope.
>
> It's a violation of lexical scope, I'm asking why not implement
> dynamic scope, what's wrong with it?

Yes ... but the history of programming languages has been the history of learning that dynamic scoping is dangerous to program and lexical scoping is the Right Thing.

>> Can you name *any* other language where functions can break out of their
>> caller's loops? The only thing that comes to mind for me is C's "longjmp",
>> which I've never used even once. (Not that I do any C programming these
>> days, but back in the 1990s, I did.)

The way to think about longjmp is that it's a goto that wrenches you out of nested contexts, somewhat like calling a continuation in Scheme. But if you're using it sanely, it's clear what location the longjmp takes you to, whereas a dynamically-scoped break is not. Dale
Re: Arbitrary command execution from test on a quoted string
elettrino via Bug reports for the GNU Bourne Again SHell writes:
> The following shows an example of bash testing a quoted string and as
> a result executing a command embedded in the string.
>
> Here I used the command "id" to stand as an example of a command. The
> output of id on this machine was as follows:
>
> user@machine:~$ id
> uid=1519(user) gid=1519(user) groups=1519(user),100(users)
> user@machine:~$
>
> So to demonstrate:
>
> user@machine:~$ USER_INPUT='x[$(id>&2)]'
> user@machine:~$ test -v "$USER_INPUT"
> uid=1519(user) gid=1519(user) groups=1519(user),100(users)
> user@machine:~$
>
> This means that if variable USER_INPUT was indeed input from a user,
> the user could execute an arbitrary command.

This is true, but two qualifications should be applied: 1. Executing "test -v" on user input doesn't make sense, as the variable-name space inside the shell isn't something the user should interact with. 2. It isn't a security problem, because the user could execute the command directly. I leave it to people more steeped in the arcana to decide whether this action by "test -v" is an irregularity that should be changed. Dale
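[Editorial note: if one really did need to run "test -v" on user-supplied text, the obvious defense (my sketch, not from the thread) is to insist on a plain identifier before the string ever reaches test:]

```shell
user_input='x[$(id>&2)]'     # hostile input; harmless here, it is only matched
# Accept only a bare variable name; anything else never reaches test -v.
if [[ $user_input =~ ^[A-Za-z_][A-Za-z_0-9]*$ ]]; then
    test -v "$user_input" && echo "set" || echo "unset"
else
    echo "rejected: not a plain variable name"
fi
```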
Re: bash support for XDG Base Directory spec (~/.config/bash/)
Using XDG will have the complication that one normally sets environment variables like XDG_CONFIG_DIRS in ~/.bash_login. So I would add to the conversation, How much added value is there if XDG_CONFIG_DIRS is not used in practice? Dale
Re: ?maybe? RFE?... read -h ?
L A Walsh writes: > I know how -h can detect a symlink, but I was wondering, is > there a way for bash to know where the symlink points (without > using an external program)? My understanding is that it has been convention to use the "readlink" program for a very long time, so there's never been much demand to add it to bash. Of course, looking at the options to readlink shows that there are several different meanings of "where a symlink points". Dale
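[Editorial note: the "several different meanings" are visible in GNU readlink's options; a quick sketch in a scratch directory:]

```shell
cd "$(mktemp -d)"
touch target
ln -s target link
readlink link        # the raw link text: target
readlink -f link     # the canonicalized absolute path of the target
```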
Re: Interactive commands cant be backgrounded if run from bashrc
"C. Yang" writes:
> This may be because the bashrc file is still running, and bash itself
> perhaps does not finish initializing until everything in the bashrc
> completes. This may be why CTRL+Z does not work correctly (it might
> require bash to finish initializing first)

I'm sure that's the problem in your first case: If you put "emacs foo.bar" in .bashrc, it instructs Bash to run Emacs, wait until the job completes, and implicitly: finish processing .bashrc, then start listening to the terminal. It gets messier because the processing of ctrl-Z is done in the kernel, manipulating the state of processes, but it's possible that Bash doesn't activate job control in the terminal handler until processing .bashrc is finished. In any case, I wouldn't expect this case to work like you want.

> emacs test.txt &
> fg

> The first line, instead of backgrounding emacs, appeared to run it
> simultaneously with bash. This had the consequence that both bash and
> emacs were taking the same

Uh, "backgrounding" *is* "running it simultaneously". I think what you mean is "stop and background it", which is what ctrl-Z does.

> bash: fg: no job control

It sounds like Bash doesn't activate the job-control features until .bashrc is completed. It sounds like what you want is for the last thing .bashrc (or more likely, .bash_login) does to be to start an Emacs that acts exactly as if you typed "emacs" at the prompt. In particular, you want to be able to background it in the ordinary way, which seems to require that Bash has finished processing .bashrc. Dale
Re: bash command alias has a problem in using brace.
Hyunho Cho writes:
> If i enter a command line like below then '>' prompt appears for the
> input of here document.
>
> bash$ { cat <<\@ > foo ;} 2> /dev/null
> > 111
> > 222   # '>' prompt
> > @
>
> but if i use alias then '>' prompt does not appear and default bash
> prompt appears
>
> bash$ alias myalias='{ cat <<\@ > foo ;} 2> /dev/null'
> bash$ myalias
> bash$ 111
> bash$ 222   # bash$ prompt
> bash$ @
>
> this only occurs in brace
>
> bash$ alias myalias='( cat <<\@ > foo ) 2> /dev/null'
> bash$ myalias
> > 111
> > 222   # '>' prompt
> > @

I don't know the details, but it must have something to do with what event triggers the reading of the here-document. That event isn't the execution of the command containing it, I don't think, but more like when the here-document-redirection is parsed. Note that in your first example, << is parsed after you type RET. In the second two, the << is inside quotes, and those characters are parsed only when "myalias" is discovered in an input line and the alias definition is substituted for it. Dale
Re: Defect in manual section "Conditional Constructs" / case
"Dietmar P. Schindler" writes:
> Doesn't the example I gave above show that quotes are removed? If they
> weren't, how could word aa with pattern a""a constitute a match?

As you say, >a""a< matches as a case-pattern when the case word is >aa<. But that's not due to quote removal, because what the case does is not testing for string equality. As is done in a number of other places in Bash, this is a pattern and it is input to a process of pattern matching. See where "pattern" is used in the manual page, and the phrase "using the matching rules described under Pattern Matching".

What the Pattern Matching section doesn't say quite directly is that the quoting present in the pattern is considered only insofar as it determines which characters are quoted and which are not. And certain special characters, when unquoted, have special meanings to the pattern matching process. So when >a""a< is a pattern, it consists of two characters, both unquoted a, and the pattern matches only the string >aa<. If >a*a< is a pattern, it consists of three unquoted characters, but because the second of them is special, it matches any string that starts and ends with >a<. But >a"*"a< consists of three characters, and >"a*a"< does too, but because neither of them contains an unquoted special character, they both match only >a*a<. These are the same rules that are used for doing pathname expansion. Dale
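[Editorial note: a transcript-style sketch of the distinction (mine, not from the thread):]

```shell
case aa  in a""a)  echo 'a""a matches aa' ;; esac
case aXa in a*a)   echo 'unquoted * is special: matches aXa' ;; esac
case a*a in a"*"a) echo 'quoted * is literal: matches the string a*a' ;; esac
case aXa in a"*"a) echo 'this line is not printed' ;; esac
```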
Re: GROUPS
Robert Elz writes: > | What seems to be the case with sh-style shells and Posix is that > | all-caps variable names are subject to implementation-specific use, and > | so users should not use them except when using them in the way that is > | specific to the implementation the script is to be executed on. > > That could have been done, once perhaps, but it is way too late now. > There is no such rule anywhere in POSIX sh, and scripts using upper > case var names abound in the world, there's no chance that they're > going away. I was more looking at it as what sort of advice one should give to people who want to write "portable" scripts that won't be blindsided by a shell that makes some upper-case word special. Though it's quite possible that the bash manual page contains that advice somewhere that I've not looked at. Dale
Re: RFE: new option affecting * expansion
"Chris F.A. Johnson" writes:
> It would be nice if there were an option to allow * to expand sorted
> by timestamp rather than alphabetically.

Generally, a new option is not a good way to accomplish this, as an option has global effects and can cause other parts of the code to malfunction. Back in the old, old days, there was a program named "glob" that did pathname expansions. So you wouldn't say

    cat *

you'd say

    cat $( glob * )

where glob would get one argument, "*", and output a list of file names. A glob-by-modification-date program would be a better solution for this need, IMHO. Dale
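[Editorial note: a sketch of such a helper, assuming GNU ls and touch; names containing whitespace or newlines would break the word-splitting at the call site, so this is illustrative only:]

```shell
# Expand the caller's glob, then order the results oldest-to-newest.
globbytime() { ls -1tr -- "$@" 2>/dev/null; }

cd "$(mktemp -d)"
touch -d '2000-01-01' old
touch -d '2001-01-01' young
globbytime *        # prints "old" first, "young" last
```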
Re: GROUPS
It seems to me that people are avoiding both the core issue and its solution. A standard is what allows people to write software that can be ported without having to reassess every detail of the program. To take C as an example, the standard defines what identifiers look like, which identifiers are reserved words, and which identifiers are reserved for implementation-specific use, and thus which identifiers can be used freely by programmers. What seems to be the case with sh-style shells and Posix is that all-caps variable names are subject to implementation-specific use, and so users should not use them except when using them in the way that is specific to the implementation the script is to be executed on. This principle isn't particularly tricky, but what is missing is a statement of this principle in the documentation. Dale
Re: Crash on large brace expansion
Gabríel Arthúr Pétursson writes:
> Executing the following results in a fierce crash:
>
> $ bash -c '{0..255}.{0..255}.{0..255}.{0..255}'
> malloc(): unaligned fastbin chunk detected 2
> Aborted (core dumped)

As others have noted, you are attempting to construct 2^32 words. Now it's probable that you could run it on a computer with enough RAM to do the job. (I work with ones that have 128 GiB.) But the indexes involved are large enough that they are likely overflowing various 32-bit counters/pointers. So bash dies on an internal error in malloc(). Dale
Re: 'wait' in command group in pipeline fails to recognize child process
> It's not the brace command; it's the pipeline.
>
>> Minimal repro:
>>
>> $ sleep 1 & { wait $!; } | cat
>> [1] 665454
>> bash: wait: pid 665454 is not a child of this shell

Interestingly, this is almost trivial to clarify:

    $ sleep 5 & { wait $!; }
    [1] 19069
    [1]+  Done                    sleep 5
    $

> I was also misled by the term "subshell" which is not a proper shell
> like a subprocess is just another process.

The critical point is that "being a subshell" is a relationship between the child bash process and its parent. Intuitively, a subshell is created whenever a bash child process is created "implicitly". The manual page et al. enumerate all the ways a subshell can be created; search for the word "subshell". Dale
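[Editorial note: with the lastpipe option enabled, and job control off as it is in a script, the last pipeline element runs in the main shell, so the piped wait can work; a sketch, not from the thread:]

```shell
bash -c '
  shopt -s lastpipe                 # effective here: non-interactive, no job control
  sleep 0.1 & pid=$!
  : | { wait "$pid"; echo "wait status: $?"; }
'
```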
Re: `&>' doesn't behave as expected in POSIX mode
Oğuz writes: > $ set -o posix > $ uname &>/dev/null > $ > > `uname &' and `>/dev/null' should be parsed as two separate commands; > that, if I'm not missing anything, is what POSIX says. But bash > doesn't do that in POSIX mode, and redirects both stderr and stdout to > `/dev/null'. An interesting point! At least according to the 2018 edition, a Posix shell parses that command as uname & > /dev/null which is two commands, "uname &" and ">/dev/null". The second command is a no-op. Whereas default mode Bash parses it as uname with a redirection. This may be the only situation where Posix mode differs from default mode in *lexing*. And at least in my antique version, the --posix switch doesn't make that change. Dale
Re: [patch #10070] toggle invert flag when reading `!'
>> > [[ ! 1 -eq 1 ]]; echo $?
>> > [[ ! ! 1 -eq 1 ]]; echo $?
>> >
>> > would both result in `1', since parsing `!' set CMD_INVERT_RETURN
>> > instead of toggling it.

Definitely, the section of the man page for "[[" says that "!" is a negation operator, so "! ! foo" must yield the same results as "foo".

> I will try this as:
>
> $ [ 1 -eq 1 ]; echo $?
> 0
> $ ! [ 1 -eq 1 ]; echo $?
> 1
> $ ! ! [ 1 -eq 1 ]; echo $?
> 0

That last one isn't defined by the manual page. I'm surprised you don't get a syntax error.

    Pipelines
        A pipeline is a sequence of one or more commands separated by one of
        the control operators | or |&. The format for a pipeline is:

            [time [-p]] [ ! ] command [ [ | or |& ] command2 ... ]

Dale
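[Editorial note: a quick check of the single-negation cases, which are stable across versions; whether the doubled "!" toggles or merely sets the invert flag is exactly what the patch under discussion changes:]

```shell
[[ ! 1 -eq 1 ]];   echo $?   # 1: ! negates inside [[ ]]
[[ ! ! 1 -eq 1 ]]; echo $?   # 0 once ! toggles rather than merely sets the flag
! [ 1 -eq 1 ];     echo $?   # 1: pipeline-level negation
```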
Re: select syntax violates the POLA
Robert Elz writes: > You're assuming that the manual is a precise specification of what > is allowed. Um, of course. "The documentation is the contract between the user and the implementer." If something undocumented happens to work, there is no requirement on the implementer to maintain its functionality. Dale
Re: select syntax violates the POLA
Robert Elz writes:
> From: wor...@alum.mit.edu (Dale R. Worley)
>
> | I was going to ask why "else {" works,
>
> The right question would be why '} else' works.

Yeah, typo on my part. The manual page says

    if list; then list; [ elif list; then list; ] ... [ else list; ] fi

so clearly there should be a ; or newline before the list in the else-clause. But the grammar doesn't seem to enforce that:

    if_clause : If compound_list Then compound_list else_part Fi

I'm sure that the real answer involves deciphering the logic inside Bash that turns on recognition of reserved words, and that must be more complicated than the rule in the manual page. Dale
Re: select syntax violates the POLA
Chet Ramey writes: > Yes, you need a list terminator so that `done' is recognized as a reserved > word here. `;' is sufficient. Select doesn't allow the `done' unless it's > in a command position. Some of the other compound commands have special > cases, mostly inherited from the Bourne shell, to allow it. I was going to ask why "else {" works, since according to the manual page, "{" should not be recognized as a reserved word in this situation. Dale
Re: select syntax violates the POLA
Greg Wooledge writes: > It's amazing how many people manage to post their code with NO comments > or explanations of what it's supposed to do, what assumptions are being > made about the inputs, etc. This leaves us to guess. It seems to be a modern style. When I was learning to program, poorly commented code was considered a failing. But recently, I have had managers object that I put too many comments in. Dale
Re: missing way to extract data out of data
Andreas Schwab writes:
>> I've never tracked down why, but the Perl executable is a lot smaller
>> than the Bash executable.
>
> Is it?
>
> $ size /usr/bin/perl /bin/bash
>    text    data     bss     dec     hex filename
> 2068661   27364     648 2096673  1ffe21 /usr/bin/perl
> 1056850   22188   61040 1140078  11656e /bin/bash
>
> Of course, a lot of perl is part of loadable modules.

On mine, I get:

    $ size /usr/bin/perl /bin/bash
       text    data     bss     dec     hex filename
       8588     876       0    9464    24f8 /usr/bin/perl
     898672   36064   22840  957576   e9c88 /bin/bash

I do suspect that's because perl is using more loadable modules than bash, but I don't know much about how the object code is organized and I've never been motivated enough to track down the truth. Dale
Re: how does ssh cause this loop to preterminate?
Eduardo Bustamante writes: > The summary is that SSH is reading from the same file descriptor as > "read". Use '-n' (or redirect SSH's stdin from /dev/null) to prevent > this. Oh, yeah, I've been bitten by that one many times. Another solution, though more awkward, is feeding the data for read into a higher-numbered fd and using "read -u [fd]" to read it. Dale
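[Editorial note: the higher-numbered-fd arrangement looks like this; a sketch in which `cat` stands in for ssh as the stdin-hungry command:]

```shell
list=$(mktemp)
printf '%s\n' one two three > "$list"
# The loop's input arrives on fd 3, so whatever the body reads from
# fd 0 cannot eat the remaining lines of the list.
while read -u 3 item; do
    cat </dev/null >/dev/null   # stand-in for ssh; its stdin is its own
    echo "got $item"
done 3< "$list"
rm -f "$list"
```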
Re: Wanted: quoted param expansion that expands to nothing if no params
L A Walsh writes:
> It would be nice to have a expansion that preserves arg boundaries
> but that expands to nothing when there are 0 parameters
> (because whatever gets called still sees "" as a parameter)

Fiddling a bit, I found this is a nice way to show how "$@" (or any other construction) affects the parameters given to a command.

    $ function show { ( set -x ; echo "$@" >/dev/null ) }
    $ show a
    + echo a
    $ show a b
    + echo a b
    $ show ''
    + echo ''
    $ show a ''
    + echo a ''
    $ show
    + echo
    $

What makes it particularly effective is that "set -x" shows the sequence of arguments unambiguously. (Hmmm, I could have used ":" instead of "echo".) And the manual page does make this clear if you read the entire description:

    When there are no positional parameters, "$@" and $@ expand to
    nothing (i.e., they are removed).

Dale
Re: missing way to extract data out of data
Greg Wooledge writes: > Partly true. seq(1) is a Linux thing, and was never part of any > tradition, until Linux people started doing it. Huh. I started with Ultrix, and then SunOS, but don't remember learning seq at a later date. > (Before POSIX, it involved using expr(1) for every increment, which > is simply abominable.) And let's not forget the original "glob"! But my point is that using external programs to do minor processing tasks has a long history in shells. > but beyond a certain > point, trying to force a shell to act like a Real Programming Language > is just not reasonable. I've never tracked down why, but the Perl executable is a lot smaller than the Bash executable. So it's likely that turning a shell into a Real Programming Language is also likely to be unusually expensive. Dale
Re: about the local not-on-every-function-separately var issue
Greg Wooledge writes:
> Now, the big question is WHY you thought something which is not correct.
>
> The most common reasons that people think something which is wrong are:

In my experience, a common reason is that the documentation does not concentrate in one place that users are certain to read, a complete, clear description of the situation. For instance, you give a complete, clear description:

    Bash uses "dynamic scope" when it expands variables. This means
    that it looks first in the current function's local variables; if
    the variable isn't found there, it looks in the caller's local
    variables, and then in the caller's caller's local variables, and
    so on, until it reaches the global scope.

And that behavior is implied if you read the definition of "local" closely. But I couldn't find any text in the manual page that states that directly. (*Very* closely, as the text refers to "the function and its children". But function definitions don't nest; what it means is "the function *invocation* and its child invocations".) In principle it should be stated at the point where parameter expansion is introduced, as it is the *definition* of what parameter expansion does. Dale
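[Editorial note: a tiny demonstration of that dynamic-scope rule; the function names are mine:]

```shell
inner() { echo "inner sees: $x"; x=changed; }
outer() { local x=outer; inner; echo "outer sees: $x"; }
x=global
outer                          # inner resolves x in outer's scope,
echo "global is still: $x"     # so the global x is untouched
```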
Re: missing way to extract data out of data
Alex fxmbsw7 Ratchev writes: > yea well it does wonders, however was looking for a way without spawning > externals like gawk.. maybe in future there will be =) Traditionally, shell scripts depended on external binaries to do a lot of the processing. At the least, what newer shells do with "{NNN..MMM}" and "[[" used to be done by "seq" and "test" a/k/a "[". And what can be done by the complex parameter expansions ${...%...} and ${...#...} was done by "sed". Dale
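[Editorial note: for instance, the ${...##...} and ${...%...} expansions now cover what once called for sed; a sketch:]

```shell
path=/usr/local/bin/bash
echo "${path##*/}"               # basename, in the shell: bash
echo "${path%/*}"                # dirname: /usr/local/bin
# the traditional external equivalents:
echo "$path" | sed 's|.*/||'
echo "$path" | sed 's|/[^/]*$||'
```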
Re: Likely Bash bug
Jay via Bug reports for the GNU Bourne Again SHell writes:
> I have no idea what the "ash" the bug report refers to
> is (there is an ancient shell of that name, but I cannot imagine any
> distribution including that, instead of one of its bug fixed and updated
> successors, like say, dash)

Well, I read years ago of a lightweight shell named "ash" that was popular within initrd's. And the report speaks of "the 'init' script in initrd.gz". So perhaps that initrd uses some version of ash. Don't know if that's any help, Dale
Re: Probably not a bug but I was surprised: $' does not work inside "..." close.
Greg Wooledge writes:
> $'...' is a form of quoting, not an expansion. It won't "work" inside
> of another type of quoting, just like '...' will not "work" inside "...".

Yes, that's true, and makes sense when I think about it. But the manual page doesn't consistently phrase it that way:

    Words of the form $'string' are treated specially. The word
    expands to string, ...

As you say, it isn't "expand". And, being a quote construct, it's not a word per se; it prevents the characters it contains from breaking words. On the flip side, it doesn't look like there's a consistent term for "the effective value of a quote construct", although "quote removal" is used in some places. Dale
Re: missing way to extract data out of data
Alex fxmbsw7 Ratchev writes:
> there is way to crop data to wanted, by cropping the exclulsions away
> but what about a way to extract data, eg @( .. ) match
> not about using [[ =~ but only var internal stuff
> .. or do i miss there something

If you want to do a pattern-match against a string, then extract a part of it, a common way is to use sed. For instance, to extract a word and remove the spaces before and after it:

    $ STRING='   abc   '
    $ WORD="$( <<<"$STRING" sed -e 's/^ *\([^ ]*\) *$/\1/' )"
    $ echo "$WORD"
    abc

It is common to use common Unix utilities to modify data items within shell scripts. Dale
Probably not a bug but I was surprised: $' does not work inside "..." close.
I have it encoded in my head that $ inside "..." is respected. Subtly, the $'...' construction is not respected inside "...". After reading the manual page carefully, I realized this is because the interpretation of $'...' is not part of parameter expansion (despite its $) but rather it is a special form of quote interpretation: "Words of the form $'string' are treated specially." However, I'm not sure that text is entirely accurate, either, as

    $ echo xxx$'aa\ta'yyy
    xxxaa   ayyy

"Word" is generally used to mean non-space characters (separated by spaces), but the argument of "echo" is only one word, which is not, as a whole, of the form $'...'. Dale
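[Editorial note: the difference is easy to see with printf; my illustration:]

```shell
printf '%s\n' $'a\tb'       # $'...' recognized: a, TAB, b
printf '%s\n' "$'a\tb'"     # inside "...": the seven characters $'a\tb', literally
```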
Re: 'exec' produced internal code output instead of normal
> On Sat, 13 Mar 2021 at 8:06, Alex fxmbsw7 Ratchev wrote:
>> but using it resulted sometimes output of code of the script in the output
>> of the files
>> removing exec made it work normally as supposed

One possibility is a typo, using "<<" rather than "<".

Koichi Murase writes:
> I don't know about `socat', but maybe it's just the file descriptor
> collision. One needs to make sure that the file descriptor is not yet
> used when a new file descriptor is opened. For example, in Bash
> scripts, one should use the file descriptor 3--9 if you manually
> specify it because the file descriptors larger than 9 may be already
> used for other purposes.

bash has the useful ability to select an unused file descriptor, such as

    exec {new_descr}>wherever

That opens the file "wherever" for writing on some previously closed file descriptor and assigns the number to the variable "new_descr". After that you can do

    echo >&$new_descr

etc. Dale
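[Editorial note: a runnable sketch of that facility:]

```shell
tmp=$(mktemp)
exec {new_descr}>"$tmp"      # bash picks a free descriptor, 10 or higher
echo "descriptor chosen: $new_descr"
echo hello >&"$new_descr"
exec {new_descr}>&-          # close it again
cat "$tmp"                   # hello
rm -f "$tmp"
```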
Re: Shell Grammar man page function definition
Mike Jonkmans writes:
> Some examples that work:
> function x { :; } ## as expected
> function x if :; then :; fi
> function x (( 42 ))
> function x [[ 42 ]]
> function x for x do :; done
> function x for (( ; 42 - 42 ; )); do :; done
>
> What does not work:
> function x ( : )

Check your Bash version. IIRC, recent versions (e.g. 5.1) have a minor change in the Bison grammar (parse.y) for function definitions that I provided. The purpose was to stop Bison from giving an annoying "parse conflict" message when compiling the grammar, but looking at it, it should allow your example to work, because it gets the parser to handle all the cases as they are specified in the manual page. Dale
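[Editorial note: for reference, the non-brace bodies from the working list really do behave as functions; my rendering:]

```shell
function f { echo brace; }
function g if true; then echo conditional; fi
function h (( 40 + 2 ))      # arithmetic body; nonzero value means success
f
g
h && echo "arithmetic body succeeded"
```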
Re: syntax error while parsing a case command within `$(...)'
Lawrence Velázquez writes: >>> `;;' is optional for the last case item. >> >> The manual page (for my version) says it's required. If, in some >> certain circumstances, it works without, that's nice. > > It's also required by POSIX. Ah, now that's different. Has the current man page been updated to match? Dale
Re: syntax error while parsing a case command within `$(...)'
Oğuz writes:
>> Before we worry about what to change, I want to note that the original
>> example is syntactically incorrect. The example is
>>
>> $ bash -c ': $(case x in x) esac)'
>>
>> But the manual page makes it clear that each case must be ended with
>> ";;".
>
> `;;' is optional for the last case item.

The manual page (for my version) says it's required. If, in certain circumstances, it works without, that's nice. But there's no commitment that it will work now, or in future releases.

> `case x in esac' (without the linebreak) works fine outside the command
> substitution.

The manual page (for my version) says that "esac" will be recognized in positions where a simple command may appear. If, in some other circumstances, it works, that's nice. But there's no commitment that it will work now, or in future releases. Now, if you want to advocate that it *should* always work, go ahead. But that's a feature request, not a bug report. Dale
Re: syntax error while parsing a case command within `$(...)'
Before we worry about what to change, I want to note that the original example is syntactically incorrect. The example is

    $ bash -c ': $(case x in x) esac)'

But the manual page makes it clear that each case must be ended with ";;".

    case word in [ [(] pattern [ | pattern ] ... ) list ;; ] ... esac

Now, I haven't investigated what cleverness Bash uses, but all the cases I've tested that conform to the case syntax are handled correctly inside this $(...):

    $ bash -c ': $( case x in x) : ;; esac )'
    $ bash -c ': $( case x in x) true ;; esac )'
    $ bash -c ': $( case x in (x) true ;; esac )'

It even works with the degenerate case where there are no choices, though writing it is hard because "esac" is a keyword:

    $ bash -c ': $( case x in
    more> esac )'

This is with an old version, 4.2.53(1). Dale
Re: obscure bug "extern void free (void *__ptr) __attribute__ ((__nothrow__ , __leaf__));"
You could improve this bug report. A few pointers:

Mathias Steiger writes:
> Repeat-By:
>
> git clone https://github.com/LibreELEC/LibreELEC.tv
> cd LibreELEC.tv
> ARCH=aarch64 PROJECT=Amlogic DEVICE=AMLGX ./scripts/build linux

I attempted to follow this, but the output I got was:

    $ ARCH=aarch64 PROJECT=Amlogic DEVICE=AMLGX no_proxy ./scripts/build linux
    GET linux (archive)
    Usage: wget [OPTION]... [URL]...
    Try `wget --help' for more options.
    Usage: wget [OPTION]... [URL]...
    Try `wget --help' for more options.
    Usage: wget [OPTION]... [URL]...
    [a dozen or more times]

Does your build process actually make external web fetches? You really need to warn people about that if you are presenting a recipe for duplicating a bug.

> -> the build fails after a minute in the Autoconfig step due to
> wrongful insertion of silenced command output into file config.status at
> line 533
> In: build.LibreELEC-AMLGX.aarch64-9.80-devel/build/ccache-3.7.12/configure
> Go to line 6532: if diff "$cache_file" confcache >/dev/null 2>&1; then
> :; else
> Hint: $cache_file is always /dev/null , hence the if-statement will
> evaluate false

I understand that a line appears in config.status that you believe is incorrect, but you don't quote what the line is (preferably with some context around it). You assert that it is "silenced command output", but you don't actually know that (without dissecting the script you're running), only that it *appears* to be silenced command output. If you've genuinely tracked down how it got there, you should explain your reasoning in detail.

> This diff command is the source of the insertion in
> build.LibreELEC-AMLGX.aarch64-9.80-devel/build/ccache-3.7.12/config.status :
> 0a1,97:
> > # This file is a shell script that caches the results of configure
> > # tests run on this system so they can be shared between configure
> ...
> Remove the line and the corresponding "fi" that closes the if-statement
> -> script inserts "extern void free ..." instead into
> ./config.status at line 533

You say "remove the line" but it isn't clear what line you are referring to. Dale
Re: non-executable files in $PATH cause errors
> On 2021/01/09 23:52, n952162 wrote: >> I consider it a bug that bash (and its hash functionality) includes >> non-executable files in its execution look-up Of course, as described in the manual page, Bash first searches for an executable with the right name in the PATH, and then if that fails, it searches for a non-executable file in the PATH. The first part resembles one of the exec() functions and seems reasonable, but the second is weird. My belief is that the reason is compatibility with historical usage. I have dim memories that there were days before shell scripts had the executable bit set and the first line started with "#!". Instead, they weren't marked as executable but the first line started with ": " (either mandatory or by convention). And the scripting facility was implemented entirely in the shell -- if the shell's call to the kernel to execute the script failed, it would decide that the file was a script, then spawn and initialize a subshell to execute it. Of course, there was no check that your file actually was a script, so if you had named a data file, the subshell would spew a stream of errors. But the consequence to this day is that scripts without the executable bit can be executed if they are given as command names to bash, and that executable scripts take precedence over similarly-named non-executable scripts earlier in PATH. Dale