RE: bash will not link against ncursesw and readline in /usr/local

2018-11-13 Thread John Frankish
> > Using bash-4.4.18
> > Intel Core i7 laptop running 32-bit or 64-bit Linux, using gcc-8.2.0
> > 
> > The configure script does not find libncursesw on a system where
> > only the wide version of ncurses exists - even when readline is
> > linked against ncursesw.
> >
> I haven't seen a distro where ncursesw is installed without a link to ncurses.
> Which distribution are you using?
> 
The 64-bit version of tinycorelinux - all non-base packages are installed to 
/usr/local.
Since ncursesw is now the default, I'm trying to compile against that.

> I could add a check for ncursesw, but that's the kind of thing the distro 
> usually does.
>
If ncursesw is now the default, maybe it would make sense to check for that 
rather than a symlink?

> > The configure script does not find libreadline when it is installed in
> > /usr/local and the configure switch
> > "--with-installed-readline=/usr/local" is used.
> > 
> I don't have any trouble finding readline in /usr/local/lib/libreadline.so
> after installing it, editing /etc/ld.so.conf, and running ldconfig. I tried
> with readline-8.0-beta and bash-5.0-beta, so at least it will be working
> when those hit release status.
>
It appears that the readline check relies on the ncurses check being successful.

If I configure without an ncurses symlink the check for readline fails.

If I add an ncurses symlink the check for readline succeeds.

checking for tgetent... no
checking for tgetent in -ltermcap... no
checking for tgetent in -ltinfo... no
checking for tgetent in -lcurses... no
checking for tgetent in -lncurses... no
checking which library has the termcap functions... using gnutermcap
checking version of installed readline library... configure: WARNING: Could not test version of installed readline library.
configure: WARNING: installed readline library is too old to be linked with bash
configure: WARNING: using private bash version

configure:5194: result: no
configure:5213: checking which library has the termcap functions
configure:5216: result: using gnutermcap
configure:5242: checking version of installed readline library
configure:5296: gcc -flto -fuse-linker-plugin -mtune=generic -Os -pipe -o 
conftest -g -O2 -Wno-parentheses -Wno-format-security -I/usr/local/include   
-L./lib/termcap -L/usr/local/lib conftest.c  -lreadline 
./lib/termcap/libtermcap.a >&5
gcc: error: ./lib/termcap/libtermcap.a: No such file or directory

$ cd /usr/local/lib
$ sudo ln -s libncursesw.so.6.1 libncurses.so
$ sudo ldconfig

checking for tgetent... no
checking for tgetent in -ltermcap... no
checking for tgetent in -ltinfo... no
checking for tgetent in -lcurses... no
checking for tgetent in -lncurses... yes
checking which library has the termcap functions... using libncurses
checking version of installed readline library... 7.0
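
A possible alternative to the symlink - assuming bash's configure honors a
user-supplied LIBS when probing for tgetent, which I have not verified here -
would be to name the wide library explicitly:

$ ./configure --with-installed-readline=/usr/local \
      CPPFLAGS="-I/usr/local/include" \
      LDFLAGS="-L/usr/local/lib" \
      LIBS="-lncursesw"

If tgetent is then found via -lncursesw, the termcap and readline version
checks should no longer depend on a libncurses name existing.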



Re: Control characters in declare output

2018-11-13 Thread L A Walsh




On 10/31/2018 11:01 PM, Rob Foehl wrote:
Prompted (pun intended) by the recent thread on detecting missing newlines 
in command output, I'd had another look at my own version, and discovered 
a potential issue with control characters being written as-is in declare 
output.  Minimal (harmless) reproducer:


╶➤ x () { echo $'\e[31m'"oops"$'\e[0m'; }

╶➤ declare -f x
x ()
{
 echo ''"oops"''
}

Emits the string in red in a terminal.  Any instances with control 
sequences that do anything more invasive with the terminal cause more 
  

BTW, to keep that red from turning your terminal red, I used:
 read _CRST   <<<"$(tput sgr0)"     # Reset
 read _CRED   <<<"$(tput setaf 1)"  # Red
 read _CGREEN <<<"$(tput setaf 2)"  # Green
 read _CBLD   <<<"$(tput bold)"     # Bold

And in usage:
 [[ $UID -eq 0 ]] && {
   _prompt_open="$_CBLD$_CRED"
   _prompt="#"
   _prompt_close="$_CRST"
 }


   That way I can display the file on a screen without the control
characters actually changing the color of the terminal.

(just in case you might be looking for a way around that behavior).
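
A related way (hedged; not from the original mail) to inspect such a
definition without the escapes taking effect is to make the control
characters visible:

$ declare -f x | cat -v
# ESC shows up as ^[ (e.g. ^[[31m), so the colour codes are displayed
# literally instead of being interpreted by the terminal.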





Re: built-in '[' and '/usr/bin/[' yield different results

2018-11-13 Thread Eric Blake

On 11/13/18 10:29 AM, Service wrote:

     # ensure that file1 exists and that file2 does not exist


There's your problem. It is inherently ambiguous what timestamp to use 
when a file is missing (infinitely new or infinitely old, or always an 
error for not existing); bash's -nt picked one way, while other shells 
have picked the other.  POSIX is silent on the matter (-nt is an 
extension outside of POSIX), so there is nothing portable you can rely on.



     /bin/touch file1
     /bin/rm -f file2
     # built-in
     if  [ file1 -nt file2 ]; then echo nt; else echo not_nt; fi
     # external
     if /usr/bin/[ file1 -nt file2 ]; then echo nt; else echo not_nt; fi

     # Output is as expected:
     nt
     nt


That is, bash's builtin '[' and coreutils' external '[' happened to pick 
the same thing: a missing file is treated as infinitely old.




     2. This does not work:

     # Put the above commands into a script, say check.sh
     # Run with: /bin/sh < check.sh
     # Or  : /bin/sh ./check.sh
     # Or  : /usr/bin/env ./check.sh

     # Output is always not ok:
     not_nt
     nt


Most likely, this is because your /bin/sh is not bash but dash, and dash 
treats a missing file as an error.  That does not make it a bug in bash, 
though; it is a difference in the behavior of your /bin/sh.
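
If the script has to behave the same under both shells, one hedged
workaround is to make the missing-file case explicit and use -nt only when
both operands exist, for example:

    # sketch: treat a missing file2 as "older", a missing file1 as "not newer"
    if [ -e file1 ] && { [ ! -e file2 ] || [ file1 -nt file2 ]; }; then
        echo nt
    else
        echo not_nt
    fi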


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



Re: built-in '[' and '/usr/bin/[' yield different results

2018-11-13 Thread Ilkka Virta

On 13.11. 18:29, Service wrote:

     # Put the above commands into a script, say check.sh
     # Run with: /bin/sh < check.sh
     # Or  : /bin/sh ./check.sh
     # Or  : /usr/bin/env ./check.sh

     # Output is always not ok:
     not_nt
     nt


 $ cat check.sh
 export PATH=""
 /bin/touch file1
 /bin/rm -f file2
 if  [ file1 -nt file2 ]; then echo nt; else echo not_nt; fi
 if /usr/bin/[ file1 -nt file2 ]; then echo nt; else echo not_nt; fi

 $ bash ./check.sh
 nt
 nt

 $ /bin/sh ./check.sh
 not_nt
 nt

Isn't that Windows Linux thingy based on Ubuntu? /bin/sh isn't Bash by 
default on Debian and Ubuntu, so it might be you're just not running the 
script with Bash.



--
Ilkka Virta / itvi...@iki.fi



Re: built-in '[' and '/usr/bin/[' yield different results

2018-11-13 Thread Greg Wooledge
On Tue, Nov 13, 2018 at 05:29:42PM +0100, Service wrote:
> Repeat-By:
>     Under Windows 10, WSL.

Then why did you send this to a debian.org address?

>     Start "bash", terminal with shell pops up.

>     2. This does not work:
> 
>     # Put the above commands into a script, say check.sh
>     # Run with: /bin/sh < check.sh
>     # Or  : /bin/sh ./check.sh

In both of these cases, you're explicitly calling /bin/sh rather than
bash.  So you are not using bash's builtin [.  You are using WSL's sh's
builtin [ command.  You might start by identifying which shell WSL uses
as its sh.

>     # Or  : /usr/bin/env ./check.sh

In this case, it would use the shebang line of check.sh to decide which
shell to run.  However, I suspect you did not include a shebang line at
all.  In that case, /usr/bin/env will decide how to handle the exec
format error.  It might decide to spawn sh for you, in which case this
would explain why the result matches the first two results.
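
Two quick things to check (hedged; exact output depends on the WSL image):

    $ ls -l /bin/sh       # on Debian/Ubuntu this is usually a symlink to dash
    $ head -n 1 check.sh  # is there a shebang line at all?

Putting "#!/bin/bash" on the first line makes "./check.sh" and
"/usr/bin/env ./check.sh" run under bash; explicitly invoking /bin/sh
still bypasses the shebang.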



built-in '[' and '/usr/bin/[' yield different results

2018-11-13 Thread Service

Configuration Information [Automatically generated, do not change]:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' 
-DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gnu' 
-DCONF_VENDOR='pc' -DLOCALEDIR='/usr/share/locale' -DPACKAGE='bash' 
-DSHELL -DHAVE_CONFIG_H   -I.  -I../. -I.././include -I.././lib 
-Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 
-fdebug-prefix-map=/build/bash-vEMnMR/bash-4.4.18=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wall 
-Wno-parentheses -Wno-format-security
uname output: Linux COOL4 4.4.0-17134-Microsoft #345-Microsoft Wed Sep 
19 17:47:00 PST 2018 x86_64 x86_64 x86_64 GNU/Linux

Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.4
Patch Level: 19
Release Status: release

Description:
    It seems that the built-in 'test' does not work properly when run 
in a sub-shell.


Repeat-By:
    Under Windows 10, WSL.
    Start "bash", terminal with shell pops up.

    1. This works when typed directly:

    # make everything explicit
    export PATH=""
    # ensure that file1 exists and that file2 does not exist
    /bin/touch file1
    /bin/rm -f file2
    # built-in
    if  [ file1 -nt file2 ]; then echo nt; else echo not_nt; fi
    # external
    if /usr/bin/[ file1 -nt file2 ]; then echo nt; else echo not_nt; fi

    # Output is as expected:
    nt
    nt

    2. This does not work:

    # Put the above commands into a script, say check.sh
    # Run with: /bin/sh < check.sh
    # Or  : /bin/sh ./check.sh
    # Or  : /usr/bin/env ./check.sh

    # Output is always not ok:
    not_nt
    nt




Re: Strange behaviour from jobs -p in a subshell

2018-11-13 Thread Greg Wooledge
On Tue, Nov 13, 2018 at 09:59:51AM -0500, Chet Ramey wrote:
> On 11/13/18 4:28 AM, Christopher Jefferson wrote:
> > Consider the following script. While the 3 sleeps are running, both jobs
> > -p and $(jobs -p) will print 3 PIDs. Once the 3 children are finished,
> > jobs -p will continue to print the 3 PIDs of the done children, but
> > $(jobs -p) will only print 1 PID. $(jobs -p) always seems to print at
> > most 1 PID of a done child.
> 
> Since the $(jobs -p) is run in a subshell, its knowledge of its parent's
> jobs is transient. In this case, the subshell deletes knowledge of the
> jobs it inherits from its parent, but hangs onto the last asynchronous job
> in case the subshell references $!.
> 
> Chet

If the goal is to obtain the result of "jobs -p" and use it in a script,
I would suggest redirecting the output of jobs -p to a temp file, then
reading it.  That skips the subshell.
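
For example (a rough sketch; the temp-file name is only illustrative):

    jobs -p > /tmp/jobs.$$          # builtin runs in the current shell
    mapfile -t pids < /tmp/jobs.$$  # requires bash 4+
    rm -f /tmp/jobs.$$
    echo "${pids[@]}"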



Re: Strange behaviour from jobs -p in a subshell

2018-11-13 Thread Chet Ramey
On 11/13/18 4:28 AM, Christopher Jefferson wrote:
> Consider the following script. While the 3 sleeps are running, both jobs
> -p and $(jobs -p) will print 3 PIDs. Once the 3 children are finished,
> jobs -p will continue to print the 3 PIDs of the done children, but
> $(jobs -p) will only print 1 PID. $(jobs -p) always seems to print at
> most 1 PID of a done child.

Since the $(jobs -p) is run in a subshell, its knowledge of its parent's
jobs is transient. In this case, the subshell deletes knowledge of the
jobs it inherits from its parent, but hangs onto the last asynchronous job
in case the subshell references $!.

Chet

-- 
``The lyf so short, the craft so long to lerne.'' - Chaucer
 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
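
(A small hedged illustration of the $! point, not from the thread: the
command-substitution subshell does inherit the parent's last asynchronous
PID, which is the one job it keeps.)

    sleep 1 &
    echo "parent  \$!: $!"
    echo "subshell \$!: $(echo $!)"   # prints the same PID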



Strange behaviour from jobs -p in a subshell

2018-11-13 Thread Christopher Jefferson
Consider the following script. While the 3 sleeps are running, both jobs
-p and $(jobs -p) will print 3 PIDs. Once the 3 children are finished,
jobs -p will continue to print the 3 PIDs of the done children, but
$(jobs -p) will only print 1 PID. $(jobs -p) always seems to print at
most 1 PID of a done child.


#!/usr/bin/bash

(sleep 2 ) &
(sleep 2 ) &
(sleep 2 ) &

while /bin/true
do
     echo A
     echo $(jobs -p)
     echo B
     jobs -p
     echo C
     sleep 1
done