Re: change in OS defaults (was: bug in [ -f file ] test)
> Yes, they can be overridden in /etc/sysctl.d. But I am going to modify
> the troublesome scripts in the $50k software and leave the security of
> the OS intact. Thanks.

Actually, setting them to "1" is not "leaving the OS security intact" -- it's adding new, non-default security on Red Hat and SUSE distros. The default OS setting for those is 0, because the new behavior makes hard & soft links incompatible with past behavior. I ran into the new settings when upgrading from SUSE 13.1 to 13.2.

Many new "security pollution" problems are being introduced by people forgetting how groups can be used to control access. Various programs (ssh, sendmail, sudo, lilo, to name a few) complain or fail if you use groups to control access and try to allow access by group.

The new settings broke a use case where I use permissions, different UIDs, and hardlinks to save considerable space. Case in point: I control access (*to myself*) in my Linux source trees. Under my linux-source dir, I have "unmodified" dirs for each version of Linux I've been working with. When a new version comes out, I create a 2nd copy using 'cp -al' and, as root, apply the patch necessary to create the new version. I.e. currently I have source trees going back to 3.10, though I think I started with a fresh copy at linux 4.0. To upgrade to 4.0.2 or 4.0.4, I apply a patch to the 4.0 source tree in a new directory. This saves disk space, as well as propagating local patches for my build environment.

Looking at "virgin" sources with "du" for some 4.0-derived source dirs:

  du -sh linux-4.0{,.[0-9]}
  638M    linux-4.0
  2.6M    linux-4.0.0
  14M     linux-4.0.2
  19M     linux-4.0.4

linux-4.0.2 and linux-4.0.4 share about 620M with the linux-4.0 sources. linux-4.0.0 is an exact copy of the 4.0 tree, the "2.6M" being the space used by the duplicate directories that "cp -al" creates as new copies, owned by the user (so they can write in them!).
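For anyone unfamiliar with the trick: 'cp -al' duplicates the directories but creates additional hard links for the files, so unchanged file data is stored only once. A minimal sketch (paths are illustrative, not my actual layout):

```shell
#!/usr/bin/bash
# Sketch of the cp -al duplication: new directories, hard-linked files.
set -e
cd "$(mktemp -d)"
mkdir ref
echo "original contents" > ref/file.c
cp -al ref ref-copy
# Both names now refer to the same inode, so the data exists once:
[ ref/file.c -ef ref-copy/file.c ] && echo "same inode"
stat -c %h ref/file.c            # link count is now 2
```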
Between main releases:

  du -sh linux-4.[034]
  638M    linux-4.0
  220M    linux-4.3
  231M    linux-4.4

About 220-240M changes. Looking at 4.4-derived trees, we see the stand-alone size of the linux-4.4 dir:

  du -sh linux-4.4{,.[13]}
  699M    linux-4.4
  6.5M    linux-4.4.1
  17M     linux-4.4.3

I.e. linux-4.4 by itself is 699M, but it shares over 400M with linux-4.0.

To ensure I don't accidentally change my "reference" sources, the dirs are writable by the owner or the group, with files only being writable by the owner:

  llg -d linux-4.[034]
  drwxrwsr-x 23 root root 4096 Aug 12  2015 linux-4.0/
  drwxrwsr-x 24 root root 4096 Feb  1 21:42 linux-4.3/
  drwxrwsr-x 24 root root 4096 Feb  1 21:52 linux-4.4/

To actually work & build as a non-root user, I do a "cp -al" from the reference dirs into work dirs, so du looks like:

  du -sh linux-4.[034] ish-4[034]*
  638M    linux-4.0
  396M    linux-4.3
  231M    linux-4.4
  3.0G    ish-400
  3.5G    ish-400-nlnk
  3.0G    ish-402
  3.0G    ish-404
  3.3M    ish-43
  3.2G    ish-433
  476M    ish-441
  440M    ish-443

Note -- the "GB"-size dirs have object files in them; ish-441 & ish-443 likely had a make-clean done in them, which leaves stuff behind. When I copy them as a local user, the mode bits get copied over to the new tree, but files are owned by root (if unchanged) or by me (if changed). Pruned ls output:

  llg ish-443
  total 50876
  -rw-r--r--  99 root root    18693 Jan 19  2014 COPYING
  -rw-r--r--   7 root root    97181 Feb  1 21:41 CREDITS
  drwxrwsr-x 112 lin  root     8192 Feb  1 21:52 Documentation/
  -rw-r--r--  24 root root     2622 Sep 10  2015 Kbuild
  -rw-rw-r--   1 lin  root  2943284 Feb 25 22:53 System.map
  drwxrwsr-x   2 lin  root      178 Feb 25 17:51 usr/
  -rwxrwxr-x   1 lin  root 24768740 Feb 25 22:53 vmlinux*

My script to download patches, create new 'linux' source trees, and then create a "work dir" for me started failing when I moved from SuSE 13.1 to 13.2, since SuSE copied Red Hat's new settings for the links.
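A sketch of why this protects the reference tree (illustrative paths, single-user stand-in for the real root/user split): patching a hard-linked work copy replaces files rather than rewriting them in place, which breaks the hard link and leaves the original untouched.

```shell
#!/usr/bin/bash
# Group-writable setgid dirs, owner-writable files, as in the listings above.
set -e
cd "$(mktemp -d)"
mkdir linux-ref
echo "reference data" > linux-ref/Makefile
chmod 2775 linux-ref            # drwxrwsr-x
chmod 644  linux-ref/Makefile   # -rw-r--r--
cp -al linux-ref work           # hard-linked work tree
rm work/Makefile                # "editing" = unlink + recreate
echo "patched data" > work/Makefile
cat linux-ref/Makefile          # prints: reference data
```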
The new setting prohibited me from creating links to root-owned files -- even though I could read them, and was only creating links in my user-owned directory. I.e. the new protected links (at least for hardlinks) "protect" me from using the normal Unix mode bits and groups to create secure, read-only copies of unmodified files in my source tree. The links (in the case of hardlinks) are not themselves protected; rather, the ability to create links to a common source was restricted to root -- which defeats my userid's ability to create a copy of root's source tree that I can work in without duplicating all the files.

The size difference for ~55 versions of the kernel is ==>> 6.2G linked vs. 74G not linked <<==.

The hard-link protection doesn't give any added protection, but it does prevent getting easy protection -- using permissions & UIDs to disable accidental writes to the "original sources". Needless to say, on my systems, I don't change those settings from the OS's default.

On Thu, 2016-07-28 at 15:06 -0500, John McKown wrote:
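For reference, the knobs under discussion are the fs.protected_hardlinks / fs.protected_symlinks sysctls; reading them needs no privilege, and the values you see depend on your distro's defaults.

```shell
#!/usr/bin/bash
# 1 = restricted: a non-root user may hard-link a file only if they own
# it or have write access to it, so linking a root-owned, world-readable
# file fails with EPERM. 0 = the classic behavior described above.
cat /proc/sys/fs/protected_hardlinks
cat /proc/sys/fs/protected_symlinks
```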
Odd bugs when no job control support
Hi,

(Please Cc me as I'm not subscribed to the list; thank you.)

So I was cross-compiling bash (version 4.3.42(1)-release, x86_64-pc-linux-musl) and came across some "odd" bugs. It's not quite easy to describe how they occur, so let me use a script to help:

---8<---
#!/usr/bin/bash

foo() {
    while read; do
        echo ERROR
    done < <(true)
}

echo before
foo
shopt -u execfail
echo after
foo
--->8---

Expected output would be:

before
after

Simple as that. However, I would get the following:

before
after
ERROR

But to be clear: this happened simply because shopt was used to set, or unset, an option -- regardless of which option, or even whether it was already (un)set or not. The simple fact of calling shopt to (un)set something is what triggered things, causing code in loops such as the one shown to be executed when it shouldn't be.

After investigating, this in fact turned out to be linked to my cross-compiling, and specifically to the fact that some things are then disabled by configure, since checking for them isn't possible -- namely named pipes & job control, the latter apparently being the root of this bug.

In my tries, I also noticed at some point that a test from run-inverts would fail, namely:

$ ! (false); echo $?
1

However, doing `export bash_cv_job_control_missing=present` before running configure allowed job control to be built in, and also made both issues go away.

As mentioned earlier, the cross-compiled bash was linked against musl. I just tried "cross-compiling" against glibc (as in, feeding configure a different triplet for --host, e.g. x86_64-pc-linux-musl, but it was a native gcc) and got similar results, only the script behaves a little differently:

before
ERROR
after
ERROR

Seems it doesn't work as expected even before/without calling shopt, then. However:

$ ! (false); echo $?
1

just the same.
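For context on the loop in the script: `<(true)` is a process substitution that produces no output, so `read` fails immediately and the loop body should never run. On a correctly built bash:

```shell
#!/usr/bin/bash
# Expected behavior of the reporter's loop in isolation: read gets EOF
# at once, the body is skipped, and only "done" is printed.
while read; do echo ERROR; done < <(true)
echo done
```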
Then I tried (regular) compiling (against glibc) but passing --disable-job-control to configure, and got:

before
ERROR
after
ERROR

So the bug occurs then as well, making things even easier to reproduce. :)

Things are a little different with inverts, but still not quite right:

$ ! (false); echo $?
0
$ (false); echo $?
0
$ false; echo $?
1

Lastly, I tried the same thing but linking against musl instead (via musl-gcc), and the script gets me:

before
after
ERROR

So again with musl, a call to shopt is needed to trigger the issue; no idea why. As for the inverts, same results as when linked against glibc:

$ ! (false); echo $?
0
$ (false); echo $?
0
$ false; echo $?
1

I believe that's all I know about this; hopefully this can be helpful.

Regards,
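For comparison, here are the statuses a correctly behaving bash produces for those three checks: `!` turns the failing subshell into success, while a bare subshell propagates its exit status.

```shell
#!/usr/bin/bash
# Reference values on a correct bash build.
! (false); s1=$?   # 0: `!` inverts the subshell's failure
(false);   s2=$?   # 1: a subshell propagates its exit status
false;     s3=$?   # 1
echo "$s1 $s2 $s3"   # prints: 0 1 1
```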
Passing script by fd on Linux and setuid scripts
Hi.

The bash reference manual mentions that /dev/fd/* files are treated specially, as file descriptors, when used in conditionals. However, they're not if they're passed as script arguments to the bash binary (e.g., `bash /dev/fd/3`). If they were, it would be possible to have a small setuid-root indirection binary start bash scripts via file descriptors, thereby eliminating the race-condition setuid-script vulnerability mentioned in Gilles's answer at http://unix.stackexchange.com/questions/364/allow-setuid-on-shell-scripts#2910. (Whether the rest of the execution would be secure with setuid is another question.) Running readlink on the actual /dev/fd/$something could then be used to set $0.

I think it would make sense. What do you think? Attached is a tentative patch that implements the functionality described above.

Best regards,
Petr Skocik

>From a4d348abfcb88bc5d6c26f6f92458a9043829b64 Mon Sep 17 00:00:00 2001
From: Petr Skocik
Date: Sun, 31 Jul 2016 22:07:10 +0200
Subject: [PATCH] passing scripts via fds

---
 shell.c | 101 ++++++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 78 insertions(+), 23 deletions(-)

diff --git a/shell.c b/shell.c
index 2fd8179..62e20a3 100644
--- a/shell.c
+++ b/shell.c
@@ -108,6 +108,9 @@
 extern int gnu_error_format;
 extern char *primary_prompt, *secondary_prompt;
 extern char *this_command_name;
 
+//readlink into mallocated memory
+char* readlink_dyn(const char* path);
+
 /* Non-zero means that this shell has already been run; i.e. you
    should call shell_reinitialize () if you need to start afresh.
 */

 int shell_initialized = 0;

@@ -1416,6 +1419,7 @@ open_shell_script (script_name)
   char sample[80];
   int sample_len;
   struct stat sb;
+  int rawfd = 0;
 
 #if defined (ARRAY_VARS)
   SHELL_VAR *funcname_v, *bash_source_v, *bash_lineno_v;
   ARRAY *funcname_a, *bash_source_a, *bash_lineno_a;
@@ -1423,29 +1427,59 @@ open_shell_script (script_name)
 
   filename = savestring (script_name);
 
-  fd = open (filename, O_RDONLY);
-  if ((fd < 0) && (errno == ENOENT) && (absolute_program (filename) == 0))
-    {
-      e = errno;
-      /* If it's not in the current directory, try looking through PATH
-         for it. */
-      path_filename = find_path_file (script_name);
-      if (path_filename)
-        {
-          free (filename);
-          filename = path_filename;
-          fd = open (filename, O_RDONLY);
-        }
-      else
-        errno = e;
-    }
+#define DEV_FD "/dev/fd/"
+#define PROC_FD "/proc/self/fd/"
+  if (0 == strncmp(filename, DEV_FD, sizeof(DEV_FD)-1)){
+      const char* ptr = filename + sizeof(DEV_FD)-1;
+      char* endptr;
+      rawfd = 1;
+      fd = strtoul(ptr, &endptr, 10);
+      if (*endptr){
+          errno = ENOENT;
+          file_error (filename);
+      }
+  }else if (0 == strncmp(filename, PROC_FD, sizeof(PROC_FD)-1)){
+      const char* ptr = filename + sizeof(PROC_FD)-1;
+      char* endptr;
+      rawfd = 1;
+      fd = strtoul(ptr, &endptr, 10);
+      if (*endptr){
+          errno = ENOENT;
+          file_error (filename);
+      }
+#undef DEV_FD
+#undef PROC_FD
+  }else {
+      fd = open (filename, O_RDONLY);
+      if ((fd < 0) && (errno == ENOENT) && (absolute_program (filename) == 0))
+        {
+          e = errno;
+          /* If it's not in the current directory, try looking through PATH
+             for it. */
+          path_filename = find_path_file (script_name);
+          if (path_filename)
+            {
+              free (filename);
+              filename = path_filename;
+              fd = open (filename, O_RDONLY);
+            }
+          else
+            errno = e;
+        }
+
+      if (fd < 0)
+        {
+          e = errno;
+          file_error (filename);
+          exit ((e == ENOENT) ? EX_NOTFOUND : EX_NOINPUT);
+        }
+  }
+
+  if (rawfd){
+      free(exec_argv0);
+      exec_argv0 = readlink_dyn(script_name);
+  }
 
-  if (fd < 0)
-    {
-      e = errno;
-      file_error (filename);
-      exit ((e == ENOENT) ?
EX_NOTFOUND : EX_NOINPUT);
-    }
 
   free (dollar_vars[0]);
   dollar_vars[0] = exec_argv0 ? savestring (exec_argv0)
                               : savestring (script_name);

@@ -1477,7 +1511,7 @@ open_shell_script (script_name)
 #endif
 
   /* Only do this with non-tty file descriptors we can seek on. */
-  if (fd_is_tty == 0 && (lseek (fd, 0L, 1) != -1))
+  if (fd_is_tty == 0 && !rawfd && (lseek (fd, 0L, 1) != -1))
     {
       /* Check to see if the `file' in `bash file' is a binary file
          according to the same tests done by execute_simple_command (),
@@ -1895,3 +1929,24 @@ run_shopt_alist ()
   shopt_alist = 0;
   shopt_ind = shopt_len = 0;
 }
+
+char* readlink_dyn(const char* path){
+    struct stat sb;
+    char *data;
+    ssize_t r;
+
+    if ( 0 > lstat(path, &sb) )
+        return NULL;
+    if (NULL == (data = malloc(sb.st_size + 1)))
+        return NULL;
+    if (0 > (r = readlink(path, data, sb.st_size + 1)))
+        goto err_out;
+    if (r > sb.st_size)
+        goto err_out;
+    data[r] = '\0';
+    return data;
+err_out:
+    free(data);
+    return NULL;
+}
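The mechanism the patch targets can be sketched from the shell side (illustrative paths only). Note that on Linux, opening /dev/fd/N simply re-opens the file behind descriptor N via procfs, so the command below already runs today; what the patch would change is making bash use the descriptor itself (and readlink it to set $0), which is what matters for the setuid case.

```shell
#!/usr/bin/bash
# Hand bash a script through an already-open file descriptor.
set -e
script=$(mktemp)
echo 'echo "hello from fd"' > "$script"
exec 3< "$script"     # open the script on fd 3
bash /dev/fd/3        # prints: hello from fd
exec 3<&-
rm -f "$script"
```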