> So that's why I was thinking of adding cp --big-long-option-name to
> the toybox install.sh, but I just went with "rm -f oldname && cp new oldname"
> in the recent cleanup instead, because I can't trust the host's cp to have
> that option _before_ toybox is installed, and we can't use the toybox
> cp to install itself when cross compiling.
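A small sketch (my example, not from the quoted message; all paths are invented for the demo) of why the destination gets removed first: if the destination is a symlink back at the binary being installed, a plain cp follows the link and tries to copy the file onto itself. The rm -f && cp form is the portable spelling; GNU cp's --remove-destination does the same thing in one command.

```shell
#!/bin/sh
# Demo (hypothetical paths): replace a symlink that points back at the
# file being installed.  A bare "cp toybox ls" would follow the link
# and try to copy toybox onto itself, which GNU cp rejects.
set -e
d=$(mktemp -d)
cd "$d"
printf 'binary\n' > toybox
ln -s toybox ls              # "ls" starts life as a symlink to toybox
rm -f ls && cp toybox ls     # portable replace; "ls" is now a real copy
# GNU-only equivalent (commented out; a host cp may lack the flag):
# cp --remove-destination toybox ls
```

After this runs, "ls" is a regular file with the same contents as "toybox", and the source was never clobbered.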
OK, thanks for the test case -- I ran it, and it helps me understand what --remove-destination does. Although I'm still wondering what is wrong with rm && cp? As you say yourself, having it in one command doesn't remove any race conditions. I guess you're saying you can get by with -f alone with toybox, because you can control whether it is writable, but that doesn't solve the problem with busybox or other multiplexer binaries, which you don't control? Did I paraphrase that right? Either way, I still don't see what's wrong with rm && cp.

I thought Aboriginal Linux was supposed to be the minimal set of things needed to build itself... and if toybox followed that philosophy, then it would also leave out whatever can be accomplished by sequential composition in the shell :) Not that I'm really arguing against adding --remove-destination -- just curious.

Honestly, this entire discussion is reminding me of a Unix deficiency I've noticed. For background, in the "cloud" world (as opposed to the embedded world), people tend to set up their base image with something like Chef, Puppet, or Ansible, which are basically horrible Ruby/Python DSLs with embedded shell snippets (and of course nobody knows how to quote correctly when shell is embedded in yet another language...).

My reaction is: why don't you just use shell scripts to configure your servers? (Others have had the same thought, e.g. https://github.com/brandonhilkert/fucking_shell_scripts , although ironically it depends on Ruby...) Well, the one good argument is that those systems are supposed to be idempotent, whereas shell is not. To be idempotent, you would basically describe a final state, without regard to the existing state -- which is not really possible with shell. In distributed systems, safe retries are essential.
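To make the "describe a final state" point concrete, here is a small sketch (my example, not from the thread; names and paths are invented) of shell steps that do converge on one final state regardless of what already exists, so interrupting and re-running is safe:

```shell
#!/bin/sh
# Idempotent sketch (hypothetical paths): every step describes a final
# state rather than a transition, so the script can be killed at any
# point and simply run again.
set -e
d=$(mktemp -d)
mkdir -p "$d/bin"                  # succeeds whether or not bin exists
: > "$d/toybox"                    # stand-in for the installed binary
for cmd in ls cp mv cat; do
  ln -sf ../toybox "$d/bin/$cmd"   # -sf replaces any existing link
done
```

Run it twice and the second pass changes nothing; contrast with plain mkdir and ln, which would fail on the second run because the directory and links already exist.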
Perhaps a more immediate example: you would want to be able to Ctrl-C your shell script at an *arbitrary* point in time and have it work correctly the second time (without resetting the state back to what it was the first time). Examples:

# mkdir can't be run twice; it fails the second time because the dir
# exists.  mkdir -p oddly conflates the behavior of ignoring existing
# dirs with creating intermediate dirs.
$ mkdir dir

# likewise rm can't be run twice; the second time it will fail because
# the file doesn't exist.  --force conflates the behavior of ignoring
# missing arguments with not prompting for non-writable files.
$ rm foo

# behavior depends on whether bar is an existing directory;
# -T / --no-target-directory fixes this, I believe.
$ cp foo bar

Anecdotally, it seems like a lot of shell script issues are caused by unexpected existing state, but in a lot of cases you don't CARE about the existing state -- you just want a final state (e.g. a bunch of symlinks to toybox). That seems to be a common thread in a lot of the situations you're describing, if I'm not mistaken. So if Unix tools had flags that made them behave in an idempotent manner, there would be less objection to using them for cloud server management. They would be more "declarative" and less imperative.

Anyway, that's a bit of a tangent... (One reason I'm interested in toybox is that I've had a longstanding plan to write my own shell, and toybox/busybox are obvious complements to a shell. It's interesting that busybox has two shells and toybox has zero; I think my design space is a little different in that I want it to be sh/bash compatible but also to have significant new functionality.)

Andy
_______________________________________________
Toybox mailing list
[email protected]
http://lists.landley.net/listinfo.cgi/toybox-landley.net
