On Thu, May 12, 2022 at 10:55 AM Rob Landley <r...@landley.net> wrote:
> On 5/11/22 15:00, enh wrote:
> > On Wed, May 11, 2022 at 7:16 AM Rob Landley <r...@landley.net> wrote:
> >>
> >> On 5/10/22 12:04, enh wrote:
> >> > right now i think the "can't bootstrap without an existing toybox
> >> > binary" is the worst mac problem. (i think there's already a thread
> >> > about how your sed skills are too much for BSD sed...)
> >>
> >> It has a SED= environment variable so you can point it at gsed on mac,
> >> but GETTING gsed on the mac is outside my expertise...
> >
> > yeah, that's the "homebrew" i was talking about. (for all i know, it
> > might actually be easier to just download and build gnu sed alone, but
> > "if you're planning on using a mac for development, you'll want
> > homebrew sooner or later" has meant i've never yet not given in and
> > installed the whole thing.)
>
> You know, if we get enough of toybox running on mac and AOSP already has
> toolchain binaries...
>
> Meh, I'm not volunteering my time to make Tim Cook richer. The FSF guys can
> "properly" support the mac the same way they did cygwin.
>
> https://www.youtube.com/watch?v=g3j9muCo4o0
>
> >> > (this morning i had them ask "does toybox tar support $TAR_OPTIONS?"
> >>
> >> $ man tar | grep TAR_OPTIONS
> >> $
> >>
> >> I don't know what that is?
> >
> > i was about to celebrate (because i'd already said to them that i
> > personally _hate_ `GREP_OPTIONS` _because_ it messes with hermetic
> > builds unless you know about it and explicitly clobber it,
>
> I squashed those with env -i :
>
> https://github.com/landley/toybox/blob/master/scripts/mkroot.sh#L5
>
> That said, it means mkroot is not supporting distcc and ccache, despite
> https://github.com/landley/toybox/blob/master/scripts/install.sh#L120
> supporting them...
>
> > and the idea of having random other commands grow similar warts doesn't
> > exactly fill me with joy) ... but then i noticed you only said "man",
> > and this is a gnu thing, so _of course_ the man page won't mention it.
> > how else could they make you use their stupid "info" crap?
>
> It wasn't in tar --help either. :P
>
> > anyway, checking whether this is a real thing the One True Way:
> >
> > $ strings `which tar` | grep OPTION
> > TAR_OPTIONS
> > cannot split TAR_OPTIONS: %s
> > [OPTION...]
> >
> > it's also described on the web:
> > https://www.gnu.org/software/tar/manual/html_section/using-tar-options.html
> >
> > (but i still think it's a bad idea, personally.)
>
> alias tar='tar $TAR_OPTIONS'
>
> >> > wrt https://android-review.googlesource.com/c/kernel/build/+/2090303
> >> > where they'd like to be able to factor out the various "reproducible
> >> > tarball please" options [like in the toybox tar tests].)
> >>
> >> It supports --owner and --group and I made it so you can specify the
> >> numeric IDs for both with the :123 syntax so you can specify a user
> >> that isn't in /etc/passwd. (Commit 690526a84ffc.)
> >
> > yeah, that's what they want to not have to keep repeating.
>
> Is the alias solution sufficient? (In theory that lets you add this
> support to any command without the command having to know...)
>
> (yeah, sounds like they're happy with the alias.)
>
> Checking the corner cases:
>
> $ alias freep='echo $POTATO'
> $ freep walrus
> walrus
> $ POTATO=42 freep walrus
> walrus
> $ POTATO=42
> $ freep walrus
> 42 walrus
>
> It's not QUITE a full replacement because the prefixed environment
> variables are set after command line options are evaluated. (Well,
> technically what's happening is they're only exported into the new
> process's space and the command line is evaluated against the parent's
> environment variable space.)
>
> And yes, I need to get this right in toysh, where "right" matches bash...
>
> In theory I could add a global "$COMMAND_OPTIONS" that automatically
> picks them up for each command name, which would get grep and tar and ls
> and rm and everything.
> In practice, that sounds horrific and is GOING to have security
> implications somehow...

exactly. on the one hand "if you're going to do any $<FOO>_OPTIONS you
really should do all of them" but on the other "omg, i don't want to have
to deal with all the fallout".

> >> Meanwhile I was hitting
> >> https://lkml.iu.edu/hypermail/linux/kernel/1002.2/02231.html regularly.
> >> Right now I'm trying to add a coldfire toolchain to mkroot and it's all
> >> https://www.denx.de/wiki/U-Boot/ColdFireNotes
> >>
> >> > Since gcc team seems to keep m68k issues in a very low priority,
> >> > these toolchains have the libgcc.a, libgcov.a and multilibs copied
> >> > from an old toolchain.
> >>
> >> Thank you Wolfgang. Thanks EVER SO MUCH. Embedded guys just stop
> >> engaging with "upstream" and keep using 10 year old kernels and
> >> toolchains because they got it to work once and don't care what the
> >> crazy people are off doing. I'm nuts for trying to get current stuff
> >> to work on the full range of theoretically supported thingies,
> >> including NATIVE COMPILING on them.
> >>
> >> Sigh.
> >
> > could be worse ... could be a _proprietary_ toolchain from a decade
> > ago. not that _that_ ever happens...
>
> Don't get me started on ARM jtag software. Either add support for your
> board and dongle to Open Obsessive Compulsive Disorder or admit you
> haven't got jtag support. (But no, that's not how they see it...)
>
> (And yes, however it goes one of the hardware guys sets it up for me and
> leaves me with dangly ribbon cables over my desk and a software package I
> didn't install/configure except maybe via rote wiki instructions, but
> when you're using stupidly expensive proprietary jtags there's always a
> finite number of licenses insufficient to the team at hand and I never
> get one and wind up standing at another engineer's desk debugging the
> problem over their shoulder.
> Of COURSE when you have 15 boards and 3 jtags nobody learns to use a
> jtag and nobody thinks to apply a jtag to the problem at hand, bit of a
> chicken and egg situation there isn't it?)
>
> >> >> (See, with aboriginal linux I was making my automated Linux From
> >> >> Scratch build work for whatever host architecture you ran it on,
> >> >> x86, arm, mips, powerpc, sh4, sparc, and so on. 95% of what autoconf
> >> >> does boils down to 1) I was unaware of all the symbols
> >> >> "cc -E -dM - < /dev/null" shows you, 2) #if __has_include(<file>)
> >> >> hadn't been invented yet. But unfortunately, if you snapshot the
> >> >> output it tries to use the arm answers on sparc, and you have to
> >> >> pre-prepare versions for each target architecture in which case you
> >> >> might as well just ship binaries? So I put in the work to make it
> >> >> actually perform its stupid dance and get the right answers, so that
> >> >> when I added m68k or s390x it would mostly Just Work. Not having
> >> >> autoconf at all is, of course, the much better option...)
> >> >
> >> > aka "the only winning move is not to play" :-)
> >> >
> >> > +1 to that!
> >>
> >> I had a rant years ago about how configure/make/install needed to be
> >> replaced the way git replaced CVS. Here's a 2-part version, I'm sure I
> >> did better writeups but can't find them....
> >>
> >> http://lists.landley.net/pipermail/aboriginal-landley.net/2011-June/000859.html
> >> http://lists.landley.net/pipermail/aboriginal-landley.net/2011-June/000860.html
> >>
> >> Unfortunately, all I'd seen when I wrote that was a lot of svn and
> >> perforce, and not a real proper "everybody moves to the new thing and
> >> universally agrees it's better" complete rethink the way git finally
> >> rendered cvs properly irrelevant. And sadly, that's STILL the case.
> >> (Otherwise we wouldn't have this cmake/ninja/kaiju cycling every 5
> >> years with the kernel still using gmake.)
> > i think the trouble is that no-one's found the "big thing" here that
> > git was able to offer. i don't think we're in the git/bk/bzr/hg/...
> > phase, i think we're still in the cvs/svn phase.
>
> +1 to that!
>
> > version control also had the advantage that you could use the same one
> > for all languages; every individual language community seems to have a
> > strong preference for "their" build system, even if/though it's
> > useless for everyone else.
> >
> > i wouldn't hold my breath for this getting any better before we're all
> > retired.
>
> A big advantage of scripting languages is you don't build. You run the
> source code, there's no "make" step.
>
> Sigh, back before Eric Raymond succumbed to Nobel Disease (he didn't even
> need to win the award, but neither did Bill Joy, Richard Stallman,
> Richard Dawkins...) we were working on a paper about the two local peaks
> in language design space, and how C was kind of the "static everything,
> implementation completely transparent" hill and scripting languages
> covered the "dynamic everything, implementation completely hidden" hill,
> and in between you had a no-man's-land of languages that tried to
> half-ass it and leaked implementation details up through thick layers of
> abstraction.
>
> C exposes all the implementation details and gives the programmer
> complete manual control of everything (including resource allocation),
> which is a tedious but viable way of working. Even stuff like alignment
> and endianness are ok as long as you avoid using libraries that make
> assumptions: the programmer gets to make all their own assumptions, and
> when it breaks you get to keep the pieces and weld them back together in
> a new shape.
>
> In something like Python, everything is reference counted and you can
> call a method on an object that isn't there, catch the exception, ADD THE
> METHOD (modifying the existing object), and then retry.
> Your container type is based on a dictionary, which might be a hash
> table under the covers, or might be a tree, or even a sorted resizeable
> array it's binary searching to look stuff up in... and it doesn't MATTER
> because it's completely opaque and just works. They could change HOW it
> works under the covers every third release and it's not your problem,
> the implementation details never leak into the programmer's awareness
> except as performance issues, and that you can just throw hardware at.
>
> It was a long paper, we wrote at least 2/3 of it before our working
> relationship broke down circa 2008. I'm still kind of sad we didn't get
> to finish it...
>
> Anyway, the point is people working in
> python/ruby/php/javascript/lua/perl don't need a make replacement,
> except for any native code they're doing.
>
> > (i'll let the reader decide for themselves whether rpi pico
> > introducing embedded folks to cmake is a positive step or not :-) )
> >
> >> Rob
> >
> > P.S. since i had a few minutes before my next meeting, i gave in and
> > built gnu sed from source ... it took literally _minutes_ to run
> > configure on this M1 mac, and then a couple of _seconds_ to actually
> > build. so so wrong...
>
> I know!
>
> https://landley.net/notes-2009.html#14-10-2009
>
> Back under Aboriginal Linux, I was running a build system under qemu
> that used distcc to call out to the cross compiler running on the host
> (through the virtual 10.0.2.2->host 127.0.0.1 thing), which moved the
> heavy lifting of compilation outside the emulator and let me do about a
> -j 3 build. (The QEMU system would preprocess the file, stream out the
> resulting expanded .c, read back in the .o file, and then link it all at
> the end. I was looking at using tinycc's preprocessor instead of gcc's
> because that might let me do more like -j 5 builds. QEMU used a single
> host processor so you didn't usefully get SMP within the VM.)
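[For readers unfamiliar with the arrangement described above, it amounts
to something like the following sketch. This is illustrative only: the
daemon flags are standard distcc/distccd options, but the toolchain name
`armv5l-cc` and the job counts are placeholders, not the actual Aboriginal
Linux scripts.]

```shell
# On the host: serve the cross compiler over loopback only. QEMU's
# user-mode networking exposes the host's 127.0.0.1 to the guest as
# 10.0.2.2, so nothing needs to be reachable from a real network.
# (Assumes the cross toolchain is already on distccd's PATH.)
distccd --daemon --listen 127.0.0.1 --allow 127.0.0.1/32 --jobs 3

# Inside the QEMU guest: point distcc at the host. Preprocessing and
# linking still happen in the (slow, single-CPU) emulated system, but
# the actual compiles run natively on the host, so -j 3 is useful even
# though the guest only has one processor.
export DISTCC_HOSTS=10.0.2.2
make -j 3 CC="distcc armv5l-cc"
```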
>
> This meant the actual COMPILE part was reasonably snappy, but the
> configure stage could literally take 99% of the build time. So what I did
> was statically link the busybox instance that was providing most of the
> command line utilities, which sped up ./configure by 20%.
>
> (Part of this was an artifact of how QEMU works: it translated a page at
> a time to native code, with a cache of translated code pages. Every time
> an executable page was modified, the cached translated copy got deleted
> and would be re-translated when it tried to execute it. Doing the dynamic
> linking fixups not only deleted the translated code pages, but it reduced
> the amount of sharing between instances because the shared pages got
> copy-on-write when they were modified. These days they collate more stuff
> into PLT/GOT tables but that just partially mitigates the damage...)
>
> But yes, autoconf is terrible, it doesn't parallelize like the rest of
> the build does, 90% of the questions it asks can be answered by compiler
> #defines or are just TOO STUPID TO ASK IN THE FIRST PLACE:
>
> https://landley.net/notes-2009.html#02-05-2009
>
> And then of course, half of cross compiling is best described as "lying
> to autoconf". (It asks questions about the HOST and uses them for the
> TARGET.)
>
> Rob
_______________________________________________
Toybox mailing list
Toybox@lists.landley.net
http://lists.landley.net/listinfo.cgi/toybox-landley.net