Re: [Celinux-dev] Invitation and RFC: Linux Plumbers Device Tree track proposed

2015-04-11 Thread Rob Landley
On Sat, Apr 11, 2015 at 2:20 PM, Rowand, Frank
frank.row...@sonymobile.com wrote:
 In recent years there have been proposed tools to aid in the creation of valid
 device trees and in debugging device tree issues.  An example of this is the
 various approaches proposed (with source code provided) to validate device tree
 source against valid bindings.  As of today, device tree related tools,
 techniques, and debugging infrastructure have not progressed very far.  I have
 submitted a device tree related proposal for the Linux Plumbers 2015 conference
 to spur action and innovation in such tools, techniques, and debugging
 infrastructure.

 The current title of the track is "Device Tree Tools, Validation, and
 Troubleshooting".  The proposal is located at


 http://wiki.linuxplumbersconf.org/2015:device_tree_tools_validation_and_trouble_shooting

 I am looking for several things at the moment:

1) Suggestions of additional topics to be discussed.

2) Emails or other messages expressing an interest in attending the
   device tree track.

3) Commitments to attend the device tree track (the conference committee
   is looking at attendee interest and commitments as part of the process
   of accepting the device tree track).

4) Identifying additional people who should attend the device tree track.

 The desired outcome of the device tree track is to encourage the future
 development of tools, process, etc to make device tree related development,
 test, review and system administration more efficient, faster, easier, more
 robust, and to improve troubleshooting and debugging facilities.  Some
 examples of areas of interest could include:
- make it easier to create correct device tree source files
- support for debugging incorrect device tree source files
- create a kernel that correctly boots one or more specific device trees
  (eg a kernel configured to include the proper drivers and subsystems)
- create drivers that properly work for a device tree binding definition
- create drivers that support detecting errors in the related node(s) in
  a device tree

 The wiki page lists additional areas of interest.

Is there a device tree porting HOWTO anywhere? If I have a board
that's using explicit C initialization, and I want to convert it over
to device tree, step by step what do I do?

If I'm writing a new board support, what device tree bits do I need to
get a shell prompt on a serial port running out of initramfs?
(Physical memory, interrupt controller, timer to drive the scheduler,
serial chip...)
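
(For concreteness, a rough sketch of the minimum node set such a HOWTO
would cover -- memory, interrupt controller, timer, UART.  Everything
here except the ns16550a binding is made up: placeholder addresses,
placeholder compatible strings, not a real board.  Compile with dtc:)

cat > minimal.dts << 'EOF'
/dts-v1/;
/ {
	model = "Hypothetical Board";
	compatible = "example,board";
	#address-cells = <1>;
	#size-cells = <1>;
	interrupt-parent = <&intc>;

	/* physical memory: base address and size */
	memory@80000000 {
		device_type = "memory";
		reg = <0x80000000 0x10000000>;
	};

	chosen {
		bootargs = "console=ttyS0,115200 rdinit=/init";
	};

	intc: interrupt-controller@10000000 {
		compatible = "example,intc";
		interrupt-controller;
		#interrupt-cells = <1>;
		reg = <0x10000000 0x1000>;
	};

	/* timer to drive the scheduler */
	timer@10001000 {
		compatible = "example,timer";
		reg = <0x10001000 0x1000>;
		interrupts = <1>;
	};

	/* 16550-style serial chip for the console */
	serial@10002000 {
		compatible = "ns16550a";
		reg = <0x10002000 0x100>;
		interrupts = <2>;
		clock-frequency = <1843200>;
	};
};
EOF
dtc -I dts -O dtb -o minimal.dtb minimal.dts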

There's a bunch of device tree reference material out there, but no
tutorial material at all, that I can find...

Rob


Re: embedding dtb file into kernel

2015-02-12 Thread Rob Landley
On 02/12/2015 06:56 PM, Tim Bird wrote:
 
 
 On 02/12/2015 02:30 PM, K Richard Pixley wrote:
 On 2/12/15 14:01 , Tim Bird wrote:
 On 02/12/2015 11:33 AM, K Richard Pixley wrote:
 I'm having trouble figuring out how to embed a dtb file into my kernel.
 I'm thinking that there should be a standard, architecture independent
 facility for this akin to initramfs, yes?

 Could someone please either point me to the standard facility, relevant
 doc, a currently building board that uses the standard facility, or
 outline what needs to be done to do this with a new board?

 If it matters, (I can't think why it would), I'm working with powerpc on
 a 3.10 kernel.  But if there are better facilities in other versions I'd
 appreciate hearing about that too.
 The normal method is just to cat the two files together, like so:
   $ cat zImage filename.dtb > zImage_w_dtb

 See https://community.freescale.com/thread/315543 for one example, on ARM.
 I'm not sure what the status is for appended DTBs on powerpc, but it's
 easy enough you can just try it and see what happens.
   -- Tim

 Thanks!

 How do I tell the kernel where to find that dtb?  Is there a relevant 
 config option?
 
 Usually you make the dtb from sources in the kernel.
 I don't know how it works on powerpc, but on arm, the .dts
 files are located in arch/arm/boot/dts, and you would make
 the dtb for the corresponding foo.dts source
 by typing:
 $ make foo.dtb

It's probably somewhere in:

https://www.kernel.org/doc/Documentation/devicetree/booting-without-of.txt
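
For the ARM appended-DTB case Tim describes, the whole flow is roughly
the following (CONFIG_ARM_APPENDED_DTB is the real ARM config option,
paths are as of roughly that kernel era; whether powerpc honors an
appended blob is the open question above):

  # build the dtb from the in-tree source arch/arm/boot/dts/foo.dts
  $ make foo.dtb
  # append it; the decompressor only looks for a dtb stuck on its own
  # end when built with CONFIG_ARM_APPENDED_DTB=y
  $ cat arch/arm/boot/zImage arch/arm/boot/dts/foo.dtb > zImage_w_dtb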

Rob


Re: Why is the deferred initcall patch not mainline?

2014-10-24 Thread Rob Landley
On 10/23/14 19:36, Nicolas Pitre wrote:
 On Thu, 23 Oct 2014, Rob Landley wrote:
 3) You, too, conveniently avoided to define the initial problem so far.
That makes for rather sterile conversations about alternative 
solutions that could score higher on the mainline acceptance scale.

With modules, you can already defer large portions of kernel-side system
bringup until userspace is ready for them. With static linking, you can't.

This patch series sounds like it lets static drivers hold off their
initialization until userspace sends them an insmod-equivalent event
through sysfs, possibly with associated arguments since the module
codepath already implements that so exposing it through the new
mechanism in the static linking case would be trivial.
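
(For reference, the out-of-tree deferred_initcalls patch this thread is
about works along those lines: drivers opt in with a deferred initcall
macro, and userspace fires them all with one read.  The /proc name below
is from the elinux.org patch, not anything in mainline:)

  # run every deferred initcall, then free their .init memory
  $ cat /proc/deferred_initcalls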

Seems conceptually fairly straightforward to me, but I'm just guessing
since nobody's yet linked to the patches during this thread (that I've
noticed).

 In some cases, the system may want to defer initialization of some drivers
 until explicit action through the user interface.  So the trigger may not be
 called until well after boot is completed.

 In that case the trigger for initializing those drivers should be the 
 first time they're accessed from user space.

 Which gets us back to one of the big reasons <strike>systemd</strike>
 devfsd failed years ago: you have to probe the hardware in order to know
 which /dev nodes to create, so you can't have accessing the /dev node
 probe the hardware. (There's no /dev node for a usb controller...)
 
 There is /sys/bus/usb/devices that could be accessed in order to trigger 
 the initial setup and probe.  It is most likely that libusb does that, 
 but this could be made to work with a simple 'cat' or 'touch' invocation 
 as well.

Please let me know which devices to trigger to launch an encrypted
ramdisk driver that has nontrivial setup because it needs to generate
keys (and collect enough entropy to do so). Or how about a driver that
programs a set of gpio pins to a specific behavior, obviously that's
triggered by examining the hardware.

A module can produce multiple /dev nodes from one piece of hardware, a
piece of hardware can produce no dev nodes (speaking of usb, the actual
bus-level driver), dev nodes may not have any associated hardware but
still require setup (/dev/urandom if you care about the quality of the
entropy pool)...

This is why devfs didn't work. You're trying to do this at the wrong
level. If you want to defer a module's init, doing so at the _module_
level is the only coherent way to do it.

 That could be the very first time libusb or similar tries to 
 enumerate available USB devices for example.  No special interface 
 needed.

 So now you're requiring libusb to enumerate usb devices, when before this
 you could just reach out and open /dev/ttyUSB0 and it would be there.
 
 You can't just reach out with the deferred initcall scheme either, can
 you?

You already can do this with modules. Just don't insmod until you're
ready.

Right now the implementation ties together "the code is in the kernel"
with "the code starts running", so you can't both statically link the
module and control when it starts doing stuff. That really _seems_ like
it's just an implementation detail: decoupling them so the code is in
the kernel but doesn't call its init function until userspace tells it
to does not sound like a huge conceptual stretch.

Is there an actual reason to invent a whole new unrelated thing instead?

 This is an embedded solution?

 I'm suggesting that they no longer prevent user space from executing
 earlier.  Why would you then still want an explicit trigger from user
 space?
 Because only the user space knows when it is now OK to initialize those
 drivers, and begin using CPU cycles on them.

 So what?  That is still not a good answer.

 Why?

 I believe Tim's proposal was to take a category of existing device
 probing, one already done on a background thread, and wait to start it
 until userspace says go. That's about as nonintrusive a change as you get.
 
 You might still be able to do better.

We have a mechanism available in one context. Would you rather make that
mechanism available in another context, or design a whole new mechanism
from scratch?

 If you really want to be non intrusive, you could e.g. make those 
 background threads into SIGSTOP and let user space SIGCONT them as it 
 sees fit.  No new special interfaces needed.

We have an existing module namespace, and existing mechanisms that use
it to control this sort of thing. Are you suggesting a lookup mechanism
that says "here's the thread that would be initializing this module if
we hadn't started the thread SIGSTOPped"? (With each one in its own thread
so you have the same level of granularity the existing mechanism provides?)

 You're talking about requiring weird arbitrary things to have side effects.
 
 Like if stalling arbitrary initcalls wouldn't have side effects?

You're arguing that modules, as they exist today, couldn't

Re: Why is the deferred initcall patch not mainline?

2014-10-23 Thread Rob Landley


On 10/23/14 12:21, Bird, Tim wrote:
 On Wednesday, October 22, 2014 8:49 AM, Nicolas Pitre [n...@fluxnic.net] 
 wrote:
 On Wed, 22 Oct 2014, Rob Landley wrote:
 Otherwise the standard hotplug notification mechanism is already
 available.
 
 I'm not sure why this attention to reading the status.  The salient feature
 here is that the initializations are deferred until user space tells the
 kernel to proceed.  It's the initiation of the trigger from user-space that
 matters.  The whole purpose of this feature is to defer some driver
 initializations until the product can get into a state where it is already
 ready to perform its primary function.  Only user space knows when that is.
 
 There seems to be a desire to have an automatic mechanism for triggering
 the deferred initializations.  I'm OK with this, as long as there's some
 reasonable use case for it.  There are lots of possible trigger mechanisms,
 including just a simple timer, but I think it's important that the primary
 use case of 'trigger-when-user-space-says-to' is still supported.

The patches were referenced but not (re-?)posted. People were talking
about waiting for the real root filesystem to show up, which strikes me
as the wrong approach. Glad to hear the patch series is taking a better one.

 This code is really intended for a very specialized kernel configuration,
 where all the modules are statically linked, and indeed module loading
 itself is turned off.  I think that's a minority of Linux deployments out
 there.

Yeah, but not as rare as you're implying. That's how I build most of my
systems, for example.

Modules mean you need bits of the kernel to live in the root filesystem
image (and to match it exactly due to stable-api-nonsense.txt), which
complicates both build and upgrade. Unloading modules has never really
been properly supported, so there's no actual size or complexity
advantage to modules: you need it once and the resource is consumed
until next reboot. And of course there's security fun (spraying it down
with cryptography makes it awkward more than safe, and doesn't
change that you now have a multimode kernel that sometimes does one
thing and sometimes does another).

Not Going There with modules is a valid response for embedded systems if
I want to know what I'm deploying.

 This configuration implies some other attributes, like configuration for
 very small size and/or very fast boot, where KALLSYMS may not be present,
 and other kernel features may not be available as well.

A new feature can have requirements. Not every existing deployment can
take advantage of any given new feature anyway. (Your _biggest_ blocker
will be that they're using a ${VENDOR:-broadcom} BSP that's stuck on
2.6.32 in 2014 and upgrading to a kernel version less than 5 years old
will never happen as long as you source hardware from vendors that fork
software rather than getting support upstream.)

 Indeed, in the smallest systems /proc or /sys may not
 be there, so an alternative (maybe a sysctl or even a new syscall) might be
 appropriate. 

A) Those don't interest me. As far as I'm concerned, they're not Linux.

B) If you propose a new syscall for this, it will never be merged. The
mechanism they implemented for this sort of thing is sysfs and hotplug.

 Quite frankly, the hacky way this is often done is to make stuff like this a
 one-time side effect of a rarely called syscall (like sync).  Please note I'm
 not recommending this for mainline, just pointing out there are interesting
 ways that embedded developers just make the existing code work for their
 weird cases.
 
 Maybe there are some use cases for doing deferred initializations,
 particularly automatically, for systems that do have modules turned on
 (i.e. for modules that are, in that case, still statically linked to the
 kernel for whatever reason).  I would welcome some discussion of these, to
 select an appropriate trigger mechanism for those cases.  But we should not
 let the primary purpose of this feature get lost in that discussion.

I thought it was common to defer at least some device probing until the
/dev node got opened. Which is a chicken and egg problem with regards to
the dev node showing up so you _can_ open them, which screwed up devfs
to the point of unworkability, and the answer to that was sysfs. So
having sysfs trigger deferred init from userspace makes perfect sense,
doing it that way means history is on your side and the kernel guys are
more likely to approve because it smells like what they've already done.

   -- Tim

Rob


Re: Why is the deferred initcall patch not mainline?

2014-10-23 Thread Rob Landley
On 10/23/14 14:05, Nicolas Pitre wrote:
 On Thu, 23 Oct 2014, Alexandre Belloni wrote:
 
 On 23/10/2014 at 13:56:44 -0400, Nicolas Pitre wrote :
 On Thu, 23 Oct 2014, Bird, Tim wrote:
 Why a trigger?  I'm suggesting no trigger at all is needed.

 Let all initcalls start initializing whenever they can.  Simply that 
 they shouldn't prevent user space from running early.

 Because initcalls are running in parallel, then they must be using 
 separate kernel threads.  It may be possible to adjust their priority so 
 if one of them is actually using a lot of CPU cycles then it will run 
 only when all the other threads (including user space) are no longer 
 running.


 You probably can't do that without introducing race conditions. A number
 of userspace libraries and scripts are actually expecting init and probe
 to be synchronous.
 
 They already have to cope with the fact that most things can be 
 available through not-yet-loaded modules, or may never be there at all. 
 If not then they should be fixed.
 
 And if you do rely on such a feature for your small embedded 
 system then you won't have that many libs and scripts to fix.

There are userspace libraries distinguishing between init and probe?
I.E. treating them as two separate things already?

So how were they accessing them as two separate things before this patch
set?

 I will refer to the async probe discussion and the
 following thread:

 http://thread.gmane.org/gmane.linux.kernel/1781529
 
 I still don't think that is a good idea at all.  This async probe 
 concept requires a trigger from user space and that opens many cans of 
 worms as user space now has to be aware of specific kernel driver 
 modules, their ordering dependencies, etc.
 
 My point is simply not to defer any initialization at all.  This way you 
 don't have to select which module or initcall to send a trigger for 
 later on.

Why would this be hard?

for i in $(find /sys/module -name initstate)
do
  [ "$(cat $i)" != live ] && echo kick > $i
done

And I'm confused that you're concerned about init order so your solution
is to do nothing, thereby preserving the existing init order which could
not _possibly_ be exposed verbatim to userspace...

 Once again, what is the actual problem you want to solve?  If it is 
 about making sure user space can execute ASAP then _that_ should be the 
 topic, not figuring out how to implement a particular solution.
 
 Anyway, your userspace will have to have a way to know what has been
 initialized.
 
 Hotplug notifications via dbus.

Wait, we need a _third_ mechanism for hotplug notifications now? (The
/proc/sys/kernel/hotplug helper, netlink, and you want another one?)

 On my side, I was also using that mechanism to delay the network stack 
 init but I still want to know when my dhcp client can start for 
 example.
 
 Ditto.  And not only do you want to know when the network stack is 
 initialized, but you also need to wait for a link to be established 
 before DHCP can work.

Um, doesn't the existing hotplug mechanism _already_ give us
notification that eth0 and similar showed up? (Pretty sure I hit that
while poking at mdev, although it was a while ago...)
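
(As a sketch of what I mean -- a minimal /proc/sys/kernel/hotplug helper
that kicks off a dhcp client when a network interface appears.  The
ACTION/SUBSYSTEM/INTERFACE variables are the standard uevent environment
the kernel hands the helper; the udhcpc invocation is just an example:)

  #!/bin/sh
  # installed via: echo /sbin/hotplug > /proc/sys/kernel/hotplug
  if [ "$SUBSYSTEM" = net ] && [ "$ACTION" = add ]
  then
    # a new network interface (eth0 and friends) just showed up
    udhcpc -i "$INTERFACE" &
  fi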

Increasingly confused,

Rob


Re: Why is the deferred initcall patch not mainline?

2014-10-23 Thread Rob Landley


On 10/23/14 15:50, Nicolas Pitre wrote:
 On Thu, 23 Oct 2014, Bird, Tim wrote:
 
 On Thursday, October 23, 2014 12:05 PM, Nicolas Pitre wrote:

 On Thu, 23 Oct 2014, Alexandre Belloni wrote:

 On 23/10/2014 at 13:56:44 -0400, Nicolas Pitre wrote :
 On Thu, 23 Oct 2014, Bird, Tim wrote:

 I'm not sure why this attention to reading the status.  The salient feature
 here is that the initializations are deferred until user space tells the
 kernel to proceed.  It's the initiation of the trigger from user-space that
 matters.  The whole purpose of this feature is to defer some driver
 initializations until the product can get into a state where it is already
 ready to perform its primary function.  Only user space knows when that is.

 This is still a rather restrictive view of the problem IMHO.

 Let's step back a bit. Your concern is that some initcalls are taking
 too long and preventing user space from executing early, right?
 Well,  not exactly.

 That is not the exact problem we're trying to solve, although it is close.
 The problem is not that user space doesn't start early enough, per se,
 it's that there are a set of drivers statically linked to the kernel that are
 not needed until after (possibly well after) user space starts.
 Any cycles whatsoever being spent on those drivers (either in their
 initialization routines, or in processing them or scheduling them)
 impairs the primary function of the device.  On a very old presentation
 I gave on this, the use case I gave was getting a picture of a baby's smile.
 USB drivers are NOT needed for this, but they *are* needed for full
 product operation.
 
 As I suggested earlier, those cycles spent on those drivers may be 
 deferred to a moment when the CPU has nothing else to do anyway by 
 giving a lower priority to the threads handling them.

Unless you're using realtime priorities your kernel will spend about 5%
of its time servicing the lowest priority threads no matter what you do,
to avoid priority inversion lockups of the kind that cost us a Mars
probe back in the 90's.

http://research.microsoft.com/en-us/um/people/mbj/Mars_Pathfinder/Authoritative_Account.html

Doing hardware probing at low priorities can cause really _fun_ latency
spikes in the system as something grabs a lock and then sleeps. (And
doing this at realtime scheduling priorities, where it won't do that,
translates those latency spikes into the aforementioned hard lockup, so
it's not actually a solution per se.)

Trying to fix this in the general case is the priority inheritance
problem, and last I heard was really hard. Maybe it's been fixed in the
past few years and I hadn't noticed. (The rise of SMP made it a less
pressing issue, but system bringup is its own little world.)

The reliable fix to priority inversion is to let low priority jobs still
get a decent crack at the CPU so clogs clear themselves naturally. And
this means that scheduling it down as far as it goes does _not_ simply
make low priority jobs go away.

 In some cases, the system may want to defer initialization of some drivers
 until explicit action through the user interface.  So the trigger may not be
 called until well after boot is completed.
 
 In that case the trigger for initializing those drivers should be the 
 first time they're accessed from user space.

Which gets us back to one of the big reasons <strike>systemd</strike>
devfsd failed years ago: you have to probe the hardware in order to know
which /dev nodes to create, so you can't have accessing the /dev node
probe the hardware. (There's no /dev node for a usb controller...)

 That could be the very
 first time libusb or similar tries to enumerate available USB devices 
 for example.  No special interface needed.

So now you're requiring libusb to enumerate usb devices, when before this
you could just reach out and open /dev/ttyUSB0 and it would be there.

This is an embedded solution?

 I'm suggesting that they no longer prevent user space from executing
 earlier.  Why would you then still want an explicit trigger from user
 space?
 Because only the user space knows when it is now OK to initialize those
 drivers, and begin using CPU cycles on them.
 
 So what?  That is still not a good answer.

Why?

I believe Tim's proposal was to take a category of existing device
probing, one already done on a background thread, and wait to start it
until userspace says go. That's about as nonintrusive a change as you get.

You're talking about requiring weird arbitrary things to have side effects.

 User space shouldn't have to care as long as it has all the CPU cycles 
 it wants in priority.

That's not how scheduling works. The realtime people have been trying to
make scheduling work that way for _years_ and it's still a flaming pain
to use their stuff without hard lockups and weird inexplicable dropouts.

 But as soon as user space relinquishes the CPU 
 then there is no reason why driver initialization couldn't take over 
 until user space is made runnable again.

Re: Why is the deferred initcall patch not mainline?

2014-10-22 Thread Rob Landley


On 10/21/14 14:58, Nicolas Pitre wrote:
 On Tue, 21 Oct 2014, Bird, Tim wrote:
 
 I'm going to respond to several comments in this one message (sorry for the 
 likely confusion)

 On Tuesday, October 21, 2014 9:31 AM, Nicolas Pitre [n...@fluxnic.net] wrote:

 On Tue, 21 Oct 2014, Grant Likely wrote:

 On Sat, Oct 18, 2014 at 9:11 AM, Bird, Tim tim.b...@sonymobile.com wrote:
 The answer is pretty easy, I think.  I tried to mainline it once but failed,
 and didn't really try again.  If it is being found useful, we should try to
 mainline it again, this time with more persistence.  The reason it got
 rejected before IIRC was that you can accomplish a similar thing with
 modules, with no changes to the kernel.  But that doesn't cover the case
 where the loadable modules feature of the kernel is turned off, which is
 common in very small systems.

 It is a rather clumsy approach though since it requires changes to
 modules and it makes the configuration static per build. Could it
 instead be done by the kernel accepting a list of initcalls that
 should be deferred? It would depend I suppose on the cost of finding
 the initcalls to defer at boot time.

 Yeah, I'm not a big fan of having to change kernel code in order to
 use the feature.  I am quite intrigued by Geert Uytterhoeven's idea
 to add a 'D' option to the config system, so that the record of which
 modules to defer could be stored there.  This is much better than
 hand-altering code.  I don't know how difficult this would be to add
 to the kbuild system, but the mechanism for altering the macro would
 be, IMHO, very straightforward.
 
 Straight forward but IMHO rather suboptimal. Sure it might be good 
 enough if all you want is to ship products out the door, but for 
 mainline something better should be done.
 
 This patch predated Arjan Van de Ven's fastboot work.  I don't
 know if some of his parallelization (asynchronous module loading), and
 optimizations for USB loading made things substantially better than this.
 The USB spec makes it impossible to avoid a certain amount of delay
 in probing the USB busses.

 USB was the main culprit, but we sometimes deferred other modules, if they
 were not in the fastpath for taking a picture. Sony cameras had a goal of
 booting in .5 seconds, but I think the best we ever achieved was about 1.1
 seconds, using deferred initcalls and a variety of other techniques.
 
 Some initcalls can be executed in parallel, but they currently all have 
 to complete before user space is started.  It should be possible to 
 still do the parallel initcall thing, and let user space run before they 
 are done as well.  Only waiting for the root fs to be available should 
 be sufficient.  That would be completely generic, and help embedded as 
 well as desktop systems.

What would actually be nice is if initramfs could read something out of
/proc or /sys to check the status of initcalls. (Or maybe get
notification through the hotplug netlink mechanism.)

Since initramfs is _already_ up really early, before needing any
particular drivers and way before the real root filesystem, we can
trivially punt this sort of synchronization to userspace if userspace
can just get the information about when kernel deferred processing is done.

Possibly we already have this: /sys/module has directories for all the
kernel modules including the ones built static, so possibly userspace
can just wait for /sys/module/zlib_deflate/initstate to say "live". It
would just be nice to have a way to notice that's happened without
spinning reading a file.
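
(Lacking such a notification, a sketch of the brute force version from
initramfs -- polling, because sysfs attributes don't reliably generate
inotify events, and the module name is just an example:)

  #!/bin/sh
  # wait for a statically linked "module" to finish deferred init
  while [ "$(cat /sys/module/zlib_deflate/initstate 2>/dev/null)" != live ]
  do
    sleep .1
  done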

Rob


Re: Boot time: Initial main memory initialization optimizations?

2014-05-05 Thread Rob Landley
On 05/05/14 10:40, Dirk Behme wrote:
 Hi,
 
 regarding boot time optimization, on an embedded ARM Cortex-A9 based
 system with 512MB or 1GB main memory, we found that initializing this
 main memory takes a somewhat large amount of time.
 
 Initializing 512MB takes >= ~100ms, the additional 512MB on the 1GB takes
 >= ~100ms additionally, too. So in sum >= ~200ms for 1GB.
 
 Having a short look at this, it looks like most of the time is spent in
 arch/arm/mm/init.c in bootmem_init()/arm_bootmem_init()/arm_bootmem_free().
 
 Has anybody already looked into this if there are any optimizations
 possible? Maybe even some hacks, if the main memory size (512MB/1GB) is
 always known? Any pointers?

There were a couple of longish threads last year. Random links into the
middle of 'em:

http://www.spinics.net/lists/linux-mm/msg54027.html
http://lkml.iu.edu//hypermail/linux/kernel/1306.3/01915.html

 I'm looking for reducing (a) the overall init time and maybe (b) the
 dependency on the memory size.

Basically they were deferring init by abusing the hotplug mechanism, so
the system started with less memory and a background thread added more
of it.
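
(The shape of that trick, heavily hedged: the sysfs probe interface only
exists on architectures with CONFIG_ARCH_MEMORY_PROBE -- which ARM wasn't
at the time -- and the address below is made up:)

  # boot with mem=512M on the kernel command line, then later tell the
  # kernel about the rest of the RAM...
  echo 0x20000000 > /sys/devices/system/memory/probe
  # ...and online whatever memory blocks showed up
  for m in /sys/devices/system/memory/memory*/state
  do
    echo online > $m 2>/dev/null
  done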

Rob


New aboriginal linux release with 3.14 kernel.

2014-05-04 Thread Rob Landley
It's reasonably useful standalone at this point, so I thought you might
want to know:

http://landley.net/aboriginal/about.html

It's a tiny Linux system (9 packages total but they boil down to
busybox, uClibc, and the last gplv2 release of gcc) all cross compiled
to several different targets (arm, mips, powerpc, sparc, x86, sh4, etc)
and packaged up to run under qemu (tested with qemu 2.0).

You basically wget a tarball, extract it, ./run-emulator.sh, and
you've got a shell prompt in an emulated system. (Type exit when done.)
There are also ./dev-environment.sh and ./native-build.sh scripts
that launch qemu in more complicated ways; the about page explains those.
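
(In shell terms, with an illustrative tarball name -- see the download
page for the real filenames:)

  $ wget $MIRROR/system-image-armv5l.tar.bz2   # $MIRROR = download page
  $ tar xvjf system-image-armv5l.tar.bz2 && cd system-image-armv5l
  $ ./run-emulator.sh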

I find it useful for cross platform regression testing (does new release
of package X work on target Y?), and also for replacing cross compiling
with native compiling under emulation. (One of those 9 packages is
distcc so it can call out to the cross compiler while still doing native
compiling; this speeds up the non-autoconf bits of the build by a factor
of 7.)
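
(Roughly how that's wired up, with illustrative names: distccd on the
host serves the cross compiler, and 10.0.2.2 is QEMU's user-mode
networking alias for the host.  The compiler name the guest invokes has
to resolve to the cross compiler on the host side:)

  # on the host: serve the toolchain to localhost (qemu's slirp
  # connections arrive from loopback)
  $ distccd --daemon --listen 127.0.0.1 --allow 127.0.0.1
  # inside the emulated system: send compile jobs back to the host
  $ export DISTCC_HOSTS=10.0.2.2
  $ make CC="distcc armv5l-cc"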

My smoketest for new releases is building static strace and dropbear
binaries for each target (you can download those from
http://landley.net/aboriginal/bin even if you don't care about the rest
of the project, or reproduce it yourself with the automated build-image
files in
http://landley.net/aboriginal/control-images/downloads/binaries).

Step 1 of doing real work is often to build Linux From Scratch under it,
and then play further in that chroot. The lfs-bootstrap.hdc build-image
might help, and the about page tells you how to use it.

Years ago a friend and I did a GIANT PRESENTATION about all of this.
It's a few years out of date but still covers the theory pretty well:
https://speakerdeck.com/mirell/developing-for-non-x86-targets-using-qemu

There's a mailing list if you've got questions.

Rob


Re: [JOB] ARM Embedded Developer - 100% Telecommute

2013-09-09 Thread Rob Landley

On 09/09/2013 09:11:44 PM, Rob Landley wrote:

On 09/09/2013 06:04:49 PM, OSJ wrote:

I have a client looking for a remote ARM Embedded Developer with the
following skills:


The balsa email client is kind of annoying. Removing an email from the  
cc: list doesn't actually take effect until you click somewhere else to  
change focus. I really need to switch to a less broken one.


Sorry for spamming the list,

Rob


Re: How to use DEBUG macros in compressed/head.S

2013-01-16 Thread Rob Landley

On 01/16/2013 02:17:39 AM, zhaoyilong wrote:

When I enable the DEBUG macro at the front of the file
arch/arm/boot/compressed/head.S, the kernel runs and stops at
"Uncompressing Linux... done, booting the kernel."
Linux... done, booting the kernel.


No, that's just the last thing it output.

I seldom find other people's debug code useful. I stick a printf()  
variant in the code at a place I want to see what it's doing, and then  
I know what the output means. I only enable their debug code if I'm  
sending the output to the person who wrote said debug code. (There are  
exceptions, but they're exceptions to a useful rule of thumb.)


At this level, you can do direct writes to the serial chip ala:

   
http://balau82.wordpress.com/2010/02/28/hello-world-for-bare-metal-arm-using-qemu/


Note that's for qemu, which pauses when it gets a port write. For real  
hardware you need to check the status bit before sending the next  
character, basically spinning on a port read until the relevant bit  
gets set/cleared to indicate the hardware's done with the last thing  
you sent it. I don't remember the check off the top of my head (last  
time I needed it I dug it out of the uboot source), but if you're just  
debugging you can do a delay loop between each write. (This is assuming  
your serial port is already set to the right speed, which it had to be  
to spit out the above message. Presumably the bootloader did it for  
you.)
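
(The poll-then-write sequence for a 16550-style UART, expressed as shell
via busybox devmem purely to illustrate the register protocol -- the
base address is made up, and from the decompressor you'd do the same
thing with raw loads and stores instead:)

  UART=0x10002000
  putc() {
    # spin until the Line Status Register (offset 5) reports Transmit
    # Holding Register Empty (THRE, bit 0x20)...
    while [ $(( $(devmem $((UART+5)) 8) & 0x20 )) -eq 0 ]; do :; done
    # ...then write the byte into the transmit register at offset 0
    devmem $UART 8 $1
  }
  putc 0x41  # 'A'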


(And this is assuming your board layout has an 8250/16550a UART variant  
mapped at a knowable address, which basically they all do because it's  
dirt simple and the patents expired years ago. They get weird and  
bespoke about setting buffer sizes but you don't really have to care at  
this level.)


Most likely you can find whatever spit out the text you _did_ see, and  
copy it to produce more text from places in the code you'd like  
visibility into.


Rob


Re: Minimal x86 memory requirements

2011-02-14 Thread Rob Landley
On 02/14/2011 06:33 PM, Darren Hart wrote:
 I'm looking to build a bare minimum x86 kernel that will boot and run
 busybox, nothing else (and eventually less than that). Assuming I do
 need USB-HID, IDE, and basic TCP/IP, what should I expect to be the
 least RAM I could get away with just to boot off flash, get a getty,
 login, and take a few directory listings.

On a nommu system, if you configure out most of the PRINTK strings, you
can run a reasonably useful system in 4 megabytes.  However, that's
using a small flash-based initramfs with the block layer disabled.  I
don't know if you can fit everything you need in there (USB, the block
layer, and networking stack).  And if you want a MMU system, you'll add
the overhead of page tables in there.

So while you _might_ still be able to trim it down to 4 megabytes, I'd
budget somewhere in the 6 to 8 megs range.  Don't forget to statically
link your busybox binary so you don't dirty physical pages with
relocations.  (Against uClibc of course, Ulrich Drepper deprecated
static linking in glibc because they suck at it so badly.)
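
(In config terms, the big knobs that implies are roughly these --
scripts/config ships with the kernel source, and exact option names vary
by kernel version:)

  $ scripts/config --disable PRINTK --disable BLOCK --enable EMBEDDED
  $ make oldconfig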

I note that the Linux kernel (last I checked, circa 2006) no longer
booted in 2 megabytes of ram due to the relocations required to extract
the thing when it gunzips itself; it simply wouldn't let the mappings be
that close.  Maybe that's been addressed, but I doubt it.

Something people were spending time on a while back was mapping the
kernel directly out of flash (I can never keep NAND and NOR straight but
the one that works more like normal memory) since the code segment is
just a big read-only mapping block anyway.  I think these days it works
fine, but I haven't tried it.  Doing that saves DRAM, but it needs
actual mappable flash or ROM, and not a block device that faults pages
into DRAM under the covers when mapped.  All that XIP work
(execute-in-place and binflat and such) was related to that as well.
Note that flash may not be as FAST as DRAM so you take a performance
hit, but they were worrying about the power consumption of requiring
less DRAM to refresh.

Does this help?  (Play with QEMU.  QEMU is awesome.  I have system
images at http://landley.net/downloads/binaries which provide a static
uClibc defconfig busybox binary you can try tweaking the kernel .config
and such for.  It's defconfig so it's pretty big, but it should give you
a reasonable idea.  If you want the busybox binaries by themselves, I
copied them to http://busybox.net/downloads/binaries.  Remind me to
update that for the new release...)

Rob


Re: determine boot device in initrd

2011-01-21 Thread Rob Landley
On 01/20/2011 03:57 AM, Jacob Avraham wrote:
 Hi,
 
 I have a system with several USB disks, each of them is boot-capable.
 Each such device has a kernel and initrd in its filesystem.
 I want to be able to determine in initrd from which USB device it (and the 
 kernel) were
 loaded from, so I can mount the root filesystem from that device.
 Is that doable?
 I don't want to use a static root= entry in grub, since I don't know which 
 sd device it will be.

Your bootloader loads the kernel and hands off control to it.  Unless
the bootloader informs the kernel where it loaded it from, the kernel
has no way of knowing, since it was a separate program that did it.
(Who says it was a local device?  It could have been a PXE boot.  It may
have been loadlin from Windows that got the kernel off of a network
share.  It could be the qemu -kernel option that manually populates the
virtual system's DRAM with a kernel image and jumps to the start of it
when it initializes the virtual CPU.)

One of the motivations behind the device tree stuff is that the
bootloader hands the kernel a standardized data structure telling it
where all the hardware is in the system, and one of the things it can
annotate this tree with is "and _this_ one was the boot device".  Of
course both the x86 guys and the ARM maintainer have come out against
cross-platform standardization that would require them to change what
they're already doing in any way.

In general, unless the bootloader tells you where it loaded stuff from,
the kernel has no way of knowing.  Once upon a time PC hardware had
standardized layouts for stuff and the kernel could use heuristics based
on knowledge of that to go "ah, first hard drive on the first IDE
controller, this partition has the boot flag set, we came from HERE".
Then IBM went "this doesn't help us enumerate hardware in our mainframes
with 30,000 disks: Linux on the desktop must accept a standardized
mechanism that will inflict the full pain of mainframe device
enumeration on the Linux desktop community or this will never be fixed
for us!"  There hasn't really been much of a Linux desktop community
ever since that I've noticed, and device enumeration still sucks, but
that's udev for you.

Anyway, the kernel can try to guess this info by looking at various
devices and seeing if it recognizes any of them.  But it's up to you to
annotate your devices with something it can recognize, and to program
your initrd to look for that annotation.  UUID partition labels are
fairly popular.
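
(A sketch of that approach, assuming util-linux or busybox style tools
in the initrd, with a placeholder UUID stamped onto the root filesystem
at image creation time:)

  #!/bin/sh
  # resolve the filesystem UUID to whatever /dev/sdX it landed on
  # this boot, then mount it as the new root
  ROOTDEV=$(findfs UUID=12345678-1234-1234-1234-123456789abc)
  mount "$ROOTDEV" /mnt/root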

Rob


[PATCH] Move an assert under DEBUG_KERNEL. (attempt 2)

2011-01-10 Thread Rob Landley
On 01/06/2011 05:41 PM, Andrew Morton wrote:
 Probably a worthwhile thing to do, IMO.  If there's some net-specific
 CONFIG_DEBUG_ setting then that would be a better thing to use.
 
 However the patch was a) wordwrapped, b) space-stuffed and c) not cc'ed
 to the networking list.  So its prospects are dim.

Ok, either I've beaten thunderbird into submission, or I'll be submitting a 
patch to Documentation/email-clients.txt.  (Whether or not I need to find a 
different smtp server to send this through remains an open question.)

(Confirming: I looked for a more specific DEBUG symbol, but couldn't find one.  
I can add one, but this seems a bit small to have its own symbol, and 
DEBUG_KERNEL is already used in a few *.c files.)

From: Rob Landley rland...@parallels.com

Move an assert under DEBUG_KERNEL.  (Minor cleanup to save a few bytes.)

Signed-off-by: Rob Landley rland...@parallels.com
---

 include/linux/rtnetlink.h |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
index bbad657..28c4025 100644
--- a/include/linux/rtnetlink.h
+++ b/include/linux/rtnetlink.h
@@ -782,6 +782,7 @@ extern struct netdev_queue *dev_ingress_queue_create(struct net_device *dev);
 extern void rtnetlink_init(void);
 extern void __rtnl_unlock(void);
 
+#ifdef CONFIG_DEBUG_KERNEL
 #define ASSERT_RTNL() do { \
 	if (unlikely(!rtnl_is_locked())) { \
 		printk(KERN_ERR "RTNL: assertion failed at %s (%d)\n", \
@@ -789,6 +790,9 @@ extern void __rtnl_unlock(void);
 		dump_stack(); \
 	} \
 } while(0)
+#else
+#define ASSERT_RTNL()
+#endif
 
 static inline u32 rtm_get_table(struct rtattr **rta, u8 table)
 {


[PATCH] Move an assert under DEBUG_KERNEL.

2011-01-06 Thread Rob Landley

From: Rob Landley rland...@parallels.com

Move an assert under DEBUG_KERNEL.

Signed-off-by: Rob Landley rland...@parallels.com
---

Saves about 3k from x86-64 defconfig according to scripts/bloat-o-meter.
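
(bloat-o-meter compares two built kernels, roughly like so:)

  $ ./scripts/bloat-o-meter vmlinux.before vmlinux.after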

 include/linux/rtnetlink.h |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
index bbad657..28c4025 100644
--- a/include/linux/rtnetlink.h
+++ b/include/linux/rtnetlink.h
@@ -782,6 +782,7 @@ extern struct netdev_queue *dev_ingress_queue_create(struct net_device *dev);

 extern void rtnetlink_init(void);
 extern void __rtnl_unlock(void);

+#ifdef CONFIG_DEBUG_KERNEL
 #define ASSERT_RTNL() do { \
 	if (unlikely(!rtnl_is_locked())) { \
 		printk(KERN_ERR "RTNL: assertion failed at %s (%d)\n", \
@@ -789,6 +790,9 @@ extern void __rtnl_unlock(void);
 		dump_stack(); \
 	} \
 } while(0)
+#else
+#define ASSERT_RTNL()
+#endif

 static inline u32 rtm_get_table(struct rtattr **rta, u8 table)
 {


Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-28 Thread Rob Landley
On Monday 28 December 2009 04:27:04 Andy Green wrote:
 I wasn't suggesting you don't have firsthand experience all over the
 place, eg, busybox, I know it.  What I do suggest in the principles I
 have been bothering your Christmas with here can really pay off and are
 in the opposite direction to Qemu, bloated bootloaders, and buildroot
 style distro production.

In the PDF I spent a dozen pages pointing out that I think buildroot is silly 
and how it diverged from its original goals.  Why do you bring it up here?

Building natively is nice.  I think cross compiling extensive amounts of
stuff is generally counterproductive, and you don't seem to disagree with
that.  (However, if there weren't cases where cross compiling _does_ make
sense I wouldn't bother building the second stage cross compilers in my
project and packaging them up for other people to use.  "I don't recommend
it" != "people who do this are stupid".)

You're conflating a bunch of issues.  You can boot a full distro under qemu
or under real hardware, so "use a native distro" is orthogonal to "develop
under emulation" vs "develop on target hardware".  And building a hand-rolled
system or assembling prebuilt distro bits can both be done natively (under
either kind of native), so that's orthogonal too.

I'm really not understanding what argument you're making.  "The one specific
configuration I've chosen to use is better than any other possible
configuration..."  Possibly there's a "for your needs" attached somewhere
that you're missing?  You seem to be confusing "that's not an interesting
option for me" with "that's not an interesting option".  You're conflating a
bunch of different issues into a One True Way, which is sad.

  Fedora provides a whole solution there, with the restriction it's
  designed for native build, not cross.
 
  QEMU: it's not just for breakfast anymore.
 
  That's right Qemu often requires lunch, teatime and supper too to build
  anything :-)
 
  Which is why you hook it up to distcc so it can call out to the cross
  compiler, which speeds up the build and lets you take advantage of SMP.
  (Pages 217-226 of the above PDF.)

 Or you just do it native and you don't care about extreme rebuild case
 because you're using a distro that built it already.

I find reproducibility to be kind of nice.  (Remember when Ubuntu dropped
PowerPC support?  Or how Red Hat Enterprise is finally dropping Itanic?  Even
today can you build a full distro natively on new hardware like microblaze or
blackfin?)

I also don't see any difference between your "let the distro handle
everything" and "the vendor supplies a BSP, why would you need anything
else?"  That's certainly a point of view, and for you it may be fine.
Anything can be outsourced.  But "I don't have to do it, therefore nobody
does" is a questionable position.

  There's more things you can do to speed it up if you want to go down that
  rabbit hole (which the presentation does), and there's more work being
  done in qemu.  (TCG was originally a performance hit but has improved
  since, although it varies widely by platform and is its own rabbit hole. 
  Also switching to gigabit NIC emulation with jumbo frames helped distcc a
  lot.)

 Just saying people don't do distribution rebuilds for their desktop or
 server boxes unless they're Gentoo believers.

Linux From Scratch does not exist?  My friend Mark (who inherited a Slackware 
for PowerPC cross-build system) is in reality unemployed?  Nobody ever 
actually works _on_ Fedora and needs to do a development build?

It's interesting to see someone who can imply these things with a straight 
face on an embedded development list.

 So the need to consider
 this heavy duty stuff only exists at the distro.  In Fedora case, guys
 at Marvell are running it and have access to lots of fast physical
 hardware they hooked up to Koji (you touch on this in your PDF).  I
 don't think it's like they were waiting to hear about Qemu now they're
 gonna drop that and move to emulation.

You also seem to be unaware that QEMU was only started in 2003 (first commit to 
the repository, Feb 18, 2003), and didn't even work on x86 all that well until 
the end of 2005.  Decent arm support started showing up in 2006 but it was 
darn wonky, and the switch from dyngen to TCG didn't happen until Feb 1, 2008, 
before which you couldn't even build qemu with gcc 4.x:

  http://landley.net/qemu/2008-01-29.html#Feb_1,_2008_-_TCG

Therefore they didn't "drop support"; QEMU wasn't ready for prime time back
in 2006 when my then-boss (Manas Saksena) quit Timesys to go launch Fedora
for ARM.  (The boss I had after that, David Mandala, is now in charge of
Ubuntu Mobile.  Timesys had some darn impressive alumni, pity the company
screwed up so badly none of us stayed around.  I myself stayed through four
bosses, three CEOs, and outlasted 80% of my fellow engineers...)

By the way:

  http://fedoraproject.org/wiki/Architectures/ARM/HowToQemu

You continue to conflate 

Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-27 Thread Rob Landley
On Sunday 27 December 2009 03:54:51 Andy Green wrote:
 On 12/27/09 07:17, Somebody in the thread at some point said:

 Hi Rob -

Before replying, I note that Mark Miller and I gave a presentation entitled
"Developing for non-x86 targets using QEMU" at Ohio LinuxFest in October.

  http://impactlinux.com/fwl/downloads/presentation.pdf
  http://impactlinux.com/fwl/presentation.html

There's even horribly mangled audio of our rushed 1 hour presentation.  (The 
slides are for a day-long course and we had 55 minutes to give a talk, so we 
skimmed like mad.  Unfortunately, the netbook they used for audio recording 
had the mother of all latency spikes whenever the cache flush did an actual 
flash erase and write, so there are regular audio dropouts the whole way 
through.)  Still, it's somewhere under:

  http://www.archive.org/details/OhioLinuxfest2009

I've also spent the last few years developing a project that produces native 
built environments for various QEMU targets and documents how to bootstrap 
various distros under them:

  http://impactlinux.com/fwl

So I do have some firsthand experience here.

  Fedora provides a whole solution there, with the restriction it's
  designed for native build, not cross.
 
  QEMU: it's not just for breakfast anymore.

 That's right Qemu often requires lunch, teatime and supper too to build
 anything :-)

Which is why you hook it up to distcc so it can call out to the cross 
compiler, which speeds up the build and lets you take advantage of SMP.  
(Pages 217-226 of the above PDF.)

That's also why my FWL project uses a statically linked version of busybox, 
because the static linking avoids the extra page retranslations on each exec 
and thus sped up the ./configure stage by 20%.  (Pages 235-236 of PDF.)

There's more things you can do to speed it up if you want to go down that 
rabbit hole (which the presentation does), and there's more work being done in 
qemu.  (TCG was originally a performance hit but has improved since, although 
it varies widely by platform and is its own rabbit hole.  Also switching to 
gigabit NIC emulation with jumbo frames helped distcc a lot.)

But in general, Moore's Law says that qemu on current PC hardware is about the 
speed of current PC hardware seven years ago.  (And obviously nobody ever 
built anything before 2003. :)

 Newer ARM platforms like Cortex8+ and the Marvell Sheevaplug will
 outstrip emulated performance on a normal PC.  There are 2GHz multi-core
 ARMs coming as well apparently.  So I took the view I should ignore Qemu
 and get an early start on the true native build that will be the future
 of native build as opposed to cross due to that.

Pages 24-34 of the above PDF go over this.  The first two pages are on the 
advantages of native compiling on real hardware, the next eight pages are on 
the disadvantages.  It can certainly be made to work, especially in a large 
corporation willing to spend a lot of money on hardware as a _prerequisite_ to 
choosing a deployment platform.

For hobbyists, small businesses, and open source developers in general, there 
are significant advantages to emulation.  (Page 208 comes to mind.)  And if you 
_are_ going to throw money at hardware, x86-64 continues to have better 
price/performance ratio, which was always its thing.

 The point of the distro is you just let them build the bulk of it, just
 installing binary packages.  You're only rebuilding the bits you are
 changing for your application.

Pages 68-71.  If your definition of embedded development is using off the shelf 
hardware and installing prebuilt binary packages into it, life becomes a lot 
easier, sure.

 For a lot of cases that's a few small
 app packages that are mainly linking against stuff from the distro and
 they're not too bad to do natively.

Pages 78-84

 (In addition my workflow is to edit
 on a host PC

Pages 178-180

 and use scripts to teleport a source tree tarball to the
 device where it's built as a package every time and installed together
 with its -devel,

Pages 181-202

 so everything is always under package control).

Package control or source control?  (Different page ranges...)

 -Andy

Rob
-- 
Latency is more important than throughput. It's that simple. - Linus Torvalds


Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-27 Thread Rob Landley
On Sunday 27 December 2009 04:09:23 Andy Green wrote:
  I agree it's nice to have a build environment compatible with your
  deployment environment, and distros certainly have their advantages, but
  you may not want to actually _deploy_ 48 megabytes of /var/lib/apt from
  Ubuntu in an embedded device.

 I did say in the thread you want ARM11+ basis and you need 100-200MBytes
 rootfs space to get the advantages of the distro basis.  If you have
 something weaker (even ARM9 since stock Fedora is ARMv5+ instruction set
 by default) then you have to do things the old way and recook everything
 yourself one way or another.

I started programming on a Commodore 64.  By modern standards, that system
is so far down into embedded territory it's barely a computer.  And yet
people did development on it.

  http://landley.net/history/catfire/wheniwasaboy.mp3

That said, you can follow Moore's Law in two directions: either it makes stuff 
twice as powerful every 18 months or it makes the same amount of power half 
the price.

What really interests me is disposable computing.  Once upon a time Swiss
watches were these amazingly valuable things (which Rolex and friends try to
cling to even today by gold-plating the suckers), but these days you can get a
cheap little LCD clock as a happy meal toy.  The cheapest crappiest machines
capable of running Linux are 32-bit boxes with 4 megs of ram, which were high-
end server systems circa 1987 that cost maybe about $5k (adjusted for inflation
anyway).  These days, the cheapest low-end Linux boxes (of the repurposed
router variety) are what, about $35 new?  Moore's Law says that 21 years is
14 doublings, which would be 1) $2500, 2) $1250, 3) $635, 4) $312, 5) $156, 6)
$87, 7) $39, 8) $19, 9) $9.76, 10) $4.88, 11) $2.44, 12) $1.22, 13) $0.61, 14)
$0.31.

So in 2009 that $5000 worth of computing power should actually cost about 30 
cents, and should _be_ disposable.  In reality, the CPU in that router is 
clocked 20 times faster than a Compaq Deskpro 386, you get 4 to 8 times the
memory, they added networking hardware, and so on.  And there are fixed costs 
for a case and power supply that don't care about Moore's Law, plus up-front 
costs to any design that need to be amortized over a large production run to 
become cheap, and so on.

And the real outstanding research problems include ergonomic UI issues for 
tiny portable devices, batteries wearing out after too many cycles, and the 
fact that making disposable devices out of materials like cadmium is dubious 
from environmental standpoint.  Oh, and of course there was the decade or two 
companies like Intel lost going up a blind alley by bolting giant heat sinks 
and fans onto abominations like the Pentium 4 and Itanic.  They didn't care 
about power consumption at all until fairly recently, and are still backing 
out of that cul-de-sac even today...

Still, I foresee a day when cereal boxes have a display on the front that
changes every 30 seconds to attract passersby, driven by the same amount of
circuitry and battery that makes the free toy inside blink an LED today.  (I
don't know what else that sort of thing will be used for, any more than people
predicted checking Twitter from the iPhone.)

Thus I'm reluctant to abandon the low-end and say "oh, we have more power now,
only machines with X and Y are interesting".  The mainframe, minicomputer, and
micro (PC) guys each said that, and today the old PC form factor's getting
kicked into the server space by the iPhone and such.  I want to follow Moore's
Law down into disruptive technology territory and find _out_ what it does.

 Even now there are plenty of suitable platforms that will work with it,
 and over time they will only increase.

You must be this tall to ride the computer.

 Nothing seems to totally die out
 (8051-based micros are still in the market)

Mainframes are still on the market too.

 but each time something new
 comes in at the top it grabs some of the market and the older ones shrink.

 It boils down to the point that if you just treat the ARM11+ platforms
 like the previous generation and stick fat bootloaders and buildroot
 blobs on them, you are going to miss out on an epochal simplification
 where embedded Linux largely becomes like desktop Linux in workflow,
 quality and reliability of update mechanisms, and effort needed to bring
 up a box / device.

New computing niches will develop new usage patterns.  The iPhone is currently 
doing this, and is unlikely to be the last cycle.

They'll also grow more powerful and expand into old niches the way blade 
servers are constructed from laptop components and used for batch processing 
today, but I personally find that less interesting.

 -Andy

Rob
-- 
Latency is more important than throughput. It's that simple. - Linus Torvalds
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-26 Thread Rob Landley
On Tuesday 22 December 2009 16:23:37 Andy Green wrote:
 On 12/22/09 11:12, Somebody in the thread at some point said:

 Hi Robert -

  (Personally I used Fedora ARM port and RPM, but any distro and
  package system like Debian workable on ARM would be fine).
 
  Until now, we are using the build it yourself approach with ptxdist,
  basically because of these reasons:
 
  - If something goes wrong, we want to be able to fix it, which means
 that we must be able to recompile everything. Having the source is no
 value by itself, if you are not able to build it.

 Fedora provides a whole solution there, with the restriction it's
 designed for native build, not cross.

QEMU: it's not just for breakfast anymore.

Rob
-- 
Latency is more important than throughput. It's that simple. - Linus Torvalds
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-26 Thread Rob Landley
On Wednesday 23 December 2009 03:29:22 Andy Green wrote:
 On 12/23/09 08:56, Somebody in the thread at some point said:

 Hi -

  yourself because it's the buildroot mindset, that whole task
  disappears  with a distro basis.
 
  If you don't step into for example toolchain problems or other crazy
  things...

 Again this is buildroot thinking.  The distro provides both the native
 and cross toolchains for you.  You're going to want to use the same
 distro as you normally use on your box so the cross toolchain installs
 as a package there.

Because boards that use things like uClibc and busybox just aren't interesting 
to you?

Please don't confuse development environment with build environment.  A 
development environment has xterms and IDEs and visual diff tools and a web 
browser and PDF viewer and so on.  A build environment just compiles stuff to 
produce executables.  (Even on x86, your fire breathing SMP build server in the 
back room isn't necessarily something you're going to VNC into and boot a 
desktop on.)

I agree it's nice to have a build environment compatible with your deployment 
environment, and distros certainly have their advantages, but you may not want 
to actually _deploy_ 48 megabytes of /var/lib/apt from Ubuntu in an embedded 
device.

Rob
-- 
Latency is more important than throughput. It's that simple. - Linus Torvalds
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-21 Thread Rob Landley
On Sunday 20 December 2009 23:51:23 Matt Hsu wrote:
 Rob Landley wrote:
  However, if that's your minimum then you can't use the bootloader to
  re-flash the device, which is kind of handy.  (It gives you an
  un-bricking fallback short of pulling out a jtag.)

 Hi Rob,

 Well, Boot from SD is your good friend.

Ok, not aiming to be a generic bootloader then.  You're only trying to support 
hardware that has the flash equivalent of a floppy drive.  Got it.

 If you look at the platforms that Qi supports, most of them
 have this feature.

Because if they didn't you wouldn't support them?  Bit of a selection bias 
there...

Life is a bit easier if you can stick a USB port on your device and boot from 
a USB stick or cdrom.  Most of the SoCs coming down the pipe support USB too, 
but your wiki doesn't seem to consider this an interesting case...

 If you notice the trend in SoCs, booting from peripherals becomes a
 must.

Depends how cheap you want your hardware to be, and whether you can afford 
separate development and deployment boards.

Software development is a bit easier when you can spec adding an extra few 
dollars worth of hardware to your device rather than redoing its software.  I 
tend to deal with people who repurpose existing cheap plastic crap already 
being knocked out in huge quantities somewhere in china.  (Their hardware 
contribution _might_ be a different plastic case for the thing.)  I've also 
dealt with highly integrated little suckers that haven't got _space_ for an SD 
card.  Since I one day dream of following Moore's Law down to disposable 
computing, I'm reluctant to discard these cases as uninteresting.

 Once you step into the kernel via Qi, the kernel provides you everything
 such as mtd utils to re-flash the device.
 We don't need to support programming the device in the bootloader
 anymore.

Depending on the kernel to reflash the device means that if you reflash the 
device with a kernel that doesn't work, what you have is a brick.  There's 
lots and lots of reasons for a kernel not to work, and a 2.6 kernel 
takes up around a megabyte on a device that may only have 2 or 4 megs of ram 
so keeping an old one around at all times isn't feasible.  So without some 
kind of fallback (such as a little ~32k bootloader at the start of flash that 
you never overwrite, in its own little erase block), every time you install a 
new kernel you risk bricking the device.  (If you only care about devices that 
have 2 gigs of flash, life is much easier.)
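
(Concretely, the kind of flash layout that makes upgrades safe, with 
hypothetical sizes:

  # block 0     ~32k   recovery bootloader: its own erase block, written
  #                    once at manufacture and never touched again
  # blocks 1-n  ~1M    kernel: rewritten on upgrade, recoverable if bad
  # remainder          root filesystem

The one invariant is that nothing in the upgrade path ever erases block 0.)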

 Don't reinvent the wheel.

There are how many existing bootloaders out there already?

  Looking at the screen shot there, you've got code to parse ext2
  filesystems. What is your definition of minimal?

 Enough to boot into Linux.

You need to parse an ext2 filesystem to boot into linux?  (I'm not saying it's 
a _bad_ thing, I'm just not seeing how it's minimal.)

  Rationale for not providing a boot menu is you don't want to mess with
  video init.

 Nope, the central idea of Qi is to let the kernel deal with everything it
 can handle.

So the fact the kernel can provide a serial console means you shouldn't?

 The video init should be handled by the kernel instead of the bootloader.

Oh agreed.  What that has to do with a command line interpreter on the serial 
console was my question.

Personally, I'm used to embedded devices being headless.  (I tend to consider 
the iPhone the next stage in the mainframe-mini-micro evolution and it'll 
soon be just another PC target when they get the ergonomics worked out.  
Heck, give those suckers a USB port and you could give 'em a big keyboard and 
display today, and they could even charge themselves through the thing.  Way 
more powerful than my first 386 PC.  Actually about as powerful as the laptop I 
had circa 2002 or 2003.)

 The following clip demonstrate the advantage of Qi bootloader:

 http://www.youtube.com/watch?v=ol9LWBKXXwQ&feature=related

 - Faster booting time

I.e., you enable the CPU cache during kernel decompression.

I believe the u-boot guy just posted that as a todo item, and that falls on 
the coreboot side of bootloading, which really should be broken out into its 
own little project someday...

The other delay in something like u-boot is it pausing to see if it gets an 
esc or similar from the serial console, so it can be interrupted and provide 
a prompt.  That's a configurable delay which can be removed if it really bugs 
you.

 - Get rid of flash on display device when stepping into kernel

 Hope these clear up your doubts.

Your proposal confused me when it said things like improve portability and 
make people spend less time on bootloader development, so I thought you were 
aiming at something more generic.  But this bootloader is not intended to ever 
apply to something like a hammer board, or any existing linksys-class router, 
most current cell phones, devices like handheld game consoles where the SD 
card stores

Re: [Celinux-dev] CELF Project Proposal- Refactoring Qi, lightweight bootloader

2009-12-20 Thread Rob Landley
On Thursday 17 December 2009 02:31:36 Matt Hsu wrote:
 Summary: Refactoring Qi, lightweight bootloader.

 Proposer: Matt Hsu


 Description:

 Qi (named by Alan Cox on Openmoko kernel list) is a minimal bootloader that
 breathes life into Linux.  Its goal is to stay close to the minimum
 needed

Which bits does it do?

Every piece of software needs something to initialize the DRAM controller.  
After that, they could presumably just jump to a known location in flash that 
the kernel lives at, and the kernel can decompress itself and so on.  Doing 
just that much can probably be done in 4k pretty easily.  (I gloss over the 
kernel command line and root filesystem location, which can be hardwired into 
the image if you really don't care about providing the developer with a UI.)

However, if that's your minimum then you can't use the bootloader to re-flash 
the device, which is kind of handy.  (It gives you an un-bricking fallback 
short of pulling out a jtag.)  But doing that requires things like a network 
driver, TCP/IP stack, tftp implementation, serial driver, command line 
interpreter, and so on.  And of course code to erase and write flash blocks for 
your flash chip du jour, plus knowledge of the flash layout.  (In theory, said 
knowledge comes from parsing a device tree.)

I still live in hope that somebody will split the first part (coreboot) out 
from the second part (grub) in embedded bootloaders.  It's sad that the PC has 
a tradition of orthogonality here but the embedded world treats it as a single 
opaque lump.

 http://wiki.openmoko.org/wiki/Qi

Looking at the screen shot there, you've got code to parse ext2 filesystems.  
What is your definition of minimal?

Rationale for not providing a boot menu is you don't want to mess with video 
init.  I don't think I've actually seen an embedded bootloader that messes 
with video, they do serial console instead, and you have a screen shot of 
serial console messages so apparently the serial driver part is there...

Confused,

Rob
-- 
Latency is more important than throughput. It's that simple. - Linus Torvalds
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-15 Thread Rob Landley
On Tuesday 13 January 2009 20:51:16 Jamie Lokier wrote:
 Paul Mundt wrote:
  This happens in a lot of places, like embedded gentoo ports, where almost
  all of the work is sent across distcc to a cross-compilation machine. In
  systems that use package management, it is done on the host through
  emulation, or painfully cross-compiled.

 Ah yes, I remember using embedded Gentoo.

 95% of the time in ./configure scripts, 5% in compilations.

With SMP becoming commonplace, expect this to become the norm everywhere.  
Once you get to around quad processor, any C program with a ./configure step 
is probably going to take longer to configure than to compile.  (Of course C++ 
manages to remain slow enough that autoconf isn't so obvious a bottleneck.)

 And this is on x86!  I dread to think how slow it gets on something
 slow.

My friend Mark's been experimenting with the Amazon cloud thing, feeding in 
an image with a qemu instance and distcc+cross-compiler, and running builds 
under that.  Renting an 8-way ~2.5 ghz server with 7 gigabytes of ram and 1.6 
terabytes of disk is 80 cents/hour through them plus another few cents/day for 
bandwidth and persistent storage and such.  That's likely to get cheaper as 
time goes on.

We're still planning to buy a build server of our own to have something in-
house, but for running nightly builds it's almost to the point where 
depreciation on the hardware is more than buying time from a server farm.  
Just _one_ of those 8-way servers is enough hardware to build an entire distro 
in an hour or so.

What this really allows us to do is experiment with how parallel can we get 
our build?  Because renting ten 8-way servers in a cluster is $8/hour, and 
distcc already scales trivially over that.  Down the road what Firmware Linux 
is working towards is multiple qemu instances running in parallel with a 
central instance distributing builds to each one, so each can do its own 
./configure in parallel, distribute compilation to the distccd instances as it 
has stuff to compile, and then package up the resulting binary into one of 
those portage tarballs and send it back to the central node to install on a 
network mount that the lot of 'em can mount as build context, so the packages 
can get their dependencies right.  (You don't want your build taking place in 
a network mount, but your OS being on one you never write to isn't so bad as 
long as you have local storage to build in.)
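
(The distcc fan-out piece of that is already a one-liner today.  With 
hypothetical worker addresses, the shape is just:

  export DISTCC_HOSTS="10.0.0.1/8 10.0.0.2/8 10.0.0.3/8"  # /8 = job cap per box
  make -j24 CC="distcc armv5l-linux-gcc"                  # your cross gcc's name

The hard part is the per-package ./configure and packaging orchestration 
around it, not the compile distribution itself.)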

We'll probably leverage the heck out of Portage for this, and might wind up 
modifying it heavily.  Dunno yet.  (We can even force dependencies on portage 
so it doesn't need to calculate 'em, the central node can do that and then say 
you have these packages, _build_...)

But yeah, hobbyists with a laptop, network access, and a monthly budget of $20 
can do cluster builds these days.

Rob

P.S.  I still hope autoconf dies off and the world wakes up and moves away 
from that.  And from makefiles for that matter.  But in the meantime, I can 
work around it with enough effort.
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-12 Thread Rob Landley
On Monday 12 January 2009 02:27:32 Peter Korsgaard wrote:
  Mark == Mark A Miller m...@mirell.org writes:

  Mark And for H. Peter Anvin, before you refer to such uses as compiling
  Mark the kernel under a native environment as a piece of art, please be
  Mark aware that the mainstream embedded development environment,
  Mark buildroot, is also attempting to utilize QEMU for a sanity check on
  Mark the environment.

 That's for verifying that the rootfs'es actually work, not for
 building stuff.

Not in my case.

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-05 Thread Rob Landley
On Monday 05 January 2009 09:01:56 Jamie Lokier wrote:
 Bernd Petrovitsch wrote:
  I assume that the NFS-mounted root filesystem is a real distribution.

 Not unless you call uClinux (MMU-less) a real distribution, no.

I want things to be orthogonal.  The following should be completely separate 
steps:

1) Creating a cross compiler
2) building a native development environment
3) booting a native development environment (on real hardware or under an 
emulator)
4) natively building your target system.

You should be able to mix and match.  Crosstool for #1, go download Fedora 
for ARM instead of #2, qemu or real hardware is your choice for #3, and then 
you should be able to natively build gentoo under an ubuntu host or vice 
versa.  (How is not currently properly documented, but I'm working on that.)

My objection to build systems like buildroot or uClinux is that they bundle 
all this together into a big hairball.  They create their own cross compiler, 
build their own root file system, use their own packaging system, and you have 
to take it all or nothing.

My build system is ruthlessly orthogonal.  I try not to make it depend on 
other bits of _itself_ more than necessary.

   (* - No MMU on some ARMs, but I'm working on ARM FDPIC-ELF to add
proper shared libs.  Feel free to fund this :-)
 
  The above mentioned ARMs have a MMU. Without MMU, it would be truly
  insane IMHO.

 We have similar cross-build issues without MMUs... I.e. that a lot of
 useful packages don't cross-build properly (including many which use
 Autoconf), and it might be easier to make a native build environment
 than to debug and patch all the broken-for-cross-build packages.
 Especially as sometimes they build, but fail at run-time in some
 conditions.

If you can get a version of the same architecture with an mmu you can actually 
build natively on that.  It's not ideal (it's a bit like trying to build i486 
code on an i686; the fact it runs on the host is no guarantee it'll run on the 
target), but it's better than cross compiling.  And most things have a broad 
enough compatible base architecture that you can mostly get away with it.

 But you're right it's probably insane to try.  I haven't dared as I
 suspect GCC and/or Binutils would break too :-)

Oh it does, but you can fix it. :)

 I'm sticking instead with oh well cross-build a few packages by hand
 and just don't even _try_ to use most of the handy software out there.

Cross compiling doesn't scale, and it bit-rots insanely quickly.

 You mentioned ARM Debian.  According to
 http://wiki.debian.org/ArmEabiPort one recommended method of
 bootstrapping it is building natively on an emulated ARM, because
 cross-building is fragile.

That's what my firmware linux project does too.  (I believe I was one of the 
first doing this back in 2006, but there are three or four others out there 
doing it now.)

Native compiling under emulation is an idea whose time has come.  Emulators on 
cheap x86-64 laptops today are about as powerful as high end tricked out build 
servers circa 2001, and Moore's Law continues to advance.  More memory, more 
CPU (maybe via SMP but distcc can take advantage of that today and qemu will 
develop threading someday).  You can throw engineering time at the problem 
(making cross compiling work) or you can throw hardware at the problem (build 
natively and buy fast native or emulator-hosting hardware).  The balance used 
to be in favor of the former; not so much anymore.

That said, my drive for reproducibility and orthogonality says that your 
native development environment must be something you can reproduce entirely 
from source on an arbitrary host.  You can't make cross compiling go away 
entirely, the best you can do is limit it to bootstrapping the native 
environment.  But I want to keep the parts I have to cross compile as small 
and simple as possible, and then run a native build script to get a richer 
environment.  For the past 5+ years my definition has been an environment 
that can rebuild itself under itself is powerful enough, that's all I need to 
cross compile, and from the first time I tried this (late 2002) up until 
2.6.25 that was 7 packages.  That's why I responded to the addition of perl as 
a regression, because for my use case it was.

 -- Jamie

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-04 Thread Rob Landley
On Sunday 04 January 2009 06:07:35 Alan Cox wrote:
  I note that sed and printf and such are all susv3.  I have an explicit
  test for 32 bit math in the script that cares, and this worked in Red Hat
  9 circa 2003.

 If you are trying to do arbitrary precision maths using standard posix
 tools just use dc. That way the standard is explicit about what you will
 get.

I looked at that, but:

A) the Open Group Base Specifications (which I normally go by, since they're 
roughly synonymous with SUSv3 and Posix and available free on the web; they 
just released version 7 a week or so back) don't list dc as one of their 
utilities.  (They mention bc, but not dc.)

B) The busybox implementation of dc is crap.  I got 'em to fix the bug where 
the output defaulted to binary instead of decimal, but the math is all done as 
floating point rather than arbitrary precision, and they don't implement the 
<< operator.

C) The only calculation which can overflow 64 bits (the ADJ32 one) turns out 
not to need arbitrary precision math, just 72 bits, and if it ever shifts more 
than 32 bits then the bottom 32 bits are all zero before the divide so you can 
do it in three lines.

Essentially, the ADJ32 calculation is (($NUMBER-1)<<$SHIFT)/$NUMBER.

$SHIFT maxes out at 51 and the largest interesting $NUMBER is 1000000.  
(That's the pathological case of HZ=1, calculating the USEC_TO_HZ direction.  
A larger $HZ results in a smaller $SHIFT by the number of bits needed to store 
$HZ, by the way, so a $HZ of 17 would have a shift of 47.  So even a HZ bigger 
than a million should have a small enough $SHIFT not to cause trouble here, 
although that's probably an _insane_ input to this script.)

1 million needs 20 bits to store, so the above calculation has to cope with an 
intermediate value of 999999<<51 which takes a little over 70 bits to store, 
hence the potential to overflow 63 bits of signed math.

But this calculation has two special properties:

1) The number you start with before the shift is divided back out at the end 
(more or less), so the _result_ has to be less than 1<<$SHIFT and thus only 
takes $SHIFT bits to store.  With a maximum $SHIFT of 51 it has to fit in a 64 
bit result with about a dozen bits to spare.

2) The bottom $SHIFT many bits are all zero before the divide.

We can use these two properties to easily break the math into chunks that 
can't overflow by:

a) Chopping off the bottom X bits and dividing what's left by $NUMBER, keeping 
both the quotient and the remainder.  Choose any X that's big enough that this 
step won't overflow.  (I chose X=32, leaving at most 40-ish bits here).
b) Shift that quotient X bits to the left.  This can't overflow because of 
special property 1 above.
c) Shift the remainder X bits to the left and divide that by $NUMBER as well.  
The remainder can't be larger than the $NUMBER you started with, so if 
X+bits($NUMBER)<64 this has to fit too.  With X=32 and bits=20 we again have a 
dozen bits to spare.
d) Add the results of (b) and (c) together.  Since the bottom X bits were all 
zero, this is equivalent to having done the full divide.  (Easy enough to mask 
those bottom bits off and add them to the remainder before the divide if they 
weren't, but we didn't need to do that because we know they were zero.)

So no arbitrary precision math is actually required here, and yes there's a 
comment in the source about this. :)
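
For illustration, the whole dance fits in a few lines of shell.  This is a 
sketch of the technique, not the actual timeconst.sh code, and assumes a shell 
with 64 bit $(( )) math and $SHIFT >= 32:

  # Compute (($NUMBER-1)<<$SHIFT)/$NUMBER without overflowing 64 bits.
  NUMBER=1000000 SHIFT=51 X=32          # pathological HZ=1, USEC_TO_HZ case
  TOP=$(( (NUMBER-1) << (SHIFT-X) ))    # everything above the bottom X bits
  QUOT=$(( TOP / NUMBER ))              # a) divide, keeping...
  REM=$(( TOP % NUMBER ))               #    ...the remainder too
  echo $(( (QUOT << X) + ((REM << X) / NUMBER) ))  # b,c,d) shift back, combine

For these inputs it prints 2251797561885434, i.e. 0x7ffff79c842fa.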

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh (v2)

2009-01-04 Thread Rob Landley
On Saturday 03 January 2009 20:48:21 David Vrabel wrote:
 Rob Landley wrote:
  From: Rob Landley r...@landley.net
 
  Replace kernel/timeconst.pl with kernel/timeconst.sh.  The new shell
  script is much simpler, about 1/4 the size, and runs on Red Hat 9 from
  2003.
 
  It requires a shell which can do 64 bit math, such as bash, busybox ash,
  or dash running on a 64 bit host.

 I use Ubuntu (hence dash) on 32 bit systems so I think this needs to
 work with dash on 32 bit hosts.

I have a qemu/images directory full of various OS images for testing purposes.

I just fired up my jeos 7.10 image to make sure that even the most stripped-
down version of Ubuntu (just enough operating system) still installs bash by 
default, and it does.  (It doesn't install a development toolchain, but it 
does install bash.)

I also installed a 32 bit xubuntu 8.10 image (which took 4 hours for some 
reason, and which also has bash), and explicitly tested its 32-bit 
/bin/dash, and that did 64-bit math too.  So current versions of dash do 
offer 64 bit math on 32 bit platforms.

 David

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-04 Thread Rob Landley
On Sunday 04 January 2009 18:15:30 Bernd Petrovitsch wrote:
 On Son, 2009-01-04 at 22:13 +, Jamie Lokier wrote:
  Rob Landley wrote:
   In a private email, Bernd Petrovitsch suggested set -- $i and then
   using NAME=$1; PERIOD=$2.  (I keep getting private email responses
   to these sort of threads, and then getting dismissed as the only one
   who cares about the issue.  Less so this time around, but still...)
   This apparently works all the way back to the bourne shell.
 
  If you're going all the way back to the bourne shell, don't use set

 Going back to a Bourne shell was neither the intention nor makes it
 sense IMHO.
 I mentioned it to point out that the `set -- ' (or `set x `) is nothing
 new or even a bash-ism.

  -- $i; use set x $i instead, and don't expect to do any arithmetic
  in the shell; use expr or awk for arithmetic.
 
  (Not relevant to kernel scripts, imho, since you can always assume
  something a bit more modern and not too stripped down).

 ACK. A bash can IMHO be expected. Even going for `dash` is IMHO somewhat
 too extreme.

I have yet to encounter a system that uses dash _without_ bash.  (All ubuntu 
variants, even jeos, install bash by default.  They moved the /bin/sh symlink 
but they didn't stop installing bash, and the kernel will preferentially use 
bash if it finds it.)  People keep telling me they exist.  I suppose you could 
uninstall bash.  You could also uninstall gcc.  Not sure what that proves. 
(And nobody's shown me this mythical second implementation of perl that all 
these perl scripts are supposed to be portable to...)

Busybox ash is a more interesting case, but that implements lots of bash 
extensions.

That said, it's easy enough to make the scripts work with current versions of dash.  
The whole shell portability issue mostly seems to be a stand-in for other 
objections (Peter Anvin didn't change syslinux and klibc to require perl to 
build this year because of dash), but it's easy enough to just address the 
proxy objection and move on rather than arguing about it...

  (I have 850 Linux boxes on my network with a bourne shell which
  doesn't do $((...)).  I won't be building kernels on them though :-)

 Believe it or not, but there are folks out there who build the firmware
 on ARM 200 MHz NFS-mounted systems natively (and not simply
 cross-compile it on a 2GHz PC).

Yeah, but according to Changes if they do it with the current kernel they do 
it with at least gcc 3.2 (August 2002) and make 3.79.1 (June 2000), so trying 
to make it work on software released pre-Y2K probably isn't that high a 
priority. :)

   Bernd

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-04 Thread Rob Landley
On Sunday 04 January 2009 18:41:15 Ray Lee wrote:
 On Fri, Jan 2, 2009 at 12:13 AM, Rob Landley r...@landley.net wrote:
  Replace kernel/timeconst.pl with kernel/timeconst.sh.  The new shell
  script is much simpler, about 1/4 the size, and runs on Red Hat 9 from
  2003.
 
  Peter Anvin added this perl to 2.6.25.  Before that, the kernel had never
  required perl to build.

 Nice work.

Thanks.  You'll definitely want to look at the _second_ version of that patch 
rather than the first, though. :)

 As the computations can all be done in 64-bit precision
 now, and there have been concerns expressed about some shells not
 supporting 64 bit integers, is there any reason this can't be done
 using long longs in C?

Nope.  Any of this could be done in C.  (And that's the approach Sam Ravnborg 
prefers to take for the second patch in the series, upgrading unifdef.c to do 
everything itself.)

I tend to lean towards scripts that create header files rather than programs 
that create header files, but as long as you remember to use HOSTCC it's 
fairly straightforward. :)
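
(A sketch of what that looks like in practice; gen_timeconst is a made-up 
name, not anything in the tree:

  # Build the generator with the *host* compiler (HOSTCC), not the cross gcc,
  # because it has to run on the build machine:
  gcc -O2 -o gen_timeconst gen_timeconst.c
  ./gen_timeconst $CONFIG_HZ > timeconst.h

kbuild's hostprogs machinery does exactly this for you if you remember to use 
it.)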

 Other than ruining a good bike shed argument, anyway.

Oh pile on.  It beats being dismissed as the only one on the planet who cares 
about the issue (again). :)

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Rob Landley
On Friday 02 January 2009 10:04:08 Matthieu CASTET wrote:
Rob Landley wrote:
  On Friday 02 January 2009 03:26:37 Arkadiusz Miskiewicz wrote:
  On Friday 02 of January 2009, Rob Landley wrote:
 
  Heh,
 
  I believe all three scripts run under dash and busybox ash.  (The
  timeconst.sh one needs 64 bit math which dash only provides on 64 bit
  hosts, which is a regression from Red Hat 9 in 2003 by the way.

 With dash 0.5.4-12 (from debian sid), it seems I got the 64 bit math on
 32 bit hosts:
 $ uname -m
 i686
 $ dash -c 'echo $((1<<32))'
 4294967296

Cool.

The relatively recent 32 bit image I have lying around for testing purposes 
is xubuntu 7.10, and when dash was first introduced into ubuntu it had 
_buckets_ of bugs.  (If you backgrounded a task with  and then hit ctrl-z on 
the command line, it would suspend the backgrounded task.  It was Not Ready 
for Prime Time in a big way.)  Lack of 64 bit math could easily be one more.  
(It _is_ a regression vs Red Hat 9.)

The dash in ubuntu 8.10 seems to have a lot of the more obvious problems 
worked out.  Good to know. :)

That said, while testing the new round of patches against various shells and 
making it reproduce the full set of time constants that the old perl script 
kept cached values for (24, 32, 48, 64, 100, 122, 128, 200, 250, 256, 300, 
512, 1000, 1024, and 1200 hz), I found a bug.

The first patch is miscalculating USEC_TO_HZ_ADJ32 for 24 HZ and 122 HZ.  (All 
the other values are fine.)  It's an integer overflow.  The GCD of 24 and 
1000000 is 8, so that's 17 significant bits with a shift of 47... which is 
exactly 64 bits, but the math is _signed_, so it goes boing.

For the record, the reason I can't just pregenerate all these suckers on a 
system that's got an arbitrary precision calculator (ala dc) and then just 
ship the resulting header files (more or less the what the first version of 
that first patch did) is that some architectures (arm omap and arm at91) 
allow you to enter arbitrary HZ values in kconfig.  (Their help text says that 
in many cases values that aren't powers of two won't work, but nothing 
enforces this.)  So if we didn't have the capability to dynamically generate 
these, you could enter a .config value that would break the build.

I'd be willing to use dc in the script if A) it was mentioned in SUSv3 (it's 
not, although bc is), B) the version of dc in busybox wasn't crap (it's 
floating point rather than arbitrary precision, and doesn't implement the left 
shift operator).  The reason I'm not looking more closely at what SUSv3 has to 
say about bc is that A) it claims to be an entire programming language, which 
is definitely overkill for this B) busybox hasn't bothered to implement it so 
it can't be all that widely used anymore.

I'll fix this and resubmit, it just wasn't ready last night.  (If the merge 
window is closing soon I could resubmit the other two with Sam's suggestions 
and resubmit this one next time 'round, but it was only a couple days to write 
in the first place, once I finally figured out what the perl version was 
trying to _do_...)

I believe ADJ32 is the only operation with any potential to overflow.  The 
pathological case for SHIFT is HZ 1, which for USEC conversions would give a 
shift around 52 (32 significant bits plus 20 bits to divide by 1000000), but 
MUL32 still wouldn't overflow because the shift loop stops when it finds 32 
significant bits, and any larger HZ value would produce a smaller shift.  The 
problem with ADJ32 is it uses the MUL32 shift value, so a small $TO (24 HZ) 
with a big $FROM (1000000 USEC, 20 significant bits) and a small Greatest 
Common Divisor (in this case 8) can overflow 64 bits.  Pathological case 
is still HZ 1.  (Or any other smallish prime number.)  If I make that work, 
everything else has to.

So anyway, it's not _arbitrary_ precision math.  It's more like 32+20+20=72 
bits, and I can probably fake that pretty easily by breaking a couple of 
operations into two stages...

Fallen a bit behind on the thread since I noticed this and went off to code, 
I'll try to catch up later today.

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Rob Landley
On Friday 02 January 2009 12:01:34 Sam Ravnborg wrote:
 But the series raised another topic: shall we ban use of perl
 for generating a kernel.

I dunno about ban, but every time somebody adds perl to the hot path of 
the kernel build it breaks my build system, and I write a removal patch 
anyway.  I have to maintain them anyway, so I might as well try to push 'em 
upstream.  (I posted the first patch in this series twice before, once for .25 
and then an updated version to the linux-embedded list for .26.)

I didn't discover this topic independently.  Somebody pinged me about it on 
freenode back in February, and several other people sent me private email 
about it, and it's been previously raised on several other mailing lists (such 
as busybox and uclibc ones).

Unfortunately, most of the embedded developers I know aren't subscribed to 
linux-kernel.  (You either do kernel development, or you do everything else.  
It's not really feasible to keep up with the guts of the kernel and uClibc and 
busybox and gcc and qemu and the current offerings of three different hardware 
vendors and whatever userspace application the board's supposed to run and 
your build system and what INSANE things your EEdiot electrical engineer 
decided to miswire this time and fighting off marketing's desire to switch 
everything over to WinCE because they can get their entire advertising budget 
subsidized and there's a trade show next week we're not READY for...  Not only 
can none of 'em read a meaningful subset of linux-kernel anymore, but if you 
disappear into your own little niche for nine months, when you pop back up the 
kernel's all different and sometimes even the patch submission policy's 
migrated a bit.  Heck, I'm three months behind reading the LWN kernel page 
myself, and that's no substitute for kernel-traffic, RIP...)

Hopefully the cc: to linux-embedded is helping get more embedded guys involved 
in the discussion than just me. :)

 And this is what is primarily being discussed, and the outcome of
 that discussion will not prevent patches that stand on their
 own from being applied.

The best way to get a patch applied is always for that patch to stand on its 
own merits.  Larger agendas are secondary.

Whether or not the kernel decides on a policy of keeping perl out of the 
kernel build's hot path, I still need these patches for my own use, and plan 
to keep coming up with them and submitting them.  I haven't removed ones that 
haven't broken my build yet, but just because I'm not using md today doesn't 
mean I won't _start_.  (And if enough other people keep poking me about the 
kernel build I can tackle 'em to please them.  I actually _do_ know some 
embedded developers using raid for network attached storage and video servers 
and such...)

   Sam

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


PATCH [0/3]: Simplify the kernel build by removing perl v2

2009-01-03 Thread Rob Landley
Here's an updated set of patches to remove use of perl from the kernel build's 
hot path (roughly defined as make allnoconfig; make; make 
headers_install).

This update incorporates feedback from Sam Ravnborg, Ted Tso, Joe Perches, 
Ingo Oeser, and others.  It also fixes an integer overflow error in the first 
patch, and all three scripts have been retested under dash.
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


PATCH [1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh (v2)

2009-01-03 Thread Rob Landley
From: Rob Landley r...@landley.net

Replace kernel/timeconst.pl with kernel/timeconst.sh.  The new shell script
is much simpler, about 1/4 the size, and runs on Red Hat 9 from 2003.

It requires a shell which can do 64 bit math, such as bash, busybox ash,
or dash running on a 64 bit host.

Changes from previous version:

Redo ADJ32 math to avoid integer overflow for small HZ sizes (such as 24 or
122).  In the pathological case (HZ=1) both versions now produce
USEC_TO_HZ_ADJ32 of 0x7ffff79c842fa.  (See source comments for details.)

Also expand usage message, add error message when no 64 bit math available in
shell (and suggest some shells that support it), add whitespace around
operators in equations, added underscores before __KERNEL_TIMECONST_H, change
makefile so script is responsible for creating output file, make script delete 
output file on error, change shebang to #!/bin/sh and test with dash and bash.

Signed-off-by: Rob Landley r...@landley.net
---

 kernel/Makefile |4 
 kernel/timeconst.pl |  378 --
 kernel/timeconst.sh |  149 
 3 files changed, 151 insertions(+), 380 deletions(-)

--- linux-2.6.28/kernel/timeconst.pl	2008-12-24 17:26:37.0 -0600
+++ /dev/null   2008-11-21 04:46:41.0 -0600
@@ -1,378 +0,0 @@
-#!/usr/bin/perl
-# ---
-#
-#   Copyright 2007-2008 rPath, Inc. - All Rights Reserved
-#
-#   This file is part of the Linux kernel, and is made available under
-#   the terms of the GNU General Public License version 2 or (at your
-#   option) any later version; incorporated herein by reference.
-#
-# ---
-#
-
-#
-# Usage: timeconst.pl HZ > timeconst.h
-#
-
-# Precomputed values for systems without Math::BigInt
-# Generated by:
-# timeconst.pl --can 24 32 48 64 100 122 128 200 250 256 300 512 1000 1024 1200
-%canned_values = (
-   24 = [
-   '0xa6ab','0x2aa',26,
-   125,3,
-   '0xc49ba5e4','0x1fbe76c8b4',37,
-   3,125,
-   '0xa2c2aaab','0x',16,
-   125000,3,
-   '0xc9539b89','0x7fffbce4217d',47,
-   3,125000,
-   ], 32 = [
-   '0xfa00','0x600',27,
-   125,4,
-   '0x83126e98','0xfdf3b645a',36,
-   4,125,
-   '0xf424','0x0',17,
-   31250,1,
-   '0x8637bd06','0x3fff79c842fa',46,
-   1,31250,
-   ], 48 = [
-   '0xa6ab','0x6aa',27,
-   125,6,
-   '0xc49ba5e4','0xfdf3b645a',36,
-   6,125,
-   '0xa2c2aaab','0x1',17,
-   62500,3,
-   '0xc9539b89','0x3fffbce4217d',46,
-   3,62500,
-   ], 64 = [
-   '0xfa00','0xe00',28,
-   125,8,
-   '0x83126e98','0x7ef9db22d',35,
-   8,125,
-   '0xf424','0x0',18,
-   15625,1,
-   '0x8637bd06','0x1fff79c842fa',45,
-   1,15625,
-   ], 100 = [
-   '0xa000','0x0',28,
-   10,1,
-   '0xcccd','0x7',35,
-   1,10,
-   '0x9c40','0x0',18,
-   1,1,
-   '0xd1b71759','0x1fff2e48e8a7',45,
-   1,1,
-   ], 122 = [
-   '0x8325c53f','0xfbcda3a',28,
-   500,61,
-   '0xf9db22d1','0x7fbe76c8b',35,
-   61,500,
-   '0x8012e2a0','0x3ef36',18,
-   50,61,
-   '0xffda4053','0x1bce4217',45,
-   61,50,
-   ], 128 = [
-   '0xfa00','0x1e00',29,
-   125,16,
-   '0x83126e98','0x3f7ced916',34,
-   16,125,
-   '0xf424','0x4',19,
-   15625,2,
-   '0x8637bd06','0xfffbce4217d',44,
-   2,15625,
-   ], 200 = [
-   '0xa000','0x0',29,
-   5,1,
-   '0xcccd','0x3',34,
-   1,5,
-   '0x9c40','0x0',19,
-   5000,1,
-   '0xd1b71759','0xfff2e48e8a7',44,
-   1,5000,
-   ], 250 = [
-   '0x8000','0x0',29,
-   4,1,
-   '0x8000','0x18000',33,
-   1,4,
-   '0xfa00','0x0',20,
-   4000,1,
-   '0x83126e98','0x7ff7ced9168',43,
-   1,4000,
-   ], 256 = [
-   '0xfa00','0x3e00',30,
-   125,32,
-   '0x83126e98','0x1fbe76c8b',33,
-   32,125,
-   '0xf424','0xc',20,
-   15625,4,
-   '0x8637bd06','0x7ffde7210be',43,
-   4,15625,
-   ], 300 = [
-   '0xd556

PATCH [2/3]: Remove perl from make headers_install.

2009-01-03 Thread Rob Landley
From: Rob Landley r...@landley.net

Remove perl from make headers_install by replacing a perl script (doing
a simple regex search and replace) with a smaller and faster shell script
implementation.  The new shell script is a single for loop calling sed and
piping its output through unifdef to produce the target file.

Changes from previous version: Added help text and a check for the right
number of arguments.  Removed unused ARCH input from script and makefile
(the makefile incorporates ARCH into INDIR, so the script doesn't care),
fixed a whitespace mistake in the makefile pointed out by Sam Ravnborg,
changed the shebang to #!/bin/sh and tested under bash and dash.


Signed-off-by: Rob Landley r...@landley.net
---

 scripts/Makefile.headersinst |6 ++--
 scripts/headers_install.pl   |   46 -
 scripts/headers_install.sh   |   36 +
 3 files changed, 39 insertions(+), 49 deletions(-)

diff -ruN linux-2.6.28/scripts/headers_install.sh linux-2.6.28-new/scripts/headers_install.sh
--- linux-2.6.28/scripts/headers_install.sh	1969-12-31 18:00:00.0 -0600
+++ linux-2.6.28-new/scripts/headers_install.sh	2009-01-02 22:35:17.0 -0600
@@ -0,0 +1,36 @@
+#!/bin/sh
+
+if [ $# -lt 2 ]
+then
+   echo "Usage: headers_install.sh INDIR OUTDIR [FILES...]"
+   echo
+   echo "Prepares kernel header files for use by user space, by removing"
+   echo "all compiler.h definitions and #includes, and removing any"
+   echo "#ifdef __KERNEL__ sections."
+   echo
+   echo "INDIR:  directory to read each kernel header FILE from."
+   echo "OUTDIR: directory to write each userspace header FILE to."
+   echo "FILES:  list of header files to operate on."
+
+   exit 1
+fi
+
+# Grab arguments
+
+INDIR="$1"
+shift
+OUTDIR="$1"
+shift
+
+# Iterate through files listed on command line
+
+for i in "$@"
+do
+   sed -r \
+   -e 's/([ \t(])(__user|__force|__iomem)[ \t]/\1/g' \
+   -e 's/__attribute_const__([ \t]|$)/\1/g' \
+   -e 's@^#include <linux/compiler.h>@@' "$INDIR/$i" |
+   scripts/unifdef -U__KERNEL__ - > "$OUTDIR/$i"
+done
+
+exit 0
diff -ruN linux-2.6.28/scripts/Makefile.headersinst linux-2.6.28-new/scripts/Makefile.headersinst
--- linux-2.6.28/scripts/Makefile.headersinst	2008-12-24 17:26:37.0 -0600
+++ linux-2.6.28-new/scripts/Makefile.headersinst	2009-01-02 22:36:42.0 -0600
@@ -44,8 +44,8 @@
 quiet_cmd_install = INSTALL $(printdir) ($(words $(all-files))\
 file$(if $(word 2, $(all-files)),s))
   cmd_install = \
-$(PERL) $< $(srctree)/$(obj) $(install) $(SRCARCH) $(header-y); \
-$(PERL) $< $(objtree)/$(obj) $(install) $(SRCARCH) $(objhdr-y); \
+  $(CONFIG_SHELL) $< $(srctree)/$(obj) $(install) $(header-y); \
+  $(CONFIG_SHELL) $< $(objtree)/$(obj) $(install) $(objhdr-y); \
 touch $@
 
 quiet_cmd_remove = REMOVE  $(unwanted)
@@ -64,7 +64,7 @@
@:
 
 targets += $(install-file)
-$(install-file): scripts/headers_install.pl $(input-files) FORCE
+$(install-file): scripts/headers_install.sh $(input-files) FORCE
$(if $(unwanted),$(call cmd,remove),)
$(if $(wildcard $(dir $@)),,$(shell mkdir -p $(dir $@)))
$(call if_changed,install)
diff -ruN linux-2.6.28/scripts/headers_install.pl linux-2.6.28-new/scripts/headers_install.pl
--- linux-2.6.28/scripts/headers_install.pl	2008-12-24 17:26:37.0 -0600
+++ linux-2.6.28-new/scripts/headers_install.pl	1969-12-31 18:00:00.0 -0600
@@ -1,46 +0,0 @@
-#!/usr/bin/perl -w
-#
-# headers_install prepare the listed header files for use in
-# user space and copy the files to their destination.
-#
-# Usage: headers_install.pl readdir installdir arch [files...]
-# readdir:dir to open files
-# installdir: dir to install the files
-# arch:   current architecture
-# arch is used to force a reinstallation when the arch
-# changes because kbuild then detect a command line change.
-# files:  list of files to check
-#
-# Step in preparation for users space:
-# 1) Drop all use of compiler.h definitions
-# 2) Drop include of compiler.h
-# 3) Drop all sections defined out by __KERNEL__ (using unifdef)
-
-use strict;
-
-my ($readdir, $installdir, $arch, @files) = @ARGV;
-
-my $unifdef = "scripts/unifdef -U__KERNEL__";
-
-foreach my $file (@files) {
-   local *INFILE;
-   local *OUTFILE;
-   my $tmpfile = "$installdir/$file.tmp";
-   open(INFILE, "<$readdir/$file")
-   or die "$readdir/$file: $!\n";
-   open(OUTFILE, ">$tmpfile") or die "$tmpfile: $!\n";
-   while (my $line = <INFILE>) {
-   $line =~ s/([\s(])__user\s/$1/g;
-   $line =~ s/([\s(])__force\s/$1/g;
-   $line =~ s/([\s(])__iomem\s/$1/g;
-   $line =~ s/\s__attribute_const__\s/ /g;
-   $line =~ s/\s__attribute_const__$//g;
-   $line =~ s/^#include

Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Rob Landley
On Friday 02 January 2009 10:04:08 Matthieu CASTET wrote:
 Rob Landley wrote:
  On Friday 02 January 2009 03:26:37 Arkadiusz Miskiewicz wrote:
  On Friday 02 of January 2009, Rob Landley wrote:
 
  Heh,
 
  I believe all three scripts run under dash and busybox ash.  (The
  timeconst.sh one needs 64 bit math which dash only provides on 64 bit
  hosts, which is a regression from Red Hat 9 in 2003 by the way.

 With dash 0.5.4-12 (from debian sid), it seems I got the 64 bit math on
 32 bit hosts:
 $ uname -m
 i686
 $ dash -c 'echo $((1<<32))'
 4294967296


 Matthieu

Alas, my attempt to install a 32 bit version of xubuntu 8.10 under qemu hung 
at Scanning files: 15%, and has been there for an hour now.  I'll have to 
take your word for it.  (All three scripts work fine under 64 bit dash.)

I encountered one bug in busybox, which I pinged that list about, but 
otherwise busybox ash works on 'em all too.

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-03 Thread Rob Landley
On Friday 02 January 2009 13:33:02 H. Peter Anvin wrote:
 Rob Landley wrote:
  You mean The new shell script is much simpler, about 1/4 the size, runs
  on Red Hat 9 from 2003, and isn't perl? :)

 And introduces unclear environment dependencies depending on how
 external utilities are implemented.

I note that sed and printf and such are all SUSv3.  I have an explicit test 
for 32 bit math in the script that cares, and this worked in Red Hat 9 circa 
2003.

I consider this a step up from code with an implicit dependency on a CPAN 
library.

 The whole point of why that script was written in Perl was to have
 access to arbitrary-precision arithmetic -- after it was shown that bc
 would simply lock up on some systems.

A) I'm not using bc.

B) You don't need arbitrary precision arithmetic, you need around 72 bits 
worth of arithmetic for the pathological case.

C) Your definition of access to arbitrary-precision arithmetic includes the 
following, cut and paste verbatim from your old script:

# Precomputed values for systems without Math::BigInt
# Generated by:
# timeconst.pl --can 24 32 48 64 100 122 128 200 250 256 300 512 1000 1024 
1200
%canned_values = (
24 = [
'0xa6ab','0x2aa',26,
125,3,
'0xc49ba5e4','0x1fbe76c8b4',37,
3,125,
'0xa2c2aaab','0x',16,
125000,3,
'0xc9539b89','0x7fffbce4217d',47,
3,125000,
], 32 = [
'0xfa00','0x600',27,
125,4,
'0x83126e98','0xfdf3b645a',36,
4,125,
'0xf424','0x0',17,
31250,1,
'0x8637bd06','0x3fff79c842fa',46,
1,31250,
], 48 = [
'0xa6ab','0x6aa',27,
125,6,
'0xc49ba5e4','0xfdf3b645a',36,
6,125,
'0xa2c2aaab','0x1',17,
62500,3,
'0xc9539b89','0x3fffbce4217d',46,
3,62500,
], 64 = [
'0xfa00','0xe00',28,
125,8,
'0x83126e98','0x7ef9db22d',35,
8,125,
'0xf424','0x0',18,
15625,1,
'0x8637bd06','0x1fff79c842fa',45,
1,15625,
], 100 = [
'0xa000','0x0',28,
10,1,
'0xcccd','0x7',35,
1,10,
'0x9c40','0x0',18,
1,1,
'0xd1b71759','0x1fff2e48e8a7',45,
1,1,
], 122 = [
'0x8325c53f','0xfbcda3a',28,
500,61,
'0xf9db22d1','0x7fbe76c8b',35,
61,500,
'0x8012e2a0','0x3ef36',18,
50,61,
'0xffda4053','0x1bce4217',45,
61,50,
], 128 = [
'0xfa00','0x1e00',29,
125,16,
'0x83126e98','0x3f7ced916',34,
16,125,
'0xf424','0x4',19,
15625,2,
'0x8637bd06','0xfffbce4217d',44,
2,15625,
], 200 = [
'0xa000','0x0',29,
5,1,
'0xcccd','0x3',34,
1,5,
'0x9c40','0x0',19,
5000,1,
'0xd1b71759','0xfff2e48e8a7',44,
1,5000,
], 250 = [
'0x8000','0x0',29,
4,1,
'0x8000','0x18000',33,
1,4,
'0xfa00','0x0',20,
4000,1,
'0x83126e98','0x7ff7ced9168',43,
1,4000,
], 256 = [
'0xfa00','0x3e00',30,
125,32,
'0x83126e98','0x1fbe76c8b',33,
32,125,
'0xf424','0xc',20,
15625,4,
'0x8637bd06','0x7ffde7210be',43,
4,15625,
], 300 = [
'0xd556','0x2aaa',30,
10,3,
'0x999a','0x1',33,
3,10,
'0xd056','0xa',20,
1,3,
'0x9d495183','0x7ffcb923a29',43,
3,1,
], 512 = [
'0xfa00','0x7e00',31,
125,64,
'0x83126e98','0xfdf3b645',32,
64,125,
'0xf424','0x1c',21,
15625,8,
'0x8637bd06','0x3ffef39085f',42,
8,15625,
], 1000 = [
'0x8000','0x0',31,
1,1,
'0x8000','0x0',31,
1,1,
'0xfa00','0x0',22,
1000,1,
'0x83126e98','0x1ff7ced9168',41,
1,1000,
], 1024 = [
'0xfa00','0xfe00',32,
125,128,
'0x83126e98','0x7ef9db22',31,
128,125,
'0xf424','0x3c',22,
15625,16,
'0x8637bd06','0x1fff79c842f',41,
16,15625,
], 1200 = [
'0xd556','0xd555',32,
5,6,
'0x999a','0x',31,
6,5,
'0xd056','0x2a',22,
2500,3,
'0x9d495183','0x1ffcb923a29',41,
3,2500,
]
);

Plus a decent chunk of the remaining logic was code to regenerate that table, 
and to figure out when to use the table and when to compute new values.  (And 
erroring out if the system wasn't capable of doing so.)  I don't understand 
why you didn't just precompute the actual header file output instead of 
precomputing perl source, but that's a side issue.

Rob

--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Rob Landley
On Friday 02 January 2009 13:27:45 H. Peter Anvin wrote:
 Sam Ravnborg wrote:
  Hi Wookey.
 
  Given the
  simplicity of these patches I can't see any reason not to put them
  in
 
 Please do NOT make the mistake of thinking this is the same thing.
 
 Rob's patch simplifies the timeconst stuff - and will be applied on
 this merit alone assuming comments will be addressed.
 
 But the series raised another topic: shall we ban use of perl
 for generating a kernel.
 And this is what is primarily being discussed, and the outcome of
 that discussion will not prevent patches that stand on their
 own from being applied.

 My personal opinion on this is that this is ridiculous.  Given that you
 need gcc, binutils, make etc. to build the kernel,

I believe Intel's icc builds the kernel, and tinycc previously built a subset 
of the kernel.  The pcc and llvm/clang projects are getting close to being 
able to build the kernel.  Ever since c99 came out, lots of gcc-isms with c99 
equivalents have been swapped over, most of the rest is testing.

 and this is more than
 inherent, you have to have a pretty bloody strangely constrained system
 to disallow Perl, which is as close to a standard Unix utility you can
 get without making it into SuS.

Please show me A) the standard perl implements, B) the second implementation 
of that standard ala IETF guidelines.

 The only *real* motivation I have seen for this is a system that as far
 I can tell is nothing other than a toy, specifically designed to show
 off how little you need to build the kernel.  In other words, it's not a
 practical application, it's a show-off art piece.

I'm glad you think my Firmware Linux project is a work of art, but if you'd 
like to hear directly from my users I can ask them to complain at you in 
person, if you like.  I'm not sure what that would prove, though.

When cross compiling, it's good to have as constrained an environment as 
possible, because otherwise bits of the host system leak into the target 
system.  If you don't tightly control your cross compiling environment, it 
won't work.  That's just about an axiom in embedded development.

I know every single dependency my system has.  I can list them, explicitly.  I 
did this because it's very _USEFUL_ in this context.

Add perl scripts that call cpan, and this is no longer true.

   -hpa

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-03 Thread Rob Landley
On Saturday 03 January 2009 06:28:22 Ingo Oeser wrote:
  +for i in "MSEC 1000" "USEC 1000000"
  +do
  +   NAME=$(echo $i | awk '{print $1}')

 cut -d' ' -f1  does the same

  +   PERIOD=$(echo $i | awk '{print $2}')

 cut -d' ' -f2  does the same

From a standards perspective 
http://www.opengroup.org/onlinepubs/9699919799/utilities/cut.html vs 
http://www.opengroup.org/onlinepubs/9699919799/utilities/awk.html is probably 
a wash, but from a simplicity perspective using the tool that _isn't_ its own 
programming language is probably a win. :)
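
(For the curious, a tiny sketch of both splitting styles on a sample pair, 
plus the set -- form suggested elsewhere in these threads; all plain sh:

  i="USEC 1000000"
  NAME=$(echo $i | cut -d' ' -f1)    # cut: one field per subshell
  PERIOD=$(echo $i | cut -d' ' -f2)
  set -- $i                          # set --: no subshells at all
  NAME=$1 PERIOD=$2

Either way beats firing up awk's whole language for a two-word split.)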

I made the change in the second round of patches I just posted.

Thanks,

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Rob Landley
On Saturday 03 January 2009 17:03:11 H. Peter Anvin wrote:
 Leon Woestenberg wrote:
  I agree with Rob that the amount of required dependencies should be
  kept to a minimum.
 
  If we only use 0.5% of a certain language (or: dependent package),
  then rather implement that 0.5% in the existing language.
 
  Dependencies very quickly become dependency hell. If A requires B,
  then A also inherits all (future) requirements of B, etc. etc.
 
  In my daily software development work with Linux and GNU software in
  general, 10% of it is spent fighting/removing these extremely thin
  or false dependencies, so that it is usable in embedded devices.

 First of all, I largely consider this a joke.

Yes, I've noticed this.  The fact multiple other people do _not_ consider this 
a joke doesn't seem to have sunk in yet.

 All real-life embedded
 kernel builds take place on hosted platforms; anything else seems to be
 done just because it can be done, as a kind of show-off art project.
 Cute, but hardly worth impeding the rest of the kernel community for.

 We're not talking about general platform dependencies here, but build
 dependencies for the kernel.  A platform that can build the kernel is
 not a small platform.

The kernel didn't need perl to build until 2.6.25.  For 17 years, this 
dependency was not required.  You added it, in a way that affected even 
allnoconfig, for no obvious gain.

 Second of all, these patches are not fullworthy replacements.  The
 original patch using bc had less dependencies, but bc failed on some
 platforms, mysteriously.

So rather than debugging it, you rewrote it in perl.  Much less potential 
mysterious behavior there.

 The new patches have *more* environmental
 dependencies than that ever did.

Could you please be a little more specific?

 Third, if someone actually cares to do it right, I have a smallish
 bignum library at http://git.zytor.com/?p=lib/pbn.git;a=summary that
 might be a starting point.

This doesn't _need_ bignum support.  It maxes out around 72 bits and the 
_result_ can't use more than about $SHIFT bits because you're dividing by the 
amount you shifted, so just chop off the bottom 32 bits, do a normal 64 bit 
division on the top (it has to fit), and then do the same division on the 
appropriate shifted remainder, and combine the results.  This is easy because 
when the shift _is_ 32 bits or more, the bottom 32 bits all have to be zeroes 
so you don't even have to mask and add, just shift the remainder left 32 bits 
so you can continue the divide.

Pulling out perl isn't always a good alternative to thinking about the 
problem.

   -hpa

Rob
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Rob Landley
On Friday 02 January 2009 08:04:09 Theodore Tso wrote:
 Sounds like though modulo dealing with 64-bit arithmetic, your patches
 are mostly dash/POSIX.2 conformant, so you're probably mostly good on
 that front once you address the 32/64-bit issues.  I'd also suggest
 explicitly add a reminder to the shell scripts' comments to avoid
 bashisms for maximum portability, to remind developers in the future
 who might try to change the shell scripts to watch out for portability
 issues.

I changed the scripts to start with #!/bin/sh and retested under dash.

If scripts say #!/bin/sh when they actually need bash, or say #!/bin/bash when 
they work with dash, that should probably be fixed.

Rob


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Rob Landley
On Saturday 03 January 2009 21:38:13 Markus Heidelberg wrote:
 Rob Landley, 04.01.2009:
 Now that you mention this the second time, I have to ask where you got
 this information from. Since I use Gentoo, I was always able to compile
 OpenOffice (version 1, 2 and now 3) myself.

The gentoo panel last OLS.  (Either a BOF or a tutorial, I don't remember 
which.)  It's still _possible_ to build it from source, but they created a 
separate openoffice-bin package for the sole purpose of _not_ compiling it 
from source, and it's what they recommend installing.

 At the same time, it was always possible to use prebuilt packages as an
 alternative - the same way as it is possible for a few other packages
 (Firefox, Thunderbird, Seamonkey, maybe more). But AFAIK compiling from
 source is still the preferred method.

Apparently not for Open Office.

First hit googling gentoo openoffice install:
http://grokdoc.net/index.php/Gentoo-OpenOffice.org

The next two hits are bug tracker entries, and the one after that:
http://www.linuxforums.org/forum/gentoo-linux-help/71086-installing-openoffice-question.html

Contains this cut and paste from emerge output:

These are the packages that would be merged, in order:

Calculating dependencies
!!! All ebuilds that could satisfy openoffice have been masked.
!!! One of the following masked packages is required to complete your request:
- app-office/openoffice-2.0.4_rc1-r1 (masked by: package.mask, package.mask, 
~amd64 keyword)
# 2005/10/24 Simon Stelling bl...@gentoo.org
# Don't even try to compile openoffice-2.x, it won't work.

But not to go too far down this rathole, I'm just using openoffice as an 
example here.  If you want to talk about it more, take it off list please.

 Markus

Rob


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-03 Thread Rob Landley
On Saturday 03 January 2009 20:14:44 H. Peter Anvin wrote:
 Rob Landley wrote:
  The new patches have *more* environmental
  dependencies than that ever did.
 
  Could you please be a little more specific?

 In this case, you're assuming that every version of every shell this is
 going to get involved with is going to do math correctly with the
 requisite precision, which is nowhere guaranteed, I'm pretty sure.

Well, SUSv3 requires that the shell support signed long arithmetic:
http://www.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_06_04

And the LP64 standard says that on 64 bit systems, long must be 64 bit:
http://www.unix.org/whitepapers/64bit.html

Now the potential weakness there is that on 32 bit systems, shells might only 
support 32 bit math instead of 64 bit math.  (You'll notice I have a test for 
that.)  However, bash has supported 64 bit math on 32 bit systems since at 
least 2003.  (I keep a Red Hat 9 test image around because it had 50% market 
share when it shipped, so the majority of old Linux systems still in use 
_are_ RH9 or similar.  It also has the oldest gcc version the kernel still 
claims to build under.)  Busybox ash can also support 64 bit math on 32 bit 
hosts, and I just confirmed that the dash in the 32 bit xubuntu 8.10 supports 
64 bit math as well.
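(If you want to check a shell by hand, the test is a one-liner.  This
should print ok from any shell doing 64 bit math, and print nothing on one
stuck with 32 bits, where the addition wraps negative:

  [ $((2147483647 + 1)) -gt 0 ] && echo ok
)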

(It would also be possible to do a similar overflow avoiding trick to do the 
lot entirely in 32 bit math, but given that the three main shells in use on 
Linux _do_ support 64 bit math on 32 bit hosts and are _required_ to support 
it on 64 bit hosts according to SUSv3, the extra complexity really doesn't 
seem worth it.)

  Third, if someone actually cares to do it right, I have a smallish
  bignum library at http://git.zytor.com/?p=lib/pbn.git;a=summary that
  might be a starting point.
 
  This doesn't _need_ bignum support.  It maxes out around 72 bits and the
  _result_ can't use more than about $SHIFT bits because you're dividing by
  the amount you shifted, so just chop off the bottom 32 bits, do a normal
  64 bit division on the top (it has to fit), and then do the same division
  on the appropriate shifted remainder, and combine the results.  This is
  easy because when the shift _is_ 32 bits or more, the bottom 32 bits all
  have to be zeroes so you don't even have to mask and add, just shift the
  remainder left 32 bits so you can continue the divide.
 
  Pulling out perl isn't always a good alternative to thinking about the
  problem.

 Neither is open-coding a bignum operation instead of relying on an
 existing, validated implementation.

Implementing something by hand isn't _always_ a good alternative, sure.  That 
would be the thinking about the problem part.  In this instance, avoiding 
overflow is trivial.  (If 1-1 didn't wrap around, it wouldn't even need the 
if statement.)

I'm curious, would the "existing, validated implementation" in this instance 
be the perl implementation, or the library you wrote and pointed me to above 
as a potential replacement for it?

I suppose there's a certain amount of style in accusing me of reinventing the 
wheel while pointing me at your reinvention of the wheel.  (Are you aiming to 
replace Gnu's gmplib.org, or perhaps the BSD licensed one in openssh?  
Dropbear uses Libtommath.  A quick google for C open source big number 
libraries also found Libimath, MPI, NTL, BigDigits, decNumber, and MPI.  The 
last time I personally wrote my own arbitrary precision math package from 
scratch was in 1998, and that was in Java, so I'm a little rusty...)  But I 
don't personally consider avoiding the need for an arbitrary precision math 
library to be the same as reimplementing one.

   -hpa

Rob


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-03 Thread Rob Landley
On Saturday 03 January 2009 23:07:55 valdis.kletni...@vt.edu wrote:
 On Sat, 03 Jan 2009 19:36:04 CST, Rob Landley said:
  On Saturday 03 January 2009 06:28:22 Ingo Oeser wrote:
+for i in "MSEC 1000" "USEC 1000000"
+do
+   NAME=$(echo $i | awk '{print $1}')
  
   cut -d' ' -f1  does the same
  
+   PERIOD=$(echo $i | awk '{print $2}')
  
   cut -d' ' -f2  does the same

 Close, but no cee-gar.  cut does something counter-intuitive with multiple
 blanks:

Yes, but in this case I'm the one passing in the data so I know there aren't 
multiple blanks.  (Or tabs instead of spaces.)
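For reference, the counter-intuitive part, with a made-up two-blank input:

  $ echo "a  b" | cut -d' ' -f2

  $ echo "a  b" | awk '{print $2}'
  b

cut treats every single blank as a delimiter, so field 2 of "a  b" is the
empty string between the two blanks, while awk collapses whitespace runs
into one separator.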

In a private email, Bernd Petrovitsch suggested set -- $i and then using 
NAME=$1; PERIOD=$2.  (I keep getting private email responses to these sort of 
threads, and then getting dismissed as the only one who cares about the issue.  
Less so this time around, but still...)  This apparently works all the way 
back to the bourne shell.  Not worth rolling another patch for, but if I do 
wind up rolling another patch for other reasons I might switch over to that.  
Both cut and awk are susv3, though.
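The set -- version, for reference; it word-splits $i into the positional
parameters:

  i="MSEC 1000"
  set -- $i
  NAME=$1
  PERIOD=$2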

Rob


[PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-02 Thread Rob Landley
From: Rob Landley r...@landley.net

Replace kernel/timeconst.pl with kernel/timeconst.sh.  The new shell script
is much simpler, about 1/4 the size, and runs on Red Hat 9 from 2003.

Peter Anvin added this perl to 2.6.25.  Before that, the kernel had never
required perl to build.

Signed-off-by: Rob Landley r...@landley.net
---

 kernel/Makefile |4 
 kernel/timeconst.pl |  378 --
 kernel/timeconst.sh |   91 ++
 3 files changed, 93 insertions(+), 380 deletions(-)

diff -r d0c7611dcfd6 kernel/Makefile
--- a/kernel/Makefile   Tue Dec 30 17:48:25 2008 -0800
+++ b/kernel/Makefile   Fri Jan 02 00:10:44 2009 -0600
@@ -115,7 +115,7 @@
 $(obj)/time.o: $(obj)/timeconst.h
 
 quiet_cmd_timeconst  = TIMEC   $@
-  cmd_timeconst  = $(PERL) $< $(CONFIG_HZ) > $@
+  cmd_timeconst  = $(CONFIG_SHELL) $< $(CONFIG_HZ) > $@
 targets += timeconst.h
-$(obj)/timeconst.h: $(src)/timeconst.pl FORCE
+$(obj)/timeconst.h: $(src)/timeconst.sh FORCE
$(call if_changed,timeconst)
--- /dev/null   1969-12-31 00:00:00.0 -0600
+++ hg/kernel/timeconst.sh  2009-01-01 23:53:17.0 -0600
@@ -0,0 +1,91 @@
+#!/bin/bash
+
+if [ $# -ne 1 ]
+then
+   echo "Usage: timeconst.sh HZ"
+   exit 1
+fi
+
+HZ=$1
+
+# Sanity test: even the shell in Red Hat 9 (circa 2003) supported 64 bit math.
+
+[ $((1<<32)) -lt 0 ] && exit 1
+
+# Output start of header file
+
+cat << EOF
+/* Automatically generated by kernel/timeconst.sh */
+/* Conversion constants for HZ == $HZ */
+
+#ifndef KERNEL_TIMECONST_H
+#define KERNEL_TIMECONST_H
+
+#include <linux/param.h>
+#include <linux/types.h>
+
+#if HZ != $HZ
+#error kernel/timeconst.h has the wrong HZ value!
+#endif
+
+EOF
+
+# For both Milliseconds and Microseconds
+
+for i in "MSEC 1000" "USEC 1000000"
+do
+   NAME=$(echo $i | awk '{print $1}')
+   PERIOD=$(echo $i | awk '{print $2}')
+
+   # Find greatest common divisor (using Euclid's algorithm)
+
+   A=$HZ
+   B=$PERIOD
+
+   while [ $B -ne 0 ]
+   do
+   C=$(($A%$B))
+   A=$B
+   B=$C
+   done
+
+   GCD=$A
+
+   # Do this for each direction (HZ_TO_PERIOD and PERIOD_TO_HZ)
+
+   for DIRECTION in 0 1
+   do
+   if [ $DIRECTION -eq 0 ]
+   then
+   CONVERT=HZ_TO_${NAME}
+   FROM=$HZ
+   TO=$PERIOD
+   else
+   CONVERT=${NAME}_TO_HZ
+   FROM=$PERIOD
+   TO=$HZ
+   fi
+
+   # How many shift bits give 32 bits of significant data?
+
+   SHIFT=0
+   while [ $(( (($TO<<$SHIFT)+$FROM-1)/$FROM )) -lt $((1<<31)) ]
+   do
+   SHIFT=$(( $SHIFT+1 ))
+   done
+
+   MUL32=$(( (($TO<<$SHIFT)+$FROM-1)/$FROM ))
+   MUL32=$(printf "%x" $MUL32)
+   echo "#define ${CONVERT}_MUL32  U64_C(0x$MUL32)"
+   ADJ32=$(($FROM/$GCD))
+   ADJ32=$(( (($ADJ32-1)<<$SHIFT)/$ADJ32 ))
+   ADJ32=$(printf "%x" $ADJ32)
+   echo "#define ${CONVERT}_ADJ32  U64_C(0x$ADJ32)"
+   echo "#define ${CONVERT}_SHR32  $SHIFT"
+   echo "#define ${CONVERT}_NUM  U64_C($(($TO/$GCD)))"
+   echo "#define ${CONVERT}_DEN  U64_C($(($FROM/$GCD)))"
+   done
+done
+
+echo
+echo "#endif /* KERNEL_TIMECONST_H */"
--- hg/kernel/timeconst.pl  2008-11-22 19:09:13.0 -0600
+++ /dev/null   1969-12-31 00:00:00.0 -0600
@@ -1,378 +0,0 @@
-#!/usr/bin/perl
-# ---
-#
-#   Copyright 2007-2008 rPath, Inc. - All Rights Reserved
-#
-#   This file is part of the Linux kernel, and is made available under
-#   the terms of the GNU General Public License version 2 or (at your
-#   option) any later version; incorporated herein by reference.
-#
-# ---
-#
-
-#
-# Usage: timeconst.pl HZ > timeconst.h
-#
-
-# Precomputed values for systems without Math::BigInt
-# Generated by:
-# timeconst.pl --can 24 32 48 64 100 122 128 200 250 256 300 512 1000 1024 1200
-%canned_values = (
-   24 => [
-   '0xa6ab','0x2aa',26,
-   125,3,
-   '0xc49ba5e4','0x1fbe76c8b4',37,
-   3,125,
-   '0xa2c2aaab','0x',16,
-   125000,3,
-   '0xc9539b89','0x7fffbce4217d',47,
-   3,125000,
-   ], 32 => [
-   '0xfa00','0x600',27,
-   125,4,
-   '0x83126e98','0xfdf3b645a',36,
-   4,125,
-   '0xf424','0x0',17,
-   31250,1,
-   '0x8637bd06','0x3fff79c842fa',46,
-   1,31250,
-   ], 48 => [
-   '0xa6ab','0x6aa',27,
-   125,6,
-   '0xc49ba5e4','0xfdf3b645a',36

[PATCH 3/3]: Convert mkcapflags.pl to mkcapflags.sh

2009-01-02 Thread Rob Landley
From: Rob Landley r...@landley.net

Convert kernel/cpu/mkcapflags.pl to kernel/cpu/mkcapflags.sh.

This script generates kernel/cpu/capflags.c from include/asm/cpufeature.h.

Peter Anvin added this perl to 2.6.28.

Signed-off-by: Rob Landley r...@landley.net
---

 arch/x86/kernel/cpu/Makefile  |4 +--
 arch/x86/kernel/cpu/mkcapflags.pl |   32 
 arch/x86/kernel/cpu/mkcapflags.sh |   28 
 3 files changed, 30 insertions(+), 34 deletions(-)

diff -ruN alt-linux/arch/x86/kernel/cpu/Makefile 
alt-linux2/arch/x86/kernel/cpu/Makefile
--- alt-linux/arch/x86/kernel/cpu/Makefile  2008-12-24 17:26:37.0 
-0600
+++ alt-linux2/arch/x86/kernel/cpu/Makefile 2008-12-28 18:10:51.0 
-0600
@@ -23,10 +23,10 @@
 obj-$(CONFIG_X86_LOCAL_APIC) += perfctr-watchdog.o
 
 quiet_cmd_mkcapflags = MKCAP   $@
-  cmd_mkcapflags = $(PERL) $(srctree)/$(src)/mkcapflags.pl $< $@
+  cmd_mkcapflags = $(CONFIG_SHELL) $(srctree)/$(src)/mkcapflags.sh $< $@
 
 cpufeature = $(src)/../../include/asm/cpufeature.h
 
 targets += capflags.c
-$(obj)/capflags.c: $(cpufeature) $(src)/mkcapflags.pl FORCE
+$(obj)/capflags.c: $(cpufeature) $(src)/mkcapflags.sh FORCE
$(call if_changed,mkcapflags)
diff -ruN alt-linux/arch/x86/kernel/cpu/mkcapflags.pl 
alt-linux2/arch/x86/kernel/cpu/mkcapflags.pl
--- alt-linux/arch/x86/kernel/cpu/mkcapflags.pl 2008-12-24 17:26:37.0 
-0600
+++ alt-linux2/arch/x86/kernel/cpu/mkcapflags.pl1969-12-31 
18:00:00.0 -0600
@@ -1,32 +0,0 @@
-#!/usr/bin/perl
-#
-# Generate the x86_cap_flags[] array from include/asm-x86/cpufeature.h
-#
-
-($in, $out) = @ARGV;
-
-open(IN, "< $in\0")   or die "$0: cannot open: $in: $!\n";
-open(OUT, "> $out\0") or die "$0: cannot create: $out: $!\n";
-
-print OUT "#include <asm/cpufeature.h>\n\n";
-print OUT "const char * const x86_cap_flags[NCAPINTS*32] = {\n";
-
-while (defined($line = <IN>)) {
-   if ($line =~ /^\s*\#\s*define\s+(X86_FEATURE_(\S+))\s+(.*)$/) {
-   $macro = $1;
-   $feature = $2;
-   $tail = $3;
-   if ($tail =~ /\/\*\s*\"([^"]*)\".*\*\//) {
-   $feature = $1;
-   }
-
-   if ($feature ne '') {
-   printf OUT "\t%-32s = \"%s\",\n",
-   "[$macro]", "\L$feature";
-   }
-   }
-}
-print OUT "};\n";
-
-close(IN);
-close(OUT);
diff -ruN alt-linux/arch/x86/kernel/cpu/mkcapflags.sh 
alt-linux2/arch/x86/kernel/cpu/mkcapflags.sh
--- alt-linux/arch/x86/kernel/cpu/mkcapflags.sh 1969-12-31 18:00:00.0 
-0600
+++ alt-linux2/arch/x86/kernel/cpu/mkcapflags.sh2008-12-28 
18:08:50.0 -0600
@@ -0,0 +1,28 @@
+#!/bin/bash
+#
+# Generate the x86_cap_flags[] array from include/asm/cpufeature.h
+#
+
+IN=$1
+OUT=$2
+
+(
+   echo "#include <asm/cpufeature.h>"
+   echo ""
+   echo "const char * const x86_cap_flags[NCAPINTS*32] = {"
+
+   # Iterate through any input lines starting with #define X86_FEATURE_
+   sed -n -e 's/\t/ /g' -e 's/^ *# *define *X86_FEATURE_//p' $IN |
+   while read i
+   do
+   # Name is everything up to the first whitespace
+   NAME=$(echo $i | sed 's/ .*//')
+
+   # If the /* comment */ starts with a quote string, grab that.
+   VALUE=$(echo "$i" | sed -n 's@.*/\* *"\([^"]*\)".*\*/@"\1"@p')
+   [ -z "$VALUE" ] && VALUE="\"$(echo $NAME | tr A-Z a-z)\""
+
+   [ "$VALUE" != '""' ] && echo "  [X86_FEATURE_$NAME] = $VALUE,"
+   done
+   echo "};"
+) > $OUT



Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-02 Thread Rob Landley
On Friday 02 January 2009 03:26:37 Arkadiusz Miskiewicz wrote:
 On Friday 02 of January 2009, Rob Landley wrote:
  Before 2.6.25 (specifically git bdc807871d58285737d50dc6163d0feb72cb0dc2
  ) building a Linux kernel never required perl to be installed on the
  build system.  (Various development and debugging scripts were written in
  perl and python and such, but they weren't involved in actually building
  a kernel.) Building a kernel before 2.6.25 could be done with a minimal
  system built from gcc, binutils, bash, make, busybox, uClibc, and the
  Linux kernel, and nothing else.

 And now bash is going to be required... while some distros don't need/have
 bash. /bin/sh should be enough.

 Heh,

I believe all three scripts run under dash and busybox ash.  (The timeconst.sh 
one needs 64 bit math which dash only provides on 64 bit hosts, which is a 
regression from Red Hat 9 in 2003 by the way.  Busybox ash can also provide 64 
bit math on 32 bit hosts, and the script should run with that just fine if you 
haven't got bash and that's what your sh in the path is.)

The makefiles execute those scripts via CONFIG_SHELL, not via the #!/blah line 
at the start, so it's largely irrelevant what gets put there anyway.  If you 
haven't got bash installed it'll use sh, which should work with dash on a 64 
bit host or with busybox ash.  (That's why that one file has a test to make 
sure 64 bit math _does_ work.  The only Linux development environment I'm 
aware of where that test would trigger is if you use a 32 bit ubuntu and go 
out of your way to _uninstall_ bash.  Even Cygwin uses bash.)
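(The logic kbuild uses to pick CONFIG_SHELL amounts to roughly this, in
shell form:

  if [ -x "$BASH" ]; then echo "$BASH"
  elif [ -x /bin/bash ]; then echo /bin/bash
  else echo sh
  fi

i.e. prefer bash when it's available and only fall back to sh otherwise.)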

Beyond that, find linux -type f | xargs grep bin/bash | wc comes up with 38 
instances (admittedly half of 'em in Documentation, but lots in scripts, and 
makefiles, and defconfigs, at least one hardwired in C code.)  So this would 
not be a _new_ dependency.

By the way, what Linux distros install a compiler toolchain but not bash?  I'm 
curious.  (Even after Ubuntu moved #!/bin/sh to point to dash, it still 
installs bash as part of the default environment, even if you don't install 
development tools.)  You've built the kernel on this system before?

Rob


Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-02 Thread Rob Landley
On Friday 02 January 2009 04:16:53 Alejandro Mery wrote:
 Christoph Hellwig wrote:
  On Fri, Jan 02, 2009 at 10:26:37AM +0100, Arkadiusz Miskiewicz wrote:
  On Friday 02 of January 2009, Rob Landley wrote:
  Before 2.6.25 (specifically git
  bdc807871d58285737d50dc6163d0feb72cb0dc2 ) building a Linux kernel
  never required perl to be installed on the build system.  (Various
  development and debugging scripts were written in perl and python and
  such, but they weren't involved in actually building a kernel.)
  Building a kernel before 2.6.25 could be done with a minimal system
  built from gcc, binutils, bash, make, busybox, uClibc, and the Linux
  kernel, and nothing else.
 
  And now bash is going to be required... while some distros don't
  need/have bash. /bin/sh should be enough.
 
  *nod*  bash is in many ways a worse requirement than perl.  strict posix
  /bin/sh + awk + sed would be nicest, but if that's too much work perl
  seems reasonable.

 well, bash is not worse as bash is trivial to cross-compile to run on a
 constrained sandbox and perl is a nightmare, but I agree bash should be
 avoided too.

 I think the $(( ... )) bash-ism can be replaced with a simple .c helper
 toy.

No, $[ ] is the bashism, $(( )) is susv3:
http://www.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_06_04

I intentionally switched from $[ ] to $(( )) to make dash work.
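A quick demonstration of the difference:

  echo $[2+2]    # legacy bash-only syntax; dash echoes it back literally
  echo $((2+2))  # POSIX arithmetic expansion; works in bash, dash, and ash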

Rob


Re: [PATCH 1/3]: Replace kernel/timeconst.pl with kernel/timeconst.sh

2009-01-02 Thread Rob Landley
On Friday 02 January 2009 03:04:39 Sam Ravnborg wrote:
 Hi Rob.

 On Fri, Jan 02, 2009 at 02:13:30AM -0600, Rob Landley wrote:
  From: Rob Landley r...@landley.net
 
  Replace kernel/timeconst.pl with kernel/timeconst.sh.  The new shell
  script is much simpler, about 1/4 the size, and runs on Red Hat 9 from
  2003.

 This part of the changelog is OK except that is fails to
 address why we want to get away from perl.

You mean "The new shell script is much simpler, about 1/4 the size, runs on 
Red Hat 9 from 2003, and isn't perl"? :)

 Please drop the remining part of the changelog (but not the s-o-b).

ok.

  Peter Anvin added this perl to 2.6.25.  Before that, the kernel had never
  required perl to build.
 
  Signed-off-by: Rob Landley r...@landley.net
  ---
 
   kernel/Makefile |4
   kernel/timeconst.pl |  378 --
   kernel/timeconst.sh |   91 ++
   3 files changed, 93 insertions(+), 380 deletions(-)
 
  diff -r d0c7611dcfd6 kernel/Makefile
  --- a/kernel/Makefile   Tue Dec 30 17:48:25 2008 -0800
  +++ b/kernel/Makefile   Fri Jan 02 00:10:44 2009 -0600
  @@ -115,7 +115,7 @@
   $(obj)/time.o: $(obj)/timeconst.h
 
   quiet_cmd_timeconst  = TIMEC   $@
   -  cmd_timeconst  = $(PERL) $< $(CONFIG_HZ) > $@
   +  cmd_timeconst  = $(CONFIG_SHELL) $< $(CONFIG_HZ) > $@
   targets += timeconst.h
  -$(obj)/timeconst.h: $(src)/timeconst.pl FORCE
  +$(obj)/timeconst.h: $(src)/timeconst.sh FORCE
  $(call if_changed,timeconst)

 OK

  --- /dev/null   1969-12-31 00:00:00.0 -0600
  +++ hg/kernel/timeconst.sh  2009-01-01 23:53:17.0 -0600
  @@ -0,0 +1,91 @@
  +#!/bin/bash
  +
  +if [ $# -ne 1 ]
  +then
  +   echo Usage: timeconst.sh HZ
  +   exit 1
  +fi

 That usage is useless. Either extend it or spend a few lines
 in the shell script explaining what the shell script is supposed to do.

Do you mean something more like:

echo "Usage: timeconst.sh HZ"
echo
echo "Generates a header file of constants for converting between decimal"
echo "HZ timer ticks and millisecond or microsecond delays"

I'm happy turning it into a comment instead, I just found a quick check that 
I'd remembered to type an argument useful during debugging. :)

  +
  +HZ=$1
  +
  +# Sanity test: even the shell in Red Hat 9 (circa 2003) supported 64 bit math.
  +
  +[ $((1<<32)) -lt 0 ] && exit 1
  +

 If it fails then add an error message explaining why. So if we get reports
 that this fails then we at least can see something like:
 timeconst noticed that the shell did not support 64 bit math - stop

Ok.

  +# Output start of header file
  +
  +cat << EOF
  +/* Automatically generated by kernel/timeconst.sh */
  +/* Conversion constants for HZ == $HZ */
  +
  +#ifndef KERNEL_TIMECONST_H
  +#define KERNEL_TIMECONST_H

 Please use __KERNEL_TIMECONST_H. The two underscores are standard.

Sure thing.  (I was just copying the perl there, I'll post an updated patch 
after I get some sleep.)

  +
  +#include <linux/param.h>
  +#include <linux/types.h>
  +
  +#if HZ != $HZ
  +#error kernel/timeconst.h has the wrong HZ value!
  +#endif
  +
  +EOF
  +
  +# For both Milliseconds and Microseconds
  +
  +for i in "MSEC 1000" "USEC 1000000"
  +do
  +   NAME=$(echo $i | awk '{print $1}')
  +   PERIOD=$(echo $i | awk '{print $2}')
  +
  +   # Find greatest common divisor (using Euclid's algorithm)
  +
  +   A=$HZ
  +   B=$PERIOD
  +
  +   while [ $B -ne 0 ]
  +   do
  +   C=$(($A%$B))
  +   A=$B
  +   B=$C
  +   done
  +
  +   GCD=$A
  +
  +   # Do this for each direction (HZ_TO_PERIOD and PERIOD_TO_HZ)
  +
  +   for DIRECTION in 0 1
  +   do
  +   if [ $DIRECTION -eq 0 ]
  +   then
  +   CONVERT=HZ_TO_${NAME}
  +   FROM=$HZ
  +   TO=$PERIOD
  +   else
  +   CONVERT=${NAME}_TO_HZ
  +   FROM=$PERIOD
  +   TO=$HZ
  +   fi
  +
  +   # How many shift bits give 32 bits of significant data?
  +
  +   SHIFT=0
  +   while [ $(( (($TO<<$SHIFT)+$FROM-1)/$FROM )) -lt $((1<<31)) ]
  +   do
  +   SHIFT=$(( $SHIFT+1 ))
  +   done
  +
  +   MUL32=$(( (($TO<<$SHIFT)+$FROM-1)/$FROM ))
  +   MUL32=$(printf "%x" $MUL32)
  +   echo "#define ${CONVERT}_MUL32  U64_C(0x$MUL32)"
  +   ADJ32=$(($FROM/$GCD))
  +   ADJ32=$(( (($ADJ32-1)<<$SHIFT)/$ADJ32 ))
  +   ADJ32=$(printf "%x" $ADJ32)
  +   echo "#define ${CONVERT}_ADJ32  U64_C(0x$ADJ32)"
  +   echo "#define ${CONVERT}_SHR32  $SHIFT"
  +   echo "#define ${CONVERT}_NUM  U64_C($(($TO/$GCD)))"
  +   echo "#define ${CONVERT}_DEN  U64_C($(($FROM/$GCD)))"
  +   done
  +done

 Is it a shell limitation that all spaces around operators are missing?
 Makes it hard to read..

No, I was just trying to make sure I didn't go over the 80 char limit.  
(Several temporary variables are there primarily because

Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-02 Thread Rob Landley
On Friday 02 January 2009 03:50:23 Paul Mundt wrote:
 On Fri, Jan 02, 2009 at 02:07:28AM -0600, Rob Landley wrote:
  The perl checkin for 2.6.25 was the camel's nose under the tent flap, and
  since then two more instances of perl have shown up in the core kernel
  build. This patch series removes the three required to build a basic
  kernel for qemu for the targets I've tested (arm, mips, powerpc, sparc,
  m68k, sh4, and of course x86 and x86-64), replacing them with shell
  scripts.

 Misguided rhetoric aside, what does this actually accomplish? If folks
 add meaningful tools in to the kernel that require python, and it is
 generally regarded as being fairly ubiquitous, I can't imagine there
 being any substantiated objections against it.

I think bloat-o-meter is a marvelous tool, and I'm a big fan of python.  But I 
don't think you should have to run that to compile a kernel either, largely 
because not needing it for the first 17 years or so implied living without 
this requirement could be done, sustainably even.

There's a difference between a development workstation and a dedicated build 
system.  Requiring you to install X11 plus qt on the headless build server 
cranking out nightly snapshots in order to run the configure stage of the 
kernel build would be silly.  But this is not an argument for ripping out 
make xconfig from the kernel.

Spot the difference?

 Your main reasons against inclusion of perl seem to be that there is no
 realistic expectation for target systems that will be self-hosting will
 have perl included, or the inherent complexity in maintaining a coherent
 cross compiling environment.

I'm saying it's a major new environmental dependency that went in fairly 
recently and largely without comment, and it causes real world headaches for 
real people, of which I am only one.

If you don't think environmental dependencies are a problem, I welcome you to 
attempt to build open office.  (Even the gentoo guys gave up on that one and 
just started shipping a prebuilt binary.)

I think large amounts of complexity start with small amounts of complexity 
that grow.  Complexity is inevitable, but there should be a _reason_ for 
increases in it.

 Both of these things are issues with your
 own environment, and in no way are these representative of the embedded
 development community at large.

 Now with that out of the way, this entire series fundamentally fails to
 convert half of the perl scripts shipped with the kernel today, some that
 are required for build depending on Kconfig options, and others that are
 simply useful tools for self-hosted environments.

I didn't say the job was finished.  These are just the ones I've already 
personally hit, and thus A) needed to rewrite to build the kernel in my build 
environment, B) have a handy test case for.

 Simply converting a
 couple of scripts over you find objectionable is certainly fine if there
 is a real benefit in doing so, but this doesn't actually accomplish your
 goal of removing the perl dependency.

A) It's a start.

B) It works for me, and builds the .configs I've personally needed so far.

 Ignoring the compile-time dependencies that you overlooked, what you
 define as development and debugging scripts are still an integral part
 of the system, unless you are trying to argue that embedded developers
 have no interest in things like checkstack due to the trouble of trying
 to get perl built.

Coming up with new patches and modifying the source is a different use for 
source code than going ./configure; make; make install.  This is true for 
most open source software, I'd expect.

Or are you implying that eclipse or Emacs are such great IDEs that being able 
to build outside of a GUI isn't interesting?  Should the ability to build 
within an IDE be allowed to preclude the ability to build without one?

 Until you can post a series that converts all of scripts/*.pl in its
 entirety, you have failed to address the fundamental reason why perl is
 used in the first place.

Never start anything unless you can finish it all in one go, eh?

Last I heard the kernel guys tend to frown on people who wander off in their 
own corner for a year and then dump a massive rewrite on them.  They seem to 
prefer the incremental trail of breadcrumbs approach.  Release early, 
release often, incorporate feedback, keep at it.

Or am I wrong?

 Trying to toss bizarre policy statements around
 regarding things you personally find objectionable without any coherent
 technical argument to the contrary is of no constructive use whatsoever.

Complexity is a cost, environmental dependencies are a form of complexity, if 
the benefit isn't worth the cost (or you can get the benefit without the cost) 
then you shouldn't pay the cost.

I was unaware this was a controversial attitude?

 The kernel is and always has been about using the right tool for the job,
 not a matter of dictating what tools you must use in order to accomplish
 a task you

Re: PATCH [0/3]: Simplify the kernel build by removing perl.

2009-01-02 Thread Rob Landley
On Friday 02 January 2009 03:49:34 Christoph Hellwig wrote:
 On Fri, Jan 02, 2009 at 10:26:37AM +0100, Arkadiusz Miskiewicz wrote:
  On Friday 02 of January 2009, Rob Landley wrote:
   Before 2.6.25 (specifically git
   bdc807871d58285737d50dc6163d0feb72cb0dc2 ) building a Linux kernel
   never required perl to be installed on the build system.  (Various
   development and debugging scripts were written in perl and python and
   such, but they weren't involved in actually building a kernel.)
   Building a kernel before 2.6.25 could be done with a minimal system
   built from gcc, binutils, bash, make, busybox, uClibc, and the Linux
   kernel, and nothing else.
 
  And now bash is going to be required... while some distros don't
  need/have bash. /bin/sh should be enough.

 *nod*  bash is in many ways a worse requirement than perl.  strict posix
 /bin/sh + awk + sed would be nicest, but if that's too much work perl
 seems reasonable.

The scripts should work with dash (modulo the one that needs 64 bit math, 
which dash only provides on a 64 bit host), or with busybox ash (which can 
provide 64 bit math on 32 bit hosts just like bash can).  I'll explicitly 
retest both of those when I respin the patches in the <strike>morning</strike> 
afternoon.

(And yes I thought about writing my own arbitrary precision arithmetic shell 
functions, but it really didn't seem worth the complexity since the only 32 
bit Linux distros I've seen that install dash also install bash by default.  I 
just put in a test for 32 bit math so it can spot it and fail, on the off 
chance you're running a 32 bit host with dash after explicitly uninstalling 
bash.  All the embedded 32 bit ones that try to act as development 
environments use busybox ash, or more often just install bash.)

That said, how is bash _worse_ than perl?  (Where's the second implementation 
of perl?  Even Python had jython, but perl has... what?  The attempt to rebase 
on Parrot went down in flames...)

If the argument is that depending on a specific shell implementation is as 
bad as depending on the one and only implementation of perl, that argument I 
can at least follow, even if it doesn't actually apply in this case.  But 
where does worse come in?

Rob


Re: linux under emulator

2008-08-08 Thread Rob Landley
On Tuesday 05 August 2008 12:30:45 Grant Likely wrote:
 On Tue, Aug 5, 2008 at 11:28 AM, Mihaela Grigore

 [EMAIL PROTECTED] wrote:
  If I intend to run a 2.6 linux kernel under a powerpc emulator, what
  is needed to make a minimal bootable system? I mean, apart from the
  kernel itself and busybox, do I need a bootloader ? If no actual
  hardware is used and the kernel can reside directly in ram from the
  emulator's point of view (so no relocation is needed), what else is to
  be done before the kernel can start running ?

 Look at the firmware linux documentation.  It should tell you
 everything you need.

I'm actually rewriting the documentation.  It could be made to suck less.

Currently http://landley.net/code/firmware/downloads/README is more or less in 
final form, but the about.html and the design.html pages are somewhere 
between in flux and in pieces.  (Working on it...)

I try to answer questions promptly, though. :)

 http://www.landley.net/code/firmware/

If you want to be lazy and try out the prebuilt binaries, you can also grab:

http://landley.net/code/firmware/downloads/binaries/system-image/system-image-powerpc.tar.bz2

Extract it, and ./run-emulator.sh (or ./run-with-home.sh if you'd like a 2 gig 
hdb image attached on /home so you have some scratch space to build stuff 
with.)  Installing qemu 0.9.1 is left as an exercise to the reader.
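In other words, assuming the tarball unpacks into a directory of the same
name:

  tar xjvf system-image-powerpc.tar.bz2
  cd system-image-powerpc
  ./run-emulator.sh    # boots to a shell prompt under qemu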

Rob

P.S.  Things you don't actually need to know, but just in case:

The sucker is a complete native build environment, gcc and everything.  It's 
set up like a Linux From Scratch chapter 6 intermediate system, with only 
the /tools directory existing by default, so you can build a new system 
without traces of the old one cluttering it up.  The boot script 
(/tools/bin/qemu-setup.sh) creates a bunch of symlinks and empty mount point 
directories at the top level to make the system behave like a normal build 
environment, so you can actually compile stuff and it should work.  See 
http://www.linuxfromscratch.org/lfs/view/stable/chapter06/chapter06.html for 
details on that.

If you want to use the distcc acceleration trick to compile stuff (calling out 
to the cross compiler from inside qemu, so whatever you're compiling still 
acts like fully native build but isn't _quite_ as painfully slow about it), 
grab cross-compiler-powerpc.tar.bz2 from the 
downloads/binaries/cross-compiler directory and extract that into your 
system-impage-powerpc directory, then run:
  ./run-with-emulator.sh cross-compiler-powerpc
(which calls ./run-with-home.sh, which calls ./run-emulator.sh, which calls 
qemu, which probably gets transferred to voicemail by that point...)
-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.


Re: Kernel boot problem on IXP422 Rev. A

2008-06-15 Thread Rob Landley
On Friday 13 June 2008 15:05:54 Tim Bird wrote:
 Rob,

 This is an excellent and concise description of the open
 source perspective on the problem.  I'll add just one note below.

 Rob Landley wrote:
  1) Try to reproduce the bug under a current kernel.  (Set up a _test_
  system.)

 This sounds easy, but can be quite difficult.

It's not a question of difficult or easy: it's the procedure that works.

You don't get support from a commercial vendor unless you pay them money, and 
you don't get support from open source developers unless you help us make the 
next release just a little bit better.  (We never said our help was free, we 
just said it didn't cost _money_.  Ok, the FSF did but they don't speak for 
all of us...)

 Very often, product developers are several versions behind, with
 no easy way to use the current kernel version.

I'm aware of that.  But if you can't set up a test system to reproduce the bug 
on a current system, the rest of us haven't got a _chance_.

 For example, a 
 common scenario is starting with a kernel that comes with a board
 (with source mind you), where the kernel came from the semi-conductor
 vendor, who paid a Linux vendor to do a port, and it was
 released in a time-frame relative to the Linux vendor's
 product schedule.

Then poke your vendor to fix the problem.

If you've decided to use a divergent fork from a vendor rather than the 
mainstream version, then the vendor has to support that fork for you because 
we're not going to be familiar with it.  (You can _hire_ one of us to support 
it for you, but we're not going to do so on a volunteer basis.)

We're happy to debug _our_ code.  But our code is the current vanilla 
release tarball.  If you can't reproduce the problem in the current vanilla 
tarball, then it's not our bug.  If you can only reproduce it in an older 
version: congratulations, we must have fixed it since.  If you can only 
reproduce it in some other fork, obviously their changes introduced the bug.  
If it's your code plus this patch, we need to see the patch.

If _you_ can't reproduce it in our code, how do you expect _us_ to?

 This is how you end up having people STARTING projects today
 using a 2.6.11 kernel.  (I know of many).

Oldest I've seen a new project launch with this year is 2.6.15, but I agree 
with your point.

Whoever decided backporting bug fixes to a 2.6.16 kernel forever was a good 
idea seems to have muddied the waters a bit.  Ironically I don't know anybody 
actually _using_ that version, but I've seen several people point to it to 
show that the community supports arbitrarily older versions forever, and 
thus they don't have to upgrade to get support, and 2.6.18 is actually 
_newer_ than that...

 The real difficulty, when a developer finds themselves in
 this position, is how to forward-port the BSP code necessary to
 reproduce the bug in the current kernel.  Often, the code
 is not isolated well enough (this is a vendor problem that
 really needs attention.  If you have the BSP in patches, it
 is usually not too bad to forward port even across several
 kernel versions.  But many vendors don't ship stuff this way.)

Yup.  Sucks, doesn't it?  This is not a problem that improves with the passage 
of time.

Might be a good idea to make it clear up front that even if your changes never 
get mainlined, failure to break up and break out your patches is still likely 
to cause maintenance problems down the road.

 The fact is, that by a series of small steps and delays by
 the linux vendor, chip vendor, board vendor,
 and product developer the code is out-of step.

Hence the importance of breaking out and breaking up the changes.

 It's easy to say don't get in this position, but
 this even happens when everyone is playing nice and actively
 trying to mainline stuff.  BSP support in arch trees often
 lag mainline by a version or two.

Getting out of sync is inevitable.  Happens to full-time kernel developers, 
that's why they have their own trees.  That's a separate issue from asking 
for patches and getting a source tarball that compiles instead.  Here's a 
haystack, find the needle.

Mainlining changes and breaking them up into clean patches on top of some 
vanilla version (_any_ vanilla version) are two separate things.  You have to 
win one battle before you can even start the other.

 The number of parties involved here is why, IMHO, it has
 taken so long to make improvements in this area.

The lack of a clear consistent message from us to the vendors hasn't helped.

Rob
-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.


Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-15 Thread Rob Landley
On Sunday 15 June 2008 10:39:43 Leon Woestenberg wrote:
 Hello all,

 On Thu, Jun 12, 2008 at 2:41 AM, Rob Landley [EMAIL PROTECTED] wrote:
  Most packages don't cross compile at all.  Debian has somewhere north of
  30,000 packages.  Every project that does large scale cross compiling
  (buildroot, gentoo embedded, timesys making fedora cross compile, etc)
  tends to have about 200 packages that cross compile more or less easily,
  another 400 or so that can be made to cross compile with _lot_ of effort
  and a large enough rock, and then the project stalls at about that size.

 Agreed, OpenEmbedded has a few thousands, but your point is valid.
 However, fleeing to target-native compilation is not the way to
 improve the situation IMHO.

You say it like fleeing is a bad thing. :)

I believe building natively under emulation is the Right Thing.  Cross 
compiling has always historically been a transitional step until native 
compiling became available on the target.

When Ken Thompson and Dennis Ritchie were originally creating Unix for the 
PDP-7, they cross compiled their code from a honking big GE mainframe because 
that was their only option.  One of the first things they wrote was a PDP-7 
assembler that ran on the PDP-7.  The reason they created the B programming 
language in the first place was to have a tiny compiler that could run 
natively on the PDP-7, and when they moved up to a PDP-11 Dennis had more 
space to work with and expanded B into C.

When they severed the mainframe umbilical cord as soon as they were able to 
get the system self-hosting, it wasn't because the PDP-7 had suddenly become 
faster than the GE mainframe.

Compiling natively where possible has been the normal way to build Unix 
software ever since.  Linux became a real project when Linus stopped needing 
Minix to cross-compile it.  Linus didn't flee Minix, he assures us he 
erased his minix partition purely by accident. :)

 Moore's law on hardware also goes for the host, 

Which is why people no longer regularly write application software in assembly 
language, because we don't need to do that anymore.  The result would be 
faster, but not better.

The rise of scripting languages like Python and javascript that run the source 
code directly is also related (and if you don't think people don't write 
complete applications in those you haven't seen any of the google apps).  The 
big push for Java in 1998 could happen because the hardware was now fast 
enough to run _everything_ under an emulator for a processor that didn't 
actually exist (until Rockwell built one, anyway).

Build environments are now literally thousands of times faster than when I 
started programming.  The first machine I compiled code on was a commodore 64 
(1mhz, 8 bits, the compiler was called blitz and the best accelerator for 
it was a book).  The slowest machine I ever ran Linux on was a 16 mhz 386sx.

According to my blog, I moved from a 166mhz laptop to a 266mhz one on April 
13, 2002.  I started building entire Linux From Scratch systems on the 166mhz 
machine, including a ton of optional packages (apache, postgresql, openssh, 
samba, plus it was based on glibc and coreutils and stuff back then so the 
build was _slow_), hence the necessity of scripting it and leaving the build 
to its own devices for a few hours.

Even without distcc calling out to the cross compiler, the emulated system 
running on my laptop is several times faster than the build environment I had 
7 years ago (2001), somewhat faster than the one I had 5 years ago (2003), 
and somewhat slower than the one I had 3 years ago (2005).  (That's emulating 
an x86 build environment on my x86_64 laptop.  I didn't _have_ a non-x86 
build enviornment 5 years ago for comparison purposes.)

 I think the progress is even bigger on big iron.

Not that I've noticed, unless by big iron, you mean PC clusters.  (You can 
expand laterally if you've got the money for it and your problem distributes 
well...)

 Also, how much of the 3 packages are useful for something like
 your own firmware Linux?

None of them, because Firmware Linux has a strictly limited agenda: provide a 
native build environment on every system emulation supported by qemu.  That's 
the 1.0 release criteria.  (Some day I may add other emulators like hercules 
for s390, but the principle's the same.)

Once you have the native build environment, you can bootstrap Gentoo, or 
Debian, or Linux From Scratch, or whatever you like.  I've got instructions 
for some of 'em.

The buildroot project fell into the trap of becoming a distro and having to 
care about the interaction between hundreds of packages.  I'm not interested 
in repeating that mistake.

Figuring out what packages other people might need is something I stopped 
trying to predict a long time ago.  If it exists, somebody wanted it.  People 
want/need the weirdest stuff: the accelerometer in laptops is used for 
rolling marble games, and the iPhone is a cell phone

Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-15 Thread Rob Landley
On Thursday 12 June 2008 13:18:07 Enrico Weigelt wrote:
 * Rob Landley [EMAIL PROTECTED] schrieb:

 Hi,

  There's also qemu.  You can native build under emulation.

 did you ever consider that crosscompiling is not only good for
 some other arch, but a few more things ?

Sure, such as building a uClibc system on a glibc host, which my _previous_ 
firmware linux project (http://landley.net/code/firmware/old) was aimed at.

That used User Mode Linux instead of qemu, because fakeroot wasn't good 
enough and chroot A) requires the build to run as root, B) sometimes gets a 
little segfaulty if you build uClibc with newer kernel headers than the 
kernel in the system you're running on.

You can't get away from cross compiling whenever you want to bootstrap a new 
platform.  But cross compiling can be minimized and encapsulated.  It can be 
a stage you pass through to get it over with and no longer have to deal with 
it on the other side, which is the approach I take.

  In addition, if you have a cross compiler but don't want to spend all
  your time lying to ./configure, preventing gcc from linking against the
  host's zlib or grabbing stuff out of /usr/include that your target hasn't
  got, or

 #1: use a proper (sysroot'ed) toolchain

I break everything.  (I've broken native toolchains.  I just break them 
_less_.)

By my count sysroot is the fifth layer of path logic the gcc guys have added 
in an attempt to paint over the dry rot.

Personally I use a derivative of the old uClibc wrapper script that rewrites 
the command line to start with --nostdinc --nostdlib and then builds it 
back up again without having any paths in there it shouldn't.
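The shape of that wrapper, as a sketch (the prefix path and the renamed
real compiler here are made-up placeholders, and the real script
reconstructs more of the link line than this):

  #!/bin/sh
  # Throw away gcc's built-in search paths, then add back only the
  # target's own headers and libraries.
  PREFIX=/opt/cross/armv4l
  exec "$PREFIX/bin/armv4l-rawgcc" -nostdinc -nostdlib \
    -isystem "$PREFIX/include" -L "$PREFIX/lib" "$@" -lgcc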

 #2: fix broken configure.in's (and feed back to upstream or OSS-QM)

Whack-a-mole.  Fun for the whole family.  Only problem is, it never stops.

 #3: replace libtool by unitool

Uninstall libtool and don't replace it with anything, it's a NOP on Linux.

  libraries are linked inside the emulator, anything that wants to look
  at /proc or sysinfo does it natively inside the emulator...)

 Only crap sw looks at /proc at build time.
 Yes, there's *much* crap sw out there :(

99% of all the developers out there don't really care about portability, and 
never will.  Even if you eliminate the windows guys and the people who don't 
do C, 90% of the people who are _left_ get to work on the PC first, get it to 
work natively on other Linux platforms afterwards.

Cross compiling is a step beyond portability.  They'll _never_ care about 
cross compiling.  If they get inspired to make it work on MacOS X, then 
you'll have to extract the source and _build_ it on MacOS X to make that 
work.  And 99% of all developers will nod their heads and go quite right, as 
it should be.

This isn't going to change any time soon.

Rob
-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.


Firmware Linux (was Re: Cross Compiler and loads of issues)

2008-06-13 Thread Rob Landley
On Thursday 12 June 2008 12:52:44 Shaz wrote:
 Hi,

 I have been following Re: [PATCH 0/1] Embedded Maintainer(s) and
 felt like asking whether there is one good way to get a cross compiler
 to work. I tried buildroot, scratchbox and even openMoko with
 openEmbedded but all of them had lots of issues and don't know which
 will be the best alternative.

Did you try my FWL project? :)

http://landley.net/code/firmware

To build from source, go to http://landley.net/code/firmware/downloads and 
grab the latest version (firmware-0.4.0.tar.bz2).  Extract it, cd into it, 
and run ./build.sh.  That should list the available platforms, and then run 
with the platform you like as an argument (ala ./build.sh powerpc).
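So the whole thing, end to end:

  wget http://landley.net/code/firmware/downloads/firmware-0.4.0.tar.bz2
  tar xjvf firmware-0.4.0.tar.bz2
  cd firmware-0.4.0
  ./build.sh            # no argument: lists the available platforms
  ./build.sh powerpc    # then run it again with the one you want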

The build.sh wrapper runs the other scripts in sequence:
  ./download.sh - downloads source code if you haven't already got it.
  ./host-tools.sh - compiles stuff on the host platform (mostly optional).
  ./cross-compiler.sh - build a cross compiler toolchain for your target.
  ./mini-native.sh - Compile a kernel and root filesystem for your target.
  ./package-mini-native.sh - Create an ext2 image out of mini-native.

The other interesting scripts in the directory are:
  ./forkbomb.sh - Build every target (--nofork in series, --fork in parallel).
  Calling with --fork is faster, but needs about 4 gigabytes of ram.
  ./run-emulator.sh - Boot one of the system images you just built under qemu.
  Also sets up the distcc trick (see below).

If you don't feel like compiling any of the above yourself, you can download 
prebuilt binary images from the same downloads link.  The cross-compiler 
directory has the prebuilt cross compilers for each target (for i686 and 
x86_64 hosts).  The mini-native directory has the prebuilt root filesystem 
images as tarballs.  And the system-image directory has a tarball with 
that same mini-native root filesystem as an ext2 image, a kernel for qemu, 
and shell scripts to invoke qemu appropriately for each one:

  ./run-emulator.sh - Run qemu, booting to a shell prompt.
  ./run-with-home.sh - Calls run-emulator with a second hard drive image
  hooked up as /dev/hdb and mounted on /home.  (If you haven't got an
  hdb.img in the directory it'll create an empty 2 gigabyte image and
  format it ext3.)
  ./run-with-distcc.sh - Calls run-with-home with the distcc trick set up.

The reason mini-native is called that is because it's a minimal native 
development environment.  It contains gcc, binutils, make, linux kernel 
headers, uClibc for your C library, and standard susv3 command line tools 
(provided by the busybox and toybox packages).  This is theoretically enough 
to build the whole of Linux From Scratch (http://www.linuxfromscratch.org) or 
bootstrap your way up to being able to natively build gentoo or debian using 
their package management systems.  (If you don't _want_ development tools in 
your root filesystem, set the environment variable BUILD_SHORT=1 before 
running ./mini-native.sh.  That'll also move it out of /tools and up to the 
top level.)

In practice, the environment is still missing seven commands (bzip2, find, 
install, od, sort, diff, and wget) needed to rebuild itself under itself.  
(The busybox versions were either missing or had a bug.)  I'm working on that 
for the next release.

The distcc trick is for speeding up native builds by using distcc to call out 
to the cross compiler.  Running the cross compiler on the host system is 
faster, but cross compiling is very brittle.  This approach does a native 
build under the emulator, but escapes the emulator for the actual compiling 
part.  I did a quick bench test compling make 3.81 (I had it lying around) 
and it sped up the actual compile by a factor of 7.  (Alas, it didn't speed 
up the ./configure part at all, just the make part.)
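The general shape of the trick, as a simplified sketch (the addresses are
qemu's user-networking defaults; the real scripts also handle compiler
name masquerading and pointing the build at distcc):

  # On the host: run distccd with the cross compiler first in its PATH.
  PATH=/path/to/cross-compiler/bin:$PATH distccd --daemon --allow 10.0.2.0/24
  # Inside the emulator: 10.0.2.2 is how the guest reaches the host.
  DISTCC_HOSTS=10.0.2.2 make CC="distcc gcc"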

The ./emulator-build.sh script sets up the distcc trick using the files left 
in the build directory after you build it yourself.  (The cross compiler is 
left in there, and the kernel and ext2 system images that got tarred up are 
also left in there.)  The ./run-with-distcc.sh script does it for 
cross-compiler and system-image tarballs.  (It needs one argument: You have 
to tell it the path where you extracted the cross-compiler tarball.)

 I also went through the material provided freely by Free Electron but
 still I am not successful to build a custom kernel. Next I am trying
 MontaVista's kit. I just wish I don't get lost.

I'm happy to answer any questions about the stuff I did... :)

 Anyways, I liked the idea of Qemu based cross compiler. Is it possible
 for the inexperienced to get it working and emulate the exact model
 and devices.

That's what I'm trying to do.  I've got armv4l, armv5l, mips (big endian), 
mipsel (little endian), powerpc (a prep variant), i586, i686, and x86_64 
working.  They all work fine.  Run ./smoketest.sh $ARCH on each $ARCH after 
building it to see the sucker boot up and compile hello world using distcc.  
That means 

Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-13 Thread Rob Landley
On Thursday 12 June 2008 12:14:32 Bill Gatliff wrote:
 Paul Mundt wrote:
  Yes, that's the easy case. It's things like perl that are the corner
  cases, and my objection comes from the fact that people think we ought to
  not have the kernel depend on perl rather than just fixing the package
  itself. Autoconf/libtool damage is an entirely different problem :-)

 At first glance, it seems like checkincludes.pl could be duplicated by
 egrep | uniq | wc vs. egrep | wc.  Not quite sure what checkversion.pl is
 trying to do.

There's a difference between this is a development tool used while modifying 
source code and this is needed to build.

There are situations where it's ok to have a dependency on X11/qt/gtk, such 
as make xconfig.  This is _not_ the same as adding such dependency to make 
modules.

So far, none of the perl dependencies prevent you from building the kernel on 
a system that doesn't have perl (or doesn't have the right version of perl).

 So maybe we could _reduce_ dependency on perl, if there's any advantage to
 gain by doing so.  But the kernel build machinery isn't dependent on very
 many other systems (just tcl, bash and gcc-core),

There's no tcl dependency in the build.  (Yes, I actually know this.)

Part of my FWL work involves getting the system to rebuild itself under 
itself.  (The packages you need to make a minimal self-bootstrapping system 
are gcc, binutils, make, bash, uClibc, linux, and busybox/toybox).  I'm seven 
commands away from doing this.

I know this because I made a horrible little wrapper (attached, it really is 
sad) which touched a file with the name it was called as and then exec()ed 
the actual executable out of another directory.  Then I populated a directory 
with symlinks to every executable in $PATH (for i in `echo $PATH | 
sed 's/:/ /g'`;do for j in `ls $i`; do ln -s $i/$j $j; done; done), and 
another directory of similar symlinks to my wrapper.  I then ran my build 
with that wrapper directory at the start of $PATH and let the wrapper 
populate a directory with all the executables that actually got called during 
the build.  Then I filled up a directory with those executables, tried to run 
the build, and figured out why it broke.  (The above approach won't find 
calls to /bin/bash and a few other things, but it's a good start.)

Most of the point of my ./host-tools.sh wrapper in the FWL build is to 
populate a directory with the command line utilities mini-native will have in 
it (specifically the busybox/toybox versions, not the ones in the host 
system), and set $PATH equal to that directory and only that directory.  This 
way I know the system will build under itself because that's how it's 
building in the first place.

Currently, I need to grab the following out of the host system:

for i in ar as nm cc gcc make ld   bzip2 find install od sort diff wget
do
  [ ! -f ${HOSTTOOLS}/$i ] && (ln -s `which $i` ${HOSTTOOLS}/$i || dienow)
done

The first seven are the needed bits of the host toolchain (you'd think strip 
would be in there, but it turns out those packages only ever use 
$TARGET-strip).  The last seven are the ones that are either missing or had 
various bugs in the version of busybox I'm using that prevented the build 
from working right.

Rob
-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#include <fcntl.h>

char blah[65536];

#define ROOTPATH "/home/landley/firmware/firmware"

int main(int argc, char *argv[], char *env[])
{
  char *p, *p2;

  int i, fd;

  p2 = strrchr(*argv, '/');
  if (!p2) p2=*argv;
  else p2++;

  p = blah + sprintf(blah, "%s ", p2);
  for (i=1; i<argc; i++) {
    p += sprintf(p, "\"%s\" ", argv[i]);
  }
  p[-1]='\n';

  // Log the command line

  fd=open(ROOTPATH /loggy.txt,O_WRONLY|O_CREAT|O_APPEND,0777);
  write(fd, blah, strlen(blah));
  close(fd);

  // Touch the file that got used.

  sprintf(blah,ROOTPATH /used/%s, p2);
  close(open(blah, O_WRONLY|O_CREAT, 0777));

  // Hand off control to the real executable
  sprintf(blah, ROOTPATH /handoff/%s, p2);
  execve(blah, argv, env);

  // Should never happen, means handoff dir is set up wrong.
  dprintf(2,Didn't find %s\n, *argv);
  exit(1);
}


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-13 Thread Rob Landley
On Friday 13 June 2008 04:06:18 Alexander Neundorf wrote:
  And the above are not really a big problem -

 checking if something builds is no problem, this just works. Running
 something is a problem, as in it doesn't just work (...because you cannot
 run it).

Noticing 2 weeks after deployment that signal handling in the mips version of 
perl is using the x86 signal numbers and they're not the same: priceless.
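
Signal numbers are per-architecture constants, which is exactly the kind of 
thing a configure-time "compile and run a test program" check gets wrong when 
the build host and the target differ.  A trivial illustration (a demo, not 
part of perl's build):

  #include <signal.h>
  #include <stdio.h>

  int main(void)
  {
    /* SIGUSR1 is 10 on x86 but 16 on mips; SIGCHLD is 17 vs 18. */
    printf("SIGUSR1=%d SIGCHLD=%d\n", SIGUSR1, SIGCHLD);
    return 0;
  }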

  The only simple solution so far (without diving into the implementation
  and searching for root causes) were AFAICS:
  - do not use libtool for linking (as the link line as such without
libtool works as expected)

 Yes, libtool sucks, it's the wrong solution to the problem.
 (and CMake doesn't use it).

Nothing on Linux really _uses_ libtool.  It's supposed to act as a NOP wrapper 
around the linker on any Linux target.  (It's there for things like Sparc and 
HPUX).

The fact that libtool manages to do its nothing _wrong_ so often would be 
hilarious if it wasn't such a pain.  Just uninstall libtool before trying to 
build for a Linux target; this should never cause any problems and will save 
you lots of headaches.
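
If you're curious just how little it's supposed to be doing, a do-nothing 
stand-in is a few lines of shell.  This is purely an illustrative sketch, 
assuming the build only ever calls libtool as "libtool --mode=... <real 
compiler/linker command>" (it won't produce the .la/.lo bookkeeping files 
real libtool emits):

  #!/bin/sh
  # Hypothetical NOP libtool: skip the leading --option arguments, then
  # run the actual compiler/linker command verbatim.
  while [ "${1#--}" != "$1" ]; do shift; done
  exec "$@"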

  Why on earth does someone need this explicitly during the build?
  If you have portable software, all of that should be hidden in the code
  and use sizeof(int).

According to the LP64 standard, which pretty much all modern Unixes adhere to 
(including both Linux and MacOS X), sizeof(int) is always 4.  Guaranteed.

The LP64 standard:
  http://www.unix.org/whitepapers/64bit.html

The LP64 rationale:
  http://www.unix.org/version2/whatsnew/lp64_wp.html

The insane legacy reasons Windows doesn't do this (though in fact sizeof(int) 
will still be 4 there anyway):
  http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx

Just FYI. :)
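
A trivial way to check any given box (assuming nothing beyond a C compiler):

  #include <stdio.h>

  int main(void)
  {
    /* LP64 (Linux, MacOS X): int=4 long=8 ptr=8.  LLP64 (win64): 4/4/8. */
    printf("int=%zu long=%zu ptr=%zu\n",
           sizeof(int), sizeof(long), sizeof(void *));
    return 0;
  }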

 But this was not the point. My point was: testing something by running an
 executable can be _a lot_ easier than testing the same without running
 something.

I think building natively under qemu is the easy way, yes. :)

 Alex
 --
 To unsubscribe from this list: send the line unsubscribe linux-embedded
 in the body of a message to [EMAIL PROTECTED]
 More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: mainlining min-configs...

2008-06-11 Thread Rob Landley
On Wednesday 11 June 2008 14:46:55 Tim Bird wrote:
 Adrian Bunk wrote:
  Randy's patch that documents KCONFIG_ALLCONFIG is in Linus' tree since
  April 2006.

 Well, dangit there it is!

 The patch I googled had it going into Documentation/kbuild.  It
 somehow escaped my attention in the README.

Mine as well.  (I grepped for KCONFIG_ALLCONFIG in Documentation, not in the 
rest of the tree.  I of all people should know better...)

 If I was 
 a little more skilled with my grep-ing I would have found it.
 Sorry for the bother!

The linux kernel has documentation in Documentation, in the "make htmldocs" 
output, in the Kconfig help entries, in README files, in the output of "make 
help", and several other places.  (And then there's all the stuff that's not 
_in_ the kernel tarball...)

It's easy to miss.
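
For anyone else who missed it, the basic usage is just (mini.config here 
standing in for whatever fragment of symbols you want forced on):

  # Force the symbols listed in mini.config, default everything else to "n".
  make allnoconfig KCONFIG_ALLCONFIG=mini.config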

Rob

P.S: yes, README files plural.
  find . -name "*[Rr][Ee][Aa][Dd][Mm][Ee]*" | grep -v Documentation | wc
       69      69    2315

-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] console - Add configurable support for console charset translation

2008-06-04 Thread Rob Landley
On Wednesday 04 June 2008 05:33:53 Adrian Bunk wrote:
 Does the linux-tiny approach of adding a kconfig variable for each 5kB
 of code actually make sense? I'm asking since an exploding number of
 kconfig variables and their interdependencies have a not so small
 maintenance impact in the long term.

Complexity is a cost; you have to get good bang for the buck when you spend 
it.

 And I'm wondering whether it's the best approach for reaching
 measurable results.

When I first started stripping down systems to make embedded masquerading 
routers back in the late 90's (before linksys came out), I started with a Red 
Hat install and removed lots and lots of packages.  That's the approach we're 
taking today, and I can say from experience that it's not sustainable.

I then wandered to a Linux From Scratch approach, building a system that had 
nothing in it but what I wanted.  Starting from zero and adding stuff, rather 
than starting from Mt. Crapmore and removing things until the shovel broke.

Someday I want to do the same for the Linux kernel.  When I started building 
systems instead of carving them out of blocks of distro, I started with 
a "hello world" root filesystem, and I want to make a "hello world" kernel.  
Start with just the boot code that does the jump to C code, only instead of 
start_kernel() in init/main.c have it call a hello_world() function that 
prints "hello world" to the console using the early_printk logic, then calls 
HLT.  And does _nothing_else_.  Then add stuff back one chunk at a time, 
starting with memory management, then the scheduler and process stuff, then 
the vfs, and so on.  So I know what all the bits do, and how big and 
complicated they are.  And I can document the lot of it as I go.
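
In other words, the top of the kernel would briefly be something like this 
hypothetical sketch (the shape of the idea, not real kernel code):

  /* The arch boot stub jumps here instead of start_kernel(). */
  void hello_world(void)
  {
    early_printk("hello world\n");
    for (;;)
      asm volatile("hlt");  /* x86; other targets spell this differently */
  }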

Unfortunately, as a learning experience, I estimate this would take me about a 
year.  And I haven't got a spare year on me at the moment.  But it remains 
prominently on my todo list, if I decide to start another major project.  
(Maybe after I get a 1.0 release of FWL out.)

 My gut feeling is that the influence of this kind of linux-tiny patches
 is hardly noticeable compared to the overall code size development, but
 if you have numbers that prove me wrong just point me to them and I'll
 stand corrected.

The whack-a-mole approach is never going to turn Ubuntu into Damn Small Linux, 
and it ignores the needs of the people who don't want the /proc hairball but 
_do_ want a ps that works.

Rob
-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.
--
To unsubscribe from this list: send the line unsubscribe linux-embedded in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html