Re: about size optimizations (Re: Not as much ccache win as I expected)

2008-06-15 Thread Jamie Lokier
David Woodhouse wrote:
 On Sat, 2008-06-14 at 10:56 +0100, Oleg Verych wrote:
  I saw that. My point is pure text processing. But as it seems doing
  `make` is a lot more fun than to do `sh` & `sed`.
 
 The problem is that it _isn't_ pure text processing. There's more to
 building with --combine than that, and we really do want the compiler to
 do it.
 
 _Sometimes_ you can just append C files together and they happen to
 work. But not always. A simple case where it fails would be when you
 have a static variable with the same name in two different files.

I suspect the simplest way to adapt an existing makefile is:

1. Replace each compile command gcc args... file.c -o file.o
   with gcc -E args... file.c -o file.o.i.

2. Replace each incremental link ld -r -o foo.o files... with
   cat `echo files... | sed 's/$/.i/'` > foo.o.i.

3. Similar replacement for each ar command making .a files.

4. Replace the main link ld -o vmlinux files... with
   gcc -o vmlinux --combine -fwhole-program `echo files... | sed 's/$/.i/'`.

You can do this without changing the Makefile, if you provide suitable
scripts on $PATH for the make.
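
For example, a wrapper standing in for gcc might look roughly like this (a
minimal sketch only: it assumes the real compiler lives in /usr/bin, assumes
every compile names its output with -o as kbuild does, and ignores quoting
of exotic arguments):

    #!/bin/sh
    # Hypothetical "gcc" found first on $PATH: turn compile steps into
    # preprocess-only steps writing file.o.i, pass everything else through.
    REAL_GCC=/usr/bin/gcc              # assumed location of the real compiler
    case " $* " in
      *" -c "*)
        args= out= want_out=
        for a in "$@"; do
          if [ -n "$want_out" ]; then out="$a.i"; want_out=; continue; fi
          case "$a" in
            -c) ;;                     # drop -c, we only preprocess
            -o) want_out=1 ;;          # the next argument is the output name
            *)  args="$args $a" ;;
          esac
        done
        exec "$REAL_GCC" -E $args -o "$out"
        ;;
      *)
        exec "$REAL_GCC" "$@"
        ;;
    esac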

-- Jamie


Re: about size optimizations (Re: Not as much ccache win as I expected)

2008-06-15 Thread Oleg Verych
 You can do this without changing the Makefile, if you provide suitable
 scripts on $PATH for the make.

I want to add here the whole issue of kbuild's way of dependency
calculation and its rebuild technique.

1) This whole infrastructure is needed only for developers. But a
developer, while writing/updating some code, must know what has changed
and how it impacts all dependent/relevant code. Thus, one must create a
list of all affected files *before* doing edit/build/run cycles (even with
git/quilt aid). And this list must be fed to the build system to make sure
everything needed is rebuilt, and nothing else is (to save time).

This is a matter of organizing tools and ways of doing things -- a very
important part of doing anything effectively.

2) OTOH a user needs no such thing at all. New kernel -- new build from
scratch. Distros are the same. Also, blind faith in a correct rebuild
from an old object pool is naive.

3) Testers apply and test patches. OK, it's now the rule to include a
diffstat, and thus a list of changed files. But one can easily filter
them out of the diff/patch with `sed` -- it can even be done while
rejecting pure whitespace/comment changes.
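
For example, something roughly like this pulls the touched filenames out of
a git-style patch (a sketch only; plain `diff -u` headers would need the
timestamp field cut off as well):

    sed -n 's|^+++ b/||p' changes.patch | sort -u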

Now you have the list of files; feed it to the build system, as in (1). No
`make` (recursive or not, or whatever) is needed (use a ccache-like
tool in the general case to save build time). Its key mechanism --
timestamps -- is a lock on development that the `make`-based kbuild 2.6
somehow had to overcome. What an irony.

Problems:

* more flexible source-usage (and thus dependency) tracking is needed
(per-variable, per-function, per-file). This must not be random
comments near #include; it must be a natural part of the source files
themselves. Filenames are not subject to frequent changes. Big files
can be split, but the main prefix must stay the same, so there is no
need to change it in all users. A small ENOENT || prefix* heuristic is
quite OK here.

* implemented features and their options must be described and
documented in place in the sources (distributed configuration). Licence
blocks are not needed; one has a top-level file with the licence, or
MODULE_LICENSE(). Describe your source in a form that is easily
parseable for creating dependency and configuration items/options.

* once all this is in place, creating specific config sets by end users
must not be as painful for both sides as it is now.

#include's & #ifdef's are a proven PITA; flexible text processing
(analysis, transformations) with basic tools like `sed` (or `perl`) is
the right way IMHO. At this stage no `gcc -E` is needed for a working
`cat $all > linux.c`.

(Another stone of mine thrown at The Art of Thinking in `make` and C. I
hope it's constructive. Again, I see all of this being handled with a
very small set of universal scripts.)
-- 
sed 'sed  sh + olecom = love'''
-o--=O`C
 #oo'L O
___=E M


Re: Not as much ccache win as I expected

2008-06-15 Thread Jörn Engel
On Fri, 13 June 2008 14:10:29 -0700, Tim Bird wrote:
 
 Maybe I should just be grateful for any ccache hits I get.

ccache's usefulness depends on your workload.  If you make a change to
include/linux/fs.h, close to 100% of the kernel is rebuilt, with or
without ccache.  But when you revert that change, the build time differs
dramatically.  Without ccache, fs.h was simply changed again and
everything is rebuilt.  With ccache, there are hits for the old version
and all is pulled from the cache - provided you have allotted enough
disk for it.
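
For reference, one way to wire it up for a kernel build (the 5G cache size
is just an arbitrary example):

    ccache -M 5G             # let the cache grow enough to hold old objects
    make CC="ccache gcc"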

If you never revert to an old version or do some equivalent operation,
ccache can even be a net loss.  On a fast machine, the additional disk
accesses are easily more expensive than the minimal cpu gains.

Jörn

-- 
Public Domain  - Free as in Beer
General Public - Free as in Speech
BSD License- Free as in Enterprise
Shared Source  - Free as in Work will make you...


Re: Kernel boot problem on IXP422 Rev. A

2008-06-15 Thread Rob Landley
On Friday 13 June 2008 15:05:54 Tim Bird wrote:
 Rob,

 This is an excellent and concise description of the open
 source perspective on the problem.  I'll add just one note below.

 Rob Landley wrote:
  1) Try to reproduce the bug under a current kernel.  (Set up a _test_
  system.)

 This sounds easy, but can be quite difficult.

It's not a question of difficult or easy: it's the procedure that works.

You don't get support from a commercial vendor unless you pay them money, and 
you don't get support from open source developers unless you help us make the 
next release just a little bit better.  (We never said our help was free, we 
just said it didn't cost _money_.  Ok, the FSF did but they don't speak for 
all of us...)

 Very often, product developers are several versions behind, with
 no easy way to use the current kernel version.

I'm aware of that.  But if you can't set up a test system to reproduce the bug 
on a current system, the rest of us haven't got a _chance_.

 For example, a 
 common scenario is starting with a kernel that comes with a board
 (with source mind you), where the kernel came from the semi-conductor
 vendor, who paid a Linux vendor to do a port, and it was
 released in a time-frame relative to the Linux vendor's
 product schedule.

Then poke your vendor to fix the problem.

If you've decided to use a divergent fork from a vendor rather than the 
mainstream version, then the vendor has to support that fork for you because 
we're not going to be familiar with it.  (You can _hire_ one of us to support 
it for you, but we're not going to do so on a volunteer basis.)

We're happy to debug _our_ code.  But our code is the current vanilla 
release tarball.  If you can't reproduce the problem in the current vanilla 
tarball, then it's not our bug.  If you can only reproduce it in an older 
version: congratulations, we must have fixed it since.  If you can only 
reproduce it in some other fork, obviously their changes introduced the bug.  
If it's your code plus this patch, we need to see the patch.

If _you_ can't reproduce it in our code, how do you expect _us_ to?

 This is how you end up having people STARTING projects today
 using a 2.6.11 kernel.  (I know of many).

Oldest I've seen a new project launch with this year is 2.6.15, but I agree 
with your point.

Whoever decided backporting bug fixes to a 2.6.16 kernel forever was a good 
idea seems to have muddied the waters a bit.  Ironically I don't know anybody 
actually _using_ that version, but I've seen several people point to it to 
show that the community supports arbitrarily old versions forever, and 
thus they don't have to upgrade to get support, and 2.6.18 is actually 
_newer_ than that...

 The real difficulty, when a developer finds themselves in
 this position, is how to forward-port the BSP code necessary to
 reproduce the bug in the current kernel.  Often, the code
 is not isolated well enough (this is a vendor problem that
 really needs attention.  If you have the BSP in patches, it
 is usually not too bad to forward port even across several
 kernel versions.  But many vendors don't ship stuff this way.)

Yup.  Sucks, doesn't it?  This is not a problem that improves with the passage 
of time.

Might be a good idea to make it clear up front that even if your changes never 
get mainlined, failure to break up and break out your patches is still likely 
to cause maintenance problems down the road.

 The fact is, that by a series of small steps and delays by
 the linux vendor, chip vendor, board vendor,
 and product developer the code is out-of step.

Hence the importance of breaking out and breaking up the changes.

 It's easy to say don't get in this position, but
 this even happens when everyone is playing nice and actively
 trying to mainline stuff.  BSP support in arch trees often
 lag mainline by a version or two.

Getting out of sync is inevitable.  Happens to full-time kernel developers, 
that's why they have their own trees.  That's a separate issue from asking 
for patches and getting a source tarball that compiles instead.  Here's a 
haystack, find the needle.

Mainlining changes and breaking them up into clean patches on top of some 
vanilla version (_any_ vanilla version) are two separate things.  You have to 
win one battle before you can even start the other.

 The number of parties involved here is why, IMHO, it has
 taken so long to make improvements in this area.

The lack of a clear consistent message from us to the vendors hasn't helped.

Rob
-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.


Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-15 Thread Rob Landley
On Sunday 15 June 2008 10:39:43 Leon Woestenberg wrote:
 Hello all,

 On Thu, Jun 12, 2008 at 2:41 AM, Rob Landley [EMAIL PROTECTED] wrote:
  Most packages don't cross compile at all.  Debian has somewhere north of
  30,000 packages.  Every project that does large scale cross compiling
  (buildroot, gentoo embedded, timesys making fedora cross compile, etc)
  tends to have about 200 packages that cross compile more or less easily,
   another 400 or so that can be made to cross compile with a _lot_ of effort
  and a large enough rock, and then the project stalls at about that size.

 Agreed, OpenEmbedded has a few thousand, but your point is valid.
 However, fleeing to target-native compilation is not the way to
 improve the situation IMHO.

You say it like fleeing is a bad thing. :)

I believe building natively under emulation is the Right Thing.  Cross 
compiling has always historically been a transitional step until native 
compiling became available on the target.

When Ken Thompson and Dennis Ritchie were originally creating Unix for the 
PDP-7, they cross compiled their code from a honking big GE mainframe because 
that was their only option.  One of the first things they wrote was a PDP-7 
assembler that ran on the PDP-7.  The reason they created the B programming 
language in the first place was to have a tiny compiler that could run 
natively on the PDP-7, and when they moved up to a PDP-11 Dennis had more 
space to work with and expanded B into C.

When they severed the mainframe umbilical cord as soon as they were able to 
get the system self-hosting, it wasn't because the PDP-7 had suddenly become 
faster than the GE mainframe.

Compiling natively where possible has been the normal way to build Unix 
software ever since.  Linux became a real project when Linus stopped needing 
Minix to cross-compile it.  Linus didn't flee Minix, he assures us he 
erased his minix partition purely by accident. :)

 Moore's law on hardware also goes for the host, 

Which is why people no longer regularly write application software in assembly 
language, because we don't need to do that anymore.  The result would be 
faster, but not better.

The rise of scripting languages like Python and javascript that run the source 
code directly is also related (and if you don't think people write 
complete applications in those, you haven't seen any of the google apps).  The 
big push for Java in 1998 could happen because the hardware was now fast 
enough to run _everything_ under an emulator for a processor that didn't 
actually exist (until Rockwell built one, anyway).

Build environments are now literally thousands of times faster than when I 
started programming.  The first machine I compiled code on was a commodore 64 
(1mhz, 8 bits, the compiler was called blitz and the best accelerator for 
it was a book).  The slowest machine I ever ran Linux on was a 16 mhz 386sx.

According to my blog, I moved from a 166mhz laptop to a 266mhz one on April 
13, 2002.  I started building entire Linux From Scratch systems on the 166mhz 
machine, including a ton of optional packages (apache, postgresql, openssh, 
samba, plus it was based on glibc and coreutils and stuff back then so the 
build was _slow_), hence the necessity of scripting it and leaving the build 
to its own devices for a few hours.

Even without distcc calling out to the cross compiler, the emulated system 
running on my laptop is several times faster than the build environment I had 
7 years ago (2001), somewhat faster than the one I had 5 years ago (2003), 
and somewhat slower than the one I had 3 years ago (2005).  (That's emulating 
an x86 build environment on my x86_64 laptop.  I didn't _have_ a non-x86 
build environment 5 years ago for comparison purposes.)

 I think the progress is even bigger on big iron.

Not that I've noticed, unless by "big iron" you mean PC clusters.  (You can 
expand laterally if you've got the money for it and your problem distributes 
well...)

 Also, how much of the 3 packages are useful for something like
 your own firmware Linux?

None of them, because Firmware Linux has a strictly limited agenda: provide a 
native build environment on every system emulation supported by qemu.  That's 
the 1.0 release criteria.  (Some day I may add other emulators like hercules 
for s390, but the principle's the same.)

Once you have the native build environment, you can bootstrap Gentoo, or 
Debian, or Linux From Scratch, or whatever you like.  I've got instructions 
for some of 'em.

The buildroot project fell into the trap of becoming a distro and having to 
care about the interaction between hundreds of packages.  I'm not interested 
in repeating that mistake.

Figuring out what packages other people might need is something I stopped 
trying to predict a long time ago.  If it exists, somebody wanted it.  People 
want/need the weirdest stuff: the accelerometer in laptops is used for 
rolling marble games, and the iPhone is a cell phone 

Re: [PATCH 0/1] Embedded Maintainer(s), [EMAIL PROTECTED] list

2008-06-15 Thread Rob Landley
On Thursday 12 June 2008 13:18:07 Enrico Weigelt wrote:
 * Rob Landley [EMAIL PROTECTED] wrote:

 Hi,

  There's also qemu.  You can native build under emulation.

 did you ever consider that crosscompiling is not only good for
 some other arch, but a few more things ?

Sure, such as building a uClibc system on a glibc host, which my _previous_ 
firmware linux project (http://landley.net/code/firmware/old) was aimed at.

That used User Mode Linux instead of qemu, because fakeroot wasn't good 
enough and chroot A) requires the build to run as root, B) sometimes gets a 
little segfaulty if you build uClibc with newer kernel headers than the 
kernel in the system you're running on.

You can't get away from cross compiling whenever you want to bootstrap a new 
platform.  But cross compiling can be minimized and encapsulated.  It can be 
a stage you pass through to get it over with and no longer have to deal with 
it on the other side, which is the approach I take.

  In addition, if you have a cross compiler but don't want to spend all
  your time lying to ./configure, preventing gcc from linking against the
  host's zlib or grabbing stuff out of /usr/include that your target hasn't
  got, or

 #1: use a proper (sysroot'ed) toolchain

I break everything.  (I've broken native toolchains.  I just break them 
_less_.)

By my count sysroot is the fifth layer of path logic the gcc guys have added 
in an attempt to paint over the dry rot.

Personally I use a derivative of the old uClibc wrapper script that rewrites 
the command line to start with "--nostdinc --nostdlib" and then builds it 
back up again without having any paths in there it shouldn't.
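
The idea, very roughly (a simplified illustration rather than the actual
wrapper; the install prefix and target name are made up, and it doesn't
distinguish compile-only steps from links):

    #!/bin/sh
    # Hypothetical armv5l-cc: start from a clean slate and add back only the
    # cross toolchain's own headers and libraries.
    PREFIX=/opt/cross/armv5l            # made-up toolchain location
    # (a real wrapper would also have to supply the crt*.o startup files)
    exec "$PREFIX/bin/armv5l-gcc" -nostdinc -nostdlib \
         -isystem "$PREFIX/include" \
         -L "$PREFIX/lib" \
         "$@" -lgcc -lc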

 #2: fix broken configure.in's (and feed back to upstream or OSS-QM)

Whack-a-mole.  Fun for the whole family.  Only problem is, it never stops.

 #3: replace libtool by unitool

Uninstall libtool and don't replace it with anything, it's a NOP on Linux.

  libraries are linked inside the emulator, anything that wants to look
  at /proc or sysinfo does it natively inside the emulator...)

 Only crap sw looks at /proc at build time.
 Yes, there's *much* crap sw out there :(

99% of all the developers out there don't really care about portability, and 
never will.  Even if you eliminate the windows guys and the people who don't 
do C, 90% of the people who are _left_ get to work on the PC first, get it to 
work natively on other Linux platforms afterwards.

Cross compiling is a step beyond portability.  They'll _never_ care about 
cross compiling.  If they get inspired to make it work on MacOS X, then 
you'll have to extract the source and _build_ it on MacOS X to make that 
work.  And 99% of all developers will nod their heads and go "quite right, as 
it should be."

This isn't going to change any time soon.

Rob
-- 
One of my most productive days was throwing away 1000 lines of code.
  - Ken Thompson.


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-15 Thread Enrico Weigelt
* Jamie Lokier [EMAIL PROTECTED] wrote:

 A trouble with that is some packages have hundreds of user-selectable
 options - or even thousands.  It is unfeasible to use --enable-foo
 options for all of those when configuring them.

Well, not that much ;-o
But taking care of such feature switches is the job of an automated
distro builder tool, including things like dependency tracking.
Actually, I'm really too lazy to do that stuff by hand ;-P

But you're right, some packages have too many optional features,
which would be better off as their own packages, and there's sometimes
much code out there which should be reused ...

 Some other packages _should_ have more options, but don't because it's
 too unwieldy to make them highly configurable with Autoconf.  

Adding new feature switches w/ autoconf is almost trivial
(well, not completely ;-o)

 Imho, Kconfig would be good for more programs than it's currently used for,
 and could be made to work with those --enable/--with options: you'd be
 able to configure them entirely on the command line, or interactively
 with ./configure --menu (runs menuconfig), or with a config file.

Yes, that would be fine. But for me the primary constraint is that
all switches/options can be specified on the command line - otherwise
I'd need extra complexity for each package in my distbuilder tool.
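
As a rough illustration of the kind of glue I mean (GNU sed assumed, and the
option-name mapping is of course package-specific):

    # Turn a Kconfig-style .config into ./configure arguments:
    #   CONFIG_FOO_BAR=y                ->  --enable-foo-bar
    #   "# CONFIG_FOO_BAR is not set"   ->  --disable-foo-bar
    ./configure $(sed -n \
        -e 's/^CONFIG_\([A-Z0-9_]*\)=y$/--enable-\L\1/p' \
        -e 's/^# CONFIG_\([A-Z0-9_]*\) is not set$/--disable-\L\1/p' \
        .config | tr '_' '-')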

 Perhaps it might even be possible to write a very small, portable,
 specialised alternative to Make which is small enough to ship with
 packages that use it?

No, I really wouldn't advise this. Make tools are, IMHO, part of 
the toolchain (in a wider sense). One point is avoiding code 
duplication, but the really important one is: a central point of
adaptation/configuration. That's e.g. why I like pkg-config so much:
if I need some tweaking, I just pass my own command (or a wrapper).
If each package does its library lookup completely by itself, I
also need to touch each single package in case I need some tweaks.
I had exactly that trouble w/ lots of packages before I ported
them to pkg-config.
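
For cross builds such a wrapper can be as small as this (a sketch; the
sysroot path is made up):

    #!/bin/sh
    # Hypothetical pkg-config wrapper: answer queries only from the target
    # sysroot instead of the host's /usr/lib/pkgconfig.
    SYSROOT=/opt/sysroots/armv5l        # made-up sysroot location
    export PKG_CONFIG_LIBDIR="$SYSROOT/usr/lib/pkgconfig"
    export PKG_CONFIG_SYSROOT_DIR="$SYSROOT"
    exec pkg-config "$@"

Configure scripts that use the standard PKG_CHECK_MODULES macro honour the
PKG_CONFIG environment variable, so pointing PKG_CONFIG at the wrapper is
usually the only per-package tweak left.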


cu
-- 
-
 Enrico Weigelt==   metux IT service - http://www.metux.de/
-
 Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
-


Re: cross-compiling alternatives (was Re: [PATCH 0/1] Embedded Maintainer(s)...)

2008-06-15 Thread Enrico Weigelt
* Jamie Lokier [EMAIL PROTECTED] wrote:

 Media players with lots of optional formats and drivers are another.
 (They also have considerable problems with their Autoconf in my
 experience).

You probably mean their hand-written ./configure script, which is
intentionally incompatible w/ autoconf ("this is not autoconf" 
as its primary directive ;-P) ... I guess we've got the same one
in mind ;-)

 Reality is that Kconfig front end to autotools does work - as you've
 proved.  It's a good idea. :-)

Now, we just need an autoconf-alike frontend for Kconfig ;-)

 Most packages need lots of additional libraries installed - and the
 development versions of those libraries, for that matter.  Too often
 the right development version - not too recent, not too old.  
 With the wrong versions, there are surprises.

But that's not the problem of autoconf or any other buildsystem,
just bad engineering (often on both sides).

 You said about too many user-selectable options.  Many large packages
 _check_ for many installed libraries.  Get them wrong, and you have
 the same problems of untested combinations.

It even gets worse when they silently enable certain features based on
the presence/absence of some lib. That's btw one of the reasons why
sysroot is a primary constraint for me, even when building for the
same platform+arch.

 Have you felt uncomfortable shipping a package that does use Autoconf,
 Automake and Libtool, knowing that the scripts generated by those
 tools are huge compared with the entire source of your package?

Yeah, that's one of those things in autotools I never understood:
why isn't there just one function for each type of check/action,
which is just called with the right params?


cu
-- 
-
 Enrico Weigelt==   metux IT service - http://www.metux.de/
-
 Please visit the OpenSource QM Taskforce:
http://wiki.metux.de/public/OpenSource_QM_Taskforce
 Patches / Fixes for a lot dozens of packages in dozens of versions:
http://patches.metux.de/
-