Hi Erich,

ok, this will have to be quick, since I have to work tomorrow - so
forgive me if I overlook something...

> - but we'd surely burn a lot of disk-space on the SF
>> servers...
> 
> We would indeed, but hardly more than we do right now.
Actually, we would - since the sources would not be compressed, and we'd
also check in the binaries created from those sources. Currently (and
I've built far from everything available) my apps tree from CVS is 120
MB and my buildtool tree is 1.3 GB - yes, apps is missing the kernel and
gcc, and the buildtool tree could be cut down quite a bit since it
contains stuff not needed for the final result, but it's still going to
be more than just a bit of extra space. Whether that's going to be an
issue, I don't know (we don't have a quota on SF cvs space right now).

> Mhhh.. maybe I am underestimating the issues with the toolchain. I was
> under the impression that most of the stuff is perl anyway, so at least
> partly portable. 
Oh, sure it is - but the perl stuff isn't what I'm worried about (and it
isn't what caused problems in the past, apart from the couple of
instances where a change in Config::General broke things).
The issue I'm seeing is that the binaries in our toolchain have the
location of the uclibc libs hardcoded - so, if I compile the toolchain
on "/stuff/sourceforge/src/bering-uclibc/buildtool", the gcc will look
for its libs in
/stuff/sourceforge/src/bering-uclibc/buildtool/staging/usr/lib
Other people will put it in "/var/leaf" or "/home/leaf" or
"/home/username/devel/leaf" (you get the idea). I'm afraid this is going
to cause plenty of issues - simply because some people will fail to read
the docs about where to put things, and also because some people will
read the docs but be unable to put things where they should (because
they have to work on a box they don't have root access to).
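
Just to illustrate what I mean (the binary location below is from my
own layout, yours will likely differ):

  # ask the cross-gcc where it searches for libraries - the absolute
  # staging path of the box it was built on shows up verbatim
  ./buildtool/staging/usr/bin/gcc -print-search-dirs | grep libraries

  # or simply grep the binary for the hardcoded prefix
  strings ./buildtool/staging/usr/bin/gcc | grep buildtool/staging

If that path doesn't exist on your box, the compiler won't find its
libs, no matter how correct everything else is.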

> I _believe_ also, that a precompiled toolchain would
> work on most recent distributions, as we are distributing gcc and the
> bintools too.
It will not, simply because there's no common denominator for what a
"recent distribution" might be. I use RHEL - compared to the latest
Fedora, that is hugely out of date. Same for debian-stable (unless
something has changed since the last time I used debian). So it'll come
down to providing a toolchain for people with Ubuntu or Fedora (which
seem to be the most "up to date" distros out there - or not, I'm not
willing to get into a discussion about Linux distros), and still
providing a "do it yourself" toolset for those running more
conservative setups.

Basically, if we went down that route, we'd be taking a step backwards.
In the old days, people were told to install debian-whatever, since that
provided all the libs needed for building leaf stuff. Buildtool was an
attempt to allow people to build sources on whatever linux box they
chose to use it on (or at least try to). If we go the route you're
suggesting, we're basically telling people again to install a specific
distro (or one of a few), to be able to install our toolchain, to be
able to compile sources.
And then, if we abandon buildtool - how does this toolchain get built in
the first place? One of the goals of buildtool and buildpacket was to
get reproducible results, even if your box crashed and you had to start
from scratch. Another goal was to make upgrading to a new uClibc release
relatively easy. If we abandon the tools that allow us to build the
toolchain with minimal user-interaction, I think we will take a big step
backwards. If we don't, we have to maintain the toolchain (like today),
plus the compiled version. Call me a pessimist, but I see a huge
potential for things getting out of sync.

> Yes, until now this is maintainable, as long as the toolchain version
> remains stable. I was a bit puzzled though that, for example, I would
> find kernel version information in the buildtool.mk files, but this may
> be just an example for source to be fixed.
No, that's perfectly fine. A specific version of buildtool.mk is built
for a specific toolchain - if you check things out for July 1st, 2007,
you'll (hopefully) get a toolchain that matches all buildtool.mk files
(or vice versa). But you can't use last year's toolchain with a
buildtool.mk file from yesterday - they're all tied together. So the
versioning is there, it's just done in CVS: you can't tell buildtool to
use a specific version; instead you check out the buildtool that was
made for that version and then instruct it where to get the
corresponding files - as described in the "Checking out an older
version to build" chapter in our docs.

> I am convinced though it is easier to keep a single tree in sync than
> multiple trees.
We don't have multiple trees - just one tree (starting at /leaf/).
The only place where I can see "multiple trees" is where we're supporting
different uClibc versions or different kernel versions at a given point
in time - but that's simply for support reasons, the average developer
should always use the "latest, greatest".
A given version of buildtool will only be able to build one version of
Bering-uClibc (a specific version of uClibc and a specific kernel version).

> Basically this is what I am suggesting. Just for my understanding, what
> do you think could break the individual developers workspace in this
> case? CVS should protect against exactly these problems.
Surely not. If we check in binaries (which I guess we would under your
suggestion), cvs would only be able to replace one binary with another.
If I link a binary against a certain lib and cvs then updates that lib,
chances are that my binary will no longer work.
I've always been told that CVS is a source management system, not one
for deploying binaries (which sounds like what you're suggesting).
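
To make that concrete (names entirely made up):

  # check what my daemon was linked against when I built it
  ldd package/usr/sbin/mydaemon | grep libfoo

  # if somebody commits a rebuilt libfoo with a new soname and I run
  # 'cvs update', my binary ends up pointing at a lib that is gone or
  # ABI-incompatible - and cvs has no way of knowing that

CVS will happily swap the files around, but it has no notion of the
link-time dependencies between them.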

> I guess this is a good idea, although I am personally not that hooked on
> VMWare as it is another closed source environment.
Well, to each his own (I don't have an issue with closed source when
the EULA is reasonable and the tool gets the job done). There are open
source virtual machines (Xen, Bochs, qemu) that do the job too; VMWare
was simply chosen at the time because that's what was in place and
worked (so rather than learning how to set up a new VM, the person in
question could work on what was supposed to run inside that VM). If a
build environment is provided for an open source VM, I'm sure it would
be used. But at the moment, the point is that there's no build
environment for either a closed or an open source VM.

> I believe even a simple compressed filesystem which can be mounted and
> chrooted into (as was available with Bering glibc) could be one
> possible, although primitive, solution. Another one could be a KNOPPIX
> image covering the actual release with tools and sources.
Sure (knoppix has been discussed as well - back in 2003 if I recall
correctly). The problem is, nobody so far has had the time to do it. And in
the end it comes down to this: the current buildtool setup is surely far
from perfect, but it works (mostly) for the people creating packages
right now (or at least, the people creating the majority of packages).
I'm sure that if somebody provided a better, more flexible build-system,
people would use it. But so far, nobody has, and the people working on
the majority of packages would (so I assume - I'm only speaking for
myself though) rather continue improving the leaf distro and making new
packages than re-create what we already have (even if the end result
might be better than what we already have). That's at least my
understanding of why people would rather spend the evening compiling a
piece of source that somebody asked for than work on the build-system.
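
Just so we're talking about the same thing, the "mount and chroot"
approach would boil down to something like this (untested sketch, the
image name is made up):

  # loop-mount a prebuilt build-environment image and work inside it
  mount -o loop bering-buildenv.img /mnt/buildenv
  mount --bind /proc /mnt/buildenv/proc
  chroot /mnt/buildenv /bin/sh
  # ... check out sources and run buildtool inside the chroot ...

The commands are not the hard part - building, documenting and
maintaining that image is.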

Before we get too much into the discussion of what could or should be
done - look at the development resources available to the team right
now. Judging from the CVS commits list for the past few months, there
are 5 people working on leaf sourcecode right now - you, Eric Spakman,
KP Kirchdoerfer, Cedric Schieli and myself (and I hardly count, since I
haven't had near enough time to do serious development for quite a
while). Who is going to provide (easy) and maintain (not so easy) that
chroot environment? That's been the main problem of this project all
along (even before the LEAF project existed and the thing was still
called LRP) - there's always been only a relatively small number of core
developers. And there's only so much people can do in their spare time.

> Agreed, I ran into such issues in the eql_enslave stuff. I believe this
> is very difficult though, as we need to maintain patches against the
> uplink sources at all times and this is very time consuming.
Well, broken sources can be fixed upstream too :-) I won't say it's easy
or that it usually works (especially if the patch is only needed for some
"exotic" distro), but it's worth a try.

Basically, I have nothing at all to say against your suggestion - other
than that I think it addresses the symptom (builds failing because of
broken sources, toolchain makefiles or buildtool makefiles) rather than
the cause (the fact that those sources and makefiles are broken in the
first place). If
somebody finds the time to provide a buildenv that will make things work
despite being broken, I surely won't have an issue with that, and I'm
pretty sure it will be put into CVS. But at this point, I don't see who
would create such a thing, and more importantly, support it (nothing is
worse than a toolchain that worked last year...).

Martin

P.S. I guess this mail wasn't all that quick after all :-)
