Re: Challenge: Binary free uploading

2006-07-24 Thread Matt Zimmerman
On Sun, Jul 16, 2006 at 08:14:48PM +0200, Wouter Verhelst wrote:
> For starters, we'd need a *lot* of hardware to be able to do all these
> builds. Many of them will fail, because there *will* be people who will
> neglect to test their builds, and they will hog the machine so that
> other people (who do test properly) have to wait a long time for their
> build to happen.

I don't think that Anthony was suggesting doing the test builds on all
architectures, but only one (presumably i386).  This would not require much
in the way of hardware resources at all.

-- 
 - mdz


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Challenge: Binary free uploading

2006-07-24 Thread Wouter Verhelst
On Mon, Jul 24, 2006 at 12:00:56PM +0100, Matt Zimmerman wrote:
> On Sun, Jul 16, 2006 at 08:14:48PM +0200, Wouter Verhelst wrote:
> > For starters, we'd need a *lot* of hardware to be able to do all these
> > builds. Many of them will fail, because there *will* be people who will
> > neglect to test their builds, and they will hog the machine so that
> > other people (who do test properly) have to wait a long time for their
> > build to happen.
> 
> I don't think that Anthony was suggesting doing the test builds on all
> architectures, but only one (presumably i386).  This would not require much
> in the way of hardware resources at all.

Not even that. His followup message made that clear :-)

Just for the record, I think something like this would be nice; but I've
got slightly different priorities currently, so I personally am not
going to be able to help out any with this.

-- 
Fun will now commence
  -- Seven Of Nine, Ashes to Ashes, stardate 53679.4





Re: Challenge: Binary free uploading

2006-07-22 Thread Joey Hess
Goswin von Brederlow wrote:
> - people won't test build their sources before upload anymore
> - all those build failures will overload the buildds

If a developer makes more than N uploads in M days that FTBFS on the
buildds, future uploads go to the end of the queue for M days.

Of course once that is implemented, it probably won't ever be actually
used, so just assume it is implemented already. :-)
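The rule is mechanical enough to sketch. A minimal illustration in Python; the thresholds and the hookup to wanna-build are made up here, this is not an existing interface:

```python
from datetime import datetime, timedelta

def deprioritize(ftbfs_times, now, n=3, m_days=14):
    """Illustrative throttle: True means the developer's next uploads go
    to the end of the build queue, because more than n of his uploads
    FTBFSed on the buildds within the last m_days."""
    window_start = now - timedelta(days=m_days)
    recent = [t for t in ftbfs_times if t >= window_start]
    return len(recent) > n
```

As the paragraph above says, merely announcing such a check might be enough; the code need never actually fire.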

> - the untested debs will have far more bugs making sid even more
>   unstable

I think we already have other similar feedback loops that deal with
that, but YMMV.

-- 
see shy jo


signature.asc
Description: Digital signature


Re: Challenge: Binary free uploading

2006-07-17 Thread Anthony Towns
On Sun, Jul 16, 2006 at 08:14:48PM +0200, Wouter Verhelst wrote:
> For starters, we'd need a *lot* of hardware to be able to do all these
> builds. Many of them will fail, because there *will* be people who will
> neglect to test their builds, and they will hog the machine so that
> other people (who do test properly) have to wait a long time for their
> build to happen.

As it stands, I don't think this would be a shared service; but rather
something people setup on their own -- so you edit on your laptop, commit
to your server, and have the build happen remotely so you don't hear the
disk grind, or have your load average increase while you're busy trying
to play armagetron... It could be shared for team maintained things like
the X packages, but at least initially, I wouldn't think that would be
worth worrying about.

That's also why I lean towards pbuilder instead of sbuild -- sbuild is
great for building lots of packages continually; but pbuilder's better for
setting up quickly and easily without having to put much thought into it.

My guess would be that it ought to be possible to hack up a pretty simple
shell script that does this usefully, then build on it from there.

Cheers,
aj





Re: Challenge: Binary free uploading

2006-07-17 Thread Wouter Verhelst
On Mon, Jul 17, 2006 at 03:42:18PM +1000, Anthony Towns wrote:
> On Sun, Jul 16, 2006 at 08:14:48PM +0200, Wouter Verhelst wrote:
> > For starters, we'd need a *lot* of hardware to be able to do all these
> > builds. Many of them will fail, because there *will* be people who will
> > neglect to test their builds, and they will hog the machine so that
> > other people (who do test properly) have to wait a long time for their
> > build to happen.
> 
> As it stands, I don't think this would be a shared service; but rather
> something people setup on their own -- so you edit on your laptop, commit
> to your server, and have the build happen remotely so you don't hear the
> disk grind, or have your load average increase while you're busy trying
> to play armagetron... It could be shared for team maintained things like
> the X packages, but at least initially, I wouldn't think that would be
> worth worrying about.

Oh, that way. Hrm, that might work.

> That's also why I lean towards pbuilder instead of sbuild -- sbuild is
> great for building lots of packages continually; but pbuilder's better for
> setting up quickly and easily without having to put much thought into it.

Don't have much of an argument there.

> My guess would be that it ought to be possible to hack up a pretty simple
> shell script that does this usefully, then build on it from there.

Actually, I don't think it would be very hard to modify buildd to be
able to run pbuilder instead of sbuild. I would have to check, though.

(and no, I'm not volunteering to do this -- sbuild works okayish for me,
and I have more than enough to do as it is :)

-- 
Fun will now commence
  -- Seven Of Nine, Ashes to Ashes, stardate 53679.4





Re: Challenge: Binary free uploading

2006-07-17 Thread Adeodato Simó
 Hi all,

Hi, I wanted to chip in and share a couple thoughts on the matter,
namely:

  (a) why (at the moment) Debian's buildd network is not an area
  particularly suited to get improved by looking at what Ubuntu is
  doing (in other words, why little of what Ubuntu does there can be
  directly imported into Debian)

  (b) how the Ubuntu NoMoreSourcePackages initiative linked by Anthony
  is way more than "zomg, look, our i386 builddz can `bzr get` and
  don't pass -b to dpkg-buildpackage"

 * * *

Let's start with (a). For good or ill, the fact is that in Debian,
ensuring that uploads have received a fair amount of testing and human
supervision, _both_ in the sense that the source package is correct and
builds correctly, and that the binary packages work properly, is
still a big concern.

Whenever changes like binary-less uploads, auto-signing buildds, or
similar get proposed, the above concern is normally enough to stop
them. For example, if you mention in -devel the idea "the maintainer
does not generate the source package; the buildd does, from a VCS", a
big part of the discussion goes to ensuring the change would not
negatively affect the amount of testing the packages get:

 * Anthony Towns [Sun, 16 Jul 2006 16:47:12 +1000]:

 > and if the build was successful, make the .changes file, the
 > source and the binary packages available, so that they can be checked by
 > hand, and uploaded to the archive.

 * martin f krafft [Sun, 16 Jul 2006 14:24:19 +0200]:

 > it's just too easy to sign and send back a changes file when you are
 > currently too busy with other things.

 > Thus, my idea was to require a certain number of certificates to be
 > attached to the changes file, which prove that the source has been
 > tested. E.g. lintian could issue a certificate just as well as

 * Stephen Gran [Mon, 17 Jul 2006 00:00:10 +0100]:

 > Why not just require binary uploads, and then chuck the binary away?
 > Then we are where we are today (someone managed to get the thing to
 > build at least once), but all debs are built from source on the buildds.

And, of course (with penalties, even!):

 * Bernhard R. Link [Sun, 16 Jul 2006 11:54:32 +0200]:

 > I think this should have some check reading the download-logs and
 > refuse the upload (and perhaps also delete all built files and
 > blacklisting the requestor for a month), if the generated .deb files
 > were not downloaded or the signed changes sent in within some absurd
 > short time making it implausible the build was actually checked.
 >
 > Something like a quarter of an hour, I'd suggest.

 > On a second thought, perhaps better half an hour and also checking the
 > .diff.gz was downloaded...

This is why, IMHO, and as long as the above does not change, it may just
be more efficient to improve Debian's own buildd network by conceiving
the improvements ourselves, thinking about what we need from the buildds
so that they adapt better to our needs.

But also, I'm really very curious (and would welcome insight from any
Ubuntu people reading this) how the above-mentioned concerns about
ensuring a minimum amount of testing are addressed in Ubuntu. If I let
my imagination loose, I can think of a couple of discourses that would
explain it:

  (1) “if you think carefully, in the very end a lack of testing only
  hurts the autobuilders, whilst results in saved time for the
  maintainers in the common case. And since in Ubuntu we have the
  resources to increase the number of autobuilders, but our number
  of maintainers is low...”

  (2) “uploading source & binaries is sooo 20th century, mate.”

 * * *

Now for the (maybe more interesting) (b). Seems like after Anthony posted
the link to https://wiki.ubuntu.com/NoMoreSourcePackages, the discussion
has mostly centered around the “buildds that speak VCS” part of the
spec, but I'd like to draw your attention to a couple of points that are,
IMHO, worth spelling out (and of which “builddz talk VCS” is just
a by-product):

  * what it really says there is “we find that the source package hinders
our development model, so we plan on freeing our maintainers of that
burden [pretty much as they freed them of the ‘final binary packages
are built in a clean environment provided by the maintainer’ concept],
and let it exist as an internal buildd component only”.

(In the paragraph above, “our development model” means the one where
everybody can upload any package. But note that, obviously, while
they can say everybody can play, they can also (and do) say
everybody can play, 

Re: Challenge: Binary free uploading

2006-07-17 Thread Bernhard R. Link
* Adeodato Simó [EMAIL PROTECTED] [060717 15:11]:
> But also, I'm really very curious (and would welcome insight from any
> Ubuntu people reading this) how the above mentioned concerns for
> ensuring a minimum amount of testing are addressed in Ubuntu.

Actually, that minimum amount of testing is one of my biggest concerns
with Ubuntu. Of course I'm biased: I do not care about Ubuntu, and I
only get in touch with it when people who come to me (as the local
person you can ask when something does not work with your Debian
system) bring me their Ubuntu problems too, so I only see the worst cases.

Take a look at https://launchpad.net/distros/ubuntu/+source/xdm/+bug/2461
for example. From my POV someone took a new version and uploaded it but
most likely never tested it, because it simply cannot work without the
missing part, it does not start, does not run, does nothing. And such
a package even entered a release. (I've never looked at Launchpad
before, so I may be misreading it; the person coming to me with the
problem at least claimed it was a released version he used.)

That's no big problem when you have a majority-only target and people
can always switch back to Debian once they realize that they, like
anyone else, are a minority. But once the game is no longer
“a small group of people copes with a large volume of stuff and makes
sure the things journalists look at work properly” but “every package
has a maintainer looking after it”, there should be proper testing
procedures. And when there are no proper testing procedures, at least
the procedures should be chosen so as not to discourage testing.

Hochachtungsvoll,
  Bernhard R. Link





Challenge: Binary free uploading

2006-07-16 Thread Anthony Towns
Hi all,

At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
the new world order for Ubuntu packages -- which will simplify making
changes to Ubuntu packages to a matter of simply committing the change
to the source repository with bzr, and running a new command something
like "src publish edgy" to instruct the autobuilders to grab the source
from the bzr repository, create a traditional source package, and start
building it for all architectures.

We've recently seen an example of someone using some general features of
the bug tracking system to mirror LaunchPad's features wrt tracking the
status on other BTSes [0] -- what I'm wondering is if we can't manage to
hack up a similar feature to that one for Debian with our current tools.

The idea would be, I guess, to be able to setup pbuilder on a server
somewhere, have it watch for a build instruction -- and then automatically
check out the source, run a build with pbuilder, make the build log
available, and if the build was successful, make the .changes file, the
source and the binary packages available, so that they can be checked by
hand, and uploaded to the archive. 

For bonus points, have the server be able to automatically do the upload
by the maintainer downloading the changes, signing it, and sending the
signed changes file somewhere.

For more bonus points, have the server be easy to setup (apt-get install
some package, edit a configuration file), and work for all sorts of
different revision control systems (CVS, Subversion, git, etc).
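The core of the machinery described above is small. A rough sketch, assuming Subversion and pdebuild; the instruction format and the directory layout are invented for illustration, and the command runner is injected so the planning logic stands on its own:

```python
import os

def plan_build(vcs_url, workdir):
    """Commands a build server would run for one instruction: check the
    source out, then build it inside a pbuilder chroot via pdebuild.
    (Substitute cvs/git/bzr checkout commands per repository type.)"""
    src = os.path.join(workdir, "src")
    return [
        ["svn", "checkout", vcs_url, src],
        ["pdebuild"],  # runs dpkg-buildpackage inside the pbuilder chroot
    ]

def process_queue(instructions, run):
    """Drain pending (vcs_url, workdir) build instructions; `run`
    executes one command, and is injected so this can be exercised
    without svn or pbuilder installed."""
    for vcs_url, workdir in instructions:
        for cmd in plan_build(vcs_url, workdir):
            run(cmd)
```

A real setup would pass `subprocess.run` as the runner, watch a queue directory or mailbox for instructions, and publish the build log and resulting .changes somewhere fetchable.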

Cheers,
aj

[0] http://lists.debian.org/debian-devel-announce/2006/05/msg1.html





Re: Challenge: Binary free uploading

2006-07-16 Thread martin f krafft
also sprach Anthony Towns aj@azure.humbug.org.au [2006.07.16.0847 +0200]:
> At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> the new world order for Ubuntu packages -- which will simplify making
> changes to Ubuntu packages to a matter of simply committing the change
> to the source repository with bzr, and running a new command something
> like src publish edgy to instruct the autobuilders to grab the source
> from the bzr repository, create a traditional source package, and start
> building it for all architectures.

http://blog.madduck.net/debian/2005.08.11-rcs-uploads ...

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
windows v.i.s.t.a.: viruses, infections, spyware, trojans and adware




Re: Challenge: Binary free uploading

2006-07-16 Thread Erast Benson
On Sun, 2006-07-16 at 16:47 +1000, Anthony Towns wrote:
> Hi all,
> 
> At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> the new world order for Ubuntu packages -- which will simplify making
> changes to Ubuntu packages to a matter of simply committing the change
> to the source repository with bzr, and running a new command something
> like src publish edgy to instruct the autobuilders to grab the source
> from the bzr repository, create a traditional source package, and start
> building it for all architectures.

Just as a side note, a very similar idea was implemented by the Nexenta
GNU/OpenSolaris folks, but on top of Subversion; it's called the
HackZone [1].

Currently the entire Ubuntu/Dapper repository is imported into the
Nexenta Subversion repository. Developers contributing to Nexenta use
branches, taking advantage of copy-on-write (COW) while promoting
experimental work to the main branch. The entire paradigm also helps us
to AutoMerge upstream (Ubuntu/Dapper in this case) changes almost
automatically.

Once a developer has made his change in his HackZone, he can simply
commit & trigger the AutoBuilder [2] by doing:

hackzone-commit -b bug#

[1] http://www.gnusolaris.org/gswiki/HackZone
[2] http://www.gnusolaris.org/cgi-bin/hackzone-web

Hope this info will be useful too.

Erast





Re: Challenge: Binary free uploading

2006-07-16 Thread Goswin von Brederlow
Anthony Towns aj@azure.humbug.org.au writes:

> Hi all,
>
> At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> the new world order for Ubuntu packages -- which will simplify making
> changes to Ubuntu packages to a matter of simply committing the change
> to the source repository with bzr, and running a new command something
> like src publish edgy to instruct the autobuilders to grab the source
> from the bzr repository, create a traditional source package, and start
> building it for all architectures.
>
> We've recently seen an example of someone using some general features of
> the bug tracking system to mirror LaunchPad's features wrt tracking the
> status on other BTSes [0] -- what I'm wondering is if we can't manage to
> hack up a similar feature to that one for Debian with our current tools.
>
> The idea would be, I guess, to be able to setup pbuilder on a server
> somewhere, have it watch for a build instruction -- and then automatically
> check out the source, run a build with pbuilder, make the build log
> available, and if the build was successful, make the .changes file, the
> source and the binary packages available, so that they can be checked by
> hand, and uploaded to the archive.
>
> For bonus points, have the server be able to automatically do the upload
> by the maintainer downloading the changes, signing it, and sending the
> signed changes file somewhere.
>
> For more bonus points, have the server be easy to setup (apt-get install
> some package, edit a configuration file), and work for all sorts of
> different revision control systems (CVS, Subversion, git, etc).
>
> Cheers,
> aj
>
> [0] http://lists.debian.org/debian-devel-announce/2006/05/msg1.html

Will you set up the Debian DAK to allow source-only uploads, and apply
patches to wanna-build and buildd, for anyone willing to work on this?

Because if this is not also meant for Debian, then you are slightly off
topic, and Debian people should be told in advance that their work
would be solely for the competition.


Further, what is your opinion on the following claims:

- people won't test build their sources before upload anymore
- all those build failures will overload the buildds
- the untested debs will have far more bugs making sid even more
  unstable

Any other cons I've not repeated from the numerous past discussions
of banning binary uploads?

MfG
Goswin

PS: This is no attack on you; I'm just thrown off, since your
challenge seems to contradict all past discussions.





Re: Challenge: Binary free uploading

2006-07-16 Thread Bernhard R. Link
* Anthony Towns aj@azure.humbug.org.au [060716 08:48]:
> The idea would be, I guess, to be able to setup pbuilder on a server
> somewhere, have it watch for a build instruction -- and then automatically
> check out the source, run a build with pbuilder, make the build log
> available, and if the build was successful, make the .changes file, the
> source and the binary packages available, so that they can be checked by
> hand, and uploaded to the archive.
>
> For bonus points, have the server be able to automatically do the upload
> by the maintainer downloading the changes, signing it, and sending the
> signed changes file somewhere.

I think this should have some check reading the download logs and
refuse the upload (and perhaps also delete all built files and
blacklist the requestor for a month) if the generated .deb files
were not downloaded, or the signed changes not sent in, within some
absurdly short time making it implausible that the build was actually
checked. Something like a quarter of an hour, I'd suggest.

On a second thought, perhaps better half an hour and also checking the
.diff.gz was downloaded...
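The plausibility test itself is trivial to express. A sketch in Python; the half-hour figure comes from the suggestion above, everything else is illustrative:

```python
from datetime import datetime, timedelta

def plausibly_checked(build_done, debs_downloaded_at, changes_received_at,
                      minimum=timedelta(minutes=30)):
    """True only if the built .debs were actually downloaded and the
    signed .changes came back no sooner than `minimum` after the build
    finished; otherwise the maintainer almost certainly never looked."""
    if debs_downloaded_at is None:
        return False
    return changes_received_at - build_done >= minimum
```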

Hochachtungsvoll,
  Bernhard R. Link





Re: Challenge: Binary free uploading

2006-07-16 Thread Anthony Towns
On Sun, Jul 16, 2006 at 10:12:37AM +0200, Goswin von Brederlow wrote:
> Will you set up the Debian DAK to allow source-only uploads and apply
> patches to wanna-build and buildd for anyone willing to work on this?

No. All the above should be doable without needing any changes to any of
the project infrastructure -- all it does is change how the initial upload
is prepared. In other words, it's a purely technical challenge, no policies
or politics needed.

Cheers,
aj





Re: Challenge: Binary free uploading

2006-07-16 Thread Anthony Towns
On Sun, Jul 16, 2006 at 09:10:20AM +0200, martin f krafft wrote:
> also sprach Anthony Towns aj@azure.humbug.org.au [2006.07.16.0847 +0200]:
> > At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> > the new world order for Ubuntu packages -- which will simplify making
> > changes to Ubuntu packages to a matter of simply committing the change
> > to the source repository with bzr, and running a new command something
> > like src publish edgy to instruct the autobuilders to grab the source
> > from the bzr repository, create a traditional source package, and start
> > building it for all architectures.
> 
> http://blog.madduck.net/debian/2005.08.11-rcs-uploads ...

Wow, has it really been that long?

Has any code come of it yet?

Cheers,
aj





Re: Challenge: Binary free uploading

2006-07-16 Thread martin f krafft
also sprach Anthony Towns aj@azure.humbug.org.au [2006.07.16.1320 +0200]:
> > http://blog.madduck.net/debian/2005.08.11-rcs-uploads ...
> 
> Wow has it really been that long?
> 
> Has any code come of it yet?

Well, for one, I have not really gotten any input from people, but
that's also partially my fault. I was also disabled for most of the
past 10 months, so no, there hasn't been any code produced from my
end.

I have pretty good ideas about how to implement this, but have not
figured out how to actually do the certificates. And to be honest, I
see a lot more potential in this idea than Ubuntu chose to implement,
but they are limiting themselves to bzr anyway.

At the core of my approach would be .changes files, which already
list the components of an upload, currently using single filenames.
I don't see a reason why those couldn't be URIs. 

An upload request (as I call them) would be a .changes file sent to
the buildd, which would check it for validity and then start
fetching the components to assemble the source package. So the
orig.tar.gz file would be fetched by the buildd e.g. from upstream
(and checked against size/hash in .changes), the diff.gz generated
by taking the diff -Nru of the unpacked orig against a checkout
defined by another URI, e.g.

  svn://svn.debian.org/svn/collab-main/hibernate/tags/[EMAIL PROTECTED]

(or git, or hg, or bzr, or CVS, just a tarball, or a diff itself),
and potentially have the size/hash of the tree verified -- even
though theoretically, r892 of the above repository cannot be changed
once it's committed, there *are* ways, especially when the
repository isn't hosted on a debian.org machine anymore. And
finally, the DSC file would be generated by the build daemon on the
fly.
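The proposed .changes extension is easy to prototype. A sketch assuming a Files: field whose last column may be a URI instead of a bare filename; the field layout mirrors today's "md5sum size section priority name" format, but the function names and the URI column are this proposal's assumptions, not anything dak accepts:

```python
import hashlib

def parse_files_entry(line):
    """One entry of a (hypothetical) URI-capable Files: field:
    'md5sum size section priority uri-or-filename'."""
    md5, size, section, priority, uri = line.split()
    return {"md5": md5, "size": int(size), "uri": uri}

def component_ok(entry, data):
    """Verify fetched bytes against the size and md5sum promised in the
    upload request, as the buildd would before assembling the source."""
    return (len(data) == entry["size"]
            and hashlib.md5(data).hexdigest() == entry["md5"])
```

The buildd would fetch each URI (orig.tar.gz from upstream, the diff from a VCS checkout), run entries through a check like `component_ok`, and refuse to assemble the source package on any mismatch.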

This is all fairly straightforward and could be implemented in
a few days. I agree with aj, btw, that this has to be done on
a separate machine first, until it's all tested. Then it should be
merged into dak.

The easiest way to get all this done is by having the buildd send
a generated, standard changes file (the format as we know it, not the
one with URIs) and do the upload when it receives the changes file
back from the maintainer with a valid DD signature.

However, this would not do it for me, because ideally, the changes
file would be sent to multiple people (my goal remains solving the
bottleneck problem), and there is no way to ensure that these people
actually tested the same package that the buildd assembled -- and
it's just too easy to sign and send back a changes file when you are
currently too busy with other things.

Thus, my idea was to require a certain number of certificates to be
attached to the changes file, which prove that the source has been
tested. E.g. lintian could issue a certificate just as well as
dpkg-buildpackage could issue one when the package successfully
builds (although a piuparts certificate would make that obsolete).
The buildd would check whether there are special requirements for
the specific source package it's assembling, or otherwise fall back
to the default (e.g. libc6 may require 3 developers to sign off an
upload, while the maintainer of ipcalc doesn't think piuparts is
necessary; all other packages require proof of building of the
binary and signoff by a single developer, which is the current
default). Only if the requirements have been met, then the buildd
goes ahead to process the package.
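The per-package requirement lookup with a fallback default could look like this; the certificate kinds, counts, and package names are taken from the examples above, while the data structures themselves are invented for illustration:

```python
# Default mirrors current practice: proof the package built, plus one
# developer signoff. Per-source-package overrides (examples from above).
DEFAULT_REQUIRED = {"build": 1, "signoff": 1}
PACKAGE_REQUIRED = {
    "libc6": {"build": 1, "signoff": 3},
    "ipcalc": {"build": 1, "signoff": 1},  # maintainer skips piuparts
}

def requirements_met(package, certificates):
    """certificates: (kind, issuer) pairs attached to the .changes file.
    True if the required number of distinct issuers is present for every
    certificate kind the package demands."""
    required = PACKAGE_REQUIRED.get(package, DEFAULT_REQUIRED)
    for kind, count in required.items():
        issuers = {who for k, who in certificates if k == kind}
        if len(issuers) < count:
            return False
    return True
```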

While it's easy to conceive such certificates, and easy to add such
functionality to the checker programmes, it seems impossible to make
it such that they cannot be faked. My take is that we are not trying
to guard against malicious uploads, we are just trying to make
quality assurance more flexible for the requirements of distributed
package maintenance; thus, as soon as we have a certificate system
that may not be secure, but which makes manual certificate
generation (cheating the system) more time-consuming or tedious than
running the checkers and fixing the issues, it's all good. If we
later find that people are going the easy way and e.g. just add
lintian overrides instead of fixing issues, just to get the
certificate they want, we can/should/will resort to other means
anyway.

But there remains one problem with this approach, and this relates
to dak: I think it's very doable to invent a system that builds
binaries from multiple sources (not just source packages), but for
such a binary to make it into the archive still requires a signed
.changes file which dak can read (and dak does not know about
svn:// etc.). Thus, we basically get to the same problem that our
buildd maintainers are facing and it seems we cannot get around to
manual signing of the generated changes files by a developer unless
we beef up dak to be satisfied with the proposed changes file
format.

Anyway, as you can see, this issue certainly strikes my interest and
I am going to Limerick next week to officially start work on my
Ph.D., for which this is certainly a relevant topic. Thus, I'd love
to hear from 

Re: Challenge: Binary free uploading

2006-07-16 Thread Thijs Kinkhorst
On Sun, 2006-07-16 at 14:24 +0200, martin f krafft wrote:
> While it's easy to conceive such certificates, and easy to add such
> functionality to the checker programmes, it seems impossible to make
> it such that they cannot be faked.

I don't like the certificate idea for two reasons.

First, if you want to make sure that no packages with e.g. lintian
errors enter the archive, you can make a lot simpler system by just
running lintian server-side. There's no cheating possible, there's no
complex certificate infrastructure required.

But more importantly, I don't think that strictly requiring that a
package be lintian-error clean is a good idea anyway. Suppose that
there's a security bug in a package that I want to fix quickly. Lintian
yields an error that was already present in the previous package. I
can't upload just the security fix unless I fix that other error as well.


Thijs




Re: Challenge: Binary free uploading

2006-07-16 Thread martin f krafft
also sprach Thijs Kinkhorst [EMAIL PROTECTED] [2006.07.16.1521 +0200]:
> But more importantly, I don't think that strictly requiring that a
> package is lintian errors clean is a good idea anyway. Suppose that
> there's a security bug in a package that I want to fix quickly. Lintian
> yields an error that was already present in the previous package. I
> can't upload just the security fix unless I fix that other error as well.

First, which certificates are required can be defined for each
source package. Second, I could well imagine an override-style
certificate for emergency uploads.

About running lintian on the server: the problem is simply the time
it takes until the user gets feedback, and server load.

-- 
Please do not send copies of list mail to me; I read the list!
 
 .''`. martin f. krafft [EMAIL PROTECTED]
: :'  :proud Debian developer and author: http://debiansystem.info
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
i think, therefore i'm single
  -- lizz winstead




Re: Challenge: Binary free uploading

2006-07-16 Thread Wouter Verhelst
On Sun, Jul 16, 2006 at 04:47:12PM +1000, Anthony Towns wrote:
> Hi all,
> 
> At https://wiki.ubuntu.com/NoMoreSourcePackages is a description of
> the new world order for Ubuntu packages -- which will simplify making
> changes to Ubuntu packages to a matter of simply committing the change
> to the source repository with bzr, and running a new command something
> like src publish edgy to instruct the autobuilders to grab the source
> from the bzr repository, create a traditional source package, and start
> building it for all architectures.
> 
> We've recently seen an example of someone using some general features of
> the bug tracking system to mirror LaunchPad's features wrt tracking the
> status on other BTSes [0] -- what I'm wondering is if we can't manage to
> hack up a similar feature to that one for Debian with our current tools.
> 
> The idea would be, I guess, to be able to setup pbuilder on a server
> somewhere,

Why pbuilder? It's a great tool to check build-deps, and it's a great
tool for casually building packages from time to time; but if you're
really going to get rid of binaries in uploads, I think the more
efficient way would be to hack sbuild and buildd to do it.

> have it watch for a build instruction -- and then automatically
> check out the source, run a build with pbuilder, make the build log
> available, and if the build was successful, make the .changes file, the
> source and the binary packages available, so that they can be checked by
> hand, and uploaded to the archive.
> 
> For bonus points, have the server be able to automatically do the upload
> by the maintainer downloading the changes, signing it, and sending the
> signed changes file somewhere.
[...]

buildd already has all that.
* wanna-build has lists of packages that need to be built, and buildd
  grabs packages out of those lists. The wanna-build database is
  currently fed by some scripts that are part of dak, but there's
  nothing preventing anyone from writing different scripts and/or
  modifying wanna-build slightly.
* build logs are on buildd.d.o.
* .changes files are part of the build log, and are clearly marked so
  they can be mechanically extracted by a sed one-liner (possibly a perl
  one-liner too, not sure about that bit).
* uploads are done by sending signed .changes files to the buildd host
  (the exact mail address to be used depends on the exact buildd host in
  use, obviously).
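Mechanical extraction of the .changes section is then a few lines in any language. A Python sketch; note the `begin`/`end` delimiters below are placeholders, not the actual marker strings buildd writes into its logs:

```python
def extract_changes(log_text, begin="BEGIN .changes", end="END .changes"):
    """Return the .changes section embedded between two marker lines of
    a build log, or None if the markers are absent."""
    lines = log_text.splitlines()
    try:
        start = lines.index(begin) + 1
        stop = lines.index(end, start)
    except ValueError:
        return None
    return "\n".join(lines[start:stop])
```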

You would only need to create some scripts to populate the wanna-build
database, plus modify sbuild so that it knows how to fetch a source
package from a version control system rather than from a Debian mirror.
The rest would probably work as is.

All that being said, I'm not convinced doing binary-less uploads is
actually a good idea. It's been proposed in the past, but I've never
seen arguments that convinced me it would be a good idea. The difference
with this idea is that you could set it up so that the original binary
upload would be done out of your source repository, which would then
do a sourceful upload to ftp-master which in turn would trigger builds
on other architectures; that way, you wouldn't bother other
architectures with untested builds.

But we'll still have issues.

For starters, we'd need a *lot* of hardware to be able to do all these
builds. Many of them will fail, because there *will* be people who will
neglect to test their builds, and they will hog the machine so that
other people (who do test properly) have to wait a long time for their
build to happen.

Ubuntu has a lot more money behind them than Debian does, so they can
mitigate this problem by simply buying more hardware. How do you suggest
Debian would tackle this problem?

-- 
Fun will now commence
  -- Seven Of Nine, Ashes to Ashes, stardate 53679.4





Re: Challenge: Binary free uploading

2006-07-16 Thread Stephen Gran
This one time, at band camp, Wouter Verhelst said:
> All that being said, I'm not convinced doing binary-less uploads is
> actually a good idea. It's been proposed in the past, but I've never
> seen arguments that convinced me it would be a good idea. The difference
> with this idea is that you could set it up so that the original binary
> upload would be done out of your source repository, which would then
> do a sourceful upload to ftp-master which in turn would trigger builds
> on other architectures; that way, you wouldn't bother other
> architectures with untested builds.
> 
> But we'll still have issues.
> 
> For starters, we'd need a *lot* of hardware to be able to do all these
> builds. Many of them will fail, because there *will* be people who will
> neglect to test their builds, and they will hog the machine so that
> other people (who do test properly) have to wait a long time for their
> build to happen.

Why not just require binary uploads, and then chuck the binary away?
Then we are where we are today (someone managed to get the thing to
build at least once), but all debs are built from source on the buildds.
-- 
 -
|   ,''`.Stephen Gran |
|  : :' :[EMAIL PROTECTED] |
|  `. `'Debian user, admin, and developer |
|`- http://www.debian.org |
 -




Re: Challenge: Binary free uploading

2006-07-16 Thread Goswin von Brederlow
martin f krafft [EMAIL PROTECTED] writes:

> An upload request (as I call them) would be a .changes file sent to
> the buildd, which would check it for validity and then start
> fetching the components to assemble the source package. So the

At home I had my buildd set up so I could just dump a URL _for_ a
changes file into a web frontend; it would fetch it, verify the
signature, fetch the files, verify them, and then trigger a build.

Should buildds really have to know how to use several VCS systems to
generate a source package? Unless this is meant as a "write code at
home, test-compile on the buildd" setup, I don't see much advantage in
this, and a lot of temptation to skip testing the source before the build.

A simple upload queue to dupload changes files to, and maybe a web
interface to enter URLs for changes files, should be enough to do a
final "does this build cleanly?" test before an upload. The source and
changes file should always be available from the local tests the
developer did.

MfG
Goswin

