Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Steve Langasek
On Tue, Aug 23, 2005 at 07:58:40PM +0200, Adrian von Bidder wrote:
 On Monday 22 August 2005 23.51, Steve Langasek wrote:
  On Mon, Aug 22, 2005 at 06:22:11PM +, W. Borgert wrote:
   On Mon, Aug 22, 2005 at 07:29:31PM +0200, Adrian von Bidder wrote:
really matters:  can we (the Debian project) maintain the port?  Thus
I propose we only limit on the number of developers:  are there
people who are willing and competent to maintain kernel, boot loader,
platform specific installer bits, libc and toolchain?

   That sounds sensible.

  It ignores the fact that every port is a drain on centralized project
  resources, whether it has users or not.

 How so?

 (I mean, how does my proposal to drop the 'has users' requirement in favor 
 of 'do we have developers' ignore the resource usage?  I certainly do not 
 dispute that a port uses resources.)

Ok, then perhaps it doesn't ignore it, but I don't believe that it
addresses it adequately.  A 5GB repository on a central project machine,
that adds to the maintenance load of DSA and the ftp-masters, is a
rather expensive sandbox to give a handful of developers in the case
that it doesn't forward the interests of our actual users.

 And even if:  would a userless port have the developers?  For one thing, the 
 developers are users themselves, and for another thing, even with 'doorstop 
 architectures' where 90% of the users are seriously computer-infected, only 
 a few of those are likely to be competent enough to maintain kernel and 
 toolchain.  So I'd claim the (difficult to define) 'has users' requirement 
 is not so much different from a (IMHO easier to define) 'has developers' 
 requirement.

I don't understand why you think it's difficult to define this
requirement -- certainly not why it's difficult enough to warrant
dropping it.

-- 
Steve Langasek
Debian Developer
[EMAIL PROTECTED]   http://www.debian.org/
Give me a lever long enough and a Free OS to set it on, and I can move the world.


signature.asc
Description: Digital signature


Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Anthony Towns
On Tue, Aug 23, 2005 at 11:06:37PM -0700, Steve Langasek wrote:
  (I mean, how does my proposal to drop the 'has users' requirement in favor 
  of 'do we have developers' ignore the resource usage?  I certainly do not 
  dispute that a port uses resources.)
 Ok, then perhaps it doesn't ignore it, but I don't believe that it
 addresses it adequately.  A 5GB repository on a central project machine,
 that adds to the maintenance load of DSA and the ftp-masters,

5GB? The archive's 140GB, so that's over 10GB per architecture on each
mirror, and there's additional space for old binaries that're kept around
in the morgue -- which is a little under 3GB per month per architecture;
we're keeping old debs around for between six months and a year atm,
so that's probably more like 34GB centrally and 10GB on mirrors.
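aj's figures can be sanity-checked with a little arithmetic. The sketch below uses only numbers from his post, except the architecture count, which is my assumption (Debian had roughly a dozen ports at the time):

```shell
# Back-of-envelope check of the archive-size figures (inputs from the post
# above; the architecture count of 12 is an assumption, not from the post).
archive_gb=140
arches=12
per_arch=$((archive_gb / arches))        # ~11GB per architecture on a mirror
morgue_per_month=3                       # GB of old debs kept per month/arch
months_kept=11                           # "between six months and a year"
morgue_central=$((morgue_per_month * months_kept))
echo "mirror: ~${per_arch}GB/arch, central morgue: ~${morgue_central}GB/arch"
```

which lands close to the "over 10GB per architecture" and "more like 34GB centrally" figures quoted above.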

The real problem, IMO, with not having users is that it becomes easy
to say "oh, well, no one's using it, doesn't matter if it stays broken
a while longer, I've got other things to do", or, worse, "this fix is
crucially needed for this architecture which no one uses, sorry that
it breaks i386".

  And even if:  would a userless port have the developers?  

Heh.

I thought there were going to be separate questions, something like: show
you've got 5-10 DDs who'll support and maintain the port, appropriate
upstream support for the toolchain, and ~50 actual users who'll use the
port for real life things.

(And that's the /general/ case, if s390 doesn't need 50 separate users
because it has 10 machines with 50 billion users each to justify its
existence, that's fine -- exceptions can be made to any of these sorts
of rules when appropriate)

Cheers,
aj




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread GOTO Masanori
At Sun, 21 Aug 2005 03:58:24 +0200,
Wouter Verhelst wrote:
 - must be a developer-accessible debian.org machine for the
   architecture

Does this part mean that a developer-accessible machine is always
usable by all Debian developers?  Does such a machine have a dchroot
for old-stable/stable/unstable?

I would like this part to be described explicitly:

 - must be at least one development debian.org machine that is
   available to all Debian developers (not restricted) for the
   architecture.  dchroot for stable/unstable must be available on
   that machine.

We, the maintainers of architecture-specific packages, including the
toolchain packages (gcc and glibc), frequently need to compile and
test our packages to support them (e.g. a new ABI or upstream
release).  I have sometimes encountered architectures with no
developer-accessible machine; a developer-accessible machine is
important for keeping each architecture usable.  IMO, architectures
whose machines are left unmaintained through the laziness of their
porting teams should be SCC.

Regards,
-- gotom


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Tollef Fog Heen
* Olaf van der Spek 

| I understand most maintainers don't try the new toolchain themselves,
| but wouldn't it be possible for someone else to build the entire
| archive (or parts of it by multiple people) and (automatically) report
| bugs?

With the toolchain, it won't help to just rebuild the archive on a
fast architecture: A single AMD64 system can rebuild the archive in a
couple of days, but few other architectures can.

-- 
Tollef Fog Heen
UNIX is user friendly, it's just picky about who its friends are





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Wouter Verhelst
On Wed, Aug 24, 2005 at 05:04:40PM +0900, GOTO Masanori wrote:
 At Sun, 21 Aug 2005 03:58:24 +0200,
 Wouter Verhelst wrote:
  - must be a developer-accessible debian.org machine for the
architecture
 
 Does this part mean developer-accessible machine is always usable for
 all debian developers?  Does such machine have dchroot for
 old-stable/stable/unstable ?
 
 I want you to describe this part explicitly:
 
  - must be at least one development debian.org machine that is
available for all debian developers (not restricted) for the
    architecture.  dchroot for stable/unstable must be available on
that machine.

For clarity: the list of items specified in that post was just the items
from the original proposal, rearranged in two lists.

We have not discussed this particular bit in detail, so I can't just go
ahead and start changing it -- but I'd say your modification makes
sense and sounds reasonable.

-- 
The amount of time between slipping on the peel and landing on the
pavement is precisely one bananosecond





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Andreas Barth
Hi,

* GOTO Masanori ([EMAIL PROTECTED]) [050824 10:38]:
 At Sun, 21 Aug 2005 03:58:24 +0200,
 Wouter Verhelst wrote:
  - must be a developer-accessible debian.org machine for the
architecture
 
 Does this part mean developer-accessible machine is always usable for
 all debian developers?  Does such machine have dchroot for
 old-stable/stable/unstable ?

It was definitely meant that way, yes.


Cheers,
Andi





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Olaf van der Spek
On 8/24/05, Tollef Fog Heen [EMAIL PROTECTED] wrote:
 * Olaf van der Spek
 
 | I understand most maintainers don't try the new toolchain themselves,
 | but wouldn't it be possible for someone else to build the entire
 | archive (or parts of it by multiple people) and (automatically) report
 | bugs?
 
 With the toolchain, it won't help to just rebuild the archive on a
 fast architecture: A single AMD64 system can rebuild the archive in a
 couple of days, but few other architectures can.

Wouldn't that at least catch the non-platform-specific bugs?
And what about cross-compiling?



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Wouter Verhelst
On Wed, Aug 24, 2005 at 11:42:28AM +0200, Olaf van der Spek wrote:
 And what about cross-compiling?

Cross-compiling is no magic wand that can save us from the slow
architectures. There are quite a number of problems with
cross-compiling:

* Many packages don't support cross-compiling, and those that do may
  have bugs in their makefiles that make cross-compiling either harder
  or impossible.
* You can't run the test suites of the software you're compiling, at
  least not directly.
* There's a serious problem with automatically installing
  build-dependencies. Dpkg-cross may help here, but there's no
  apt-cross (at least not TTBOMK); and implementing that may or may not
  be hard (due to the fact that build-dependencies do not contain
  information about whether a package is an arch:all package or not).
* By using a cross-compiler, by definition you use a compiler that is
  not the same as the default compiler for your architecture. As such,
  your architecture is no longer self-hosting. This may introduce bugs
  when people do try to build software for your architecture natively
  and find that there are slight and subtle incompatibilities.

Hence the point of trying out distcc in the post to d-d-a; that will fix
the first three points here, but not the last one. But it may not be
worth the effort; distcc runs cc1 and as on a remote host, but cpp and
ld are still being run on a native machine. Depending on the program
being compiled, this may take more time than expected.
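For context, the distcc arrangement described above looks roughly like the sketch below; the helper host names are placeholders and the invocation only illustrates documented distcc usage, not any actual buildd configuration:

```shell
# distcc ships preprocessed source to remote hosts for compilation (cc1/as);
# preprocessing (cpp) and linking (ld) still happen on the local machine.
export DISTCC_HOSTS="fast-helper-1 fast-helper-2"   # placeholder host names
# A native build daemon would then invoke the build as, for example:
build_cmd='make -j4 CC="distcc gcc"'
echo "$build_cmd"
```

Because the local machine still runs cpp and ld natively, the output stays that of the architecture's own toolchain, which is exactly why distcc avoids the last (self-hosting) objection while only partly solving the speed problem.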

-- 
The amount of time between slipping on the peel and landing on the
pavement is precisely one bananosecond





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Peter 'p2' De Schrijver
 * Many packages don't support cross-compiling, and those that do may
   have bugs in their makefiles that make cross-compiling either harder
   or impossible.
 * You can't run the test suites of the software you're compiling, at
   least not directly.
 * There's a serious problem with automatically installing
   build-dependencies. Dpkg-cross may help here, but there's no
   apt-cross (at least not TTBOMK); and implementing that may or may not
   be hard (due to the fact that build-dependencies do not contain
   information about whether a package is an arch:all package or not).

scratchbox solves these problems.

 * By using a cross-compiler, by definition you use a compiler that is
   not the same as the default compiler for your architecture. As such,
   your architecture is no longer self-hosting. This may introduce bugs
   when people do try to build software for your architecture natively
   and find that there are slight and subtle incompatibilities.
 

I have never seen nor heard of such a case. IME this is extremely
rare (if it happens at all). The only way to know whether this is a real
problem is to try cross-compiling and verify the results against
existing natively compiled binaries. Unfortunately the verification step
is quite annoying, as a simple cmp will likely fail because of things
like the build date, build number, etc. included in the binary. For
packages which have a testsuite, that testsuite could be used as the
verification step.
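The cmp problem can be illustrated with two synthetic files standing in for a native and a cross-compiled build; the file names and the embedded "built:" line are invented for the illustration:

```shell
# Two stand-ins for a native and a cross-compiled build that differ only in
# an embedded build date (synthetic text files, not real binaries).
printf 'payload\nbuilt: 2005-08-24 10:00\n' > native.bin
printf 'payload\nbuilt: 2005-08-24 11:30\n' > cross.bin
cmp -s native.bin cross.bin && echo same || echo differ    # prints "differ"
# Filtering out the volatile metadata first makes the comparison meaningful:
grep -v '^built:' native.bin > native.norm
grep -v '^built:' cross.bin  > cross.norm
cmp -s native.norm cross.norm && echo same || echo differ  # prints "same"
```

Real binaries would need a smarter normalization step (stripping timestamps, build numbers and the like), but the principle is the same.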

 Hence the point of trying out distcc in the post to d-d-a; that will fix
 the first three points here, but not the last one. But it may not be
 worth the effort; distcc runs cc1 and as on a remote host, but cpp and
 ld are still being run on a native machine. Depending on the program
 being compiled, this may take more time than expected.
 

Which is why scratchbox is a more interesting solution, as it only runs
those parts on target which can't be done on the host.

Cheers,

Peter (p2).




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Wouter Verhelst
On Wed, Aug 24, 2005 at 02:13:50PM +0200, Peter 'p2' De Schrijver wrote:
  * Many packages don't support cross-compiling, and those that do may
have bugs in their makefiles that make cross-compiling either harder
or impossible.
  * You can't run the test suites of the software you're compiling, at
least not directly.
  * There's a serious problem with automatically installing
build-dependencies. Dpkg-cross may help here, but there's no
apt-cross (at least not TTBOMK); and implementing that may or may not
be hard (due to the fact that build-dependencies do not contain
information about whether a package is an arch:all package or not).
 
 scratchbox solves these problems.

As does distcc; but that wasn't the point. These are just issues that
occur with cross-compilers.

  * By using a cross-compiler, by definition you use a compiler that is
not the same as the default compiler for your architecture. As such,
your architecture is no longer self-hosting. This may introduce bugs
when people do try to build software for your architecture natively
and find that there are slight and subtle incompatibilities.
  
 
 I have never seen nor heard of such a case. IME this is extremely
 rare (if it happens at all).

Do you want to take the chance of finding out the hard way after having
built 10G (or more) worth of software?

This is not a case of embedded software where you cross-compile
something that ends up on a flash medium the size of which is counted in
megabytes; this is not a case of software which is being checked and
tested immediately after compilation and before deployment. This is a
whole distribution. Subtle bugs in the compiler may go unnoticed for a
fair while if you don't have machines that run that software 24/7. If
you replace build daemons by cross-compiling machines, you lose machines
that _do_ run the software at its bleeding edge 24/7, and thus lose
quite some testing. It can already take weeks as it is to detect and
track down subtle bugs if they creep up in the toolchain; are you
willing to make it worse by delaying the time of detection like that?

I'm not saying this problem is going to hit us very often. I do say this
is going to hit us at _some_ point in the future; maybe next year, maybe
in five years, maybe later; in maintaining autobuilder machines over the
past four years, I've seen enough weird and unlikely problems become
reality to assume Murphy's law holds _quite_ some merit here. The
important thing to remember is that this is a risk that is real, and
that should be considered _before_ we blindly switch our build daemons
to cross-compiling machines.

I'm not even saying I oppose using cross-compilers; it's just that the
idea that "the slow architectures' build daemons are slow, but luckily
there's an easy solution: we can replace them with fast machines that do
cross-compiling" is blatantly incorrect.

 The only way to know if this is a real problem is to try using cross
 compiling and verify against existing native compiled binaries.

That's not much help. We need to test this continuously, not just once a
tiny little bit and then never again. If you need to compare against
natively built packages, you'll need build daemons anyway; so what's the
point then?

 Unfortunately the verify bit is quite annoying as a simple cmp will
 likely fail because of things like build date, build number, etc
 included in the binary.

Well, that makes it even less of a help.

 For packages which have a testsuite, this testsuite could be used as
 the verification step. 

Sure -- but the number of packages that have a reliable testsuite is
miserably low.

  Hence the point of trying out distcc in the post to d-d-a; that will fix
  the first three points here, but not the last one. But it may not be
  worth the effort; distcc runs cc1 and as on a remote host, but cpp and
  ld are still being run on a native machine. Depending on the program
  being compiled, this may take more time than expected.
 
 Which is why scratchbox is a more interesting solution, as it only runs
 those parts on target which can't be done on the host.

I'm not so sure I agree with you on that one. Speed is just one part of
the story; quality is another. The more you run natively, the more bugs
you'll find.

But I guess I can have a look at scratchbox before I say no to it.

-- 
The amount of time between slipping on the peel and landing on the
pavement is precisely one bananosecond





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Michael Banck
On Wed, Aug 24, 2005 at 02:13:50PM +0200, Peter 'p2' De Schrijver wrote:
  * Many packages don't support cross-compiling, and those that do may
have bugs in their makefiles that make cross-compiling either harder
or impossible.
  * You can't run the test suites of the software you're compiling, at
least not directly.
  * There's a serious problem with automatically installing
build-dependencies. Dpkg-cross may help here, but there's no
apt-cross (at least not TTBOMK); and implementing that may or may not
be hard (due to the fact that build-dependencies do not contain
information about whether a package is an arch:all package or not).
 
 scratchbox solves these problems.

[...]

 Which is why scratchbox is a more interesting solution, as it only runs
 those parts on target which can't be done on the host.

Sounds interesting.  Will you work on implementing this?


Michael

-- 
Michael Banck
Debian Developer
[EMAIL PROTECTED]
http://www.advogato.org/person/mbanck/diary.html





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Tollef Fog Heen
* Olaf van der Spek 

| On 8/24/05, Tollef Fog Heen [EMAIL PROTECTED] wrote:
|  * Olaf van der Spek
|  
|  | I understand most maintainers don't try the new toolchain themselves,
|  | but wouldn't it be possible for someone else to build the entire
|  | archive (or parts of it by multiple people) and (automatically) report
|  | bugs?
|  
|  With the toolchain, it won't help to just rebuild the archive on a
|  fast architecture: A single AMD64 system can rebuild the archive in a
|  couple of days, but few other architectures can.
| 
| Wouldn't that at least catch the non-platform-specific bugs?

They are usually caught fairly quickly.  The problem here is what to
do in the cases where nobody cares enough about the port to fix
toolchain breakages which only affect that arch.  If we have a broken
toolchain across all architectures, it's something different.

| And what about cross-compiling?

Cross-compilation is prone to failures and not really a good
solution.  This has been discussed here a lot before.

-- 
Tollef Fog Heen
UNIX is user friendly, it's just picky about who its friends are





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Olaf van der Spek
On 8/24/05, Tollef Fog Heen [EMAIL PROTECTED] wrote:
 | Wouldn't that at least catch the non-platform-specific bugs?
 
 They are usually caught fairly quickly.  The problem here is what to
 do in the cases where nobody cares enough about the port to fix
 toolchain breakages which only affect that arch.  If we have a broken

'Officially' ignore the port until it's solved?



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Olaf van der Spek
On 8/24/05, Wouter Verhelst [EMAIL PROTECTED] wrote:
 On Wed, Aug 24, 2005 at 02:13:50PM +0200, Peter 'p2' De Schrijver wrote:
 Do you want to take the chance of finding out the hard way after having
 built 10G (or more) worth of software?

 This is not a case of embedded software where you cross-compile
 something that ends up on a flash medium the size of which is counted in
 megabytes; this is not a case of software which is being checked and
 tested immediately after compilation and before deployment. This is a
 whole distribution. Subtle bugs in the compiler may go unnoticed for a
 fair while if you don't have machines that run that software 24/7. If
 you replace build daemons by cross-compiling machines, you lose machines

Instead of replacing machines, you could add cross-compiling machines to
detect bugs earlier, for the cases where the native machines can't keep
up with (speculative) compiling to find (toolchain) bugs.



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Andreas Barth
* Olaf van der Spek ([EMAIL PROTECTED]) [050824 15:52]:
 On 8/24/05, Tollef Fog Heen [EMAIL PROTECTED] wrote:
  | Wouldn't that at least catch the non-platform-specific bugs?
  
  They are usually caught fairly quickly.  The problem here is what to
  do in the cases where nobody cares enough about the port to fix
  toolchain breakages which only affect that arch.  If we have a broken

 'Officially' ignore the port until it's solved?

Ah, and that's just what this proposal is about.


Cheers,
Andi





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread Peter 'p2' De Schrijver

   * By using a cross-compiler, by definition you use a compiler that is
 not the same as the default compiler for your architecture. As such,
 your architecture is no longer self-hosting. This may introduce bugs
 when people do try to build software for your architecture natively
 and find that there are slight and subtle incompatibilities.
   
  
  I have never seen nor heard of such a case. IME this is extremely
  rare (if it happens at all).
 
 Do you want to take the chance of finding out the hard way after having
 built 10G (or more) worth of software?
 

I don't see why the risk would be higher compared to native compilation.

 This is not a case of embedded software where you cross-compile
 something that ends up on a flash medium the size of which is counted in
 megabytes; this is not a case of software which is being checked and

Some embedded software is fairly extensive and runs from HD.

 tested immediately after compilation and before deployment. This is a

Most packages are not tested automatically at all.

 whole distribution. Subtle bugs in the compiler may go unnoticed for a
 fair while if you don't have machines that run that software 24/7. If

Only a very tiny fraction of the software in debian runs 24/7 on debian
machines.

 you replace build daemons by cross-compiling machines, you lose machines
 that _do_ run the software at its bleeding edge 24/7, and thus lose
 quite some testing. It can already take weeks as it is to detect and

Most cross compiled software also runs 24/7. I have yet to see problems
produced by cross compiling the code.

 track down subtle bugs if they creep up in the toolchain; are you
 willing to make it worse by delaying the time of detection like that?
 

They wouldn't necessarily show up any faster in native builds. 

 I'm not saying this problem is going to hit us very often. I do say this
 is going to hit us at _some_ point in the future; maybe next year, maybe
 in five years, maybe later; in maintaining autobuilder machines over the
 past four years, I've seen enough weird and unlikely problems become
 reality to assume murphy's law holds _quite_ some merit here. The
 important thing to remember is that this is a risk that is real, and
 that should be considered _before_ we blindly switch our build daemons
 to cross-compiling machines.
 

I don't think the risk is real considering the amount of cross compiled
software already running in the world.

Cheers,

Peter (p2).




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-24 Thread W. Borgert
Quoting Peter 'p2' De Schrijver [EMAIL PROTECTED]:
 Most packages are not tested automatically at all.

Unfortunately not.

 Most cross compiled software also runs 24/7. I have yet to see problems
 produced by cross compiling the code.
...
 I don't think the risk is real considering the amount of cross compiled
 software already running in the world.

Yes.  In my company we rely heavily on cross-compilation, because
our target environment is not (meant to be) self-hosting.  We have seen
absolutely no stability problems related to the cross-compilation.
Sometimes we ran into gcc bugs, but those were not only in the cross
cc.

Cheers, WB





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Olaf van der Spek
On 8/23/05, Thomas Bushnell BSG [EMAIL PROTECTED] wrote:
 Roger Leigh [EMAIL PROTECTED] writes:
 
  Andreas Jochens in particular did a lot of hard work in fixing most of
  the GCC 4.0 failures and regressions over the last year while porting
  for amd64.  The fact that many maintainers have not yet applied, or at
  least carefully reviewed and applied amended patches, is a pity.
 
 The reason I didn't was that I didn't want to make potentially
 destabilizing changes with sarge in progress.

Sarge was released more than two months ago. Why didn't you do it in
the meantime (just curious and wondering)?



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Marc Haber
On Mon, 22 Aug 2005 14:51:52 -0700, Steve Langasek [EMAIL PROTECTED]
wrote:
On Mon, Aug 22, 2005 at 06:22:11PM +, W. Borgert wrote:
 On Mon, Aug 22, 2005 at 07:29:31PM +0200, Adrian von Bidder wrote:
  really matters:  can we (the Debian project) maintain the port?  Thus I 
  propose we only limit on the number of developers:  are there people who 
  are willing and competent to maintain kernel, boot loader, platform 
  specific installer bits, libc and toolchain?

 That sounds sensible.

It ignores the fact that every port is a drain on centralized project
resources, whether it has users or not.

Even a userless port is a service to the community. I have, for
example, a package whose upstream is a keen reader of our buildd logs
and uses them to improve his package's portability.

Greetings
Marc

-- 
-- !! No courtesy copies, please !! -
Marc Haber |Questions are the | Mailadresse im Header
Mannheim, Germany  | Beginning of Wisdom  | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG Rightful Heir | Fon: *49 621 72739834



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Steve Langasek
On Tue, Aug 23, 2005 at 11:12:09AM +0200, Marc Haber wrote:
 On Mon, 22 Aug 2005 14:51:52 -0700, Steve Langasek [EMAIL PROTECTED]
 wrote:
 On Mon, Aug 22, 2005 at 06:22:11PM +, W. Borgert wrote:
  On Mon, Aug 22, 2005 at 07:29:31PM +0200, Adrian von Bidder wrote:
   really matters:  can we (the Debian project) maintain the port?  Thus I 
   propose we only limit on the number of developers:  are there people who 
   are willing and competent to maintain kernel, boot loader, platform 
   specific installer bits, libc and toolchain?

  That sounds sensible.

 It ignores the fact that every port is a drain on centralized project
 resources, whether it has users or not.

 Even a userless port is a service to the community. I have, for
 example, a package whose upstream is a keen reader of our buildd logs
 to improve on his package's portability.

So we should spend 4-5GB of disk space per architecture on ftp-master.d.o
for ports with no users, just so upstreams can see how portable their code
is to architectures that have no users?

No thanks.

-- 
Steve Langasek
Debian Developer
[EMAIL PROTECTED]   http://www.debian.org/
Give me a lever long enough and a Free OS to set it on, and I can move the world.




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Steve Langasek
On Mon, Aug 22, 2005 at 11:42:50AM -0400, David Nusinow wrote:
 On Mon, Aug 22, 2005 at 12:22:47AM -0700, Steve Langasek wrote:
  There was discussion in Vancouver about requiring ports to have an
  upstream kernel maintainer, FSO upstream; perhaps we should be
  considering requiring there to be a glibc/gcc/binutils upstream for each
  port, so that we don't get the first sign of these bugs when the
  packages hit unstable.

 What sort of qa would upstream be doing that would help us out here?

Ideally, for each port there would be someone tracking glibc/gcc/binutils
upstream who's in a position to recognize when a change may cause
regressions for that port, and testing accordingly.

Next best is to have people who are making heavy use of updated versions of
these packages before they reach unstable; this should generally be fairly
straightforward, e.g. both gcc-4.0 and glibc 2.3.5 were in experimental for
a while before being uploaded to unstable, but the buildds are still doing a
lot of the work of catching regressions once they hit unstable, and by that
point the damage is done.

 Can the port teams do this kind of work themselves prior to packages
 hitting unstable?

Absolutely.

-- 
Steve Langasek
Debian Developer
[EMAIL PROTECTED]   http://www.debian.org/
Give me a lever long enough and a Free OS to set it on, and I can move the world.




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Thomas Bushnell BSG
Olaf van der Spek [EMAIL PROTECTED] writes:

 On 8/23/05, Thomas Bushnell BSG [EMAIL PROTECTED] wrote:
 Roger Leigh [EMAIL PROTECTED] writes:
 
  Andreas Jochens in particular did a lot of hard work in fixing most of
  the GCC 4.0 failures and regressions over the last year while porting
  for amd64.  The fact that many maintainers have not yet applied, or at
  least carefully reviewed and applied amended patches, is a pity.
 
 The reason I didn't was that I didn't want to make potentially
 destabilizing changes with sarge in progress.

 Sarge was released more than two months ago. Why didn't you do it in
 the meantime (just curious and wondering)?

Because I started teaching in earnest; this is a very busy time for me
until late September.





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Adrian von Bidder
On Monday 22 August 2005 23.51, Steve Langasek wrote:
 On Mon, Aug 22, 2005 at 06:22:11PM +, W. Borgert wrote:
  On Mon, Aug 22, 2005 at 07:29:31PM +0200, Adrian von Bidder wrote:
   really matters:  can we (the Debian project) maintain the port?  Thus
   I propose we only limit on the number of developers:  are there
   people who are willing and competent to maintain kernel, boot loader,
   platform specific installer bits, libc and toolchain?
 
  That sounds sensible.

 It ignores the fact that every port is a drain on centralized project
 resources, whether it has users or not.

How so?

(I mean, how does my proposal to drop the 'has users' requirement in favor 
of 'do we have developers' ignore the resource usage?  I certainly do not 
dispute that a port uses resources.)

And even if it did:  would a userless port have the developers?  For one thing, the 
developers are users themselves, and for another, even with 'doorstop 
architectures' where 90% of the users are seriously computer-infected, only 
a few of those are likely to be competent enough to maintain kernel and 
toolchain.  So I'd claim the (difficult to define) 'has users' requirement 
is not so much different from a (IMHO easier to define) 'has developers' 
requirement.

cheers
-- vbi

-- 
Beware of the FUD - know your enemies. This week
* The Alexis de Tocqueville Institution *
http://fortytwo.ch/opinion/adti




Re: Team have veto rights, because they can just refuse the work anyway? (Was: Results of the meeting in Helsinki about the Vancouver proposal)

2005-08-23 Thread Emanuele Rocca
Hello David,

* David Nusinow [EMAIL PROTECTED], [2005-08-21 19:44 -0400]:
  On Sun, Aug 21, 2005 at 11:29:51PM +0200, Petter Reinholdtsen wrote:
   [Wouter Verhelst]
    b) the three aforementioned teams could already refuse to
    support a port anyhow, simply by not doing the work.
   
    This is not really a valid argument.  If a team in Debian refuses to
    accept decisions made by a majority of Debian developers, or rejects
    democratic control, this team will just have to be replaced by the DPL.
  
  I think the reality of this situation is that a team would refuse to do the
  work due to valid reasons. The easy ones I can imagine are the ones we've
  heard already, such as "We don't have the people to do this" or "We don't
  have enough time."

I think that Petter was speaking about an intentional boycott by a group of
DDs for some specific reasons. Lack of people or time is not malicious and
doesn't require the intervention of the DPL, but only more contributors.

ciao,
ema




Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Walter Landry
Wouter Verhelst wrote:
 Vancouver has gotten a very specific meaning in the Debian
 community: one of a visionary proposal[1] that received quite its
 share of flames from many Debian contributors, including
 myself. Since it appeared to many of us that the intentional result
 of this proposal would have been to essentially kill off many of our
 architectures, many of us weren't too happy with the proposal.

How about we completely scrap the proposal, and instead let the
decision be made by the DPL?  The DPL can use whatever criteria they
like to decide, although presumably they would be guided by documents
such as the Vancouver proposal.  The DPL would talk to the various
teams to get their input on whether an architecture should be
supported by Debian, but the DPL would have the final say.

The nice thing about this is that the DPL is an elected officer, and
so is directly accountable to the rest of Debian.  This tends to make
them better communicators.

Cheers,
Walter





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Adeodato Simó
* Sven Luther [Mon, 22 Aug 2005 23:17:10 +0200]:

  Sven Luther said [Mon, Aug 22, 2005 at 12:52:06PM +0200]:

   the security level would still be higher using only official
   buildds, centrally controlled.

   The only reason this does not happen is that the ftp-masters dislike
   the x86 buildds lagging or breaking and people bothering them about it.

 the problem is that they fear a horde of angry x86 users
 complaining to them if there is even a one-day delay in the build of some
 random x86 package :)

  This is hilarious. Both the idea of the i386 buildd admin getting
  scared by a horde of angry users, and the statement that such fear is
  THE ONLY REASON why we don't have source-only uploads in Debian.

  There is such a big journalist inside you, Mr Luther.

-- 
Adeodato Simó
EM: asp16 [ykwim] alu.ua.es | PK: DA6AE621
 
Capitalism is the extraordinary belief that the nastiest of men, for the
nastiest of reasons, will somehow work for the benefit of us all.
-- John Maynard Keynes





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-23 Thread Adeodato Simó
* Manoj Srivastava [Mon, 22 Aug 2005 07:58:06 -0500]:

 The end goal is not just to have packages built on the
  buildd -- an important goal for Debian, certainly, but not the only
  one we have. As promoters of free software, we are also committed to
  having packages build for our users, in a real environment, not just
  a sterile, controlled, artificial, Debian-specific test
  environment.

  I do my edit-build-test cycles in my main system, but compile the
  packages that I upload in a pbuilder-like environment.
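Such an upload build can be sketched roughly like this (a sketch, assuming pbuilder is installed and `pbuilder create` has already produced the base tarball; the .dsc file name below is hypothetical):

```shell
# Sketch of compiling an upload in a clean pbuilder chroot rather than
# the developer's live system. Assumes `sudo pbuilder create` has already
# built the base tarball; the .dsc name is hypothetical.
DSC=hello_2.10-1.dsc
CMD="sudo pbuilder build $DSC"
# Echo the command instead of executing it, since the real invocation
# needs root and an existing base tarball:
echo "$CMD"
```

The point is only that the package that gets uploaded is built against a clean, minimal unstable environment, not against whatever happens to be installed on the maintainer's main system.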

-- 
Adeodato Simó
EM: asp16 [ykwim] alu.ua.es | PK: DA6AE621
 
The first step on the road to wisdom is the admission of ignorance. The
second step is realizing that you don't have to blab it to the world.





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Sven Luther
On Sun, Aug 21, 2005 at 07:28:55PM +0200, Jonas Smedegaard wrote:
 
 On 21-08-2005 03:58, Wouter Verhelst wrote:
 
  We also came to the conclusion that some of the requirements proposed in
  Vancouver would make sense as initial requirements -- requirements that
  a port would need to fulfill in order to be allowed on the mirror
  network -- but not necessarily as an 'overall' requirement -- a
  requirement that a port will always need to fulfill if it wants to be
  part of a stable release, even if it's already on the mirror network.
  Those would look like this:
 [snip]
  Overall:
 [snip]
  - binaries must have been built and signed by official Debian
Developers
 
 Currently, sponsored packages are only signed, not built, by official
 Debian Developers.
 
 
 Is that intended to change, or is it a typo in the proposal?

All packages should be built by official Debian buildds anyway, not on
developer machines with random cruft and insecure packages installed, or even
possibly experimental or home-modified stuff.

Friendly,

Sven Luther





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Steve Langasek
Wouter,

Thank you for your work in preparing this; I think this summary is a
good beginning for revisiting the questions the Vancouver meeting poses
for etch.

On Sun, Aug 21, 2005 at 03:58:24AM +0200, Wouter Verhelst wrote:

 Vancouver has gotten a very specific meaning in the Debian community:
 one of a visionary proposal[1] that received quite its share of flames from
 many Debian contributors, including myself. Since it appeared to many of us
 that the intentional result of this proposal would have been to essentially
 kill off many of our architectures, many of us weren't too happy with the
 proposal.

As I reiterated several times in the discussion earlier this year, the
Vancouver proposal was motivated in part by a concern that the absolute
count of release architectures in Debian is too high to be sustainable
even if all architectures met the proposed criteria.  It may be the
decision of the Project that this is the release team's problem, but
people should recognize that this is not a decision that comes for free.
Even well-maintained ports will occasionally introduce their share of
architecture-specific problems, and we certainly have ports that are not
well-maintained right now, including in ways not addressed by any of the
proposed criteria that came out of Vancouver.

In particular, we invariably run into arch-specific problems every time
a new version of a toolchain package is uploaded to unstable.  Some may
remember that the new glibc/gcc blocked non-toolchain progress for
months during the beginning of the sarge release cycle, and that the
aftermath took months more to be sorted out.  So far, etch threatens to
be more of the same; in the past month we've had:

- miscellaneous, but far-reaching, internal compiler errors with gcc-4.0
  on at least arm and m68k, though at least m68k seems to be getting
  dealt with in response to a disclaimer from the gcc maintainer that he
  was unable to support m68k
- a binutils bug on hppa that caused a glibc miscompilation, leading to
  /usr/bin/make segfaulting consistently and bringing the hppa buildd to
  a halt for about a week
- a change in glibc caused certain libraries built with an old
  version of binutils on powerpc to blow up the linker; binNMUs all around,
  moderate delays for building other packages but not a big deal since
  everything waits on glibc anyway; nevertheless, definitely a time sink
- an undocumented ABI change in glibc on alpha that results in fakeroot
  reporting files as zero bytes in size at inopportune times (like when
  trying to compile the file containing the declaration of main()...);
  this one has just been identified, and we can probably count on
  another week to get it firmly resolved, followed by another glibc
  upload and another week of waiting...

There was discussion in Vancouver about requiring ports to have an
upstream kernel maintainer (for some value of 'upstream'); perhaps we
should be considering requiring there to be a glibc/gcc/binutils upstream
for each port, so that we don't get the first sign of these bugs when the
packages hit unstable.

 We also came to the conclusion that some of the requirements proposed in
 Vancouver would make sense as initial requirements -- requirements that
 a port would need to fulfill in order to be allowed on the mirror
 network -- but not necessarily as an 'overall' requirement -- a
 requirement that a port will always need to fulfill if it wants to be
 part of a stable release, even if it's already on the mirror network.
 Those would look like this:

FWIW, though I don't think anyone has any intention of checking up on
ports to make sure they're still meeting these requirements, several of
the initial requirements you list look to me like things that should
self-evidently be required on an ongoing basis, namely:

 - must be freely usable (without NDA)
 - must be able to run a buildd 24/7 without crashing
 - must have an actual, working buildd
 - must include basic UNIX functionality
 - must demonstrate that it has at least 50 users
 - [...] must have one redundant buildd machine

IOW, I don't think it's ok for an architecture to lose basic UNIX 
functionality once it's been approved as a release candidate. ;)

Cheers,
-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
[EMAIL PROTECTED]   http://www.debian.org/




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Ingo Juergensmann
On Sun, Aug 21, 2005 at 03:58:24AM +0200, Wouter Verhelst wrote:

 1. The requirement that 'an architecture must be publicly available to
buy new'.
 
It was explained that this requirement was not made to be applied
retroactively to already existing ports; rather, it was designed to
avoid new hardware which, as of yet, is only available under NDA, or
to avoid things such as a Vax port of Debian. Older ports, such as
m68k and arm, are expected to reach a natural end-of-life to a point
where it no longer is possible for Debian and the port's porters to
support it, at which point the port would then, of course, be
dropped.

Where's the definition of 'natural end-of-life'? Who defines when this state
is reached? The porters (and how many of them)? What is meant by 'no
longer possible ... to support it'?

 2. The requirement that any architecture needs to be able to keep up
with unstable by using only two buildd machines.
 
The rationale for this requirement was that there is a nontrivial
cost to each buildd, which increases super-linearly; apparently,
there have been cases in the past where this resulted in ports with
many autobuilders slacking when updates were necessary (such as with
the recent security autobuilder problems).

According to Joey's blog (http://www.infodrom.org/~joey/log/?200508201755):
"I also have the impression that there is no buildd for m68k and s390 for
sarge-proposed updates - or there's another reason the updated vim is not
available on these architectures."

There still seems to be an incomplete buildd infrastructure. Why is that, and
why is that not communicated properly? All I see is a vague assumption that
the infrastructure is in place or it isn't, but nobody seems to know that
for sure. And what's more, there seem to be a security team and infrastructure
for stable and one for testing, and both seem to have nothing to do with each
other - or are the machines for both identical, or what?
 
    On the flip side, it was argued that more autobuilders result in
    more redundancy; with a little overcapacity, there is a gain here
    over an architecture which has just one autobuilder, when that
    single autobuilder goes down.

I don't see the problem with having as many autobuilders as the porters can
handle ('can handle' = handle the buildd logs, keep them running, supply
them with hardware, etc.).
 
This item was much debated, and we didn't reach an agreement; in the
end, we decided to move on. We hope that after more debate, we will
reach a solution that is acceptable to everyone, but in the mean
time, the requirement remains (but see below).

Well, it seems nice to debate the need for autobuilder redundancy, but how
about buildd admin redundancy? There have been problems with that for several
archs in the past. How should this be addressed?
 
 3. The veto powers given to the DSA team, the Security team, and the
Release team, on a release of any given port.
 
Some of us feared for abuse of this veto power. All understood the
problems that exist if any port is of such low quality that it would
suck up the time of any of the three given teams; however, we felt
that a nonspecific veto power as proposed would be too far-reaching.
 
At first, a counter-proposal was made which would require the three
teams to discuss a pending removal of a port together with the
porters team, and require them to come to an agreement. This was
dismissed, since a) this would move the problems to somewhere else,
rather than fix them (by refusing to drop a port, a porters team
could essentially kill the security team), and b) the three
    aforementioned teams could already refuse to support a port anyhow,
simply by not doing the work.

In that case, if there were a package with an unsolvable security
issue for a port (perhaps because it needs a newer gcc or toolchain, because
the released one has an issue like an ICE or so), that package could be dropped
from the release for that arch. Why drop the whole arch then?
 
In that light, we agreed on a procedure for dropping a port which is
designed to avoid abuse, by making it as open as possible: if any of
the aforementioned teams wants to use their veto power, they have to
post a full rationale to the debian-devel-announce mailinglist, with
an explanation of the problems and reasons for their decision.

I would like to see a detailed rationale about the circumstances under which
such a veto might be raised, *before* anyone can veto something. It should be
clear what requirements must be fulfilled so that a team can veto something.
Otherwise you'll always end up with discussions where the vetoing team says
this and the other team (like the porters) says that. And what if someone has
objections against that veto? What procedure will handle this situation
then?
 
 4. The requirement that any port has to have 5 developers support it,
    and be able to demonstrate that there are (at least) 50 users.

Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Wouter Verhelst
On Mon, Aug 22, 2005 at 10:19:38AM +0200, Ingo Juergensmann wrote:
 On Sun, Aug 21, 2005 at 03:58:24AM +0200, Wouter Verhelst wrote:
  1. The requirement that 'an architecture must be publicly available to
 buy new'.
  
 It was explained that this requirement was not made to be applied
 retroactively to already existing ports; rather, it was designed to
 avoid new hardware which, as of yet, is only available under NDA, or
 to avoid things such as a Vax port of Debian. Older ports, such as
 m68k and arm, are expected to reach a natural end-of-life to a point
 where it no longer is possible for Debian and the port's porters to
 support it, at which point the port would then, of course, be
 dropped.
 
 Where's the definition of 'natural end-of-life'? Who defines when this state
 is reached? The porters (and how many of them)? What is meant by 'no
 longer possible ... to support it'?

What that phrase is trying to say is that nobody will force any of those
older ports out of the archive -- at least that's what the release team
and DSA members present at the meeting told me. It's expected that, at
some point in the future, it will be technically or practically
impossible for porters to continue to support those ports; that's when
they'll be dropped.

[...]
 In that light, we agreed on a procedure for dropping a port which is
 designed to avoid abuse, by making it as open as possible: if any of
 the aforementioned teams wants to use their veto power, they have to
 post a full rationale to the debian-devel-announce mailinglist, with
 an explanation of the problems and reasons for their decision.
 
 I would like to see a detailed rationale about the circumstances under which
 such a veto might be raised, *before* anyone can veto something. It should be
 clear what requirements must be fulfilled so that a team can veto something.
 Otherwise you'll always end up with discussions where the vetoing team says
 this and the other team (like the porters) says that. And what if someone has
 objections against that veto? What procedure will handle this situation
 then?

There are already procedures in place in Debian to overrule a delegate's
decision, if necessary. It's pointless to duplicate that.

  4. The requirement that any port has to have 5 developers support it,
 and be able to demonstrate that there are (at least) 50 users.
 
 How should this demonstration be achieved? What is the procedure for
 this? When I grep my /etc/passwd I have 28 users for m68k on my own, but
 that machine just counts as one user on popcon.d.o.
 
 Some people feared that this could kill off a port such as s390,
 which typically has few installations, but many users on a single
 installation. It was confirmed that the important number here is the
 number of users, rather than the number of installations; so any port
 should be able to reach that number.
 
 ... and this paragraph makes clear that you just can't use popcon for that
 issue. So, how shall those users be counted?

This is not specified, and deliberately so. Any way you can come up with
that demonstrates 50 users should do (although it may require _active_
users; the idea is to avoid ports that are only being used by the build
daemons).

  None of the participants had a problem with any of the other
  requirements. Note that the separate mirror network is fully distinct from
  the rest of the original proposal (although there was a significant
  amount of confusion over that fact). The ability to be part of a
  stable release (or not) would be fully distinct from the separate mirror
  network; indeed, the implementation of both items will now be discussed
  and implemented fully separately, to avoid further confusion.
 
 During the original Vancouver proposal discussion, the mirror network problem
 was said to be the reason for the Vancouver proposal. So, I expected more
 detailed info about this issue and how it would be solved (and when).
 When the reason for the Vancouver proposal is just mirror space on some
 mirrors, why should this be addressed by a potential drop of archs anyway?

It shouldn't, and this was confirmed at the meeting. That's where the
confusion lies.

 Why not let the mirrors just mirror what they want? Nobody forces a mirror
 admin to be a primary mirror, right?
 I have absolutely no problems with mirrors that don't carry all archs as
 long as there are several mirrors that do.

Ditto.

  - binary packages must be built from unmodified Debian source
 
 Uhm? When there is a new arch upcoming, they need to modify the Debian
 source, at least sometimes, right?

Yes, and this happens. I've already had requests to modify my
Architecture: line in my nbd packages for new ports, such as amd64 and
ppc64, even before they're part of the archive.

It's not hard to do this, and if there's a valid patch, people usually
apply it.

  - binaries must have been built and signed by official Debian Developers

Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Jon Dowland
On Sun, Aug 21, 2005 at 10:30:08PM +0200, Laszlo Boszormenyi wrote:
  I do rebuild them, and more than that: I download the .orig.tar.gz
  for myself from the official upstream location and, of course, check the
  diff.  This may sound paranoid, but this is me.

As a user, I certainly appreciate this - keep it up! :)

-- 
Jon Dowland   http://jon.dowland.name/
FD35 0B0A C6DD 5D91 DB7A  83D1 168B 4E71 7032 F238





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Jonas Smedegaard

On 22-08-2005 08:24, Sven Luther wrote:
 On Sun, Aug 21, 2005 at 07:28:55PM +0200, Jonas Smedegaard wrote:

On 21-08-2005 03:58, Wouter Verhelst wrote:


We also came to the conclusion that some of the requirements proposed in
Vancouver would make sense as initial requirements -- requirements that
a port would need to fulfill in order to be allowed on the mirror
network -- but not necessarily as an 'overall' requirement -- a
requirement that a port will always need to fulfill if it wants to be
part of a stable release, even if it's already on the mirror network.
Those would look like this:

[snip]

Overall:

[snip]

- binaries must have been built and signed by official Debian
  Developers

Currently, sponsored packages are only signed, not built, by official
Debian Developers.


Is that intended to change, or is it a typo in the proposal?
 
 
 All packages should be built by official Debian buildds anyway, not on
 developer machines with random cruft and insecure packages installed, or even
 possibly experimental or home-modified stuff.

Ubuntu works like that: binaries for all archs are compiled by buildds.
But as I understand it, Debian currently does not use this scheme.

Also, as Manoj[1] and others have pointed out, sponsors are _expected_
to recompile packages they sign, but I believe it is not part of policy.

So I ask again: Is this an intended (and IMO quite welcome) change of
policy, or a typo?


 - Jonas

P.S.

Please cc me on responses to this thread, as I am not subscribed to d-devel.


[1] It is pure coincidence that my IRC nick is so close to yours, Manoj.
It was Micah suggesting to use my first name backwards when other
obvious options were taken... :-)

--
* Jonas Smedegaard - idealist og Internet-arkitekt
* Tlf.: +45 40843136  Website: http://dr.jones.dk/

 - Enden er nær: http://www.shibumi.org/eoti.htm





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Olaf van der Spek
On 8/22/05, Steve Langasek [EMAIL PROTECTED] wrote:
 In particular, we invariably run into arch-specific problems every time
 a new version of a toolchain package is uploaded to unstable.  Some may
 remember that the new glibc/gcc blocked non-toolchain progress for
 months during the beginning of the sarge release cycle, and that the
 aftermath took months more to be sorted out.  So far, etch threatens to
 be more of the same; in the past month we've had:

I've been wondering, why isn't the new toolchain tested and the
resulting errors fixed before it's uploaded to unstable or made the
default?



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Steve Langasek
On Mon, Aug 22, 2005 at 10:19:38AM +0200, Ingo Juergensmann wrote:

  3. The veto powers given to the DSA team, the Security team, and the
 Release team, on a release of any given port.
  
 Some of us feared for abuse of this veto power. All understood the
 problems that exist if any port is of such low quality that it would
 suck up the time of any of the three given teams; however, we felt
 that a nonspecific veto power as proposed would be too far-reaching.
  
 At first, a counter-proposal was made which would require the three
 teams to discuss a pending removal of a port together with the
 porters team, and require them to come to an agreement. This was
 dismissed, since a) this would move the problems to somewhere else,
 rather than fix them (by refusing to drop a port, a porters team
 could essentially kill the security team), and b) the three
 aforementioned teams could already refuse to support a port anyhow,
 simply by not doing the work.

 In that case, if there were a package with an unsolvable security
 issue for a port (perhaps because it needs a newer gcc or toolchain, because
 the released one has an issue like an ICE or so), that package could be dropped
 from the release for that arch. Why drop the whole arch then?

TTBOMK, the security team have always said that they are committed to
providing security updates for everything in stable/main.  You are
arguing that we, and our users, should be perfectly comfortable with the
idea of shipping a low-quality port for which this guarantee does not
hold.  I don't see any reason at all why that should be an acceptable
answer; if a port's stable users really don't care about security
updates, then the technical justification for including it in the
release, instead of doing separate snapshot releases, becomes quite
slim.

 In that light, we agreed on a procedure for dropping a port which is
 designed to avoid abuse, by making it as open as possible: if any of
 the aforementioned teams wants to use their veto power, they have to
 post a full rationale to the debian-devel-announce mailinglist, with
 an explanation of the problems and reasons for their decision.

 I would like to see a detailed rationale about the circumstances under which
 such a veto might be raised, *before* anyone can veto something. It should be
 clear what requirements must be fulfilled so that a team can veto something.
 Otherwise you'll always end up with discussions where the vetoing team says
 this and the other team (like the porters) says that. And what if someone has
 objections against that veto? What procedure will handle this situation
 then?

I don't know if this is a language gap or what, but dude, it's a *veto*.
The whole *point* is that these are the teams that bear most of the
burden if a port isn't being looked after, and they should therefore
have a direct say in whether such a port is included, and it shouldn't
be automatically allowed in just because the manner in which it's in
shitty shape isn't something anyone thought of when putting together
this list.  Exercising a veto means that, in that team's expert opinion,
it is not in the interest of the Debian project to treat that port as a
release candidate.  The procedure if you object is to *change that
expert opinion* -- preferably by addressing the cause for the veto,
though it seems that there are at least some people around who would
much rather antagonize the release team/ftpmasters until they resign in
disgust than actually step up and *work* on the ports they're so keen to
defend.

  None of the participants had a problem with any of the other
  requirements. Note that the separate mirror network is fully distinct from
  the rest of the original proposal (although there was a significant
  amount of confusion over that fact). The ability to be part of a
  stable release (or not) would be fully distinct from the separate mirror
  network; indeed, the implementation of both items will now be discussed
  and implemented fully separately, to avoid further confusion.

 During the original Vancouver proposal discussion, the mirror network problem
 was said to be the reason for the Vancouver proposal.

Uh, no, it wasn't.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
[EMAIL PROTECTED]   http://www.debian.org/


signature.asc
Description: Digital signature


Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Aurelien Jarno

Sven Luther wrote:

All packages should be built by official Debian buildds anyway, not on
developer machines with random cruft and insecure packages installed, or even
possibly experimental or home-modified stuff.


What about packages built on developer machines, but using the same 
software as on the official Debian buildds? I mean using sbuild in a 
dedicated chroot. I sometimes do that for my packages when buildds are 
lagging or when a package fails to build because of missing dependencies.
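That workflow can be sketched roughly as follows (a sketch, assuming an sbuild chroot named "unstable" already exists, e.g. created with sbuild-createchroot; the .dsc file name below is hypothetical):

```shell
# Sketch of building a package with sbuild in a dedicated chroot --
# the same tool the official buildds run. Assumes a chroot named
# "unstable" has already been set up; the .dsc name is hypothetical.
CHROOT=unstable
DSC=nbd_2.7.4-1.dsc
CMD="sbuild -d $CHROOT $DSC"
# Echo the command instead of executing it, since the chroot and
# sbuild itself may not be available here:
echo "$CMD"
```

The result is a binary built in the same kind of clean, minimal environment the official buildds use, just on a developer's machine.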


Bye,
Aurelien

--
  .''`.  Aurelien Jarno | GPG: 1024D/F1BCDB73
 : :' :  Debian GNU/Linux developer | Electrical Engineer
 `. `'   [EMAIL PROTECTED] | [EMAIL PROTECTED]
   `-people.debian.org/~aurel32 | www.aurel32.net





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Andreas Jochens
 On Mon, Aug 22, Wouter Verhelst wrote:
   - binary packages must be built from unmodified Debian source
  
  Uhm? When there is a new arch upcoming, they need to modifiy the Debian
  source, at least sometimes, right?

 Yes, and this happens. I've already had requests to modify my
 Architecture: line in my nbd packages for new ports, such as amd64 and
 ppc64, even before they're part of the archive.

 It's not hard to do this, and if there's a valid patch, people usually
 apply it.

Yes, most maintainers are very helpful to porters and apply these kinds 
of simple patches to add support for a new architecture. 

As someone who sent quite a few such requests, I would like to thank 
all maintainers who applied these kinds of changes and helped to sort 
out architecture-specific problems.

Unfortunately, there are also maintainers who say something like
"I will not make my package work on architecture xxx because 
that architecture is not part of Debian."

This is rare, but it happens. I recently got such a reaction when I
tried to convince a maintainer of the 'linux-2.6' kernel package 
to add a small 8-line patch to support the native ppc64 port
by reusing the kernel config files which are already available 
for the regular powerpc architecture.

Regards
Andreas Jochens





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Sven Luther
On Mon, Aug 22, 2005 at 11:51:55AM +0200, Aurelien Jarno wrote:
 Sven Luther wrote:
 All packages should be built by official Debian buildds anyway, not on
 developer machines with random cruft and insecure packages installed,
 or even possibly experimental or home-modified stuff.
 possibly experimental or home-modified stuff.
 
 What about packages built on developer machines, but using the same 
 software as on the official Debian buildds? I mean using sbuild in a 
 dedicated chroot. I sometimes do that for my packages when buildds are 
 lagging or when a package fails to build because of missing dependencies.

Should be OK, but the security level would still be higher using only
official, centrally controlled buildds.

The only reason this does not happen is that the ftp-masters dislike the x86
buildds lagging or breaking and people bothering them about it.

Friendly,

Sven Luther





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Marc Haber
On Mon, 22 Aug 2005 10:19:38 +0200, Ingo Juergensmann
[EMAIL PROTECTED] wrote:
On Sun, Aug 21, 2005 at 03:58:24AM +0200, Wouter Verhelst wrote:
 4. The requirement that any port has to have 5 developers support it,
and be able to demonstrate that there are (at least) 50 users.

How should this demonstration be achieved? What is the procedure for
this? When I grep my /etc/passwd I have 28 users for m68k on my own, but
that machine just counts as one user on popcon.d.o.

Some people feared that this could kill off a port such as s390,
which typically has few installations, but many users on a single
installation. It was confirmed that the important number here is the
number of users, rather than the number of installations; so any port
should be able to reach that number.

... and this paragraph makes clear that you just can't use popcon for that
issue. So, how shall those users be counted?

Popcon is good at giving a lower bound on the number of users, so
if an arch has 50 machines reporting to popcon, it is sure to have
passed the 50-user limit.

I can imagine that for archs with less than 50 machines reporting to
popcon it could be possible to have some kind of registration
mechanism.
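Marc's lower-bound reasoning can be sketched in a few lines. This is a toy
illustration: the counts and the helper name are made up, not real popcon
data.

```python
# Hypothetical popcon submission counts per architecture; the real
# numbers live at popcon.debian.org.
popcon_reporters = {"i386": 12000, "powerpc": 350, "m68k": 32}

THRESHOLD = 50  # the proposed "at least 50 users" requirement

def demonstrated_by_popcon(arch: str) -> bool:
    # Each reporting machine proves at least one user, so popcon gives
    # only a lower bound: enough reporters settles the question, but
    # too few reporters does NOT prove too few users.
    return popcon_reporters.get(arch, 0) >= THRESHOLD

for arch in sorted(popcon_reporters):
    status = "demonstrated" if demonstrated_by_popcon(arch) else "needs another count"
    print(f"{arch}: {status}")
```

An arch below the threshold isn't thereby shown to lack users; it just needs
some other counting mechanism, as suggested above.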

Greetings
Marc

-- 
-- !! No courtesy copies, please !! -
Marc Haber |Questions are the | Mailadresse im Header
Mannheim, Germany  | Beginning of Wisdom  | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG Rightful Heir | Fon: *49 621 72739834



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Andreas Barth
* Olaf van der Spek ([EMAIL PROTECTED]) [050822 12:35]:
 On 8/22/05, Steve Langasek [EMAIL PROTECTED] wrote:
  In particular, we invariably run into arch-specific problems every time
  a new version of a toolchain package is uploaded to unstable.  Some may
  remember that the new glibc/gcc blocked non-toolchain progress for
  months during the beginning of the sarge release cycle, and that the
  aftermath took months more to be sorted out.  So far, etch threatens to
  be more of the same; in the past month we've had:

 I've been wondering, why isn't the new toolchain tested and the
 resulting errors fixed before it's uploaded to unstable or made the
 default?

Because apparently nobody does that. To really find out (some of) the
toolchain bugs, you need to compile the whole archive with the new
toolchain. And, by the way, the new toolchain was available in experimental
for ages. GCC 4.0 was also in unstable for quite some time before it
was made the default.


Cheers,
Andi





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Andreas Barth
* Ingo Juergensmann ([EMAIL PROTECTED]) [050822 10:42]:
 On Sun, Aug 21, 2005 at 03:58:24AM +0200, Wouter Verhelst wrote:
  4. The requirement that any port has to have 5 developers support it,
 and be able to demonstrate that there are (at least) 50 users.
 
 How should this demonstration be achieved? What is the procedure for
 this? When I grep my /etc/passwd I have 28 users for m68k on my own, but
 that machine just counts as one user on popcon.d.o.
  [...]
 ... and this paragraph makes clear that you just can't use popcon for that
 issue. So, how shall those users be counted?

Well, this was not specified on purpose. If popcon shows very many
machines (e.g. for i386), then that's all. If it doesn't, then use
another way. If you have 70 people actively using m68k at your site
every day, then a statement from you might be enough (if you provide
enough details, of course). Another way would be to show that
a lot of different users filed bug reports with reportbug on
that arch. Or whatever. It just means it needs to be demonstrated,
and, frankly speaking, that should really be possible for any arch.

(And the reason is quite simple: If a port is used, much more specific
bugs are found than if only buildds use this port.)


  - must be a developer-accessible debian.org machine for the
architecture
 
 A single machine is sufficient? Should this be a publicly available d.o
 machine or is limited access sufficient?

Any developer needs to have access to that machine.



  - binary packages must be built from unmodified Debian source

 Uhm? When there is a new arch upcoming, they need to modify the Debian
 source, at least sometimes, right?

But they can't upload that binary package, even if only for legal
reasons.  Of course, the source-package might be NMUed ...

(and yes, that's nothing new)

 Or do you mean by overall requirements just the already established ports?
 My interpretation of overall is that this counts for both (new and old
 ports).

It counts for all. And any binary package in the archive must be built
from the source package of the same(*) version in the archive ((*) where
"same" treats any binNMU version of the package as equal to the base
version, i.e. version 1-2 == 1-2.0.1).
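The "1-2 == 1-2.0.1" equivalence can be sketched as follows. This is a
simplified illustration of old-style binNMU version suffixes; the function
names here are made up, and dpkg's real version handling is more involved:

```python
import re

def base_version(version: str) -> str:
    # Strip an old-style binNMU suffix (".0.N" appended to the Debian
    # revision), so that "1-2.0.1" reduces to "1-2".
    return re.sub(r"\.0\.\d+$", "", version)

def same_source_version(a: str, b: str) -> bool:
    # Two binary-package versions count as "the same" source version
    # once any binNMU suffix has been removed.
    return base_version(a) == base_version(b)

print(same_source_version("1-2", "1-2.0.1"))  # True
```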

  - binaries must have been built and signed by official Debian
Developers

 This has been always the case, right?

Yes, as has "built from unmodified source". But it doesn't hurt to write
it down here.




Cheers,
Andi





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Manoj Srivastava
On Mon, 22 Aug 2005 11:51:55 +0200, Aurelien Jarno [EMAIL PROTECTED] said: 

 Sven Luther wrote:
 All packages should be built by official Debian buildds anyway, not
 on developer machines with random cruft and insecure packages
 installed, or even possibly experimental or home-modified stuff.

 What about packages built on developer machines, but using the same
 software as on the official debian buildds? I mean using sbuild in a
 dedicated chroot. I sometimes do that for my packages when buildd
 are lagging or when a package fails to build because of missing
 dependencies.

The end goal is not just to have packages built on the
 buildd -- an important goal for Debian, certainly, but not the only
 one we have. As promoters of free software, we are also committed to
 having packages build for our users, in a real environment, not just
 a sterile, controlled, artificial, Debian-specific test
 environment. 

Moving towards requiring all packages be built in such an
 artificial test environment takes us further away from
 ensuring the second goal is met.

manoj
-- 
I know you think you thought you knew what you thought I said, but I'm
not sure you understood what you thought I meant.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Hamish Moffatt
On Mon, Aug 22, 2005 at 12:52:06PM +0200, Sven Luther wrote:
 On Mon, Aug 22, 2005 at 11:51:55AM +0200, Aurelien Jarno wrote:
  Sven Luther wrote:
  All packages should be built by official Debian buildds anyway, not on
  developer machines with random cruft and insecure packages installed, or
  even possibly experimental or home-modified stuff.
  
  What about packages built on developer machines, but using the same 
  software as on the official debian buildds? I mean using sbuild in a 
  dedicated chroot. I sometimes do that for my packages when buildds are 
  lagging or when a package fails to build because of missing dependencies.
 
 Should be OK, but the security level would still be higher using only
 official, centrally controlled buildds.

Really? The maintainer can still embed "rm -rf /" in the postinst either
way. We need to be able to trust developers.

Similarly, sponsored packages should be rebuilt because the project
hasn't decided to officially trust those contributors.


Hamish
-- 
Hamish Moffatt VK3SB [EMAIL PROTECTED] [EMAIL PROTECTED]





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Mario Fux
On Sunday, 21 August 2005 at 03.58, Wouter Verhelst wrote:
 Hi all,

Good morning

Most of the time I only read this list, and that is what I have done with 
this discussion. But sometimes I dare to write and suggest something ;-) 
(see below).

snip


 Initial:
 - must be publically available to buy new
 - must be freely usable (without NDA)
 - must be able to run a buildd 24/7 without crashing
 - must have an actual, working buildd
 - must include basic UNIX functionality
 - 5 developers must send in a signed request for the addition
 - must demonstrate to have at least 50 users
 - must be able to keep up with unstable with 2 buildd machines, and must
   have one redundant buildd machine

 Overall:
 - must have successfully compiled 98% of the archive's source (excluding
   arch-specific packages)
 - must have a working, tested installer
 - security team, DSA, and release team must not veto inclusion
 - must be a developer-accessible debian.org machine for the
   architecture
 - binary packages must be built from unmodified Debian source
 - binaries must have been built and signed by official Debian
   Developers

It seems that a lot of people discuss this topic on a fairly theoretical 
basis, which I think wouldn't be necessary if one could see which 
architectures pass which condition. Then you could discuss specific items 
and not just theory (which is not bad by definition).

I think the work for such an architecture list isn't that heavy, and I would 
even do it myself, but I'm just a user and promoter of Debian and don't know 
the sites where I would find the information.

snip

Anyway, thanks for your great work, which I use every day.
Mario





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Manoj Srivastava
On Mon, 22 Aug 2005 11:27:33 +0200, Jonas Smedegaard [EMAIL PROTECTED] said: 

 Also, as Manoj[1] and others have pointed out, sponsors are
 _expected_ to recompile packages they sign, but I believe it is not
 part of policy.

Which policy? 

 So I ask again: Is this an intended (and IMO quite welcome) change
 of policy, or a typo?

I don't believe this is a change in sponsoring policy,
 personally.

manoj
-- 
I believe in a God which doesn't need heavy financing. Fletch
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Martin Pitt
Hi!

Manoj Srivastava [2005-08-22  7:58 -0500]:
 The end goal is not just to have packages built on the
  buildd -- an important goal for Debian, certainly, but not the only
  one we have. As promoters of free software, we are also committed to
  having packages build for our users, in a real environment, not just
  a sterile, controlled, artificial, Debian-specific test
  environment. 

But it is exactly the deviations from the canonical standard system
on a buildd that have the potential to break stuff for users (wrong
shlib dependencies, different toolchain, etc.).

Why do we deprive the majority of our users (i386) of the privilege of
getting known-good packages? (The majority of DDs do binary uploads
for i386 as well.)

/provoke

Martin

-- 
Martin Pitthttp://www.piware.de
Ubuntu Developer   http://www.ubuntu.com
Debian Developer   http://www.debian.org




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Olaf van der Spek
On 8/22/05, Andreas Barth [EMAIL PROTECTED] wrote:
 * Olaf van der Spek ([EMAIL PROTECTED]) [050822 12:35]:
  On 8/22/05, Steve Langasek [EMAIL PROTECTED] wrote:
   In particular, we invariably run into arch-specific problems every time
   a new version of a toolchain package is uploaded to unstable.  Some may
   remember that the new glibc/gcc blocked non-toolchain progress for
   months during the beginning of the sarge release cycle, and that the
   aftermath took months more to be sorted out.  So far, etch threatens to
   be more of the same; in the past month we've had:
 
  I've been wondering, why isn't the new toolchain tested and the
  resulting errors fixed before it's uploaded to unstable or made the
  default?
 
 Because apparently nobody does that. To really find out (some of) the
 toolchain bugs, you need to compile the whole archive with the new
 toolchain. And, by the way, the new toolchain was available in experimental
 for ages. GCC 4.0 was also in unstable for quite some time before it
 was made the default.

I understand most maintainers don't try the new toolchain themselves,
but wouldn't it be possible for someone else to build the entire
archive (or parts of it, split among multiple people) and
(automatically) report bugs?



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Olaf van der Spek
On 8/22/05, Hamish Moffatt [EMAIL PROTECTED] wrote:
 Really? The maintainer can still embed "rm -rf /" in the postinst either
 way. We need to be able to trust developers.
 
 Similarly, sponsored packages should be rebuilt because the project
 hasn't decided to officially trust those contributors.

But it's far easier to check (audit?) source code than to check binaries.



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Olaf van der Spek
On 8/22/05, Manoj Srivastava [EMAIL PROTECTED] wrote:
  The end goal is not just to have packages built on the
  buildd -- an important goal for Debian, certainly, but not the only
  one we have. As promoters of free software, we are also committed to
  having packages build for our users, in a real environment, not just
  a sterile, controlled, artificial, Debian-specific test
  environment.

If the two builds result in (significantly) different packages,
wouldn't that be a bug?

Moving towards requiring all packages be built in such an
  artificial test environment takes us further away from
  ensuring the second goal is met.

Isn't it just in addition to being built on the developer's system?



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Andreas Barth
* Olaf van der Spek ([EMAIL PROTECTED]) [050822 17:01]:
 On 8/22/05, Andreas Barth [EMAIL PROTECTED] wrote:
  * Olaf van der Spek ([EMAIL PROTECTED]) [050822 12:35]:
   On 8/22/05, Steve Langasek [EMAIL PROTECTED] wrote:
In particular, we invariably run into arch-specific problems every time
a new version of a toolchain package is uploaded to unstable.  Some may
remember that the new glibc/gcc blocked non-toolchain progress for
months during the beginning of the sarge release cycle, and that the
aftermath took months more to be sorted out.  So far, etch threatens to
be more of the same; in the past month we've had:
  
   I've been wondering, why isn't the new toolchain tested and the
   resulting errors fixed before it's uploaded to unstable or made the
   default?
  
  Because apparently nobody does that. To really find out (some of) the
  toolchain bugs, you need to compile the whole archive with the new
  toolchain. And, by the way, the new toolchain was available in experimental
  for ages. GCC 4.0 was also in unstable for quite some time before it
  was made the default.

 I understand most maintainers don't try the new toolchain themselves,
 but wouldn't it be possible for someone else to build the entire
 archive (or parts of it by multiple people) and (automatically) report
 bugs?

Building is possible and has happened. About 500 such bugs were still
unresolved when gcc-4.0 was made the default. But some obscure
usage breakage won't be noticed that way, unless the package has a decent
self-test.

So, in sum: yes, we try to do that as much as possible.



Cheers,
Andi





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Gunnar Wolf
Sven Luther wrote [Mon, Aug 22, 2005 at 12:52:06PM +0200]:
  What about packages built on developer machines, but using the same 
  software as on the official debian buildds? I mean using sbuild in a 
  dedicated chroot. I sometimes do that for my packages when buildds are 
  lagging or when a package fails to build because of missing dependencies.
 
 Should be OK, but the security level would still be higher using only
 official, centrally controlled buildds.
 
 The only reason this does not happen is that the ftp-masters dislike the x86
 buildds lagging or breaking and people bothering them about it.

Huh? Would an off-the-shelf old 1.5GHz P4 lag behind a top-of-the-line
m68k or ARM? Would it break more often than a MIPS?

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5623-0154
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Gunnar Wolf
Jonas Smedegaard wrote [Sun, Aug 21, 2005 at 07:28:55PM +0200]:
  We also came to the conclusion that some of the requirements proposed in
  Vancouver would make sense as initial requirements -- requirements that
  a port would need to fulfill in order to be allowed on the mirror
  network -- but not necessarily as an 'overall' requirement -- a
  requirement that a port will always need to fulfill if it wants to be
  part of a stable release, even if it's already on the mirror network.
  Those would look like this:
 [snip]
  Overall:
 [snip]
  - binaries must have been built and signed by official Debian
Developers
 
 Currently, sponsored packages are only signed, not built, by official
 Debian Developers.
 
 Is that intended to change, or is it a typo in the proposal?

Umh... It is possible, yes, to sign and upload a package you didn't
build yourself - I would only do it if I knew my sponsoree knows
_very_ well (i.e. at least as well as I do) how to do stuff. I think
it is common practice (though I cannot say for sure) to build
before uploading - you cannot be sure, of course. This also cannot
become policy, as it cannot be checked... But I feel it is common
practice already.

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5623-0154
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread David Nusinow
On Mon, Aug 22, 2005 at 12:22:47AM -0700, Steve Langasek wrote:
 There was discussion in Vancouver about requiring ports to have an
 upstream kernel maintainer, FSO upstream; perhaps we should be
 considering requiring there to be a glibc/gcc/binutils upstream for each
 port, so that we don't get the first sign of these bugs when the
 packages hit unstable.

What sort of QA would upstream be doing that would help us out here? Can
the port teams do this kind of work themselves before packages hit
unstable?

 - David Nusinow





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Andreas Barth
* Gunnar Wolf ([EMAIL PROTECTED]) [050822 18:01]:
 Huh? Would an off-the-shelf old 1.5GHz P4 lag behind a top-of-the-line
 m68k or ARM?

If you manage to put enough RAM in the current ARM: definitely yes. The
last time I was about to buy a new machine, the only reason I
didn't buy an ARM machine was that it was way too expensive.
Otherwise, it's speed-wise at least as good as i386.


Cheers,
Andi





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Adrian von Bidder
On Monday 22 August 2005 12.58, Marc Haber wrote:

 I can imagine that for archs with less than 50 machines reporting to
 popcon it could be possible to have some kind of registration
 mechanism.

Uh, please don't add huge technical overhead for corner cases that will 
rarely happen, if ever.  I'm confident that unless a port is really showing 
severe neglect, nobody will ever be interested in a precise body count.  

Personally, I think the 50 users limit is just silly.  Let's stick with what 
really matters:  can we (the Debian project) maintain the port?  Thus I 
propose we only limit on the number of developers:  are there people who 
are willing and competent to maintain kernel, boot loader, platform 
specific installer bits, libc and toolchain?

Furthermore, I think port maintainers should be much more aggressive in 
excluding packages from being built on their port.  For example (without 
having the experience) it might not make sense to build KDE3 for PDP8 or 
ENIAC - yet the packages are built and take a HUGE chunk of buildd time on 
every upload.  Why not have a per-port blacklist (maintained by the port 
maintainers, not the package maintainers) of packages that are not suitable 
for a port, and just put up a section in the release notes (or wherever) on 
why such-and-such packages are not available?  If enough people want them, 
somebody will certainly run to put up an archive on apt-get.org with 
unofficial packages.

(disclaimer: I only run x86 myself, so perhaps this is a stupid idea.)

cheers
-- vbi

-- 
The grand coalition is parliament's organized society for warding off
unwelcome voter influences.
-- Helmar Nahr




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread W. Borgert
On Mon, Aug 22, 2005 at 07:29:31PM +0200, Adrian von Bidder wrote:
 really matters:  can we (the Debian project) maintain the port?  Thus I 
 propose we only limit on the number of developers:  are there people who 
 are willing and competent to maintain kernel, boot loader, platform 
 specific installer bits, libc and toolchain?

That sounds sensible.

 Furthermore, I think port maintainers should be much more aggressive in 
 excluding packages from being built on their port.  For example (without 
...
 (disclaimer: I only run x86 myself, so perhaps this is a stupid idea.)

Same for me, so I like the idea.  Any port should have to have
essential and standard; optional is, well, optional.

Cheers,
-- 
W. Borgert [EMAIL PROTECTED], http://people.debian.org/~debacle/





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Roger Leigh

Olaf van der Spek [EMAIL PROTECTED] writes:

 On 8/22/05, Steve Langasek [EMAIL PROTECTED] wrote:
 In particular, we invariably run into arch-specific problems every time
 a new version of a toolchain package is uploaded to unstable.  Some may
 remember that the new glibc/gcc blocked non-toolchain progress for
 months during the beginning of the sarge release cycle, and that the
 aftermath took months more to be sorted out.  So far, etch threatens to
 be more of the same; in the past month we've had:

 I've been wondering, why isn't the new toolchain tested and the
 resulting errors fixed before it's uploaded to unstable or made the
 default?

Andreas Jochens in particular did a lot of hard work in fixing most of
the GCC 4.0 failures and regressions over the last year while porting
for amd64.  The fact that many maintainers have not yet applied his
patches, or at least carefully reviewed and applied amended versions,
is a pity.

As the bug lists at http://bts.turmzimmer.net show, most of the RC
bugs currently have patches.  Grab yours today!


Regards,
Roger

-- 
Roger Leigh
Printing on GNU/Linux?  http://gimp-print.sourceforge.net/
Debian GNU/Linuxhttp://www.debian.org/
GPG Public Key: 0x25BFB848.  Please sign and encrypt your mail.





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Steve Langasek
On Mon, Aug 22, 2005 at 11:44:05AM +0200, Olaf van der Spek wrote:
 On 8/22/05, Steve Langasek [EMAIL PROTECTED] wrote:
  In particular, we invariably run into arch-specific problems every time
  a new version of a toolchain package is uploaded to unstable.  Some may
  remember that the new glibc/gcc blocked non-toolchain progress for
  months during the beginning of the sarge release cycle, and that the
  aftermath took months more to be sorted out.  So far, etch threatens to
  be more of the same; in the past month we've had:

 I've been wondering, why isn't the new toolchain tested and the
 resulting errors fixed before it's uploaded to unstable or made the
 default?

Tested by *who*, exactly?  That's precisely the question here.  These
are bugs that don't get found by just doing a test compile of one or two
programs, they only get found by making extensive use of the toolchain.  
If the toolchain maintainers don't use the architecture in question, and
there's no porter involved in the toolchain (packaging or upstream),
then this doesn't happen.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
[EMAIL PROTECTED]   http://www.debian.org/




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Olaf van der Spek
On 8/22/05, Steve Langasek [EMAIL PROTECTED] wrote:
 On Mon, Aug 22, 2005 at 11:44:05AM +0200, Olaf van der Spek wrote:
  On 8/22/05, Steve Langasek [EMAIL PROTECTED] wrote:
   In particular, we invariably run into arch-specific problems every time
   a new version of a toolchain package is uploaded to unstable.  Some may
   remember that the new glibc/gcc blocked non-toolchain progress for
   months during the beginning of the sarge release cycle, and that the
   aftermath took months more to be sorted out.  So far, etch threatens to
   be more of the same; in the past month we've had:
 
  I've been wondering, why isn't the new toolchain tested and the
  resulting errors fixed before it's uploaded to unstable or made the
  default?
 
 Tested by *who*, exactly?  That's precisely the question here.  These
 are bugs that don't get found by just doing a test compile of one or two
 programs, they only get found by making extensive use of the toolchain.
 If the toolchain maintainers don't use the architecture in question, and
 there's no porter involved in the toolchain (packaging or upstream),
 then this doesn't happen.

I'm not aware of the details and nature of the bugs, but if they
indeed cause months of trouble, it may be an idea to look at getting
test builds of the entire archive done for all architectures, or at
least for i386.



Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Sven Luther
On Mon, Aug 22, 2005 at 10:32:31AM -0500, Gunnar Wolf wrote:
 Sven Luther wrote [Mon, Aug 22, 2005 at 12:52:06PM +0200]:
   What about packages built on developer machines, but using the same 
   software as on the official debian buildds? I mean using sbuild in a 
   dedicated chroot. I sometimes do that for my packages when buildds are 
   lagging or when a package fails to build because of missing dependencies.
  
  Should be OK, but the security level would still be higher using only
  official, centrally controlled buildds.
  
  The only reason this does not happen is that the ftp-masters dislike the x86
  buildds lagging or breaking and people bothering them about it.
 
 Huh? Would an off-the-shelf old 1.5GHz P4 lag behind a top-of-the-line
 m68k or ARM? Would it break more often than a MIPS?

Well, the problem is that they fear a horde of angry x86 users
complaining to them if there is even a one-day delay in the build of some
random x86 package :) Also consider the effect of toolchain or buildd
breakage on the angry mails demanding immediate availability of said packages.

Apparently the alternative arch users are much more calm and reasonable :)

Friendly,

Sven Luther





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Peter Samuelson

[Adrian von Bidder]
 Why not have a per-port blacklist (maintained by the port
 maintainers, not the package maintainers) of packages that are not
 suitable for a port

They do.

 and just put up a section in the release notes (or wherever) on why
 such-and-such packages are not available.

That part they don't do.




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Steve Langasek
On Mon, Aug 22, 2005 at 06:22:11PM +, W. Borgert wrote:
 On Mon, Aug 22, 2005 at 07:29:31PM +0200, Adrian von Bidder wrote:
  really matters:  can we (the Debian project) maintain the port?  Thus I 
  propose we only limit on the number of developers:  are there people who 
  are willing and competent to maintain kernel, boot loader, platform 
  specific installer bits, libc and toolchain?

 That sounds sensible.

It ignores the fact that every port is a drain on centralized project
resources, whether it has users or not.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
[EMAIL PROTECTED]   http://www.debian.org/




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Hamish Moffatt
On Mon, Aug 22, 2005 at 04:45:28PM +0200, Olaf van der Spek wrote:
 On 8/22/05, Manoj Srivastava [EMAIL PROTECTED] wrote:
    The end goal is not just to have packages built on the
    buildd -- an important goal for Debian, certainly, but not the only
    one we have. As promoters of free software, we are also committed to
    having packages build for our users, in a real environment, not just
    a sterile, controlled, artificial, Debian-specific test
    environment.
 
 If the two builds result in (significantly) different packages
 wouldn't that be a bug?

Yes, but not one that we are likely to detect. (Not unless somebody
plans to run mass-rebuilds with random other packages installed, for
example.)

An example: my latest Xpdf package won't compile if libstroke0-dev is
installed, because libstroke0-dev provides dud autoconf macros (also my 
bug). Hence it build-conflicts until I fix that other package. A nice clean
chroot wouldn't find this problem, but it would confuse anyone trying to
rebuild xpdf who did have that other package installed (as it did me!).
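For readers unfamiliar with the mechanism: the fix Hamish describes is a Build-Conflicts field in the source stanza of debian/control. The libstroke0-dev and xpdf names come from his message; the surrounding fields here are illustrative placeholders, not xpdf's actual control file:

```
Source: xpdf
Section: text
Priority: optional
Maintainer: (placeholder)
Build-Depends: debhelper (>= 4)
Build-Conflicts: libstroke0-dev
```

With this in place, dpkg-buildpackage (and the buildds) will refuse to build the package while libstroke0-dev is installed, rather than producing a silently broken build.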


Hamish
-- 
Hamish Moffatt VK3SB [EMAIL PROTECTED] [EMAIL PROTECTED]





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-22 Thread Thomas Bushnell BSG
Roger Leigh [EMAIL PROTECTED] writes:

 Andreas Jochens in particular did a lot of hard work in fixing most of
 the GCC 4.0 failures and regressions over the last year while porting
 for amd64.  The fact that many maintainers have not yet applied, or at
 least carefully reviewed and applied amended patches, is a pity.

The reason I didn't was that I didn't want to make potentially
destabilizing changes with sarge in progress.





Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Wouter Verhelst
Hi all,

Vancouver has gotten a very specific meaning in the Debian community:
that of a visionary proposal[1] which received quite its share of flames
from many Debian contributors, including myself. Since it appeared to
many of us that the intended result of this proposal would have been to
essentially kill off many of our architectures, we weren't too happy
with it.

In subsequent communication with some of the people present at the
Vancouver meeting, however, it became apparent to me that this was not
the idea; rather, the proposal tried to create a set of quality
requirements for all of our ports, so that our users would be guaranteed
to get, for example, a Debian/SPARC of the same quality as
Debian/PowerPC.

This in itself is a laudable goal; but as I felt that the requirements,
as proposed, did not meet that goal, I called for a meeting at DebConf5
with all parties involved and present at the conference.

This meeting has taken place on 2005-07-11[2], with the following people
attending:

* Bdale Garbee, DPL team member, has been involved with the startup of 5
  ports;
* Branden Robinson, DPL;
* Wouter Verhelst, m68k porter;
* Gerfried Fuchs, local admin to buildd machines;
* Joey Hess, core d-i developer, present at the Vancouver meeting;
* Kurt Roeckx, non-DD amd64 porter;
* Anthony Towns, FTP-master, present at the Vancouver meeting;
* Jeroen van Wolffelaar, DPL team member, FTP team member, present at
  the Vancouver meeting;
* James Troup, arm and sparc buildd admin, DSA team member, FTP-master,
  present at the Vancouver meeting;
* Florian Lohoff, mips/mipsel porter, local admin to buildd machines;
* Andreas Barth, Release Assistant, present at the Vancouver meeting;
* Guido Günther, mips/mipsel porter;
* Robert Jordens, DD;
* Steinar Gunderson, DD.

In addition, I have, beforehand, exchanged mail with Joey Schulze of the
Debian Security team about this meeting, and he's provided me with his
opinion on the matter.

While I did my best to get a wide range of people to attend, two notable
absentees are both our Release Managers. Since they couldn't be in
Helsinki, they obviously couldn't be at this meeting either (although
they've had the opportunity to review this text before it was sent out);
therefore, while we've come up with all sorts of things, they're not to
be seen as any sort of official release policy statement -- unless, of
course, it is officially added to our release policy by the release
team.

Anyway.

The problematic items we discussed at this meeting included the
following four points:

1. The requirement that 'an architecture must be publically available to
   buy new'.

   It was explained that this requirement was not meant to be applied
   retroactively to already existing ports; rather, it was designed to
   avoid new hardware which, as yet, is only available under NDA, or
   things such as a VAX port of Debian. Older ports, such as m68k and
   arm, are expected to reach a natural end of life, a point where it
   is no longer possible for Debian and the port's porters to support
   them, at which point the port would, of course, be dropped.

   With this explanation and rationale, nobody at the meeting had any
   remaining opposition to the requirement, and it was retained.

2. The requirement that any architecture needs to be able to keep up
   with unstable by using only two buildd machines.

   The rationale for this requirement was that there is a nontrivial
   cost to each buildd, which increases super-linearly; apparently,
   there have been cases in the past where this resulted in ports with
   many autobuilders slacking when updates were necessary (such as with
   the recent security autobuilder problems).

   On the flip side, it was argued that more autobuilders result in
   more redundancy; with a little overcapacity, such an architecture
   has a gain over one with just a single autobuilder, which falls
   behind entirely whenever that one autobuilder goes down.

   This item was much debated, and we didn't reach an agreement; in the
   end, we decided to move on. We hope that after more debate, we will
   reach a solution that is acceptable to everyone, but in the mean
   time, the requirement remains (but see below).

3. The veto powers given to the DSA team, the Security team, and the
   Release team, on a release of any given port.

   Some of us feared abuse of this veto power. All understood the
   problems that arise if any port is of such low quality that it
   sucks up the time of any of the three given teams; however, we felt
   that a nonspecific veto power, as proposed, would be too far-reaching.

   At first, a counter-proposal was made which would require the three
   teams to discuss a pending removal of a port together with the
   porters team, and require them to come to an agreement. This was
   dismissed, since a) this would move the problems to somewhere else,
   rather than fix them (by refusing to drop a port, a 

Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Jonas Smedegaard

On 21-08-2005 03:58, Wouter Verhelst wrote:

 We also came to the conclusion that some of the requirements proposed in
 Vancouver would make sense as initial requirements -- requirements that
 a port would need to fulfill in order to be allowed on the mirror
 network -- but not necessarily as an 'overall' requirement -- a
 requirement that a port will always need to fulfill if it wants to be
 part of a stable release, even if it's already on the mirror network.
 Those would look like this:
[snip]
 Overall:
[snip]
 - binaries must have been built and signed by official Debian
   Developers

Currently, sponsored packages are only signed, not built, by official
Debian Developers.


Is that intended to change, or is it a typo in the proposal?


Please cc me, as I am not subscribed to d-devel.


 - Jonas

- --
* Jonas Smedegaard - idealist og Internet-arkitekt
* Tlf.: +45 40843136  Website: http://dr.jones.dk/

 - Enden er nær: http://www.shibumi.org/eoti.htm





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Aurelien Jarno

Jonas Smedegaard wrote:

Currently, sponsored packages are only signed, not built, by official
Debian Developers.


Is that intended to change, or is it a typo in the proposal?

I don't know what the rule is, but personally, I never upload a package
I haven't built; I rebuild all packages I sponsor. I hope other
developers do the same.


Bye,
Aurelien

--
  .''`.  Aurelien Jarno | GPG: 1024D/F1BCDB73
 : :' :  Debian GNU/Linux developer | Electrical Engineer
 `. `'   [EMAIL PROTECTED] | [EMAIL PROTECTED]
   `-people.debian.org/~aurel32 | www.aurel32.net





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Joe Wreschnig
On Sun, 2005-08-21 at 19:28 +0200, Jonas Smedegaard wrote:
 
 On 21-08-2005 03:58, Wouter Verhelst wrote:
 
  We also came to the conclusion that some of the requirements proposed in
  Vancouver would make sense as initial requirements -- requirements that
  a port would need to fulfill in order to be allowed on the mirror
  network -- but not necessarily as an 'overall' requirement -- a
  requirement that a port will always need to fulfill if it wants to be
  part of a stable release, even if it's already on the mirror network.
  Those would look like this:
 [snip]
  Overall:
 [snip]
  - binaries must have been built and signed by official Debian
Developers
 
 Currently, sponsored packages are only signed, not built, by official
 Debian Developers.
 
 
 Is that intended to change, or is it a typo in the proposal?

I have always rebuilt (with pbuilder) packages I sponsor before
uploading them. This has accidentally broken a sponsored package once
due to a misconfiguration, but it's also caught missing build-deps,
builds against testing, etc, a dozen times.
-- 
Joe Wreschnig [EMAIL PROTECTED]




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Bas Zoetekouw
Hi Jonas!

You wrote:

  - binaries must have been built and signed by official Debian
Developers
 
 Currently, sponsored packages are only signed, not built, by official
 Debian Developers.

Sponsors do build the packages they sponsor themselves.  
Or at least, they should.

-- 
Kind regards,
++
| Bas Zoetekouw  | GPG key: 0644fab7 |
|| Fingerprint: c1f5 f24c d514 3fec 8bf6 |
| [EMAIL PROTECTED], [EMAIL PROTECTED] |  a2b1 2bae e41f 0644 fab7 |
++ 




Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread David Weinehall
On Sun, Aug 21, 2005 at 07:28:55PM +0200, Jonas Smedegaard wrote:
 
 On 21-08-2005 03:58, Wouter Verhelst wrote:
 
  We also came to the conclusion that some of the requirements proposed in
  Vancouver would make sense as initial requirements -- requirements that
  a port would need to fulfill in order to be allowed on the mirror
  network -- but not necessarily as an 'overall' requirement -- a
  requirement that a port will always need to fulfill if it wants to be
  part of a stable release, even if it's already on the mirror network.
  Those would look like this:
 [snip]
  Overall:
 [snip]
  - binaries must have been built and signed by official Debian
Developers
 
 Currently, sponsored packages are only signed, not built, by official
 Debian Developers.

I don't know about others, but I never sign and upload packages
built by others; I always rebuild packages when I sponsor someone.

I really hope others do the same.


Regards: David Weinehall
-- 
 /) David Weinehall [EMAIL PROTECTED] /) Rime on my window   (\
//  ~   //  Diamond-white roses of fire //
\)  http://www.acc.umu.se/~tao/(/   Beautiful hoar-frost   (/





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Henrique de Moraes Holschuh
On Sun, 21 Aug 2005, Jonas Smedegaard wrote:
 Currently, sponsored packages are only signed, not built, by official
 Debian Developers.

They are supposed to be BUILT by the sponsor of non-DDs, not just signed.

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Richard Atterer
On Sun, Aug 21, 2005 at 07:28:55PM +0200, Jonas Smedegaard wrote:
 Currently, sponsored packages are only signed, not built, by official
 Debian Developers.

Ahem, no! As the sponsor, you should rebuild the package from source using
the diff from the packager, and using the upstream sources, not the sources
provided by the packager. See this page:
http://www.debian.org/doc/developers-reference/ch-beyond-pkging.en.html#s-sponsoring
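As a sketch only (the package name "foo", its version, and the URL are placeholders, and tool preferences vary), the workflow Richard describes looks roughly like this with the standard devscripts/dpkg-dev tools of the era:

```sh
# Fetch the upstream tarball from upstream's own site, not from the
# sponsoree, and rename it to Debian's .orig convention.
wget http://example.org/foo-1.0.tar.gz
mv foo-1.0.tar.gz foo_1.0.orig.tar.gz

# Take only the .dsc and .diff.gz from the packager, and review the
# diff before trusting it.
zless foo_1.0-1.diff.gz

# Unpack, build (ideally in a clean chroot, e.g. via pbuilder), and
# sign the result with your own key before uploading.
dpkg-source -x foo_1.0-1.dsc
cd foo-1.0 && debuild -us -uc && cd ..
debsign foo_1.0-1_i386.changes
```

The point is that everything except the reviewed .diff.gz comes from, or is rebuilt by, the sponsor.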

  Richard

-- 
  __   _
  |_) /|  Richard Atterer |  GnuPG key:
  | \/¯|  http://atterer.net  |  0x888354F7
  ¯ '` ¯





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Manoj Srivastava
On Sun, 21 Aug 2005 19:28:55 +0200, Jonas Smedegaard [EMAIL PROTECTED] said: 

 Currently, sponsored packages are only signed, not built, by
 official Debian Developers.

Can you share with us the list of developers merely signing
 sponsored packages, so action can be taken?

 Is that intended to change, or is it a typo in the proposal?

It is a bug in any such maintainers.


manoj
-- 
Beer -- it's not just for breakfast anymore.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Laszlo Boszormenyi
On Sun, 2005-08-21 at 19:55 +0200, Aurelien Jarno wrote:
 Jonas Smedegaard wrote:
  Currently, sponsored packages are only signed, not built, by official
  Debian Developers.
  
  Is that intended to change, or is it a typo in the proposal?
  
 I don't know what the rule is, but personally, I never upload a package
 I haven't built; I rebuild all packages I sponsor. I hope other
 developers do the same.
I do rebuild them, and more than that: I download the .orig.tar.gz
myself from the official upstream location and of course check the
diff. This may sound paranoid, but that's me.

Regards,
Laszlo/GCS
-- 
BorsodChem Joint-Stock Company   www.debian.org Linux Support Center
Software engineerDebian Developer   Developer
+36-48-511211/23-85 +36-20-4441745




Team have veto rights, because they can just refuse the work anyway? (Was: Results of the meeting in Helsinki about the Vancouver proposal)

2005-08-21 Thread Petter Reinholdtsen
[Wouter Verhelst]
b) the three beforementioned teams could already refuse to
support a port anyhow, simply by not doing the work.

This is not really a valid argument.  If a team in Debian refuses to
accept decisions made by a majority of Debian developers, or rejects
democratic control, that team will just have to be replaced by the DPL.

The fact that some teams in Debian are able to refuse democratic
control in the short term does not mean that their refusal should be
accepted, nor allowed to block the will of the majority of the
project.  No one in Debian is irreplaceable, so I am sure we could
find good people to take over if such a team should ever appear.  I
trust us all to behave with reason, so we never end up in a situation
where we need to test such a hostile replacement.





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Jonas Smedegaard

On 21-08-2005 21:42, Manoj Srivastava wrote:
 On Sun, 21 Aug 2005 19:28:55 +0200, Jonas Smedegaard [EMAIL PROTECTED] 
 said: 
 
 
Currently, sponsored packages are only signed, not built, by
official Debian Developers.
 
 
 Can you share with us the list of developers merely signing
  sponsored packages, so action can be taken?
 
 
Is that intended to change, or is it a typo in the proposal?
 
 
   It is a bug in any such maintainers.

Thanks to all of you who commented (similarly) to this.

I must say that I favor your way of putting it, Manoj. You have a great
sense of words :-)


 - Jonas

Who does not sponsor packages but help new maintainers in other ways.


P.S.

Please cc me if responding to this: I am not subscribed to d-devel.

- --
* Jonas Smedegaard - idealist og Internet-arkitekt
* Tlf.: +45 40843136  Website: http://dr.jones.dk/

 - Enden er nær: http://www.shibumi.org/eoti.htm





Re: Team have veto rights, because they can just refuse the work anyway? (Was: Results of the meeting in Helsinki about the Vancouver proposal)

2005-08-21 Thread Mark Brown
On Sun, Aug 21, 2005 at 11:29:51PM +0200, Petter Reinholdtsen wrote:
 [Wouter Verhelst]

 b) the three beforementioned teams could already refuse to
 support a port anyhow, simply by not doing the work.

 This is not really a valid argument.  If a team in debian refuses to
 accept decisions made by a majority of debian developers, or rejects
 democratic control, this team will just have to be replaced by the DPL.

It's a perfectly sensible thing to say here: from a purely practical
point of view this *could* happen, and all the proposal says is that
that isn't really on.  Instead, it says that people need to explain
why they don't want something to go ahead -- to discuss things rather
than passively blocking them.  That seems like a good starting point
for dealing with the conflicts that are likely to occur, both here and
in other areas.

-- 
You grabbed my hand and we fell into it, like a daydream - or a fever.





Re: Team have veto rights, because they can just refuse the work anyway? (Was: Results of the meeting in Helsinki about the Vancouver proposal)

2005-08-21 Thread David Nusinow
On Sun, Aug 21, 2005 at 11:29:51PM +0200, Petter Reinholdtsen wrote:
 [Wouter Verhelst]
 b) the three beforementioned teams could already refuse to
 support a port anyhow, simply by not doing the work.
 
 This is not really a valid argument.  If a team in debian refuses to
 accept decisions made by a majority of debian developers, or rejects
 democratic control, this team will just have to be replaced by the DPL.

I think the reality of this situation is that a team would refuse to do
the work for valid reasons. The easy ones I can imagine are the ones
we've heard already, such as "we don't have the people to do this" or
"we don't have enough time".

Simply replacing teams in this sort of case would be foolhardy, as all
that is really required is for someone to come along and do the work, at
which point they join the team and are responsible for what they've
taken on.

If the teams themselves can't or won't do the work for good reasons, and
no one else is willing to do it either, then the DPL can't force people
to do the work.

 - David Nusinow





Re: Results of the meeting in Helsinki about the Vancouver proposal

2005-08-21 Thread Otavio Salvador
Jonas Smedegaard [EMAIL PROTECTED] writes:

 - binaries must have been built and signed by official Debian
   Developers

 Currently, sponsored packages are only signed, not built, by official
 Debian Developers.

I always build packages before sponsoring them, since I usually check
for trivial mistakes, and every time I skipped this step I experienced
bad results. Now I don't upload packages without double-checking them.

-- 
O T A V I OS A L V A D O R
-
 E-mail: [EMAIL PROTECTED]  UIN: 5906116
 GNU/Linux User: 239058 GPG ID: 49A5F855
 Home Page: http://www.freedom.ind.br/otavio
-
Microsoft gives you Windows ... Linux gives
 you the whole house.

