Re: Linux Core Consortium

2004-12-17 Thread Wouter Verhelst
On Thu, 2004-12-16 at 17:07 -0800, Adam McKenna wrote:
 On Fri, Dec 17, 2004 at 01:13:11AM +0100, Bill Allombert wrote:
  I think Wouter is only asking for reciprocity here. If they don't care
  about his concerns why should he care about theirs ? Or alternatively
  not caring is a freedom.
 
 We care because our priorities are our users and free software.  Just because
 someone works for an ISV or develops on/for proprietary software does not 
 make them a second class user.
 
 That said, I am not arguing for or against LCC, I just didn't like the tone
 of Wouter's e-mail, or the sentiment implied in it.

I did not intend that sentiment; I explained what I meant to say in a
new thread.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune




Re: Linux Core Consortium

2004-12-17 Thread Marco d'Itri
On Dec 16, Wouter Verhelst [EMAIL PROTECTED] wrote:

 I refuse to accept that 'providing a common binary core' would be the
 only way to fix the issue. It is probably the easiest way, and for lazy
 people it may look as if it is the only one; but we should not dismiss
 the idea that it is possible to fix the Free software or the standard so
 that the LSB /will/ work.
Agreed. Certifying binaries is simply not acceptable.

-- 
ciao, |
Marco | [9856 dojTFydnOyWkk]


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-17 Thread Steve Langasek
On Thu, Dec 16, 2004 at 02:37:09PM -0500, Ian Murdock wrote:
 On Tue, 2004-12-14 at 18:15 +0100, Tollef Fog Heen wrote:
  This sounds a bit like the government of a country that had three
  different types of power plugs.  None compatible, of course.  Somebody
  then got the great idea of inventing another one to supersede the
  three plugs in use (since the existing three caused overhead).  That
  country now has four plugs in use.

 Actually, it's more like the country that has a few dozen different
 types of power plugs, and they're all so minutely different from each
 other that the consumer has no idea why it's like that, only that if he
 buys something that requires electricity, he can only use what he
 buys in 50% of the places that have electrical power. Also, the
 differences are small enough that he *might* be able to plug in
 what he has bought in some of the other places, but he's never
 sure if or when the thing he plugs in will blow up. Three of the
 six makers of the most common plug types then get together, realize
 the stupidity of the current situation, and decide to correct it.
 At the very worst, there are two fewer plug types. At the very best,
 the dozens of others gradually disappear too. The end result is that
 consumers can now buy electrical equipment that works in more places.

s/power plugs/Windows service packs/

Wait... what was the ISVs' argument against having to test their
software on multiple slightly-incompatible distros again? ;-)

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-17 Thread Adam McKenna
On Fri, Dec 17, 2004 at 10:05:00AM +0100, Wouter Verhelst wrote:
 On Thu, 2004-12-16 at 17:07 -0800, Adam McKenna wrote:
  On Fri, Dec 17, 2004 at 01:13:11AM +0100, Bill Allombert wrote:
   I think Wouter is only asking for reciprocity here. If they don't care
   about his concerns why should he care about theirs ? Or alternatively
   not caring is a freedom.
  
  We care because our priorities are our users and free software.  Just 
  because
  someone works for an ISV or develops on/for proprietary software does not 
  make them a second class user.
  
  That said, I am not arguing for or against LCC, I just didn't like the tone
  of Wouter's e-mail, or the sentiment implied in it.
 
 I did not intend that sentiment; I explained what I meant to say in a
 new thread.

I apologize for misinterpreting your sentiment.

--Adam

-- 
Adam McKenna  [EMAIL PROTECTED]  [EMAIL PROTECTED]




Re: Linux Core Consortium

2004-12-17 Thread Manoj Srivastava
On Thu, 16 Dec 2004 12:51:54 -0800, Adam McKenna [EMAIL PROTECTED] said: 

 On Thu, Dec 16, 2004 at 09:25:38PM +0100, Wouter Verhelst wrote:
 On Thu, 2004-12-16 at 14:46 -0500, Ian Murdock wrote:
  We've heard directly from the biggest ISVs that nothing short of
  a common binary core will be viable from their point of view.
 
 Well, frankly, I don't care what they think is 'viable'.
 
  'ISV' is just another name for 'Software Hoarder'. I thought Debian
  was about Freedom, not about "how can we make the life of
  proprietary software developers easy?"

 Regardless of how you feel about proprietary software, it is someone
 else's work and they are free to sell or license it as they see fit.

Quite so. But I am in no way obligated to spend my time making
 it easier for them to do so in a proprietary fashion -- even if some
 users out there happen to like such software.

 I don't see how someone advocating freedom can in the same
 (virtual) breath presume to dictate what other people do with their
 work.

My point exactly. Why should free software have to go out
 of its way to make things easier for them? (But "our users luuv non
 free software" only stretches so far.)

Perhaps I am missing the point there?

manoj
-- 
A witty saying proves nothing. Voltaire
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Linux Core Consortium

2004-12-17 Thread Manoj Srivastava
On Thu, 16 Dec 2004 12:45:15 -0800, Bruce Perens [EMAIL PROTECTED] said: 

 Wouter Verhelst wrote:

 'ISV' is just another name for 'Software Hoarder'.
 

 Please keep in mind this portion of Debian's Social Contract:

    "We will support our users who develop and run non-free software
    on Debian"

Right. We'll not do anything that actively sabotages running
 non-free software -- we shall probably even expend a modicum of
 effort to help them.  But there is a line beyond which such effort
 is better spent on free software.

 One of the reasons for this is that you can get more people to
 appreciate Free Software if you can get it into their hands
 first. If they have no choice but to stick with RH and SuSE because
 they can't get their stuff supported elsewhere, they will never get
 our message.

Right. So I am cautiously in favour of giving the LCC a shot --
 while bearing in mind that this is not a do-or-die effort. If it does
 not impact the users of free software, and it does not require major
 effort, we should see if the LCC can be supported.

manoj
-- 
She always believed in the old adage -- leave them while you're
looking good. Anita Loos, Gentlemen Prefer Blondes
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Linux Core Consortium

2004-12-16 Thread Ian Murdock
On Tue, 2004-12-14 at 23:22 +0100, Christoph Hellwig wrote:
 On Tue, Dec 14, 2004 at 08:34:17AM -0500, Ian Murdock wrote:
  On Fri, 2004-12-10 at 00:44 +0100, Christoph Hellwig wrote:
   Besides that the LCC sounds like an extraordinarily bad idea, passing
   around binaries only makes sense if you can't easily reproduce them from
   the source (which I defined very broadly to include all build scripts
   and dependencies), and that case is the worst possible thing a distribution
   can get into.
  
  The LCC core will be fully reproducible from source. You may
  (and probably will) lose any certifications if you do that,
  for the same reason that the distros reproduced from the Red
  Hat Enterprise Linux source (e.g., White Box Linux) lose them.
  But it won't be "take it or leave it". If reproducing from
  source and/or modifying the core packages is more important to
  you than the certifications, you will be able to do that.
 
 So again, what do you gain by distributing binaries if they're
 reproducible from source?

So, again, ISV and IHV certifications.
-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-16 Thread Ian Murdock
On Tue, 2004-12-14 at 18:15 +0100, Tollef Fog Heen wrote:
 This sounds a bit like the government of a country that had three
 different types of power plugs.  None compatible, of course.  Somebody
 then got the great idea of inventing another one to supersede the
 three plugs in use (since the existing three caused overhead).  That
 country now has four plugs in use.

Actually, it's more like the country that has a few dozen different
types of power plugs, and they're all so minutely different from each
other that the consumer has no idea why it's like that, only that if he
buys something that requires electricity, he can only use what he
buys in 50% of the places that have electrical power. Also, the
differences are small enough that he *might* be able to plug in
what he has bought in some of the other places, but he's never
sure if or when the thing he plugs in will blow up. Three of the
six makers of the most common plug types then get together, realize
the stupidity of the current situation, and decide to correct it.
At the very worst, there are two fewer plug types. At the very best,
the dozens of others gradually disappear too. The end result is that
consumers can now buy electrical equipment that works in more places.

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-16 Thread Ian Murdock
On Wed, 2004-12-15 at 23:55 +0100, Bill Allombert wrote:
 On Wed, Dec 15, 2004 at 02:36:52PM -0800, Bruce Perens wrote:
  Bill Allombert wrote:
   But overriding them means we lose the certification ? 
  
  We can't allow it to be the case that overriding due to an existing and 
  unremedied security issue causes loss of certification. There's no 
  common sense in that.
 
 Then could you elaborate on the scope of the certification ? It is one
 of my main points of contention. I thought the certification required
 shipping the exact same binary as provided by the LCC.

The LCC will be pursuing its certification efforts through the
existing LSB certification process. The smaller ISVs will be more
willing to be flexible, so changes to the core that don't result
in loss of LSB compliance may be acceptable to them. We've heard
directly from the biggest ISVs that nothing short of a common
binary core will be viable from their point of view. So,
as with all things in this business, there will be tradeoffs
involved--you'll be free to make changes, at the potential
loss of some, though not necessarily all, ISV certifications.

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-16 Thread Ian Murdock
On Wed, 2004-12-15 at 07:42 -0800, Steve Langasek wrote:
  Core means implementation of the LSB, and the packages/libraries that
  will constitute it are being determined now, as a collaborative process.
 
 Well, for instance, the libacl package was brought up as an example in the
 context of changing library SONAMEs/package names.  The current LSB has
 nothing to say about this library; it covers only glibc, libstdc++, various
 core X libraries, zlib, libpam, libgcc, and ncurses.  So there was some doubt
 in my mind as to how broad a core ISVs were actually demanding be
 specified, as well as doubt regarding the conclusion that the LSB process
 was flawed if the simple fact was that ISVs were demanding standardization
 of libraries that no one has asked the LSB to address.

Doing an apt-cache showpkg on libacl1, I see coreutils depends on it. I
don't see coreutils in the list of dependencies of the lsb package,
so I assume it's implicitly pulled in by one of the other dependencies,
since LSB 2.0 does require ls and some of the other commands provided
by coreutils. So, by following the dependencies, libacl1
would be in that core too even if the LSB has nothing to say about it.
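
Roughly how to retrace that, if anyone wants to check it -- a sketch;
the exact output varies by suite, and the annotations are mine:

  $ apt-cache showpkg libacl1 | grep -A10 '^Reverse Depends:'
    (coreutils shows up among libacl1's reverse dependencies)
  $ apt-cache depends lsb
    (coreutils is not listed directly, so one of these
     dependencies pulls it in transitively)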

 I still think "implementation of the LSB" is a murky concept; the LSB
 specifies very little about kernel behavior after all (though some things
 are implicit given the description of libc ABIs plus the de facto glibc
 implementation of them), so I don't know whether and how many userspace
 tools it's also seen as essential to cover.

Yes, it's murky, but as you point out, and as the libacl1 example shows,
there's still considerable room for interpretation while remaining
LSB-compliant. We're trying to make the LSB stronger and more tangible
by building a reference implementation that software developers and
ISVs can target, knowing that what they're targeting is in use in the
real world and not just a sample implementation that may differ
dramatically from the LSB-certified distros their customers are using.

  I assumed Debian would want to be involved in this process too, rather
  than being presented with a more well-defined scope as a fait accompli..
 
 Particularly considering involvement in the LCC would require essentially
 unanimous consent of the maintainers of the covered core packages, I think
 it's impossible for Debian to be meaningfully involved without first having
 a clear understanding of what would be expected of us -- especially when
 late bouts of scope creep could hamstring the entire effort by suddenly
 requiring the consent of a new package maintainer who doesn't approve!  I
 think it's better if this can all be laid out up front so we can get a clear
 yea or nay from the affected maintainers before sending people off to
 committee.

I agree, and I probably pushed too strongly on my ideal scenario,
which no doubt led to the all or nothing perception that seems to
be prevailing in this thread.

The right thing to do now is for interested parties
within the project to begin discussing what Debian would want in the
core, development processes, etc., and to engage the LCC as a
separate group and/or as individuals to make sure Debian has a voice.

Whether Debian uses or is at least influenced by the result of the
LCC's work down the road should be a separate question, though it's
a strongly related question, because if what would be expected of
us turns out to be unrealistic or unachievable, the former will
be a non-starter. That's why Debian's voice needs to be heard now.

  As I said at the very outset, one possible way forward is to simply
  produce a Debian-packaged version of the LCC core independently,
  and to make sure that core is 100% compatible with Debian (i.e., you
  can take any Debian package and install it on the LCC Debian core
  and get the same results as if you'd installed it on Debian itself).
 
 I would prefer to take this a step further in the opposite direction.  Given
 that the LSB specifies a determined linker path as part of its ABI other than
 the one used by the toolchain by default, and given that the kernel is
 largely an independent, drop-in replacement wrt the rest of the packaging
 system, why *couldn't* we provide the LCC binaries in the same fashion as the
 current lsb package -- as a compatibility layer on top of the existing
 Debian system?  This sidesteps the problem of losing certification over
 small divergences that significantly impact our other ports and is a much
 more tractable goal, while still giving us a target to aim for with our base
 system.

That's certainly a way forward too, and we will explore it.

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-16 Thread Wouter Verhelst
On Thu, 2004-12-16 at 14:46 -0500, Ian Murdock wrote:
 We've heard
 directly from the biggest ISVs that nothing short of a common
 binary core will be viable from their point of view.

Well, frankly, I don't care what they think is 'viable'.

'ISV' is just another name for 'Software Hoarder'. I thought Debian was
about Freedom, not about "how can we make the life of proprietary
software developers easy?"

As a distribution consisting only of Free Software, Debian has very good
reasons to not distribute third-party binaries. That's what the LCC
binaries will be: third-party. We should not compromise all that we have
and all that we stand for, for the benefit of some Enterprise Managers
and people advocating non-free software.

If the LSB doesn't work, the non-free hoarders are free to suggest
improvements where applicable; if it is impossible to create a single
binary that will run on all LSB-certified distributions, then that is a
bug in either the LSB process (by failing to provide a sufficiently
well-defined standard), the non-free software (by relying too much on a
single implementation of the standard), or the toolchain (by not being
able to correctly manage the ABI of built libraries). That bug may be
worked around by providing a common binary core; it will, however, not
actually be fixed by doing so.

One of the major benefits of Free Software is the ability to fix bugs
without having to rely on external factors. If, however, rebuilding your
C library will result in the loss of your support contract, you will
essentially have lost that benefit, and many more with it.

I refuse to accept that 'providing a common binary core' would be the
only way to fix the issue. It is probably the easiest way, and for lazy
people it may look as if it is the only one; but we should not dismiss
the idea that it is possible to fix the Free software or the standard so
that the LSB /will/ work.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune




Re: Linux Core Consortium

2004-12-16 Thread Bruce Perens




Wouter Verhelst wrote:

 'ISV' is just another name for 'Software Hoarder'.

Please keep in mind this portion of Debian's Social Contract:

 "We will support our users who develop and run non-free
 software on Debian"

One of the reasons for this is that you can get more people to
appreciate Free Software if you can get it into their hands first. If
they have no choice but to stick with RH and SuSE because they can't
get their stuff supported elsewhere, they will never get our message.

 Thanks

 Bruce




Re: Linux Core Consortium

2004-12-16 Thread Michael K. Edwards
me binutils and modutils both depend on it.

Bruce On flex? No. At least not in unstable.

sorry, I meant to write Build-Depend.

me Or is the LCC proposing to standardize on a set of binaries without
me specifying the toolchain that's used to reproduce them?

Bruce Linking and calling conventions should be standardized and should
Bruce survive for reasonably long. We need to know what we use to build the
Bruce packages, but we are not currently proposing to standardize development
Bruce tools on the target system.

Agreed, there needn't be development tools on the target system.  But
the development system itself needs to be fully and accurately
specified, both among the participating distros and to the end users. 
That's what it takes to satisfy the letter of the GPL, at least as I
read it, and it's certainly the standard to which Debian packages are
held.  It's going beyond the level of effort that has historically
been put into binary distributions, but I don't think it's too much to
ask in this context.

me Not having a policy is also a choice.  For a variety of reasons, a
me written policy about legal and technical issues can be a handicap to
me the sort of calculated risk that many business decisions boil down to.

Bruce The sort of calculated risk that many business decisions boil down to
Bruce is too vague to have meaning. What you may be groping at is that some
Bruce publicized policy can be taken as a promise. The organizations
Bruce participating in LCC have chosen to make such promises.

I wasn't groping, I was trying to leave it to the reader's imagination
rather than rehash old flamewars.  On the legal side, for instance,
some distros have been known to be cavalier about GPL+QPL and
GPL+SSLeay license incompatibilities.  On the technical side,
expecting releases to be self-hosting can constrain release timing
relative to toolchain changes.

I tend to be skeptical of promises that I think are logically
contradictory.  Promising ISVs that they need only test against
"golden builds", while promising end users the Four Freedoms, doesn't
add up.

Note that if Distro X distributes both NonFreeWareApp and glibc, and
only offers technical support on NonFreeWareApp to those who don't
recompile their glibc, then Distro X's right to distribute glibc under
the LGPL is automatically revoked.  (IANAL, etc., etc., but I don't
see much ambiguity in this.)

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-16 Thread Adam McKenna
On Thu, Dec 16, 2004 at 09:25:38PM +0100, Wouter Verhelst wrote:
 On Thu, 2004-12-16 at 14:46 -0500, Ian Murdock wrote:
  We've heard
  directly from the biggest ISVs that nothing short of a common
  binary core will be viable from their point of view.
 
 Well, frankly, I don't care what they think is 'viable'.
 
 'ISV' is just another name for 'Software Hoarder'. I thought Debian was
 about Freedom, not about "how can we make the life of proprietary
 software developers easy?"

Regardless of how you feel about proprietary software, it is someone else's
work and they are free to sell or license it as they see fit.  I don't see
how someone advocating freedom can in the same (virtual) breath presume to
dictate what other people do with their work.

--Adam
-- 
Adam McKenna  [EMAIL PROTECTED]  [EMAIL PROTECTED]




Re: Linux Core Consortium

2004-12-16 Thread Michael K. Edwards
On Thu, 16 Dec 2004 21:25:38 +0100, Wouter Verhelst [EMAIL PROTECTED] wrote:
[snip]
 Well, frankly, I don't care what [ISVs] think is 'viable'.

I do care.  Apparently some ISVs think a common binary core is
viable.  I think they might change their minds if the argument against
golden binaries is made with reference to historical reality (golden
binaries don't stay golden) as well as Free Software principle, and if
their problem is taken seriously and respectfully enough to propose a
better answer.  It is not enlightenment to despise the unenlightened.

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-16 Thread Bill Allombert
On Thu, Dec 16, 2004 at 02:46:53PM -0500, Ian Murdock wrote:
 On Wed, 2004-12-15 at 23:55 +0100, Bill Allombert wrote:
  On Wed, Dec 15, 2004 at 02:36:52PM -0800, Bruce Perens wrote:
   Bill Allombert wrote:
But overriding them means we lose the certification ? 
   
   We can't allow it to be the case that overriding due to an existing and 
   unremedied security issue causes loss of certification. There's no 
   common sense in that.
  
  Then could you elaborate on the scope of the certification ? It is one
  of my main points of contention. I thought the certification required
  shipping the exact same binary as provided by the LCC.
 
 The LCC will be pursuing its certification efforts through the
 existing LSB certification process. The smaller ISVs will be more
 willing to be flexible, so changes to the core that don't result
 in loss of LSB compliance may be acceptable to them. We've heard
 directly from the biggest ISVs that nothing short of a common
 binary core will be viable from their point of view. So,
 as with all things in this business, there will be tradeoffs
 involved--you'll be free to make changes, at the potential
 loss of some, though not necessarily all, ISV certifications.

So Debian will help to build a certification that will ultimately
discriminate against itself, giving proprietary software vendors
one more reason not to support Debian ?

Alternatively, what would prevent the LCC from standardizing on the
woody core (or the sarge core when it comes out) ? I mean, the core
software the LCC is interested in is already present in several distros
today, so presumably it is possible to evaluate today what changes e.g.
the sarge core would require to be acceptable as a core to the LCC ?
That would certainly give us some idea of where the LCC is going.

Cheers,
-- 
Bill. [EMAIL PROTECTED]

Imagine a large red swirl here. 




Re: Linux Core Consortium

2004-12-16 Thread Bill Allombert
On Thu, Dec 16, 2004 at 12:51:54PM -0800, Adam McKenna wrote:
 On Thu, Dec 16, 2004 at 09:25:38PM +0100, Wouter Verhelst wrote:
   On Thu, 2004-12-16 at 14:46 -0500, Ian Murdock wrote:
   We've heard
   directly from the biggest ISVs that nothing short of a common
   binary core will be viable from their point of view.
  
  Well, frankly, I don't care what they think is 'viable'.
  
  'ISV' is just another name for 'Software Hoarder'. I thought Debian was
  about Freedom, not about how can we make the life of proprietary
  software developers easy?
 
 Regardless of how you feel about proprietary software, it is someone else's
 work and they are free to sell or license it as they see fit.  I don't see
 how someone advocating freedom can in the same (virtual) breath presume to
 dictate what other people do with their work.

I think Wouter is only asking for reciprocity here. If they don't care
about his concerns why should he care about theirs ? Or alternatively
not caring is a freedom.

Cheers,
-- 
Bill. [EMAIL PROTECTED]

Imagine a large red swirl here. 




RE: Linux Core Consortium

2004-12-16 Thread Wichmann, Mats D

 Unfortunately, some distributions don't seem to be willing to do more
 than the minimal changes to adhere to the LSB. I did some patches for
 RedHat - and the bug report is still open (I don't know whether the
 patches still work).

Failing some required tests seems to be quite a motivator
to at least take a look. Barring that...

*Officially*, when you certify a distribution to the LSB you're
making a promise about conforming to the spec, and the test suites
serve as an *indicator* of compliance (not proof: if you violate
the spec and nothing in the test suites catches it, you still have
to fix it), but in practice, people implement to pass the tests.

 SUSE seems to try hardest to be LSB compliant and Debian was rather
 quick to implement my requests. I had no access to other distributions.

Unfortunately, while we got spec contributions in this
area, we didn't get matching code contributions: tests
OR sample implementation.

 (I think I'm meant with regards to the first two points.) Regarding the
 latter, SUSE's implementation should completely fulfil the LSB
 requirements (though the init-functions may be a bit SUSE-centric)
 whereas Debian's system is also quite ok. (Though start-stop-daemon
 doesn't find out that my Perl script daemon is running...)

I didn't really mean to single you out, Tobias.  There were
a number of other contributors to the initscript spec section
over time.

 I agree with Mats that the best way to enforce init script support is
 test cases.

Seems that way.




Re: Linux Core Consortium

2004-12-16 Thread Adam McKenna
On Fri, Dec 17, 2004 at 01:13:11AM +0100, Bill Allombert wrote:
 I think Wouter is only asking for reciprocity here. If they don't care
 about his concerns why should he care about theirs ? Or alternatively
 not caring is a freedom.

We care because our priorities are our users and free software.  Just because
someone works for an ISV or develops on/for proprietary software does not 
make them a second class user.

That said, I am not arguing for or against LCC, I just didn't like the tone
of Wouter's e-mail, or the sentiment implied in it.

--Adam

-- 
Adam McKenna  [EMAIL PROTECTED]  [EMAIL PROTECTED]




Re: Linux Core Consortium

2004-12-16 Thread Bill Allombert
On Thu, Dec 16, 2004 at 05:07:44PM -0800, Adam McKenna wrote:
 On Fri, Dec 17, 2004 at 01:13:11AM +0100, Bill Allombert wrote:
  I think Wouter is only asking for reciprocity here. If they don't care
  about his concerns why should he care about theirs ? Or alternatively
  not caring is a freedom.
 
 We care because our priorities are our users and free software.  Just because
 someone works for an ISV or develops on/for proprietary software does not 
 make them a second class user.

What users ? There is no evidence that the 'biggest ISVs' Ian mentioned
are Debian users. If they were, they would most certainly support Debian
in the first place, and this discussion would be moot.

 That said, I am not arguing for or against LCC, I just didn't like the tone
 of Wouter's e-mail, or the sentiment implied in it.

I see...

Cheers,
-- 
Bill. [EMAIL PROTECTED]

Imagine a large red swirl here. 




Re: Linux Core Consortium

2004-12-15 Thread Michael Meskes
On Tue, Dec 14, 2004 at 06:16:03AM -0800, Steve Langasek wrote:
 That wasn't my question.  My question was, why should any ISV care if
 their product has been LSB-*certified*?  ISVs can test against, and
 advertise support for, whatever they want to without getting the LSB's
 imprimatur.  I cannot fathom why any ISV would go through the expense (money
 or time) of getting some sort of LSB certification instead of simply making
 this part of their in-house product QA; and therefore I don't see how the
 absence of LSB-certified applications can be regarded as a failure of the
 LSB process.

I don't think we are talking about ISVs paying to get LSB certification
but about ISVs certifying their own applications for a certain LSB
level, aren't we?

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!




Re: Linux Core Consortium

2004-12-15 Thread Michael Meskes
On Fri, Dec 10, 2004 at 04:04:22PM +0100, Bill Allombert wrote:
 It seems to me that one of the main values of Debian is the quality of
 its core distribution.  One of the reasons for this quality is that it
 is not developed for itself but as a platform for the 10^4+ packages
 and the 10+ architectures in Debian. For example the compiler must be
 ...
 Given that, an attempt to develop the core distribution as a separate
 entity is going to be impractical and will reduce its quality.

Why? In fact you are proving your own argument wrong. If a separate core
distribution is developed as the core of more, let alone all, Linux
distributions including Debian, the number of packages using it as a
platform will certainly increase.

 On the other hand, nothing bars the LCC from building a distribution on
 top of Debian. There is a lot of precedent for doing so (Xandros,
 Libranet, Lycoris, Linspire, Mepis, etc.).

This is one argument why I'd say, we surely should work with LCC.

 As a practical matter, what if the Debian gcc team decides to release
 etch with gcc 3.3 because 3.4 breaks the ABI on some platforms and gcc
 4.x is not stable enough on all the platforms ? Will the LCC follow ?
 If not, how are we going to share binary packages if we do not use the
 same compiler?

Another reason why we should work together as the problem will arise
with the other dists anyway.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!




Re: Linux Core Consortium

2004-12-15 Thread Michael Meskes
On Sat, Dec 11, 2004 at 12:22:13PM +0100, Florian Weimer wrote:
 I don't think Debian should try to adopt an extensive, externally
 specified ABI.  For a few core packages, this may make some sense, but
 not for most libraries.

The LCC is also about those few core packages.

 Instead, proprietary software vendors should ship all libraries in the
 versions they need, or link their software statically.  I wouldn't

From a technical standpoint this may make sense, but not from the
commercial standpoint ISVs have to take. Building your own environment
to work on all distros is certainly much more work than just certifying
for the one distribution you use in your development labs anyway.
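
To make the two approaches concrete, a rough sketch -- the application
and library names are placeholders, the flags are plain gcc/ld:

  # link statically, so no distro library version matters:
  gcc -o myapp myapp.c -static

  # or bundle the exact shared libraries you tested against,
  # and point the binary at them with an rpath:
  gcc -o myapp myapp.c -L/opt/myapp/lib -Wl,-rpath,/opt/myapp/lib -lfoo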

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!




Re: Linux Core Consortium

2004-12-15 Thread Michael Meskes
On Fri, Dec 10, 2004 at 12:44:05AM +0100, Christoph Hellwig wrote:
 In fact I'm using Debian exactly because it doesn't try to appeal to
 ISVs, IHVs, OEMs and other business-driven three-letter acronyms.  As
 soon as you try to please them, quality of implementation goes down.

Why? 

It took me some time to read all those mails, in particular because some
new threads were created in reply to this one creating a giant thread in
my mutt view. :-)

Anyway, I did not find any mention of Ian asking Debian to lose
anything. His question was whether we are interested in participating.
So this certainly does not lower our quality of implementation. Also,
there was talk about ADDITIONAL packages replacing those base packages
that need to be changed. No one is forced to use them, the way I
understand things.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!




Re: Linux Core Consortium

2004-12-15 Thread Steve Langasek
On Tue, Dec 14, 2004 at 11:53:54AM -0500, Ian Murdock wrote:

   What about the LCC's scope isn't clear?

  Er, the fact that no actual scope has been stated?  What does "core" mean?
  What packages (libraries) are included in this core?

 Core means implementation of the LSB, and the packages/libraries that
 will constitute it are being determined now, as a collaborative process.

Well, for instance, the libacl package was brought up as an example in the
context of changing library SONAMEs/package names.  The current LSB has
nothing to say about this library; it covers only glibc, libstdc++, various
core X libraries, zlib, libpam, libgcc, and ncurses.  So there was some doubt
in my mind as to how broad a core ISVs were actually demanding be
specified, as well as doubt regarding the conclusion that the LSB process
was flawed if the simple fact was that ISVs were demanding standardization
of libraries that no one has asked the LSB to address.
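
For concreteness, the compatibility question for a library like libacl
comes down to its SONAME, which is easy to inspect (the path is the
usual one, but may differ):

  $ objdump -p /lib/libacl.so.1 | grep SONAME
    SONAME      libacl.so.1

Change that string and every binary linked against the old name stops
resolving.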

I still think "implementation of the LSB" is a murky concept; the LSB
specifies very little about kernel behavior after all (though some things
are implicit given the description of libc ABIs plus the de facto glibc
implementation of them), so I don't know whether and how many userspace
tools it's also seen as essential to cover.

 I assumed Debian would want to be involved in this process too, rather
 than being presented with a more well-defined scope as a fait accompli..

Particularly considering involvement in the LCC would require essentially
unanimous consent of the maintainers of the covered core packages, I think
it's impossible for Debian to be meaningfully involved without first having
a clear understanding of what would be expected of us -- especially when
late bouts of scope creep could hamstring the entire effort by suddenly
requiring the consent of a new package maintainer who doesn't approve!  I
think it's better if this can all be laid out up front so we can get a clear
yea or nay from the affected maintainers before sending people off to
committee.

  If so, and given that
  these are very likely outcomes, what reason remains for Debian to get
  involved in the LCC if it's not going to result in any ISVs supporting
  Debian as a platform through the LCC effort?

 At the very least, the differences would be smaller than they
 otherwise would be, and that can only be a good thing for
 LCC, Debian, and the Linux world as a whole.

On the contrary, I think that providing a single binary platform
representing an overwhelming majority of the Linux market share that ISVs
can write to, rather than enforcing the rigors of writing to an open
standard such as the LSB, makes it much more likely for ISVs to demand
bug-for-bug compatibility with the LCC binaries; which means that anyone not
using the certified binaries is just that much farther up the proverbial
creek.  Keeping the differences small is only a benefit if ISVs do choose to
test against Debian as well as against the LCC.

 And, who knows, with Debian participation, perhaps the differences would
 end up being small enough and the processes, methods, and
 mechanisms defined such that it's no longer a very likely outcome.

I think that accepting outside binaries for core packages would be regarded
by many in the community as devastating to Debian's security model.  I think
that requiring immutable sources for core packages would be equally
devastating to Debian's development model.

  (If you see ways that Progeny
  would benefit from Debian's involvement in the LCC even if the official
  Debian releases are not LCC-certified in the end, I think that's relevant
  here; but I would like to know how you think it would benefit Progeny and
  other companies building on Debian, and why you think that Debian itself
  warrants representation at the table.)

 As I said at the very outset, one possible way forward is to simply
 produce a Debian-packaged version of the LCC core independently,
 and to make sure that core is 100% compatible with Debian (i.e., you
 can take any Debian package and install it on the LCC Debian core
 and get the same results as if you'd installed it on Debian itself).

I would prefer to take this a step further in the opposite direction.  Given
that the LSB specifies a determined linker path as part of its ABI other than
the one used by the toolchain by default, and given that the kernel is
largely an independent, drop-in replacement wrt the rest of the packaging
system, why *couldn't* we provide the LCC binaries in the same fashion as the
current lsb package -- as a compatibility layer on top of the existing
Debian system?  This sidesteps the problem of losing certification over
small divergences that significantly impact our other ports and is a much
more tractable goal, while still giving us a target to aim for with our base
system.
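
A sketch of what that layer amounts to today (ia32 paths from LSB 2.0;
other architectures differ, and this is roughly what the current lsb
package already sets up):

  # the LSB program interpreter, provided on top of the native one:
  ln -s /lib/ld-linux.so.2 /lib/ld-lsb.so.2

  # an LSB application requests that interpreter explicitly:
  $ readelf -l lsb-app | grep interpreter
      [Requesting program interpreter: /lib/ld-lsb.so.2]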

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-15 Thread Bill Allombert
On Wed, Dec 15, 2004 at 12:51:21PM +0100, Michael Meskes wrote:
 On Fri, Dec 10, 2004 at 04:04:22PM +0100, Bill Allombert wrote:
  It seems to me that one of the main values of Debian is the quality of
  its core distribution.  One of the reasons for this quality is that it
  is not developed for itself but as a platform for the 10^4+ packages
  and the 10+ architectures in Debian. For example the compiler must be
  ...
  Given that, an attempt to develop the core distribution as a separate
  entity is going to be impractical and will reduce its quality.
 
 Why? In fact you are proving your own argument wrong. 

[Please avoid resorting to such statements. They are worthless and can
only irritate the person you reply to. Thanks.]

 If a separate core distribution is developed as the core of more, let
 alone all, Linux distributions including Debian, the number of packages
 using it as a platform will certainly increase.

My fear is that a core distribution developed as a core will only be
tested against itself and not against all the other packages in Debian.

  On the other hand, nothing bars the LCC to build a distribution on top of
  Debian. There is a lot of precedent for doing so (Xandros,libranet,
  lycoris, linspire, mepis, etc.).
 
 This is one argument why I'd say, we surely should work with LCC.

IIUC, the LCC is not about creating a source distribution and using it
as a base for several distros, but about blessing some binary build in a
way that makes it inconvenient to use another build, because you then
lose the certification.  This means Debian will have the choice between
fixing bugs and keeping the certification, but not both.

  As a practical matter, what if the Debian gcc team decides to release
  etch with gcc 3.3 because 3.4 breaks the ABI on some platforms and gcc
  4.x is not stable enough on all the platforms ? Will the LCC follow ?
  If not, how are we going to share binary packages if we do not use the
  same compiler?
 
 Another reason why we should work together as the problem will arise
 with the other dists anyway.

Could you elaborate ?

If that matters, I meant 'share binary packages' in the sense of sharing
the LCC binary packages, not sharing third-party packages.

Cheers,
-- 
Bill. [EMAIL PROTECTED]

Imagine a large red swirl here. 




Re: Linux Core Consortium

2004-12-15 Thread Joey Hess
Ian Murdock wrote:
 Because the LSB bases its certification process on a standard ABI/API
 specification alone, and this approach simply hasn't worked.

Surely you're simplifying here? (See LSB-Core 2.0.1, chapters 3, 4, 5.)

-- 
see shy jo


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-15 Thread Florian Weimer
* Michael Meskes:

 Instead, proprietary software vendors should ship all libraries in the
 versions they need, or link their software statically.  I wouldn't

 From a technical standpoint this may make sense, but not from the
 commercial standpoint ISVs have to take. Building your own environment
 to work on all distros is certainly much more work than just certifying
 for the one distribution you use in your development labs anyway.

LCC could concentrate on providing such a distribution-independent
execution environment, and perform the necessary integration tests for
commercially relevant distributions.

Just an idea.  I think this is far more promising than lobbying
distributions to delegate responsibility for core packages.
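
From the ISV side, targeting such an environment could look something
like this (a sketch using the existing LSB build tools; the application
name is a placeholder):

  # lsbcc wraps gcc and restricts linkage to LSB-specified libraries
  $ lsbcc -o myapp myapp.c
  $ readelf -l myapp | grep interpreter
      [Requesting program interpreter: /lib/ld-lsb.so.2]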




Re: Linux Core Consortium

2004-12-15 Thread Chasecreek Systemhouse
On Wed, 15 Dec 2004 18:59:05 +0100, Florian Weimer [EMAIL PROTECTED] wrote:

 LCC could concentrate on providing such a distribution-independent
 execution environment, and perform the necessary integration tests for
 commercially relevant distributions.
 
 Just an idea.  I think this is far more promising than lobbying
 distributions to delegate responsibility for core packages.


I very much agree.

I'm totally confused by this whole thread.  What advantage - in both
compatibility and competitiveness - would the LCC bring to any
distribution?

First I read that LCC is a standards body (implying a *like-to-have*
feature set); next I read it's a certification body (implying a
*must-have* feature set) -- is RH/FC going to change all their
certifications, or will Debian change to conform to RH/FC testing?
Which distribution sets the *standard* -- maybe I missed that?

Please understand, I'm not trying to start a flame war, but simply
trying to see what real, tangible benefit there is to systems
admins/developers rolling out heterogeneous architectures using
RH/FC/SuSE/Debian...

Maybe the thread has gone off topic and I got lost somewhere.
-- 
WC -Sx- Jones
http://insecurity.org/




Re: Linux Core Consortium

2004-12-15 Thread Manoj Srivastava
On Wed, 15 Dec 2004 12:51:21 +0100, Michael Meskes [EMAIL PROTECTED] said: 

 On Fri, Dec 10, 2004 at 04:04:22PM +0100, Bill Allombert wrote:
 It seems to me that one of the main values of Debian is the
 quality of its core distribution.  One of the reasons for this
 quality is that it is not developed for itself but as a platform for
 the 10^4+ packages and the 10+ architectures in Debian. For example
 the compiler must be ...  Given that, an attempt to develop the core
 distribution as a separate entity is going to be impractical and
 will reduce its quality.

 Why? In fact you are proving your own argument wrong. If a separate
 core distribution is developed as the core of more, let alone all,
 Linux distributions including Debian, the number of packages using
 it as a platform will certainly increase.

Only if this separate entity also is committed to all 11
 architectures. And follows a consistent technical policy. And is
 supported by a QA team, a security team, and potentially 1000
 experienced developers to pick up slack.

I am not sure I am convinced that the benefits are worth
 outsourcing the core of our product -- and I think that most
 businesses will tell you that is a bad idea.

 As a practical matter, what if the Debian gcc team decides to
 release etch with gcc 3.3 because 3.4 breaks the ABI on some platforms
 and gcc 4.x is not stable enough on all the platforms ? Will the LCC
 follow ? If not, how are we going to share binary packages if we do
 not use the same compiler?

 Another reason why we should work together as the problem will arise
 with the other dists anyway.

I think that very often Debian's technical solutions have been
 better than the other distributions', since there is a tendency to do
 the right thing in Debian, as opposed to meeting marketing deadlines.

manoj
-- 
The time is right to make new friends.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Linux Core Consortium

2004-12-15 Thread Bruce Perens
Manoj Srivastava wrote:

 I am not sure I am convinced that the benefits are worth
 outsourcing the core of our product -- and I think that most
 businesses will tell you that is a bad idea.

Well, please don't tell this to all of the people who we are attempting
to get to use Linux as the core of their products.

Also, please make sure to tell the upstream maintainers that we aren't 
going to use their code any longer, because we have decided that it's a 
bad idea to outsource the core of our product.

   Bruce


smime.p7s
Description: S/MIME Cryptographic Signature


Re: Linux Core Consortium

2004-12-15 Thread Manoj Srivastava
On Thu, 09 Dec 2004 12:40:29 -0500, Ian Murdock [EMAIL PROTECTED]
said:  

 If you're having trouble picturing how Debian might engage the LCC,
 here's my ideal scenario: the source packages at the core of Debian
 are maintained in collaboration with the other LCC members, and the
 resulting binaries built from those source packages are made
 available in both RPM and .deb format.

Hmm. Does this not impede Debian in new directions we may like
 to take the distribution, like, say, making Debian SELinux-compatible,
 if we so choose? Fedora Core 3 has done it, and it requires changes
 to the kernel, which Debian has accepted, and changes to core
 packages, which is an option for etch. What if the LCC members
 decide not to do that?

What happens if the situation is reversed? (LCC decides to go
 with RSBAC while we do not).  Would outsourcing the core packages to
 third parties not make us less nimble (if I can use the word with a
 straight face)?

manoj
-- 
Truth has no special time of its own.  Its hour is now --
always. Albert Schweitzer
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Linux Core Consortium

2004-12-15 Thread Michael K. Edwards
 Bruce

 Well, please don't tell this to all of the people who we are attempting
 to get to use Linux as the core of their products.

core (software architecture) != core (customer value).

 Also, please make sure to tell the upstream maintainers that we aren't
 going to use their code any longer, because we have decided that it's a
 bad idea to outsource the core of our product.

Debian isn't a product, it's a project, and the core of the project
isn't code, it's principles and processes.  Outsourcing the core of
Debian would be delegating judgements about software freeness and
integrity.

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-15 Thread Manoj Srivastava
On Wed, 15 Dec 2004 10:44:50 -0800, Bruce Perens [EMAIL PROTECTED] said: 

 Manoj Srivastava wrote:
 I am not sure I am convinced that the benefits are worth
 outsourcing the core of our product -- and I think that most
  businesses will tell you that is a bad idea.
 
 Well, please don't tell this to all of the people who we are
 attempting to get to use Linux as the core of their products.

 Also, please make sure to tell the upstream maintainers that we
 aren't going to use their code any longer, because we have decided
 that it's a bad idea to outsource the core of our product.

Hmm. I am not sure how to take this: either you are spoiling
 for a fight, or you do not take your duties as a developer very
 seriously. For the nonce, though, I'll treat this email seriously.

When I package software from upstream, I have skimmed the
 sources, ensured that the resulting .deb meets Debian policy, which
 may require major changes to upstream's layout, and I listen to my
 users, adding features, removing lacunae, and generally being more
 than a mechanical packager of software.

I am not just swilling pap sight unseen into Debian's
 repository, and my work is what makes it different from outsourcing
 the package upstream.

If you say that we can do the same to any LCC package, I have
 no objections; we can surely treat LCC as an upstream contributor,
 and massage their changes into debian packages like we have always
 done.

manoj
-- 
Good people are good because they've come to wisdom through failure.
-William Saroyan
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Linux Core Consortium

2004-12-15 Thread Bruce Perens
Manoj Srivastava wrote:

 Hmm. Does this not impede Debian in new directions we may like
 to take the distribution, like, say, making Debian SELinux-compatible,
 if we so choose?

I think it means that Debian gets to be the leader regarding the things
it cares about. I doubt that the other distributions participating would
object to having NSA Secure Linux compatibility dropped in their lap.

 What happens if the situation is reversed? (LCC decides to go
 with RSBAC while we do not.)

It would be an interesting discussion. I don't see any reason that
Debian needs to be steam-rollered, though.

At the bottom of these two competing security implementations are two
currently-incompatible APIs through which they connect to the kernel.
It's not clear to me that REG and GFAC patches must be incompatible with
LSM. It also seems that REG and GFAC abstract more facilities while LSM
provides raw (and changing) access to those facilities. It would be nice
if they would come to a merge.

 Would outsourcing the core packages to
 third parties not make us less nimble (if I can use the word with a
 straight face)?

Nobody is saying that you can't override the external stuff when
necessary. Security would be a good reason to do so, if LCC is being
tardy compared to Debian.

   Thanks
   Bruce


smime.p7s
Description: S/MIME Cryptographic Signature


Re: Linux Core Consortium

2004-12-15 Thread Manoj Srivastava
On Wed, 15 Dec 2004 11:21:02 -0800, Michael K Edwards [EMAIL PROTECTED] said: 

 Bruce Well, please don't tell this to all of the people who we are
 attempting to get to use Linux as the core of their products.

 core (software architecture) != core (customer value).

 Also, please make sure to tell the upstream maintainers that we
 aren't going to use their code any longer, because we have decided
 that it's a bad idea to outsource the core of our product.

 Debian isn't a product, it's a project,

So it is good that I did not say "Debian", or "the project", eh?
 Did you really read what I said?

Debian, the project, produces an OS, which it releases, from
 time to time (often too long in between).  This OS, in its current
 form, has a core set of packages, built around the Linux kernel,
 though in the future the Hurd or the BSD-style kernels may be
 substituted.  We are talking about outsourcing these core packages,
 which form the core of the product, the OS, that the Debian project
 produces.

I would have not expected to have to explain this on the
 -devel mailing list.

 and the core of the project isn't code, it's principles and
 processes.  Outsourcing the core of Debian would be delegating
 judgements about software freeness and integrity.

You do a nice job tilting at paper tigers. Bravo!

manoj

-- 
Never call a man a fool; borrow from him.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Linux Core Consortium

2004-12-15 Thread Manoj Srivastava
On Wed, 15 Dec 2004 11:29:47 -0800, Bruce Perens [EMAIL PROTECTED] said: 

 Nobody is saying that you can't override the external stuff when
 necessary. Security would be a good reason to do so, if LCC is being
 tardy compared to Debian.

Well, that does address my concern, thanks.

manoj
-- 
Okay ... I'm going home to write the I HATE RUBIK's CUBE HANDBOOK FOR
DEAD CAT LOVERS ...
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Linux Core Consortium

2004-12-15 Thread Bruce Perens




Manoj Srivastava wrote:

 Hmm. I am not sure how to take this: either you are spoiling
 for a fight, or you do not take your duties as a developer very
 seriously.

I was taking the implications of your statements farther than you
intended, in order to get you to give them additional thought. This is
a common rhetorical device. You should consider that I might be using
it before you get to "he's crazy or irresponsible" :-)

 When I package software from upstream, I have skimmed the
 sources, ensured that the resulting .deb meets Debian policy, which
 may require major changes to upstream's layout, and I listen to my
 users, adding features, removing lacunae, and generally being more
 than a mechanical packager of software.

And hopefully this is an act of collaboration with the upstream
developer.

I looked for core ABI packages that you maintain. The closest I found
was libselinux1. You had a half-megabyte (uncompressed) patch for that,
which it turns out is because your arch repositories and other
arch-related cybercrud are in there. Probably this is common in debian
diffs. When I filtered that out, I got this:
Only in libselinux-1.18.Manoj: debian    (lots of files under here)
diff -r libselinux-1.18/man/man3/security_setenforce.3 libselinux-1.18.Manoj/man/man3/security_setenforce.3
1c1
< .so security_getenforce.3
---
> .so man3/security_getenforce.3.gz
diff -r libselinux-1.18/src/dso.h libselinux-1.18.Manoj/src/dso.h
10c10
< # ifdef __alpha__
---
> # if defined(__alpha__) || defined(__mips__)

Congratulations, you got all Debian-specific stuff under debian/ except
for one little issue about compressing man pages that perhaps should be
hacked into groff. You have a lot of makefile-related stuff under
debian. I would have to assume that is almost all implementing Debian
policy and fitting into the autobuild mechanism for all Debian
architectures.

It seems to me to be the sort of thing we'd be able to come to
agreement about across LCC. IMO Debian is ahead of the others as far as
policy is concerned, and acceptance of much of the Debian policy manual
into LCC would be the first order of business for me.

 I am not just swilling pap sight unseen into Debian's
 repository, and my work is what makes it different from outsourcing
 the package upstream.

Here you are making an assumption that I feel is not warranted. You
assume that the other distributions concerned with this matter will
wish to run rough-shod over Debian's policies and your own quality
process, without giving you a say. We have no reason to believe that
yet.

 Thanks

 Bruce




smime.p7s
Description: S/MIME Cryptographic Signature


Re: Linux Core Consortium

2004-12-15 Thread Tobias Burnus
Hello,
Wichmann, Mats D wrote:
My experience as a developer who's tried to write
an app to use the LSB (only the init script interface)
is that it's poorly enough specified and/or implemented
divergently within the spec to the point that I had to
test my implementation on every LSB distribution I
wanted to support, and might as well have written
distro-specific code in the first place.
   

Unfortunately, some distributions don't seem to be willing to do more 
than the minimal changes to adhere to the LSB. I did some patches for 
RedHat - and the bug report is still open (I don't know whether the 
patches still work).
SUSE seems to try hardest to be LSB compliant, and Debian was rather 
quick to implement my requests. I had no access to other distributions.

Unfortunately, while we got spec contribution in this
area, we didn't get matching code contributions: tests
OR sample implementation.
(I think I'm meant with regard to the first two points.) Regarding the 
latter, SUSE's implementation should completely fulfil the LSB 
requirements (though the init-functions may be a bit SUSE-centric), 
whereas Debian's system is also quite OK. (Though start-stop-daemon 
doesn't notice that my Perl script daemon is running...)
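
(For reference, a minimal sketch of the interface under discussion --
the LSB init-functions; "mydaemon" and its path are invented here:)

#!/bin/sh
# Illustrative LSB-style init script fragment, not from a real package.
. /lib/lsb/init-functions
case "$1" in
  start)
    start_daemon /usr/sbin/mydaemon && log_success_msg "mydaemon started"
    ;;
  stop)
    killproc /usr/sbin/mydaemon && log_success_msg "mydaemon stopped"
    ;;
esac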

I agree with Mats that the best way to enforce init script support is 
test cases. Unfortunately, I ended up not doing much more than writing 
http://www.physik.fu-berlin.de/~tburnus/lsb/LSB-SYSINIT-TS_SPEC_V1.0. 
Now - one year later - I really should continue the work I started!

Regards,
Tobias



Re: Linux Core Consortium

2004-12-15 Thread Bill Allombert
On Wed, Dec 15, 2004 at 01:36:47PM -0600, Manoj Srivastava wrote:
 On Wed, 15 Dec 2004 11:21:02 -0800, Michael K Edwards [EMAIL PROTECTED] 
 said: 
 
  Bruce Well, please don't tell this to all of the people who we are
  attempting to get to use Linux as the core of their products.
 
  core (software architecture) != core (customer value).
 
  Also, please make sure to tell the upstream maintainers that we
  aren't going to use their code any longer, because we have decided
  that it's a bad idea to outsource the core of our product.
 
  Debian isn't a product, it's a project,
 
   So it is good that I did not say Debian, or the project, eh?
  Did you really read what I said?

I hate to interfere, but it seems you assume Michael was replying to you,
when in fact he was replying to Bruce's answer to your post.

Cheers,
-- 
Bill. [EMAIL PROTECTED]

Imagine a large red swirl here. 




Re: Linux Core Consortium

2004-12-15 Thread Manoj Srivastava
On Wed, 15 Dec 2004 12:13:58 -0800, Bruce Perens [EMAIL PROTECTED] said: 

 Manoj Srivastava wrote:

 Hmm. I am not sure how to take this: either you are spoiling for a
 fight, or you do not take your duties as a developer very
 seriously.
 
 
 I was taking the implications of your statements farther than you
 intended, in order to get you to give them additional thought. This
 is a common rhetorical device. You should consider that I might be
 using it before you get to he's crazy or irresponsible :-)

So it was inflammatory, then. Comes under spoiling for a fight.

 I looked for core ABI packages that you maintain. The closest I
 found was libselinux1. You had a half-megabyte (uncompressed) patch
 for that, which it turns out is because your arch repositories and
 other arch-related cybercrud are in there. Probably this is common
 in debian diffs. When I filtered that out, I got this:


Rather than look at new, hectically maintained packages that
 have yet to see real use, try my standard packages -- make, or
 flex.

Make has had four separate lines of development integrated
 into it, apart from upstream -- one is trivially updating the
 autotools, another is additional i18n documentation, yet another is a
 varbuf fix, apart from other, minor Debian tweaks.

Flex is far more interesting. It has been broken up into
 flex-old and flex, and each branch has several independent fixes in
 it -- here is a listing of the branches involved:

  flex--autotools-refresh--2.5
  flex--debian--2.5
  flex--devo--2.5
  flex--doc-fix--2.5
  flex--m4-quotes-fix--2.5
  flex--non-asni-fn-fix--2.5
  flex--str-fix--2.5
  flex--stream-ptr-fix--2.5
  flex--unistd-fix--2.5

  flex--old-c-fixups--2.5
  flex--old-debian--2.5
  flex--old-devo--2.5
  flex--old-doc-fixes--2.5
  flex--old-i18l-fix--2.5

  flex--upstream--2.5


Now, separate out the devo branch -- which is the integration
 branch -- and this is still a far cry from just packaging any old thing
 upstream slings at us.

 # if defined(__alpha__) || defined(__mips__)

This was a minor FTBFS issue for arches upstream does not
support.

 It seems to me to be the sort of thing we'd be able to come to
 agreement about across LCC. IMO Debian is ahead of the others as far
 as policy is concerned, and acceptance of much of the Debian policy
 manual into LCC would be the first order of business for me.

So, which version of flex do you think you want to ship? The old
 one, which is POSIXly correct but does not work with modern g++, or
 the new one, which is threaded, reentrant, works with modern compilers,
 and disdains POSIX?

 I am not just swilling pap sight unseen into Debian's repository,
 and my work is what makes it different from outsourcing the package
 upstream.
 
 
 Here you are making an assumption that I feel is not warranted. You
 assume that the other distributions concerned with this matter will
 wish to run rough-shod over Debian's policies and your own quality
 process, without giving you a say. We have no reason to believe that
 yet.

Are other distributions willing to abide by Debian policy? If
 so, I may come around to favour us participating  even now.

manoj
-- 
What is the sound of one hand clapping?
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Linux Core Consortium

2004-12-15 Thread Bill Allombert
On Wed, Dec 15, 2004 at 11:29:47AM -0800, Bruce Perens wrote:
  Would outsourcing the core packages to
 third parties not make us less nimble (if I can use the word with a
 straight face)?
 
 Nobody is saying that you can't override the external stuff when 
 necessary. Security would be a good reason to do so, if LCC is being 
 tardy compared to Debian.

But overriding them means we lose the certification ? 

Cheers,
-- 
Bill. [EMAIL PROTECTED]

Imagine a large red swirl here. 




Re: Linux Core Consortium

2004-12-15 Thread Bruce Perens
Bill Allombert wrote:
But overriding them means we lose the certification ? 
 

We can't allow it to be the case that overriding due to an existing and 
unremedied security issue causes loss of certification. There's no 
common sense in that.

   Thanks
   Bruce


smime.p7s
Description: S/MIME Cryptographic Signature


Re: Linux Core Consortium

2004-12-15 Thread Bruce Perens
Manoj Srivastava wrote:
	So it was inflammatory, then. Comes under spoiling for a fight.
 

Only if you confuse Socrates and Sophism.
So, which version of flex do you think you want to ship?
Fortunately, flex isn't in the problem space. If you stick to questions 
like which version of libc, etc., it'll make more sense.

Are other distributions willing to abide by Debian policy? If
so, I may come around to favour us participating  even now.
 

Do you know of any other distribution that has taken the trouble to 
write down as much policy as Debian has? It's not clear that the others 
have anything to put against it.

   Thanks
   Bruce


smime.p7s
Description: S/MIME Cryptographic Signature


Re: Linux Core Consortium

2004-12-15 Thread Bill Allombert
On Wed, Dec 15, 2004 at 02:36:52PM -0800, Bruce Perens wrote:
 Bill Allombert wrote:
 
 But overriding them means we lose the certification ? 
  
 
 We can't allow it to be the case that overriding due to an existing and 
 unremedied security issue causes loss of certification. There's no 
 common sense in that.

Then could you elaborate on the scope of the certification? It is one
of the main points of contention for me. I thought the certification
required shipping the exact same binaries as provided by the LCC.

Adding support for SE Linux (for example -- I am not pushing SE Linux)
can imply rebuilding some software with configure --with-selinux and
linking it with libselinux.so. The results certainly won't be identical
to the LCC ones.
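
(A sketch of the kind of rebuild meant here, assuming an autoconf-style
package; the daemon name and load address are invented:)

$ ./configure --with-selinux && make
$ ldd src/mydaemon | grep selinux
        libselinux.so.1 => /lib/libselinux.so.1 (0x40020000)
# The rebuilt binary now links against libselinux and is no longer
# bit-identical to the LCC-supplied build.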

Cheers,
-- 
Bill. [EMAIL PROTECTED]

Imagine a large red swirl here. 




Re: Linux Core Consortium

2004-12-15 Thread Bruce Perens
Bill Allombert wrote:
Then could you elaborate the scope of the certification ?
It's still a matter for negotiation. If the certification won't admit of 
common-sense rules, it won't work for anyone - not just Debian.

   Thanks
   Bruce


smime.p7s
Description: S/MIME Cryptographic Signature


Re: Linux Core Consortium

2004-12-15 Thread Marco d'Itri
On Dec 15, Bruce Perens [EMAIL PROTECTED] wrote:

 Do you know of any other distribution that has taken the trouble to 
 write down as much policy as Debian has? It's not clear that the others 
 have anything to put against it.
Bug-for-bug compatibility required by their customers looks like a good
candidate.

-- 
ciao, |
Marco | [9830 anG8ZM0oS4rOo]


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-15 Thread Michael K. Edwards
Whoops, I guess that's what I get for trying to be concise for once. 
I'll try again.

Bruce Well, please don't tell this [i. e., outsourcing your core is
a bad idea]
Bruce to all of the people who we are attempting to get to use Linux
Bruce as the core of their products.

me core (software architecture) != core (customer value).

In other words, Bruce seemed to be conflating the usage of "core" in a
software architecture sense (kernel, toolchain, libraries) with "core"
in a business sense (value proposition to the customer).  It's smart
for many businesses to adopt the GNU/Linux core precisely because
writing operating systems isn't their own core competence and wouldn't
make their products better.

Bruce Also, please make sure to tell the upstream maintainers that we
Bruce aren't going to use their code any longer, because we have decided
Bruce that it's a bad idea to outsource the core of our product.

me Debian isn't a product, it's a project,

[snip Manoj's response, which seems to have been aimed at someone else]

me and the core of the project isn't code, it's principles and
me processes.  Outsourcing the core of Debian would be delegating
me judgements about software freeness and integrity.

What I was trying to say is that Linux (or any other chunk of upstream
code) doesn't represent the core of Debian, so Bruce's argument that
we've already outsourced our core doesn't hold water.  Our core is the
DFSG and the Social Contract, plus the processes we have in place to
deliver on the promises they contain.

I would argue that any strategy that consecrates particular binaries
-- even those built by Debian maintainers -- flies in the face of
those principles and processes.  Even a commitment to sync up at the
source code level constitutes a delegation of judgments about how to
maintain software integrity, and it risks a delegation of judgments
about freeness (think firmware BLOBs, or the XFree86 license changes).
 That's the part of what the LCC proposes which I think would
constitute outsourcing Debian's core.

Is there a paper tiger lurking in there somewhere?

Cheers, 
- Michael




Re: Linux Core Consortium

2004-12-15 Thread Michael K. Edwards
Bruce Fortunately, flex isn't in the problem space. If you stick to what
Bruce version of libc, etc., it'll make more sense.

Flex isn't in the problem space if we're talking core ABIs.  But it
certainly is if we're talking core implementations, as binutils and
modutils both depend on it.  Or is the LCC proposing to standardize on
a set of binaries without specifying the toolchain that's used to
reproduce them?

Bruce Do you know of any other distribution that has taken the trouble to
Bruce write down as much policy as Debian has? It's not clear that the others
Bruce have anything to put against it.

Not having a policy is also a choice.  For a variety of reasons, a
written policy about legal and technical issues can be a handicap to
the sort of calculated risk that many business decisions boil down to.
 Debian has flamewars about license compatibility and degree of
dependency on non-free materials precisely because it has a policy and
tries to abide by it.

But again, you may not always get what you pay for, but you rarely
fail to pay for what you get.  If all distros were as sensitive as
Debian is to questions of reproducibility from unencumbered source
code and build environments, then perhaps we wouldn't be debating the
need for golden binaries.

Cheers,
- Michael




Re: Linux Core Consortium

2004-12-15 Thread Bruce Perens
Michael K. Edwards wrote:
binutils and modutils both depend on it.
On flex? No. At least not in unstable.
However, Debian seems to have addressed the issue by providing both 
versions of flex. I don't see what would prevent us from going on with 
that practice.

Or is the LCC proposing to standardize on a set of binaries without specifying the toolchain that's used to reproduce them?
 

Linking and calling conventions should be standardized and should 
survive for a reasonably long time. We need to know what we use to 
build the packages, but we are not currently proposing to standardize 
development tools on the target system.

Not having a policy is also a choice.  For a variety of reasons, a
written policy about legal and technical issues can be a handicap to
the sort of calculated risk that many business decisions boil down to.
 

"The sort of calculated risk that many business decisions boil down to" 
is too vague to have meaning. What you may be groping at is that some 
publicized policy can be taken as a promise. The organizations 
participating in LCC have chosen to make such promises.

   Thanks
   Bruce



Re: Linux Core Consortium

2004-12-15 Thread Steve Langasek
On Wed, Dec 15, 2004 at 05:00:11PM -0800, Bruce Perens wrote:
 Michael K. Edwards wrote:
 binutils and modutils both depend on it.

 On flex? No. At least not in unstable.

Yes, it does.

$ apt-cache showsrc binutils
Package: binutils
Binary: binutils-hppa64, binutils, binutils-doc, binutils-dev, binutils-multiarch
Version: 2.15-5
Priority: standard
Section: devel
Maintainer: James Troup [EMAIL PROTECTED]
Build-Depends: autoconf (>= 2.13), bison, flex, gettext, texinfo, binutils (>= 2.9.5.0.12), gcc (>= 2.95.2-1), dejagnu (>= 1.4.2-1.1), expect (>= 5.32.2-1), dpatch, file
Architecture: any
Standards-Version: 3.6.1.0
Format: 1.0
Directory: pool/main/b/binutils
Files:
 b20cf60b07384592ed5fa71314c6d2d9 1401 binutils_2.15-5.dsc
 ea140e23ae50a61a79902aa67da5214e 15134701 binutils_2.15.orig.tar.gz
 055e74792e7118ddf33ae6b04d640818 38173 binutils_2.15-5.diff.gz
$

 However, Debian seems to have addressed the issue by providing both 
 versions of flex. I don't see what would prevent us from going on with 
 that practice.

 Or is the LCC proposing to standardize on a set of binaries without 
 specifying the toolchain that's used to reproduce them?

 Linking and calling conventions should be standardized and should 
 survive for reasonably long. We need to know what we use to build the 
 packages, but we are not currently proposing to standardize development 
 tools on the target system.

Not standardizing the toolchain used to build a set of standardized binaries
would seem to leave the LCC open to a repeat of the gcc-2.96 fiasco,
however...

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-15 Thread Bruce Perens




Steve Langasek wrote:

  
On flex? No. At least not in unstable.

  
  
Yes, it does.
  

Oh, you mean build-depends.

  
Not standardizing the toolchain used to build a set of standardized binaries
would seem to leave the LCC open to a repeat of the gcc-2.96 fiasco,
however...
  

The GCC fiasco was due to a change in calling conventions. I said we'd
standardize that.

 Thanks

 Bruce





smime.p7s
Description: S/MIME Cryptographic Signature


Re: Linux Core Consortium

2004-12-14 Thread Michael K. Edwards
  me
 Ian Murdock (quotes out of order)

  If the LSB only attempts to certify things that haven't forked, then
  it's a no-op.  Well, that's not quite fair; I have found it useful to
  bootstrap a porting effort using lsb-rpm.  But for it to be a software
  operating environment and not just a software porting environment, it
  needs to have a non-trivial scope, which means an investment by both
  ISVs and distros.
 
 That's precisely what we're trying to do. :-)

The context of my remark was your claim that "by definition, the LSB
codifies existing standards, i.e., things everyone already agree[s]
with."  If that's
true, then the LSB doesn't represent a non-trivial investment on the
distros' part, and no one should be surprised that the ISVs don't care
about it.  Agreeing on a set of LCC binaries would be non-trivial for
the distros but its entire justification would be that it's trivial
for the ISVs.  That would be fine (on the assumption that ISV success
is what matters) except that I don't think it will work.

I respect your efforts to make commercial software on Linux more
viable.  But be careful what you wish for -- you might get it.  Make
it impossible to remove implementation warts because the binaries are
more authoritative than the specification, and pretty soon you have a
thriving ISV market -- for antivirus software, system repair
utilities, interface hacks, and virtual machines to graft
unreproducible system images onto new hardware.

 Wishing the ISVs operated a different way doesn't really get us any
 closer to a solution..

You seem to have missed my point.  I don't wish for ISVs to operate
a different way; I cope daily with the way that they _do_ operate, and
am distressed by proposals that I think will make it worse.  In my
opinion, catering to poor software engineering practice by applying
Holy Penguin Pee to a particular set of core binaries is unwise.  I
would have expected you and Bruce to agree -- in fact, to hold rather
more radical opinions than I do -- so I must be missing something. 
Here's the case I would expect a Debian founder to be making.

In short, I don't think that ISVs can afford for their releases to
become hothouse flowers that only grow in the soil in which they
germinated.  It's understandable for ISVs to pressure distros to pool
their QA efforts and to propose that this be done at a tactical level
by sharing binaries.  But I think that's based on a naive analysis of
the quality problem.  Inconsistent implementation behavior from one
build to another is generally accompanied by similar inconsistencies
from one use case to another, which don't magically get better by
doing fewer builds.

The requirement that software run on multiple distros today serves as
a proxy for the much more important requirement that it run on the
same distro today and tomorrow.  It's a poor substitute for testing in
a controlled environment, exposing only functionality which is common
denominator among distros, but that takes skill, understanding, and
labor beyond what ISVs are willing to invest.  In practice, there are
still going to be things that break when bits change out from under
them.  But it makes a great deal of difference whether an ISV is
committed to fixing accidental dependencies on binary internals.

Suppose I want to use an ISV's product on Debian myself, or to support
its use on systems that I control.  Usually the ISV's approach to
Linux is to list a set of supported RedHat and SuSE distributions
(often from years ago) on which they have more or less tested their
software.  That gives me some idea of what its de facto environmental
requirements are.  Then I reverse engineer the actual requirements
(shared library versions, which shell /bin/sh is assumed to be, which
JVM installed where, etc.) and select/adapt Debian packages
accordingly.  This is a pain but at least I get to use competently
packaged open source code for all of the supporting components, and I
can fix things incrementally without expecting much help from the ISV.

If I'm going to go to this trouble, it's actually to my advantage that
Debian packages are completely independently built -- separate
packaging system, separate configure parameters, separately chosen
patches.  I find a lot of things up front that would otherwise hit me
in the middle of an urgent upgrade -- calls to APIs that are supposed
to be private, incorrectly linked shared objects (I do an ldd -r -v on
everything), code built with dubious toolchains, weird uses of
LD_LIBRARY_PATH, FHS violations.  Sometimes ISVs even appreciate a
heads-up about these problems and fix them in the next build.
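
(The sort of thing "ldd -r -v on everything" turns up -- a sketch; the
binary path and symbol are invented:)

$ ldd -r -v /opt/isvapp/bin/server 2>&1 | grep -i undefined
undefined symbol: _private_glibc_symbol        (/opt/isvapp/bin/server)
$ objdump -p /opt/isvapp/bin/server | grep -E 'NEEDED|RPATH'
# flags incorrectly linked shared objects and hard-coded rpaths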

Given this strategy for ISV handling, obviously I prefer the ABI /
test kit approach to distro convergence.  For one thing, if commercial
distros cooperated that way, it would make the reverse engineering
easier.  More importantly, any ISV which publicly buys into ABI-based
testing will immediately gain credibility with me in a way that I can
explain to 

Re: Linux Core Consortium

2004-12-14 Thread Ian Murdock
On Fri, 2004-12-10 at 00:44 +0100, Christoph Hellwig wrote:
 Besides that the LCC sounds like an extraordinarily bad idea, passing
 around binaries only makes sense if you can't easily reproduce them from
 the source (which I defined very broadly to include all build scripts
 and dependencies), and that case is the worst possible thing a distribution
 can get into.

The LCC core will be fully reproducible from source. You may
(and probably will) lose any certifications if you do that,
for the same reason that the distros reproduced from the Red
Hat Enterprise Linux source (e.g., White Box Linux) lose them.
But it won't be "take it or leave it." If reproducing from
source and/or modifying the core packages is more important to
you than the certifications, you will be able to do that.

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-14 Thread Ian Murdock
On Fri, 2004-12-10 at 10:07 +0100, Wouter Verhelst wrote:
 If this is what's going to happen, then the first time a security fix
 comes along in one of those binaries the system suddenly isn't
 LCC-compliant anymore (due to the fact that different distributions
 handle security updates differently -- one backports fixes, the other
 throws in the newest upstream release).

Because the LCC is developed collaboratively by the distros that use it,
this won't happen--the security fix will be applied to the LCC core,
and the distros that are based on the LCC will incorporate the result
in a uniform, consistent manner. In other words, there will be a single
security update policy, not divergent ones for each LCC-based distro.

 It'll also severely hurt a distribution's ability to easily update to
 the newest upstream, or even to release when it's ready (but the next
 version of the LCC for some reason isn't).

The LCC core will be reproducible from source, so if a distro wishes
to do something differently, it may do so--though at the risk of losing
certifications (and the ability to call it LCC-based, since it won't
be, by definition, as long as its core is different from the LCC core).

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-14 Thread Ian Murdock
On Thu, 2004-12-09 at 14:33 -0600, John Hasler wrote:
 Why don't standard ABIs suffice?

Because the LSB bases its certification process on a standard ABI/API
specification alone, and this approach simply hasn't worked.

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-14 Thread Ian Murdock
On Fri, 2004-12-10 at 10:57 +0100, Adrian von Bidder wrote:
 On Friday 10 December 2004 06.15, Gunnar Wolf wrote:
  John Goerzen said [Thu, Dec 09, 2004 at 09:40:51PM -0600]:
 
   we could participate in this organization even if we didn't take
   their packages?  That is, perhaps we could influence the direction to
   a more useful one?
 
  Then we would be non-participants, we would be just bitchers
 
 No, I don't think so.  I think what Bruce and Ian are aiming at is
  - Debian can get influence in LCC, so
  - some things LCC does might actually make sense, so Debian does these 
 things in the way LCC does.
  - other things will be done in LCC-space, that will not make sense in 
 Debian, so Debian can still do it in its own way.
 
 What is the benefit? The divergence between LCC and Debian will still be 
 smaller than when Debian just stays outside. So
  - vendors may offer compatibility to LCC with manageable overhead (Ubuntu, 
 perhaps?)
  - porting LCC applications to Debian is limited to those small areas where 
 LCC and Debian diverge.
 
 I think about things like hardware detection and autoconfiguration, where 
 there's a lot development right now, and there are a lot of different 
 solutions.  In many cases, the various solutions are more or less 
 equivalent and things are done differently mainly because of personal taste 
 of whoever does the implementation.  Having a voice in how LCC does these 
 things and doing it the same way in Debian, in these cases, would be a Very 
 Good Idea(tm), I feel.

Well said. Even if Debian ends up not adopting the LCC core,
Debian participation would 1. help make the LCC core, community,
and processes better and thus more likely to achieve the
desired result; and 2. help make the eventual differences between
the LCC core and the Debian core smaller, which at least eases
the compatibility problem even if it can't be eliminated entirely.

In short, it's not an all-or-nothing thing.

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-14 Thread Steve Langasek
On Mon, Dec 13, 2004 at 05:07:12PM -0500, Ian Murdock wrote:
 On Sat, 2004-12-11 at 03:49 -0800, Steve Langasek wrote: 
  On Thu, Dec 09, 2004 at 03:39:55PM -0500, Ian Murdock wrote:
   You've just described the way the LSB has done it for years, which thus
   far, hasn't worked--while there are numerous LSB-certified distros,
   there are exactly zero LSB-certified applications. The reason for this
   is that "substantially the same" isn't good enough--ISVs want *exactly
   the same*, and there's a good reason for that, as evidenced by the fact
   that while Debian is technically (very nearly) LSB compliant, there are
   still a lot of edge cases like file system and package namespace
   differences that fall outside the LSB that vastly complicate the
   "certify to an ABI, then support all distros that implement
   the ABI as defined by whether or not it passes a test kit" model.

  Well, my first question is why, irrespective of how valuable the LSB itself
  is to them, any ISV would choose to get their apps LSB certified.  The
  benefits of having one's distro LSB certified are clear, but what does an
  LSB certification give an ISV that their own internal testing can't?

 In an ideal world, ISVs could certify once, to the LSB, and their
 applications would run the same on every LSB-certified distro. This
 dramatically reduces the amount of internal distro-specific work
 each has to do. The stronger the LSB, the closer the distro-specific
 work is to zero, and the closer they get to a single Linux port.

That wasn't my question.  My question was, why should any ISV care if
their product has been LSB-*certified*?  ISVs can test against, and
advertise support for, whatever they want to without getting the LSB's
imprimatur.  I cannot fathom why any ISV would go through the expense (money
or time) of getting some sort of LSB certification instead of simply making
this part of their in-house product QA; and therefore I don't see how the
absence of LSB-certified applications can be regarded as a failure of the
LSB process.

  Secondly, is this merely conjecture about the needs of ISVs, or have you (or
  someone else involved with the LCC) actually talked to people who've said
  these things?  If this initiative is truly a response to the needs of ISVs,
  it should be fairly easy to publicly specify the LCC's scope up front so
  that Debian can decide whether there's any sense in trying to get involved.

 We have absolutely been talking to ISVs about their needs--indeed, this
 has been a conversation that has been ongoing for years..

 What about the LCC's scope isn't clear?

Er, the fact that no actual scope has been stated?  What does "core" mean?
What packages (libraries) are included in this core?  If Debian is not
going to accept external binaries provided by the LCC into the archive, or
finds it necessary to patch the source during the course of a release, does
this mean Debian will not be a certified platform?  If so, and given that
these are very likely outcomes, what reason remains for Debian to get
involved in the LCC if it's not going to result in any ISVs supporting
Debian as a platform through the LCC effort?  (If you see ways that Progeny
would benefit from Debian's involvement in the LCC even if the official
Debian releases are not LCC-certified in the end, I think that's relevant
here; but I would like to know how you think it would benefit Progeny and
other companies building on Debian, and why you think that Debian itself
warrants representation at the table.)

I don't think any of the above questions are ones to be hashed out by the
distros participating in the LCC.  If the impetus for the LCC really comes
from ISVs, then the answers to these questions must be *their* answers; and
if we don't have those answers, inter-distro talks would be aimless and
futile.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-14 Thread Adrian von Bidder
Yo all!

Seeing this discussion wander in many directions, please consider what is 
actually under discussion here:

Bruce:
 I would not suggest that Debian commit to using LCC packages at this 
 time. We should participate for a while and see how many changes we'd 
 have to make and whether the project works for us. But I think we should 
 be at the table and in a position to influence the project. The other  
 members are willing to have us on those terms. 

 - Debian is invited to participate in the LCC
 - Debian does not, at this time, have to commit to anything at all, 
whatever the output of the LCC is.

Looking just at this, it seems - to me - more or less obvious that Debian 
should participate in this project, if there is manpower available to do 
so.

Any 'Debianisms' that get pushed into LCC will lower the work ISVs have to 
do to support their applications on Debian.

Any LCCisms that get accepted into Debian (for whatever reason) will do the 
same.

Any output of LCC that some affected package maintainer does not like, he 
may ignore. (Having sound reasons may help keep the discussions short, 
though :^)


Whether Debian should or should not strive to be an LCC compliant 
distribution is not under discussion right now.  The same goes for all the 
other 27 topics that are being discussed in this thread.


(Hmm. Shouldn't this actually be under discussion on -project instead of 
here?)

thanks for listening
-- vbi

(Hmmm. Can the list server be set up to reject replies to new mails within 
the first 2h after them having been sent out? ;-)

-- 
Beware of the FUD - know your enemies. This week
* Patent Law, and how it is currently abused. *
http://fortytwo.ch/opinion


pgpqFKJC1Fnr2.pgp
Description: PGP signature


Re: Linux Core Consortium

2004-12-14 Thread Ian Murdock
On Tue, 2004-12-14 at 06:16 -0800, Steve Langasek wrote: 
 On Mon, Dec 13, 2004 at 05:07:12PM -0500, Ian Murdock wrote:
  On Sat, 2004-12-11 at 03:49 -0800, Steve Langasek wrote:
   Well, my first question is why, irrespective of how valuable the LSB 
   itself
   is to them, any ISV would choose to get their apps LSB certified.  The
   benefits of having one's distro LSB certified are clear, but what does an
   LSB certification give an ISV that their own internal testing can't?
 
  In an ideal world, ISVs could certify once, to the LSB, and their
  applications would run the same on every LSB-certified distro. This
  dramatically reduces the amount of internal distro-specific work
  each has to do. The stronger the LSB, the closer the distro-specific
  work is to zero, and the closer they get to a single Linux port.
 
 That wasn't my question.  My question was, why should any ISV care if
 their product has been LSB-*certified*?  ISVs can test against, and
 advertise support for, whatever they want to without getting the LSB's
 imprimatur.  I cannot fathom why any ISV would go through the expense (money
 or time) of getting some sort of LSB certification instead of simply making
 this part of their in-house product QA; and therefore I don't see how the
 absence of LSB-certified applications can be regarded as a failure of the
 LSB process.

My point was this: *If* getting their products LSB-certified would allow
them to support those products on any LSB-certified distro
without the major investment necessary to deal with the edge cases the
LSB doesn't currently cover, they would do it. That isn't the case
now, which is why none of them are LSB-certifying their products today.

Certification should mean "it works." That's not the case as
regards LSB certification today, so the ISVs put the investment
into supporting each distro separately. If "certify to LSB,
run on any LSB-certified distro" was a reality, they could put that
investment into the LSB and end up with a longer list of supported
distros to boot--smaller cost, larger benefit, i.e., it's a win-win.

  What about the LCC's scope isn't clear?
 
 Er, the fact that no actual scope has been stated?  What does core mean?
 What packages (libraries) are included in this core?

"Core" means "implementation of the LSB," and the packages/libraries that
will constitute that are being determined now, as a collaborative process.
I assumed Debian would want to be involved in this process too, rather
than being presented with a more well-defined scope as a fait accompli.

 If Debian is not
 going to accept external binaries provided by the LCC into the archive, or
 finds it necessary to patch the source during the course of a release, does
 this mean Debian will not be a certified platform?

Yes.

 If so, and given that
 these are very likely outcomes, what reason remains for Debian to get
 involved in the LCC if it's not going to result in any ISVs supporting
 Debian as a platform through the LCC effort?

At the very least, the differences would be smaller than they
otherwise would be, and that can only be a good thing for
LCC, Debian, and the Linux world as a whole. And, who
knows, with Debian participation, perhaps the differences would end
up being small enough and the processes, methods, and
mechanisms defined such that it's no longer a very likely outcome.

 (If you see ways that Progeny
 would benefit from Debian's involvement in the LCC even if the official
 Debian releases are not LCC-certified in the end, I think that's relevant
 here; but I would like to know how you think it would benefit Progeny and
 other companies building on Debian, and why you think that Debian itself
 warrants representation at the table.)

As I said at the very outset, one possible way forward is to simply
produce a Debian-packaged version of the LCC core independently,
and to make sure that core is 100% compatible with Debian (i.e., you
can take any Debian package and install it on the LCC Debian core
and get the same results as if you'd installed it on Debian itself).
I'm truly hoping that's not the only way forward though, which is
why I'm trying to engage the Debian community to find another way.

One thing is clear: No matter how we end up approaching the ideal of a
common core that can form the basis of both RPM- and Debian-based
distros, it'll clearly be better for all involved that the
differences between Debian and LCC remain small. The smaller the
differences, the closer the LCC Debian core is to Debian itself,
which in turn benefits Debian because those folks are more
closely working with Debian rather than working on LCC Debian.

 I don't think any of the above questions are ones to be hashed out by the
 distros participating in the LCC.  If the impetus for the LCC really comes
 from ISVs, then the answers to these questions must be *their* answers; and
 if we don't have those answers, inter-distro talks would be aimless and
 futile.

The ISVs have spoken. They want to support 

Re: Linux Core Consortium

2004-12-14 Thread Tollef Fog Heen
* Ian Murdock 

| On Fri, 2004-12-10 at 10:07 +0100, Wouter Verhelst wrote:
|  If this is what's going to happen, then the first time a security fix
|  comes along in one of those binaries the system suddenly isn't
 |  LCC-compliant anymore (due to the fact that different distributions
|  handle security updates differently -- one backports fixes, the other
|  throws in the newest upstream release).
| 
| Because the LCC is developed collaboratively by the distros that use it,
| this won't happen--the security fix will be applied to the LCC core,
| and the distros that are based on the LCC will incorporate the result
| in a uniform, consistent manner. In other words, there will be a single
| security update policy, not divergent ones for each LCC-based distro.

This sounds a bit like the government whose country had three
different types of power plugs.  None compatible, of course.  Somebody
then got the great idea that if they invented another one to supersede
the three plugs in use (since that caused overhead).  That country now
has four plugs in use.

(Whether this is an urban legend, whether it was three, five or eight
different kinds of plugs is beside the point here.)

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  




Re: Linux Core Consortium

2004-12-14 Thread Gunnar Wolf
Ian Murdock said [Tue, Dec 14, 2004 at 11:53:54AM -0500]:
(snip)
 The ISVs have spoken. They want to support as few ports as possible,
 because those ports cost money. They also want to support as much
 of the market as possible, and the current reality is that many of
 those markets are out of reach today. Beyond that, they don't care. If
 there's an open standard they can certify to and reach a broader
 market, they'll be very happy with that. If commercial Linux
 ends up being owned by Red Hat, they'll be fine with that too. I
 for one am hoping it doesn't come to that. The current reality seems
 like a pretty big opportunity to me to define a different future.

Ok, so with this you are stating that the only way to get the ISVs to
certify Debian is to gain market share. If adhering to the LSB got us
no results, why would adhering to the LCC do any better? Yes, it will
be a bit simpler for them to have their applications run natively on
all LCC-certified distributions... But they want to be sure they can
guarantee to each of their users that the application will work just as
they tested it. And the LCC is just one step; there are still a lot of
components outside it. I really doubt that the LCC will be enough to
lure them in.

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5554-9450
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF




Re: Linux Core Consortium

2004-12-14 Thread Christoph Hellwig
On Tue, Dec 14, 2004 at 08:34:17AM -0500, Ian Murdock wrote:
 On Fri, 2004-12-10 at 00:44 +0100, Christoph Hellwig wrote:
  Besides that the LCC sounds like an extraordinarily bad idea, passing
  around binaries only makes sense if you can't easily reproduce them from
  the source (which I defined very broadly to include all build scripts
  and dependencies), and that case is the worst possible thing a distribution
  can get into.
 
 The LCC core will be fully reproducible from source. You may
 (and probably will) lose any certifications if you do that,
 for the same reason that the distros reproduced from the Red
 Hat Enterprise Linux source (e.g., White Box Linux) lose them.
 But it won't be "take it or leave it." If reproducing from
 source and/or modifying the core packages is more important to
 you than the certifications, you will be able to do that.

So again, what do you gain by distributing binaries if they're
reproducible from source?




Re: Re: Linux Core Consortium

2004-12-14 Thread Wichmann, Mats D

Joey Hess wrote (on debian-devel):

 My experience as a developer who's tried to write
 an app to use the LSB (only the init script interface)
 is that it's poorly enough specified and/or implemented
 divergently within the spec to the point that I had to
 test my implementation on every LSB distribution I
 wanted to support, and might as well have written
 distro-specific code in the first place. 

I got pointed here, I'm not on debian-devel, so I'm
coming a little late to the thread.

It's kind of ironic:  the LSB doesn't want to invent new
stuff, just standardize existing best practice.  One of
the VERY few places where we were forced to do something
exactly because there was so much divergence was this 
initscript area, and of course it's been the source of a
number of problems completely out of proportion to the 
size of the topic, reinforcing why we really don't want
to be in the invent business - I guess we'll leave
that to HP :-)

Unfortunately, while we got spec contribution in this
area, we didn't get matching code contributions: tests
OR sample implementation.  It sure would be helpful to
get either or both, and also helpful would be bugreports
at bugs.linuxbase.org when it doesn't work right.

The takeaway: if the LSB is going to succeed as the 
community standard for the Linux core, we need as much
of the community as possible to let us know how it's
working, and when it's not.

-- mats wichmann

P.S.: turnabout being fair play, I think lsb-related
activities have turned up a disproportionate number
of issues in alien and the now-orphaned rpm...
sorry, Joey!




Re: Linux Core Consortium

2004-12-13 Thread Goswin von Brederlow
Goswin von Brederlow [EMAIL PROTECTED] writes:

 Kurt Roeckx [EMAIL PROTECTED] writes:

 On Sun, Dec 12, 2004 at 08:29:16PM +0100, Goswin von Brederlow wrote:
 Tollef Fog Heen [EMAIL PROTECTED] writes:
 
  The problem is not the autobuilder infrastructure per se.  It is that
  testing and unstable are largely in sync (!).  This, combined with the
  fact that testing must not have versions newer than unstable (they

 Why aren't security uploads for testing done as testing-security
 unstable? Why leave the bug open in sid when fixing it in testing?

 The issue of testing being newer only arises when sarge and sid have
 the same version, otherwise you have the t-p-u case with testing being
 lower.

  will then be rejected) means testing-security wouldn't work at the
  moment.
 
 How is that different from testing-proposed-updates?

 Because they're usually for fixing bugs in testing where there is
 another version in unstable?  Why else would you be using
 testing-proposed-updates?


 Kurt

 MfG
 Goswin

Small update:

The patch (see other post in this thread) and the talk on IRC solved
this, so no need to explain it again.

MfG
Goswin




Re: Linux Core Consortium

2004-12-13 Thread Ian Murdock
On Thu, 2004-12-09 at 13:04 -0800, Michael K. Edwards wrote:
 If ISVs want exactly the same, they are free to install a chroot
 environment containing the binaries they certify against and to supply
 a kernel that they expect their customers to use.  That's the approach
 I've had to take when bundling third-party binaries built by people
 who were under the illusion that exactly the same was a reasonable
 thing to ask for.  Once I got things working in my chroot, and
 automated the construction of the chroot, I switched back to the
 kernel I wanted to use; the ISV and I haven't troubled one another
 since.

Wishing the ISVs operated a different way doesn't really get us any
closer to a solution..

 If the LSB only attempts to certify things that haven't forked, then
 it's a no-op.  Well, that's not quite fair; I have found it useful to
 bootstrap a porting effort using lsb-rpm.  But for it to be a software
 operating environment and not just a software porting environment, it
 needs to have a non-trivial scope, which means an investment by both
 ISVs and distros.

That's precisely what we're trying to do. :-)

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-13 Thread Ian Murdock
On Sat, 2004-12-11 at 03:49 -0800, Steve Langasek wrote: 
 On Thu, Dec 09, 2004 at 03:39:55PM -0500, Ian Murdock wrote:
  You've just described the way the LSB has done it for years, which thus
  far, hasn't worked--while there are numerous LSB-certified distros,
  there are exactly zero LSB-certified applications. The reason for this
  is that "substantially the same" isn't good enough--ISVs want *exactly
  the same*, and there's a good reason for that, as evidenced by the fact
  that while Debian is technically (very nearly) LSB compliant, there are
  still a lot of edge cases like file system and package namespace
  differences that fall outside the LSB that vastly complicate the
  "certify to an ABI, then support all distros that implement
  the ABI as defined by whether or not it passes a test kit" model.
 
 Well, my first question is why, irrespective of how valuable the LSB itself
 is to them, any ISV would choose to get their apps LSB certified.  The
 benefits of having one's distro LSB certified are clear, but what does an
 LSB certification give an ISV that their own internal testing can't?

In an ideal world, ISVs could certify once, to the LSB, and their
applications would run the same on every LSB-certified distro. This
dramatically reduces the amount of internal distro-specific work
each has to do. The stronger the LSB, the closer the distro-specific
work is to zero, and the closer they get to a single Linux port.

It's all cost-benefit. Each new port costs money. Does the port bring in
more money than it costs to produce it? The current approach of
supporting each distro as a separate port allows them to reach a large
part of the market while minimizing cost. But there's still a huge
swath of the market that's not covered (see, again, Oracle and Asianux).

 Secondly, is this merely conjecture about the needs of ISVs, or have you (or
 someone else involved with the LCC) actually talked to people who've said
 these things?  If this initiative is truly a response to the needs of ISVs,
 it should be fairly easy to publicly specify the LCC's scope up front
 that Debian can decide whether there's any sense in trying to get involved.

We have absolutely been talking to ISVs about their needs--indeed, this
has been a conversation that has been ongoing for years..

What about the LCC's scope isn't clear? The basics are fairly simple:
Make the cost-benefit equation a no-brainer by giving ISVs a single
common core to support (lower cost) while getting that single common
core into as many distros as possible (raise benefit by opening
new markets to the ISVs' products). This is what the industry wants.

That said, the details are far from simple. It's up to us, the distros,
to deliver on the basic idea, and that's what we're trying to do.

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-13 Thread Ian Murdock
On Thu, 2004-12-09 at 23:07 +0100, Goswin von Brederlow wrote:
 Ian Murdock [EMAIL PROTECTED] writes:
  Can someone provide an example of where the name of a dynamic
  library itself (i.e., the one in the file system, after the
  package is unpacked) would change? I'd be surprised if this was
  a big issue. The LSB/FHS should take care of file system level
  incompatibilities already (though Debian may put some things in
  /lib that RPM-based distros put in /usr/lib due to more strict policy
  about such things), so I'd think the main issue wouldn't so much be
  about the names of the dynamic libraries themselves, but the names of
  the packages they come in (acl vs. libacl1, as per my last message).
 
 When multiarch hits all (core) libs will move around
 drastically:
 
  /lib/* -> /lib/`gcc -dumpmachine`/
  /usr/lib/* -> /usr/lib/`gcc -dumpmachine`/
  /usr/X11R6/lib/* -> /usr/X11R6/lib/`gcc -dumpmachine`/
 
 If you aim at having the same path to libs (which only broken rpath
 needs) then this will be your nightmare.
 
 PS: The above lib dirs are the best and only practical solution we
 have for multiarch.

I understand the LSB is beginning to think about the multiarch issue,
and I suspect Debian is far ahead of others in terms of practical
experience with this problem; so, it's not only reasonable to expect
the LSB will help resolve this potential nightmare,
but also that Debian could be at the forefront of helping resolve it.
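
(To make the proposed layout concrete -- a sketch; the triplet is
whatever `gcc -dumpmachine` prints on the machine at hand, and the
exact string varies by gcc version and architecture:)

$ gcc -dumpmachine
i486-linux-gnu
$ ls /usr/lib/i486-linux-gnu/
libacl.so.1  libc.so.6  ...
# each architecture's libraries get their own triplet directory, so
# i386 and amd64 copies of the same library can coexist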

-- 
Ian Murdock
317-578-8882 (office)
http://www.progeny.com/
http://ianmurdock.com/

All men dream: but not equally. Those who dream by night in
the dusty recesses of their minds wake in the day to find that it was
vanity: but the dreamers of the day are dangerous men, for they may
act their dreams with open eyes, to make it possible. -T.E. Lawrence





Re: Linux Core Consortium

2004-12-13 Thread Bill Allombert
On Mon, Dec 13, 2004 at 05:07:12PM -0500, Ian Murdock wrote:
 We have absolutely been talking to ISVs about their needs--indeed, this
 has been a conversation that has been ongoing for years..
 
 What about the LCC's scope isn't clear? The basics are fairly simple:
 Make the cost-benefit equation a no-brainer by giving ISVs a single
 common core to support (lower cost) while getting that single common
 core into as many distros as possible (raise benefit by opening
 new markets to the ISVs' products). This is what the industry wants.

What is not clear is the benefit to Debian: a loss of flexibility
in the handling of our core distribution, and potentially the inability
to fix bugs (because we would then lose the certification), for a
benefit (support for some closed-source apps) that is at best
tangential to our core goal.

We are in a similar situation to the Linux kernel with regard to
proprietary kernel modules. If the kernel interface to modules
was frozen, it would be much easier to support proprietary kernel
modules. The Linux developers have elected to keep their hands free
and not support binary kernel modules, even if it is not what the
industry wants.

The benefit of free software is going to be thin if you are forced to
run a specific binary build of this or that software due to
certification requirements, even if that build is buggy and a fix is
available.

Cheers,
-- 
Bill. [EMAIL PROTECTED]

Imagine a large red swirl here. 




Re: Linux Core Consortium

2004-12-13 Thread Andrew M.A. Cater
On Mon, Dec 13, 2004 at 05:11:32PM -0500, Ian Murdock wrote:
 On Thu, 2004-12-09 at 23:07 +0100, Goswin von Brederlow wrote:
  Ian Murdock [EMAIL PROTECTED] writes:
 
 I understand the LSB is beginning to think about the multiarch issue,
 and I suspect Debian is far ahead of others in terms of practical
 experience with this problem; so, it's not only reasonable to expect
 the LSB will help resolve this potential nightmare,
 but also that Debian could be at the forefront of helping resolve it.
 
[Ha, ha, only serious here - please don't dismiss this out of hand]

OK, I'll bite - your signature (kept below) is apposite here :) 
Dare to dream for a moment or ten.  I owe you personally a debt of 
gratitude as I do to Bruce.  Bruce's UserLinux isn't here yet as far 
as I can see - Progeny has moved from Progeny Linux to Progeny 
services/infrastructure stuff - but you are both experienced 
with trying to forge the best distribution there is.  

Get together with Mark Shuttleworth and Ubuntu: pool your efforts. 

Bruce can pull in industry contacts - Progeny can produce heavyweight 
support - Ubuntu can produce a free but commercial Debian.  Pull in 
Mepis if they'll talk to the consortium.  Build an LSB compliant 
Debian with ISV support - hell, if you'll provide really heavyweight 
support of _any_ kind for Debian in the UK, I might not need to push 
my bosses so hard :) Get HP or whoever to commit to something Debian-based 
on its merits.

Produce a commercial grade Debian you feel happy selling to Oracle/SAP/IBM
or whoever.  If you can do that, then alien may do much of the rpm
conversion for LSB.  Don't fork altogether from Debian but work with the
Project to build good stuff and help each other.  Compromises need to be
made to produce a commercial distribution - so make them and make them
explicitly but allow people to cross-grade to the Debian we know and
love if they need to apt-get stuff not in your commercial distro _AT
THEIR RISK_ (though if, for example, I choose to install frozen-bubble
that act, in and of itself, shouldn't affect e.g. Oracle so
shouldn't affect my support).

Don't dumb down Debian - don't compromise on the good stuff like
multi-architecture, structured dependencies and build quality. Bring
other distributions up to scratch on this.

The commercial world is simultaneously potentially too big for any one of
Ubuntu/Progeny/UserLinux to go it completely alone and too small for big
businesses to appreciate and accept minor differences in style and feel 
between the various Debian derivatives.

This ought to be practicable given goodwill on all sides - everyone wins
in this stone soup game.

Now can we get back to releasing a new Debian release on which you can
base all this wonderful stuff ?? :)

Andy

 All men dream: but not equally. Those who dream by night in
 the dusty recesses of their minds wake in the day to find that it was
 vanity: but the dreamers of the day are dangerous men, for they may
 act their dreams with open eyes, to make it possible. -T.E. Lawrence
 




Re: Linux Core Consortium

2004-12-12 Thread Joey Hess
Andrew Suffield wrote:
  http://wiki.debian.net/index.cgi?ReleaseProposals
 
 Every single one of these falls into one of these four groups:

Please note the wiki in the URL and the edit page button on the
page.

(Or are you just pointlessly bitching?)

-- 
see shy jo


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-12 Thread Joey Hess
Steve Langasek wrote:
 Well, my first question is why, irrespective of how valuable the LSB itself
 is to them, any ISV would choose to get their apps LSB certified.  The
 benefits of having one's distro LSB certified are clear, but what does an
 LSB certification give an ISV that their own internal testing can't?  Or do
 you really mean there are no ISVs *writing* to the LSB?

My experience as a developer who's tried to write an app to use the LSB
(only the init script interface) is that it's poorly enough specified
and/or implemented divergently within the spec to the point that I had
to test my implementation on every LSB distribution I wanted to support,
and might as well have written distro-specific code in the first place.

Just for example, from my changelog:

  * Seems start_daemon on mandrake 9.1 wants just the name, not the path,
so changed that. Note that my reading of the lsb says the path should
be there, and the debian implementation agrees. For now, supporting
mandrake is more important.
  * Similarly, {remove,install}_initd on Mandrake seems not to want the full
path of /etc/init.d/mooix. This is clearly a violation of
the lsb, but for now I will cater to it.
  * Remove set -e from lsb/init, since on Mandrake the init functions are not
-e safe. (Sheesh.)
  * Add chkconfig stuff to lsb/init, since mandrake's sad excuse for a lsb
compliant init system seems to require it.

(This was a bit more than one year ago, distros tested included Debian,
Mandrake, Red Hat, SuSe.)

-- 
see shy jo


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-12 Thread Thomas Hood
On Sun, 12 Dec 2004 18:40:07 +0100, Joey Hess wrote:

 Andrew Suffield wrote:
  http://wiki.debian.net/index.cgi?ReleaseProposals
 
 Every single one of these falls into one of these four groups:
 
 Please note the wiki in the URL and the edit page button on the
 page.


Inspired by A.S.'s comment I've just sorted the proposals
into four groups, though not exactly the ones he defined.

-- 
Thomas Hood




Re: Linux Core Consortium

2004-12-12 Thread Tollef Fog Heen
* Brian Nelson 

| Anyone, developer or non-developer, can help fix toolchain problems.
| However, the only people who can work on the testing-security
| autobuilders are ... the security team and the ftp-masters?  What's
| that, a handful of people?  With a bottleneck like that, isn't that a
| much more important issue?

The problem is not the autobuilder infrastructure per se.  It is that
testing and unstable are largely in sync (!).  This, combined with the
fact that testing must not have versions newer than unstable (they
will then be rejected) means testing-security wouldn't work at the
moment.

If the above is a tad unclear, consider this case:

Package: foo
Version: 1.0-1 (in both testing and unstable)

This has a security bug, and the security team uploads 1.0-1sarge0
with «testing» in the changelog.  This works fine on
security.debian.org and is mapped to the testing-security repository.
Then security.debian.org uploads to ftp-master, but the upload is
rejected because of the version mismatch.
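
(A quick way to see the ordering that causes the rejection is dpkg
itself, using the example versions above; this is just an illustration:

  $ dpkg --compare-versions 1.0-1sarge0 gt 1.0-1 && echo newer
  newer

Accepting 1.0-1sarge0 into testing would therefore leave testing newer
than unstable.)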

We have exactly the same problem with stable, but stable and unstable
are a lot less in sync than testing and unstable, so we don't see the
problem as much.

There are a few ways to solve those problems; they are being explored
and worked on, but none of them are pretty.

Thanks a lot to both Daniel Silverstone and Colin Watson for their
helpful explanations about this.  (And a good meal.)

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  




Re: Linux Core Consortium

2004-12-12 Thread Goswin von Brederlow
Tollef Fog Heen [EMAIL PROTECTED] writes:

 * Brian Nelson 

 | Anyone, developer or non-developer, can help fix toolchain problems.
 | However, the only people who can work on the testing-security
 | autobuilders are ... the security team and the ftp-masters?  What's
 | that, a handful of people?  With a bottleneck like that, isn't that a
 | much more important issue?

 The problem is not the autobuilder infrastructure per se.  It is that
 testing and unstable are largely in sync (!).  This, combined with the
 fact that testing must not have versions newer than unstable (they
 will then be rejected) means testing-security wouldn't work at the
 moment.

How is that different from testing-proposed-updates?

And why is testing-proposed-updates infrastructure still lacking 2
buildds?

MfG
Goswin




Re: Linux Core Consortium

2004-12-12 Thread Kurt Roeckx
On Sun, Dec 12, 2004 at 08:29:16PM +0100, Goswin von Brederlow wrote:
 Tollef Fog Heen [EMAIL PROTECTED] writes:
 
  The problem is not the autobuilder infrastructure per se.  It is that
  testing and unstable are largely in sync (!).  This, combined with the
  fact that testing must not have versions newer than unstable (they
  will then be rejected) means testing-security wouldn't work at the
  moment.
 
 How is that different from testing-proposed-updates?

Because they're usually for fixing bugs in testing where there is
another version in unstable?  Why else would you be using
testing-proposed-updates?


Kurt




Re: Linux Core Consortium

2004-12-12 Thread Tollef Fog Heen
* Goswin von Brederlow 

| Tollef Fog Heen [EMAIL PROTECTED] writes:
| 
|  * Brian Nelson 
| 
|  | Anyone, developer or non-developer, can help fix toolchain problems.
|  | However, the only people who can work on the testing-security
|  | autobuilders are ... the security team and the ftp-masters?  What's
|  | that, a handful of people?  With a bottleneck like that, isn't that a
|  | much more important issue?
| 
|  The problem is not the autobuilder infrastructure per se.  It is that
|  testing and unstable are largely in sync (!).  This, combined with the
|  fact that testing must not have versions newer than unstable (they
|  will then be rejected) means testing-security wouldn't work at the
|  moment.
| 
| How is that different from testing-proposed-updates?

t-p-u is not uploaded from another host through a mapping.  (Remember,
uploads to stable are mapped to stable-security on
security.debian.org, then uploaded to stable from that host.  The
.changes file however, does not list stable-security, it only lists
stable.  And the trivial fix, to drop the mapping won't help either,
since then any DD could upload to stable by uploading to
stable-security, and we don't want that.)

Also, AIUI, t-p-u will mostly be used when there's a newer version in
unstable and you can't get the version in unstable in (because of
dependencies) or you have to get a fix in immediately, in which case
you upload to unstable testing-proposed-updates, so you don't hit
the version skew issue.

| And why is testing-proposed-updates infrastructure still lacking 2
| buildds?

I don't know.

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  




Re: Linux Core Consortium

2004-12-12 Thread Andreas Barth
* Tollef Fog Heen ([EMAIL PROTECTED]) [041212 21:35]:
 * Goswin von Brederlow 
 | Tollef Fog Heen [EMAIL PROTECTED] writes:

 |  * Brian Nelson 
 | 
 |  | Anyone, developer or non-developer, can help fix toolchain problems.
 |  | However, the only people who can work on the testing-security
 |  | autobuilders are ... the security team and the ftp-masters?  What's
 |  | that, a handful of people?  With a bottleneck like that, isn't that a
 |  | much more important issue?
 | 
 |  The problem is not the autobuilder infrastructure per se.  It is that
 |  testing and unstable are largely in sync (!).  This, combined with the
 |  fact that testing must not have versions newer than unstable (they
 |  will then be rejected) means testing-security wouldn't work at the
 |  moment.
 | 
 | How is that different from testing-proposed-updates?
 
 t-p-u is not uploaded from another host through a mapping.  (Remember,
 uploads to stable are mapped to stable-security on
 security.debian.org, then uploaded to stable from that host.  The
 .changes file however, does not list stable-security, it only lists
 stable.  And the trivial fix, to drop the mapping won't help either,
 since then any DD could upload to stable by uploading to
 stable-security, and we don't want that.)

IIRC the changes file lists stable-security, e.g.:
 hpsockd (0.6.woody1) stable-security; urgency=high
(just looked into queue/done for that). And katie on ftp-master maps
stable-security to proposed-updates.

 Also, AIUI, t-p-u will mostly be used when there's a newer version in
 unstable and you can't get the version in unstable in (because of
 dependencies) or you have to get a fix in immediately, in which case
 you upload to unstable testing-proposed-updates, so you don't hit
 the version skew issue.

Also uploads to testing-security will go to t-p-u on ftp-master, via the
same mapping mechanism. (Just look into katie.conf, you'll see the
mappings there.)


BTW, there is now at http://people.debian.org/~aba/dak.patch a draft of
a patch tackling the necessary changes for *security. However, as this
is my first real katie-patch, there might be issues I don't (currently)
see, and even if not, implementing it on ftp-master requires more than just
a short glance over it.


Cheers,
Andi
-- 
   http://home.arcor.de/andreas-barth/
   PGP 1024/89FB5CE5  DC F1 85 6D A6 45 9C 0F  3B BE F1 D0 C5 D1 D9 0C




Re: Linux Core Consortium

2004-12-12 Thread Goswin von Brederlow
Tollef Fog Heen [EMAIL PROTECTED] writes:

 * Goswin von Brederlow 

 | Tollef Fog Heen [EMAIL PROTECTED] writes:
 | 
 |  * Brian Nelson 
 | 
 |  | Anyone, developer or non-developer, can help fix toolchain problems.
 |  | However, the only people who can work on the testing-security
 |  | autobuilders are ... the security team and the ftp-masters?  What's
 |  | that, a handful of people?  With a bottleneck like that, isn't that a
 |  | much more important issue?
 | 
 |  The problem is not the autobuilder infrastructure per se.  It is that
 |  testing and unstable are largely in sync (!).  This, combined with the
 |  fact that testing must not have versions newer than unstable (they
 |  will then be rejected) means testing-security wouldn't work at the
 |  moment.
 | 
 | How is that different from testing-proposed-updates?

 t-p-u is not uploaded from another host through a mapping.  (Remember,
 uploads to stable are mapped to stable-security on
 security.debian.org, then uploaded to stable from that host.  The
 .changes file however, does not list stable-security, it only lists
 stable.  And the trivial fix, to drop the mapping won't help either,
 since then any DD could upload to stable by uploading to
 stable-security, and we don't want that.)

 Also, AIUI, t-p-u will mostly be used when there's a newer version in
 unstable and you can't get the version in unstable in (because of
 dependencies) or you have to get a fix in immediately, in which case
 you upload to unstable testing-proposed-updates, so you don't hit
 the version skew issue.

Which is exactly what you have with security. There is a newer version
in unstable than what you upload.

The problem seems to be more one of rejecting unauthorized uploads to
testing-security than of a version mismatch.

 | And why is testing-proposed-updates infrastructure still lacking 2
 | buildds?

 I don't know.

 -- 
 Tollef Fog Heen,''`.
 UNIX is user friendly, it's just picky about who its friends are  : :' :

Thanks for the update.

MfG
Goswin




Re: Linux Core Consortium

2004-12-12 Thread Goswin von Brederlow
Kurt Roeckx [EMAIL PROTECTED] writes:

 On Sun, Dec 12, 2004 at 08:29:16PM +0100, Goswin von Brederlow wrote:
 Tollef Fog Heen [EMAIL PROTECTED] writes:
 
  The problem is not the autobuilder infrastructure per se.  It is that
  testing and unstable are largely in sync (!).  This, combined with the
  fact that testing must not have versions newer than unstable (they

Why aren't security uploads for testing done as "testing-security
unstable"? Why leave the bug open in sid when fixing it in testing?

The issue of testing being newer only arises when sarge and sid have
the same version, otherwise you have the t-p-u case with testing being
lower.

  will then be rejected) means testing-security wouldn't work at the
  moment.
 
 How is that different from testing-proposed-updates?

 Because they're usually for fixing bugs in testing where there is
 another version in unstable?  Why else would you be using
 testing-proposed-updates?


 Kurt

MfG
Goswin




Re: Linux Core Consortium

2004-12-12 Thread Andreas Barth
* Goswin von Brederlow ([EMAIL PROTECTED]) [041212 22:20]:
 Tollef Fog Heen [EMAIL PROTECTED] writes:
  t-p-u is not uploaded from another host through a mapping.  (Remember,
  uploads to stable are mapped to stable-security on
  security.debian.org, then uploaded to stable from that host.  The
  .changes file however, does not list stable-security, it only lists
  stable.  And the trivial fix, to drop the mapping won't help either,
  since then any DD could upload to stable by uploading to
  stable-security, and we don't want that.)
 
  Also, AIUI, t-p-u will mostly be used when there's a newer version in
  unstable and you can't get the version in unstable in (because of
  dependencies) or you have to get a fix in immediately, in which case
  you upload to unstable testing-proposed-updates, so you don't hit
  the version skew issue.

 Which is exactly what you have with security. There is a newer version
 in unstable than what you upload.

Not if testing and unstable are in sync. In this case, the upload to
testing-security needs to also go to unstable, and not only to
testing-proposed-updates.

 The problem seems to be more one of rejecting unauthorized uploads to
 testing-security than of a version mismatch.

No, that's easy: allow uploads to testing-security only via scp to
queue/unchecked, and not via anonymous ftp. That means only the few
people who have direct access to ftp-master can upload, plus a wrapper
for the security team.


Cheers,
Andi
-- 
   http://home.arcor.de/andreas-barth/
   PGP 1024/89FB5CE5  DC F1 85 6D A6 45 9C 0F  3B BE F1 D0 C5 D1 D9 0C




Re: Linux Core Consortium

2004-12-12 Thread Andreas Metzler
Goswin von Brederlow [EMAIL PROTECTED] wrote:
[...]
 Why aren't security uploads for testing done as testing-security
 unstable? Why leave the bug open in sid when fixing it in testing?
[...]

It is not possible to target more than one distribution (i.e.
testing-security and unstable) in one upload (The latest release of
dev-ref documents this).
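
To illustrate with a made-up package: a changelog line like

  foo (1.0-1sarge1) testing-security; urgency=high

names a single target and is fine, whereas

  foo (1.0-1sarge1) testing-security unstable; urgency=high

tries to name two, which is exactly what cannot be done.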
   cu andreas
-- 
See, I told you they'd listen to Reason, [SPOILER] Svfurlr fnlf,
fuhggvat qbja gur juveyvat tha.
Neal Stephenson in Snow Crash




Re: Linux Core Consortium

2004-12-11 Thread Andreas Barth
* Brian Nelson ([EMAIL PROTECTED]) [041210 19:55]:
 Yup.  There's never been a sense of urgency.  The RM's throw out release
 dates and goals every once in a while, but no one seems to take those
 seriously.

Not true. (And, as you perhaps noticed, the release team has avoided
giving specific dates lately, and the timeline depends on a day N.)

 And probably for good reason. 

Remarks like this _are_ driving the release back.

 For the second straight
 release, when we've gotten to a point that it seemed we were nearly
 ready for a release, we then discover we have no security autobuilders.
 The release then gets pushed back a few more months, while the plebeian
 developers sit around twiddling their thumbs unable to help wondering
 why the hell no one thought to set up the autobuilders in the 2+ years
 we've been preparing a release.

Be assured, setting up the security autobuilders has been a topic for
as long as I've been following the sarge release process closely. See
e.g.
http://lists.debian.org/debian-devel-announce/2004/08/msg3.html
which tells us that we need them for being able to freeze. Or, a bit
later,
http://lists.debian.org/debian-devel-announce/2004/09/msg5.html.
This even says:
| The bad news is that we still do not have an ETA for the
| testing-security autobuilders to be functional.  This continues to be
| the major blocker for proceeding with the freeze

I don't think this means we suddenly discovered it; rather, other
issues were the more prominent blockers, e.g. in July (like the
toolchain), and those were resolved back in September (and are still
resolved now).


Cheers,
Andi
-- 
   http://home.arcor.de/andreas-barth/
   PGP 1024/89FB5CE5  DC F1 85 6D A6 45 9C 0F  3B BE F1 D0 C5 D1 D9 0C




Re: Linux Core Consortium

2004-12-11 Thread Brian Nelson
On Sat, Dec 11, 2004 at 09:41:47AM +0100, Andreas Barth wrote:
 * Brian Nelson ([EMAIL PROTECTED]) [041210 19:55]:
  Yup.  There's never been a sense of urgency.  The RM's throw out release
  dates and goals every once in a while, but no one seems to take those
  seriously.
 
 Not true. (And, as you perhaps noticed, the release team has avoided
 giving specific dates lately, and the timeline depends on a day N.)
 
  And probably for good reason. 
 
 Remarks like this _are_ driving the release back.

No, they aren't.  My remark is a symptom of the overall discouragement I
feel with the release process, not a cause.  Probably the biggest thing
keeping sarge from releasing is the overall discouragement and
disenchantment developers feel with the release process.

  For the second straight
  release, when we've gotten to a point that it seemed we were nearly
  ready for a release, we then discover we have no security autobuilders.
  The release then gets pushed back a few more months, while the plebeian
  developers sit around twiddling their thumbs unable to help wondering
  why the hell no one thought to set up the autobuilders in the 2+ years
  we've been preparing a release.
 
 Be assured, setting up the security autobuilders has been a topic for
 as long as I've been following the sarge release process closely. See
 e.g.
 http://lists.debian.org/debian-devel-announce/2004/08/msg3.html
 which tells us that we need them for being able to freeze. Or, a bit
 later,
 http://lists.debian.org/debian-devel-announce/2004/09/msg5.html.
 This even says:
 | The bad news is that we still do not have an ETA for the
 | testing-security autobuilders to be functional.  This continues to be
 | the major blocker for proceeding with the freeze

I was being sarcastic when I said we suddenly discovered it.  Of course,
it's been known we'd need a security autobuilder infrastructure for
sarge since, uhhh, before woody's release.

 I don't think this means we suddenly discovered it; rather, other
 issues were the more prominent blockers, e.g. in July (like the
 toolchain), and those were resolved back in September (and are still
 resolved now).

Anyone, developer or non-developer, can help fix toolchain problems.
However, the only people who can work on the testing-security
autobuilders are ... the security team and the ftp-masters?  What's
that, a handful of people?  With a bottleneck like that, isn't that a
much more important issue?

Besides, woody was released 2.5 years ago.  In all that time, no one who
had the power to set up the autobuilder infrastructure bothered to do it?
Let's face it--that's a major fuckup.  I'm not blaming you or anyone
else in particular.  We're all volunteers, we're all busy, we can't
force anyone to do the work, etc.  But we're not exactly lacking in
manpower here.  If those with the power didn't have the time to set up
the infrastructure, surely over the course of 2.5 years we could have
found someone out of the 1,000 or so developers we have that had the
time and skill to do it.  So why didn't that happen?

-- 
For every sprinkle I find, I shall kill you!




Re: Linux Core Consortium

2004-12-11 Thread Florian Weimer
* Bruce Perens:

 The Linux Core Consortium would like to have Debian's involvement. This 
 organization has revived what I originally proposed to do as the LSB - 
 to make a binary base for Linux distributions that could be among 
 several distributions who would share in the effort of maintaining 
 certain packages.

I don't think Debian should try to adopt an extensive, externally
specified ABI.  For a few core packages, this may make some sense, but
not for most libraries.

Instead, proprietary software vendors should ship all libraries in the
versions they need, or link their software statically.  I wouldn't
even mind if they installed something that approaches a stripped-down
distribution in a chroot jail (and just certify specific kernel
versions).  Most of our software licenses permit this, so why don't we
use this advantage our system has over proprietary competitors?
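
As a sketch of what that could look like for a vendor (the suite,
paths, and application name are placeholders only):

  # Build a known Debian userland in a chroot and run the
  # proprietary application against it.
  debootstrap woody /opt/vendorapp/root http://ftp.debian.org/debian
  cp vendorapp /opt/vendorapp/root/usr/local/bin/
  chroot /opt/vendorapp/root /usr/local/bin/vendorapp

The vendor then certifies that one userland plus a range of kernel
versions, not every distribution under the sun.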

My reasoning is as follows: If the ABI is externally specified (not by
Debian, not by upstream), we will inevitably face ABI conformity
bugs.  Because of the nature of such a bug, a fix requires an ABI
change.  This means that most dependent packages will have to be
recompiled, and uploaded at roughly the same time (libXXX
transition), and it's always a big mess.  Therefore, I fear that an
external ABI specification will incur substantial inconvenience for
our developers and users.  For me, this is a bit too much for a
hypothetical group of users who currently cannot admit to using Debian
(and probably never will).
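
(For concreteness: the ABI promise a library makes is encoded in its
SONAME, which can be inspected; the library path here is illustrative:

  $ objdump -p /usr/lib/libfoo.so.1 | grep SONAME
    SONAME      libfoo.so.1

An ABI-breaking fix means bumping that SONAME, and that bump is what
triggers the mass recompile described above.)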




Re: Linux Core Consortium

2004-12-11 Thread Florian Weimer
* Michael Banck:

 2. GNOME succeeded for the desktop.

Are there any proprietary COTS applications for GNOME where vendor
support isn't bound to specific GNU/Linux distributions?

Maybe GNOME is a good example of cross-vendor cooperation (but so is
GCC), but I would be quite surprised if this automatically leads to the
uniformity that old-school ISVs desire.




Re: Linux Core Consortium

2004-12-11 Thread Florian Weimer
* Brian Nelson:

 Anyone, developer or non-developer, can help fix toolchain problems.
 However, the only people who can work on the testing-security
 autobuilders are ... the security team and the ftp-masters?

It's about infrastructure, so the security team is out (they are just
users of this infrastructure, but they don't administrate it).  Most
ftp-masters, too, I believe.




Re: Linux Core Consortium

2004-12-11 Thread Martin Michlmayr
* Florian Weimer [EMAIL PROTECTED] [2004-12-11 12:36]:
  Anyone, developer or non-developer, can help fix toolchain problems.
  However, the only people who can work on the testing-security
  autobuilders are ... the security team and the ftp-masters?
 
 It's about infrastructure, so the security team is out (they are just
 users of this infrastructure, but they don't administrate it).  Most
 ftp-masters, too, I believe.

The archive tools are available (http://cvs.debian.org/dak/?cvsroot=dak)
and there's a long TODO list.  So while you might not be able to
help set up the infrastructure itself, you could be involved in
writing the tools running the infrastructure (and this might be a
good way to become an ftpmaster later).

-- 
Martin Michlmayr
http://www.cyrius.com/




Re: Linux Core Consortium

2004-12-11 Thread Steve Langasek
On Thu, Dec 09, 2004 at 03:39:55PM -0500, Ian Murdock wrote:
 You've just described the way the LSB has done it for years, which thus
 far, hasn't worked--while there are numerous LSB-certified distros,
 there are exactly zero LSB-certified applications. The reason for this
 is that substantially the same isn't good enough--ISVs want *exactly
 the same*, and there's a good reason for that, as evidenced by the fact
 that while Debian is technically (very nearly) LSB compliant, there are
 still a lot of edge cases like file system and package namespace
 differences that fall outside the LSB that vastly complicate the
 "certify to an ABI, then support all distros that implement
 the ABI as defined by whether or not it passes a test kit" model.

Well, my first question is why, irrespective of how valuable the LSB itself
is to them, any ISV would choose to get their apps LSB certified.  The
benefits of having one's distro LSB certified are clear, but what does an
LSB certification give an ISV that their own internal testing can't?  Or do
you really mean there are no ISVs *writing* to the LSB?

Secondly, is this merely conjecture about the needs of ISVs, or have you (or
someone else involved with the LCC) actually talked to people who've said
these things?  If this initiative is truly a response to the needs of ISVs,
it should be fairly easy to publically specify the LCC's scope up front so
that Debian can decide whether there's any sense in trying to get involved.

The fact that ISVs would be interested in package namespace differences at
all worries me; it suggests to me that either the scope of the LSB simply
needs to be broadened to meet their needs, or these ISVs are not in tune
with the goals of the LSB as it is, and no amount of harmonizing package
namespaces will address their real concerns.

 I'm not knocking the LSB--by definition, the LSB codifies existing
 standards, i.e., things everyone already agree with. The things
 we're talking about here (package naming differences, network
 configuration differences, all that) are clearly points of disagreement
 between distributions (perhaps backed more by inertia than by anything
 else). The LCC aims to complement the LSB by agreeing on a single set of
 solutions for these edge cases, then by putting the necessary glue in
 place to make sure whatever inertia or otherwise has propagated
 the differences for so long doesn't remain an insurmountable obstacle.
 And with enough mass, the edge cases become stuff we agree on.

Um, what's the concrete use case for a cross-distro standard network
configuration interface?  These are starting to sound like ISVs I don't
*want* touching my Debian systems...

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-11 Thread Florian Weimer
* Steve Langasek:

 Um, what's the concrete use case for a cross-distro standard network
 configuration interface?

VPN software, intrusion detection systems, software for CALEA support,
centralized management software.




Re: Linux Core Consortium

2004-12-11 Thread Andrew Suffield
On Fri, Dec 10, 2004 at 01:42:57PM -0600, Gunnar Wolf wrote:
 Adrian von Bidder dijo [Fri, Dec 10, 2004 at 04:38:10PM +0100]:
   we don't exactly have a strong history of being able to pull off
   timely releases
  
  Did Debian even try?
  
  I didn't follow the woody release too closely, being a Debian newbie at the 
  time, so I don't know.  But - this was my impression - from the start, 
  sarge was prepared with the 'we release when we're ready' idea, which makes 
  everybody feel that they have more than enough time.
 
 Yes, it did. Debian has long tried to shorten the release cycles,
 without any success. That's the reason why Testing was introduced
 (after Slink, IIRC). I got involved in Debian close to the Woody
 release. We were quite optimistic that Sarge would release in ~1 year

Who was? Everybody with any sense knew that it wasn't going to happen.

 There are many proposals to make Etch and future releases come out
 sooner, please check them at
 http://wiki.debian.net/index.cgi?ReleaseProposals

Every single one of these falls into one of these four groups:

(a) Give up (and maybe do something else entirely, like making
unstable releases)

(b) Split Debian into pieces, which we haven't tried before but we
know won't make the pieces any easier to release (and what's the
use of releasing a 'core' chunk early before there are any
applications to run on it?)

(c) Stuff that we've tried before and abandoned (like freezing
unstable)

(d) Stuff that isn't related to making releases faster

-- 
  .''`.  ** Debian GNU/Linux ** | Andrew Suffield
 : :' :  http://www.debian.org/ |
 `. `'  |
   `- --  |


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-10 Thread Adrian von Bidder
On Friday 10 December 2004 06.15, Gunnar Wolf wrote:
 John Goerzen dijo [Thu, Dec 09, 2004 at 09:40:51PM -0600]:

  we could participate in this organization even if we didn't take
  their packages?  That is, perhaps we could influence the direction to
  a more useful one?

 Then we would be non-participants, we would be just bitchers

No, I don't think so.  I think what Bruce and Ian are aiming at is
 - Debian can get influence in LCC, so
 - some things LCC does might actually make sense, so Debian does these 
things in the way LCC does.
 - other things will be done in LCC-space, that will not make sense in 
Debian, so Debian can still do it in its own way.

What is the benefit? The divergence between LCC and Debian will still be 
smaller than when Debian just stays outside. So
 - vendors may offer compatibility to LCC with manageable overhead (Ubuntu, 
perhaps?)
 - porting LCC applications to Debian is limited to those small areas where 
LCC and Debian diverge.

I think about things like hardware detection and autoconfiguration, where 
there's a lot of development right now, and there are a lot of different 
solutions.  In many cases, the various solutions are more or less 
equivalent and things are done differently mainly because of personal taste 
of whoever does the implementation.  Having a voice in how LCC does these 
things and doing it the same way in Debian, in these cases, would be a Very 
Good Idea(tm), I feel.

-- vbi

-- 
featured link: http://fortytwo.ch/gpg/intro


pgpT84fNXbZ2P.pgp
Description: PGP signature


Re: Linux Core Consortium

2004-12-10 Thread Michael Banck
On Thu, Dec 09, 2004 at 12:40:29PM -0500, Ian Murdock wrote:
 Let me first say unequivocally that the LCC is very interested in
 getting Debian involved. The question has always been: How do we do
 that? 

I think there is one obvious answer to this question: 'Learn from
history'.

1. Unix and UnitedLinux failed. The LSB partly succeeded but has no practical
importance.

2. GNOME succeeded for the desktop.

The reasons why the above failed have already been outlined in this
thread and one quote from Bruce sums it up pretty well: 'The members
considered that they had proprietary value at the level at which they
were collaborating'.

The reason why GNOME succeeded is because it builds a solid, predictable
and free base for vendors and distributions to build on. Every major
distribution which is interested (mostly RedHat, Novell and Sun) has
people working on GNOME and collaborating with each other.

The other reason why GNOME succeeded is because it spectacularly managed
to reinvent itself to make it feasible for others to build upon it.
Before those mentioned above used GNOME as their base, it was pretty
much similar to what Debian is today: No proper release schedules,
delays and not much of a broad and far-reaching vision.

So I think the obvious conclusion to the above answer ('learn from
history') is: 


*** The interested parties of the LCC should pick Debian as a base and
Debian should make this possible. ***


Rather than everybody just throwing all their stuff in together and
mixing it up. 

Of course, this would also mean for Debian to change. Debian is free
and solid today, but not predictable. Thus:

 * We should commit to strict release cycles of a base system others
   (and Debian itself) can build value upon.

 * We should probably also commit to a set of core architectures which
   *need* to be bug-free on release, while the rest *should* be, but
   would not delay the release.

 * We should look into having a reality-check with respect to licensing
   and the way we treat each other.

On the other hand, this would also mean: The other partners should get
involved in Debian development, both by getting their toolchain/base
developers into the Debian project and possibly accepting Debian
developers into their ranks. 

All this could not be done easily, but it is the only viable solution
for a solid, free and predictable base system. There is no alternative
to it.


thanks,

Michael




Re: Linux Core Consortium

2004-12-10 Thread Ron Johnson
On Thu, 2004-12-09 at 21:40 -0600, John Goerzen wrote:
 On Thu, Dec 09, 2004 at 07:08:48PM -0800, Russ Allbery wrote:
  Bruce Perens [EMAIL PROTECTED] writes:
  
  I think that tying core Debian packages to the Red Hat boat anchor is a
  horrible, horrible idea.
 
 I tend to agree with sentiments like this, but didn't Bruce mention
 that we could participate in this organization even if we didn't take
 their packages?  That is, perhaps we could influence the direction to
 a more useful one?
 
 If that is the case, it seems zero risk to me.

Yes, this is the bottom line: it does not negatively impact Debian
for (for example) the DPL to go talk/email/IRC with the LCC
representatives.

If Debian's concerns can't be satisfactorily resolved, then Debian
says "thanks, but no thanks", and continues down its current path.
It's *that* simple.

-- 
-
Ron Johnson, Jr.
Jefferson, LA USA
PGP Key ID 8834C06B I prefer encrypted mail.

Vegetarian - an old Indian word meaning 'lousy hunter'.



signature.asc
Description: This is a digitally signed message part


Re: Linux Core Consortium

2004-12-10 Thread Marco d'Itri
On Dec 09, Ian Murdock [EMAIL PROTECTED] wrote:

   Let me first say unequivocally that the LCC is very interested in
   getting Debian involved. The question has always been: How do we do
   that?
  As usual: by sending patches.
 So, the flow can only be unidirectional?
No, interested developers can subscribe to the mailing list or whatever
they need to do to participate.
It's not that I don't like the idea of cooperation in defining things
like sonames or some programs having an upstream maintainer instead of
being independently patched to death by each distribution (especially
for mature or orphaned packages like ppp, tcp-wrappers, netkit, etc...),
but I can't see any benefit in blindly committing to some standard, as
it may mean lowering the quality of Debian.

  And which I doubt we will get with LCC, since the kernel is the most
  important piece which needs to be certified.
 The common core will include a common kernel. See the FAQ at
Christoph Hellwig already explained the obvious problem with this.

-- 
ciao, |
Marco | [9687 apucCy4LNj8KQ]


signature.asc
Description: Digital signature


Re: Linux Core Consortium

2004-12-10 Thread Ron Johnson
On Thu, 2004-12-09 at 23:15 -0600, Gunnar Wolf wrote:
 John Goerzen dijo [Thu, Dec 09, 2004 at 09:40:51PM -0600]:
   I think that tying core Debian packages to the Red Hat boat anchor is a
   horrible, horrible idea.
  
  I tend to agree with sentiments like this, but didn't Bruce mention
  that we could participate in this organization even if we didn't take
  their packages?  That is, perhaps we could influence the direction to
  a more useful one?
  
  If that is the case, it seems zero risk to me.
 
 Then we would be non-participants, we would be just bitchers, telling
 everybody how fucked-up their process and QA are. We would gain
 nothing, and we would lose as everybody would say that Debian refuses
 to play together with the guys after giving an initial 'yes'. And no,
 no ISV would certify Debian just because Debian sits and bitches.

There are diplomatic ways to say, "your processes and QA are all
fucked up."

We'll just have to send someone who knows how to do that. :)

-- 
-
Ron Johnson, Jr.
Jefferson, LA USA
PGP Key ID 8834C06B I prefer encrypted mail.

If you don't know how to do something, you don't know how to do
it with a computer.
Anonymous



signature.asc
Description: This is a digitally signed message part


Re: Linux Core Consortium

2004-12-10 Thread Greg Folkert
On Thu, 2004-12-09 at 21:35 -0800, Philip Miller wrote:
 Greg Folkert wrote:
  Many reasons people come to Debian... Distributed Binaries is not one of
  them.
 
 If you think this isn't a reason to use Debian, I, as a long-time user, will 
 tell you that 
 you're dead wrong. I use Debian because there exist packages for most every 
 popular piece 
 of free software out there, and I will never have to use an untrusted binary 
 package to 
 install it conveniently. Even when it comes to installing software that's not 
 in the 
 Archive, I can safely install it from source, with the assurance that none of 
 its files 
 will be mixed in with any files installed by the package management system 
 (not the case 
 with most 3rd-party RPMS).


Should, umm, clarify: Distributed Binaries == Binaries Built and shoved
into Debian by an External Entity or 3rd Party.

I rarely, RARELY compile a package with dpkg-buildpackage. When I do, it
is for a local modification to workaround a hardware, security or
performance issue before it is patched or fixed in Debian. Typically the
only 3rd Party Binaries I use are Games or Business Critical
(non-free/commercial) applications as deemed by the PHBs that be.

 I am doing some sysadmin work involving RedHat Enterprise Linux 3 systems, 
 and I will tell 
 you that they do a terrible job of maintaining a binary distribution. 
 Standing by 
 themself, let alone compared to Debian, they do no integration testing of the 
 packages 
 they release and distribute. For example, this past summer, after a new 
 server 
 installation, we had to build and install a local copy of Perl, because the 
 version they 
 distributed was completely incompatible with both mailscanner and amavisd-new 
 due to 
 module bugs. This sort of brown-paper-bag error in a release is unthinkable 
 in Debian, 
 precisely because of the QA that our exact distributed binaries go through 
 (and this 
 particular issue was actually caught in testing, as it should have been).

I have done and continue to manage RedHat AS/ES installations. I do
these primarily via ssh (one is on another continent; most are in the
US, though states away). I can tell you first hand the terrible
fixes I have had to force onto some of those systems that just wouldn't
work with Oracle or Tuxedo or Websphere or <insert other Enterprise
Application> mainly because of this lack of QA from RedHat. Regression
testing, or integration testing as you call it, is by far the best
reason to come to Debian. When I think of Debian and Binary... those are
Binaries Built by Buildd-s in the Debian Submission and Acceptance
process. Not on lumpy.redhat.com or some other external host that I have
zero real knowledge of.

And for your Perl Issue, you could have just CPAN updated those Perl
Modules (a one-liner; see below). I have had to do that many times. There
are certain things I
like about RedHat... one is the rpmbuild setup. If one could employ
policies in an RPM build that are applied to DEB builds... I'd think
that 99.9% of the issues we speak about in RedHat would be solved.
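
For the record, the CPAN route is a one-liner per module; the module
name below is purely an example:

  # Pull the fixed module straight from CPAN instead of waiting
  # on a vendor package.
  perl -MCPAN -e 'install Mail::SpamAssassin'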

So, I guess you misread what I meant. Or I wasn't as clear as I should
have been. Either way, you should now understand my position a bit
better.
-- 
greg, [EMAIL PROTECTED]
REMEMBER ED CURRY! http://www.iwethey.org/ed_curry

Onerous congratulations on your conceptual development of obliteration
concerning telephones, lobsters and fish!


signature.asc
Description: This is a digitally signed message part


Re: Linux Core Consortium

2004-12-10 Thread Greg Folkert
On Fri, 2004-12-10 at 12:50 +0100, Michael Banck wrote:
 On Thu, Dec 09, 2004 at 12:40:29PM -0500, Ian Murdock wrote:
  Let me first say unequivocally that the LCC is very interested in
  getting Debian involved. The question has always been: How do we do
  that? 
 
 I think there is one obvious answer to this question: 'Learn from
 history'.
 
 1. Unix and UnitedLinux failed. The LSB partly succeeded but has no practical
 importance.
 
 2. GNOME succeeded for the desktop.
 
 The reasons why the above failed have already been outlined in this
 thread and one quote from Bruce sums it up pretty well: 'The members
 considered that they had proprietary value at the level at which they
 were collaborating'.
 
 The reason why GNOME succeeded is because it builds a solid, predictable
 and free base for vendors and distributions to build on. Every major
 distribution which is interested (mostly RedHat, Novell and Sun) has
 people working on GNOME and collaborating with each other.
 
 The other reason why GNOME succeeded is because it spectacularly managed
 to reinvent itself to make it feasible for others to build upon it.
 Before those mentioned above used GNOME as their base, it was pretty
 much similar to what Debian is today: No proper release schedules,
 delays and not much of a broad and far-reaching vision.
 
 So I think the obvious conclusion to the above answer ('learn from history') 
 is: 
 
 
 *** The interested parties of the LCC should pick Debian as a base and
 Debian should make this possible. ***

Whew, that would be tough. But I like it. At least for Core. 

 Rather than everybody just throwing all their stuff in together and
 mixing it up. 
 
 Of course, this would also mean for Debian to change. Debian is free
 and solid today, but not predictable. Thus:
 
  * We should commit to strict release cycles of a base system others
(and Debian itself) can build value upon.

So we would detach Core from everything else; perhaps we should then
also define kernels with specific patches to accommodate certain
situations or applications. IOW have flavours of kernels be something
like:

kernel-image-kern-ver-rev-arch-up/smp/mpp-db-name
kernel-image-kern-ver-rev-arch-up/smp/mpp-app-server
kernel-image-kern-ver-rev-arch-up/smp/mpp-erp-solution
kernel-image-kern-ver-rev-arch-up/smp/mpp-web-serving

kernel-image-kern-ver-rev-arch-up/smp/mpp-transaction-processing
kernel-image-kern-ver-rev-arch-up/smp/mpp-workstation
kernel-image-kern-ver-rev-arch-up/smp/mpp-gaming
kernel-image-kern-ver-rev-up/smp/mpp-generic

Those Distributions would keep the base-patchset up-to-date while the
buildd machines compile for the architectures, and we as Debian would
just continue managing these patchsets to work on all arches.

This would require those other Distros to become more policy driven. How
would we split this out? Would we then have Core Only DDs? Would we
still have the ability to get security fixes out the door in time?
Should we do revision releases like say we do for Woody right now:
3.0r1/2/... would this work? Doing incremental security/major bug-fix
releases similar to the way Microsoft does it? (No not really, but the
idea is similar) Should we then have a Core Only
Stable/Testing/Sid/Experimental?

  * We should probably also commit to a set of core architectures which
*need* to be bug-free on release, while the rest *should* be, but
would not delay the release.

I disagree here: when WOULD they get worked on then? Release pending is
a good motivator. But as we see now, it is not *THE HOLY GRAIL* of
Motivators. Some of the *OLDER* arches whose buildds can't keep up as of
now might be candidates, but Dang, what a way to slam the door on them
(like 32bit Sparc, M68K, others).

  * We should look into having a reality-check with respect to licensing
and the way we treat each other.

Now this... needs to happen anyway.

 On the other hand, this would also mean: The other partners should get
 involved in Debian development, both by getting their toolchain/base
 developers into the Debian project and possibly accepting Debian
 developers into their ranks. 

Again, this should happen period.

 All this could not be done easily, but it is the only viable solution
 for a solid, free and predictable base system. There is no alternative
 to it.

Unfortunately, here I agree with you. It will be the toughest thing
on the planet. A sub-structure of buildds will need to make DEB
packages as well as RPMs and, lest we forget, TGZ/other package mgmt
formats.

This is a big job, at which I believe nobody will succeed. Which is too
bad.
-- 
greg, [EMAIL PROTECTED]

The technology that is
Stronger, better, faster: Linux


signature.asc
Description: This is a digitally signed message part


Re: Linux Core Consortium

2004-12-10 Thread Greg Folkert
On Fri, 2004-12-10 at 06:31 -0600, Ron Johnson wrote:
 On Thu, 2004-12-09 at 23:15 -0600, Gunnar Wolf wrote:
  John Goerzen dijo [Thu, Dec 09, 2004 at 09:40:51PM -0600]:
I think that tying core Debian packages to the Red Hat boat anchor is a
horrible, horrible idea.
   
   I tend to agree with sentiments like this, but didn't Bruce mention
   that we could participate in this organization even if we didn't take
   their packages?  That is, perhaps we could influence the direction to
   a more useful one?
   
   If that is the case, it seems zero risk to me.
  
  Then we would be non-participants, we would be just bitchers, telling
  everybody how fucked-up their process and QA are. We would gain
  nothing, and we would lose as everybody would say that Debian refuses
  to play together with the guys after giving an initial 'yes'. And no,
  no ISV would certify Debian just because Debian sits and bitches.
 
 There are diplomatic ways to say, "your processes and QA are all
 fucked up."
 
 We'll just have to send someone who knows how to do that. :)

And just who the he(double-toothpick) would you suggest? 
Scott James Remnant? :^)
-- 
greg, [EMAIL PROTECTED]

The technology that is
Stronger, better, faster: Linux


signature.asc
Description: This is a digitally signed message part


Re: Linux Core Consortium

2004-12-10 Thread Wouter Verhelst
Op vr, 10-12-2004 te 12:50 +0100, schreef Michael Banck:
 *** The interested parties of the LCC should pick Debian as a base and
 Debian should make this possible. ***
 
 Rather than everybody just throwing all their stuff in together and
 mixing it up. 
 
 Of course, this would also mean for Debian to change. Debian is free
 and solid today, but not predictable. Thus:
 
  * We should commit to strict release cycles of a base system others
(and Debian itself) can build value upon.

IOW, split off the release of the base system, and make it some entity
that stands by itself. Hmm, isn't that what LCC suggests we do?

  * We should probably also commit to a set of core architectures which
*need* to be bug-free on release, while the rest *should* be, but
would not delay the release.

This isn't necessary, unless you can show me one release which was
delayed because a certain architecture wasn't in shape.

  * We should look into having a reality-check with respect to licensing
and the way we treat each other.

You'll have to explain this one to me.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune



