Re: Automatic Debug Packages

2009-08-17 Thread Theodore Tso
On Fri, Aug 14, 2009 at 05:50:50PM -0700, Russ Allbery wrote:
 Peter Samuelson pe...@p12n.org writes:
  [Emilio Pozuelo Monfort]
 
  We haven't agreed on whether there should be one ddeb per source or
  per binary package, so I would leave this still opened.
 
  Maybe I'm losing track of things here, but it seems to me that everyone
  except you is saying one ddeb per binary.  And then you say sure, we
  could do that if we need to.  How many times has this happened so far
  in the thread?  I haven't been keeping count.
 
 Joerg was also advocating one ddeb per source package in the summary
 message that he sent about the ftp-master approach, and Emilio has
 mentioned a few times that ftp-master needs to buy in on that decision
 (which I agree with).  I'm not sure if I'm missing some concern from the
 ftp-master side.

So if we have one ddeb per source package, which generates multiple
binary .debs for different libraries --- say, libext2fs, libcom_err,
and libss, to take a completely random example --- and the user
installs different versions of said libraries coming from different
versions of the source package, won't there be a problem if there is
only a single ddeb per source package?  I assume you can't install
multiple ddebs coming from different versions of the same source
package at the same time, since the pathnames would conflict, right?

- Ted





Re: net-tools future

2009-03-21 Thread Theodore Tso
On Sun, Mar 15, 2009 at 02:30:18PM -0300, Martín Ferrari wrote:
 
 About the wrapper scripts:
  * ifconfig, route: the most difficult ones, both can be replaced by
 calls to ip, maybe except for some obscure options.

A suggestion about the wrapper scripts: it would be nice if they had a
mode, enabled by an environment variable or a command-line option,
which printed the equivalent ip commands they issued, so that people
can learn the new ip interface if they are interested.
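
As a rough sketch of what I mean --- the IFCONFIG_SHOW_IP variable
name and the exact translations below are just illustrations, not a
proposed design:

    #!/bin/sh
    # Sketch of an ifconfig wrapper that translates a few common
    # invocations to ip(8), and prints the equivalent ip command
    # when IFCONFIG_SHOW_IP is set in the environment.
    run() {
        if [ -n "$IFCONFIG_SHOW_IP" ]; then
            echo "ifconfig: equivalent to: ip $*" >&2
        fi
        exec ip "$@"
    }
    case "$#:$2" in
        0:*)    run addr show ;;            # ifconfig
        1:*)    run addr show "$1" ;;       # ifconfig eth0
        2:up)   run link set "$1" up ;;     # ifconfig eth0 up
        2:down) run link set "$1" down ;;   # ifconfig eth0 down
        *)      echo "ifconfig wrapper: unhandled usage" >&2; exit 1 ;;
    esac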

Also, I'd recommend making the scripts carefully cover 100% of the
ifconfig, route, and netstat command-line interfaces, since these
commands have a very long history, are extremely well known by many
sysadmins, and are used in many, *many* shell scripts.

- Ted





Re: -dbg packages; are they actually useful?

2009-03-03 Thread Theodore Tso
On Tue, Mar 03, 2009 at 10:12:22PM +, Steve McIntyre wrote:
 
 I'm looking at my local mirror (slowly) update at the moment, and I've
 got to wondering: are the large -dbg packages actually really useful
 to anybody? I can't imagine that more than a handful of users ever
 install (to pick an example) the amarok-dbg packages, but we have
 multiple copies of a 70MB-plus .deb taking up mirror space and
 bandwidth. I can understand this for library packages, maybe, but for
 applications?

There are people working on ways of compressing the debuginfo
information, and I've been told they might have results within a
couple of months.  Part of the problem is that, depending on how the
package is built, the -dbg packages can be huge, which makes the
cost/benefit ratio somewhat painful.

If the -dbg files were more like these sizes:

 224  e2fslibs-dbg_1.41.3-1_i386.deb       52  libss2-dbg_1.41.3-1_i386.deb
 452  e2fsprogs-dbg_1.41.3-1_i386.deb      48  libuuid1-dbg_1.41.3-1_i386.deb
  76  libblkid1-dbg_1.41.3-1_i386.deb      48  uuid-runtime-dbg_1.41.3-1_i386.deb
  44  libcomerr2-dbg_1.41.3-1_i386.deb

I doubt there'd be too much concern.

- Ted





Re: Forthcoming changes in kernel-package

2009-02-26 Thread Theodore Tso
On Fri, Feb 20, 2009 at 12:56:30PM -0600, Manoj Srivastava wrote:
  BTW, I have a set of patches you might want to consider.  I'll file
  them in the BTS if you're currently reworking make-kpkg.
 
 Please. I have been thinking about the request you made for
  debugging symbols being packaged, and now I do have some time to play
  with building kernels again, I would like to see that in Squeeze.

Sorry for the delay; I've sent you the private patches I've been using
for make-kpkg.  Some of them are quite hackish, and some of them you
may have fixed in other ways, so I won't feel bad at all if you need
to significantly rework them before you can merge them into your
master sources.

The BTS bug numbers are #517290, #517291, #517292, and #517293.

Best regards,

- Ted





Re: Is the FHS dead ?

2009-02-24 Thread Theodore Tso
On Tue, Feb 24, 2009 at 08:20:31AM -0600, Gunnar Wolf wrote:
 
 Interesting. And yes, illustrative of the historically (and, should I
 add, ridiculous? No, I'd better not ;-) ) rivality between Linux and
 the *BSDs, big egos included. 

Well, the last time we tried to make reasonable accommodations for the
*BSDs, some of the biggest whiners^H^H^H^H^H^H^H complaints came from
Debian.  In fact, some later complaints from Debianites about the lack
of /usr/libexec are largely the fault of Ian Jackson, who
***strenuously*** opposed /usr/libexec on the mail thread which I
quoted.  In fact, as I recall, he threatened to rally all of Debian
NOT to support the FSSTND/FHS if we didn't drop /usr/libexec from the
draft spec.   Ah, history.

The painful fact of the matter is that anytime a draft like the FHS
forces any distribution or OS to change, there will be opposition.  In
some cases it will be principled and constructive.  In other cases, it
will question the spec writers' technical judgement, ethics, and even
their paternity.

 However, Linux's position WRT the commercial Unixes has radically
 shifted in the last decade. Linux is no longer considered a toy, and
 is taken seriously into account. So, even with the big inertia that
 might hamper more than one initiative, perhaps the FHS could be pushed
 in collaboration with their respective companies? At least, I'd be
 surprised if -say- the Solaris or HPUX people weren't open to
 discussion leading to better interoperability.

Last I heard, HPUX is on maintenance life-support, and they don't have
enough engineers to keep their userspace up to date.

And as far as Solaris is concerned, they currently have a project to
update to a 16-year-old shell (ksh93) in their distribution.  Solaris
has constraints on its filesystem layout so it can stay compatible
with the 1986-era SVID specification.  Can you really see Linux
distributions embracing a three-decade-old awk installed as /bin/awk,
and a two-decade-old awk as /usr/xpg4/bin/awk, with no GNU extensions
allowed, just to be compatible with Solaris?

I thought not.

Look, the proprietary Unix systems as a whole are losing market share;
Linux is gaining market share.  The *only* proprietary Unix that
gained market share last year was AIX, and it has a Linux
compatibility subsystem, AIX 5L.  Fundamentally, I just don't see
Debian, Red Hat, SuSE, Ubuntu, et al. all making changes in where
things are installed just to accommodate Solaris --- or any other
legacy Unix system.

Do you?  Especially given the flames raised last time by folks like
Ian Jackson --- who is, I might remind you, still on the Debian
Technical Committee and is still considered one of Debian's technical
leaders?

Maybe I'm being too cynical, but I just don't see it.

   - Ted





Re: Is the FHS dead ?

2009-02-20 Thread Theodore Tso
On Thu, Feb 19, 2009 at 05:24:24PM +0200, Guillem Jover wrote:
 Reiviving the FHS is great! Something that is bothering me a bit,
 though, is that historically it seemed to try to cater to Unix in
 general, not only Linux, even if most of the participants were coming
 from the Linux world. So hosting it under the LSB auspices might deter
 other Unix vendors to consider it or get involved, which would seem like
 a regression. Maybe hosting it on a more neutral place would be better?

Well, realistically we didn't have very good participation from anyone
other than one or two *BSD folks, and at the time some of the changes
that were made for compatibility with *BSD (and, to be fair, to be
closer to the rest of the Unix world) caused no small amount of
controversy.

Consider the following thread from debian-devel approximately 8 years
ago (which was about the last time we had any *BSD participation):

http://www.mail-archive.com/debian-devel@lists.debian.org/msg11462.html

Realistically, I think we will have a hard enough time dealing with
the places where the various Linux distributions have chosen
different pathnames (i.e., differences between Debian and Red Hat, or
Red Hat and Novell, et al.).  If it turns out that all Linux
distributions do it one way, and OpenBSD has chosen a different
hierarchy --- let's be honest with ourselves, would we really try to
engineer change at Debian, Ubuntu, Red Hat, SLES, etc., just as a
peace offering to keep Theo de Raadt of OpenBSD happy?   I just don't
see it.

And if it's not going to happen, we shouldn't set up expectations
that we would try to change all of Linux just because a *BSD happened
to have chosen a different pathname.

Regards,

- Ted





Re: Forthcoming changes in kernel-package

2009-02-18 Thread Theodore Tso
On Mon, Feb 09, 2009 at 12:14:49AM -0600, Manoj Srivastava wrote:
 Hi,
 
 This is a heads up for a major change in kernel-package, the
  tool to create user packaged kernel images and headers; which will
  make the make-kpkg script far less error prone, and far more
  deterministic.
 
a. Every invocation of kernel-package will remove ./debian directory,
   and regenerate the changelog and control files. This will get rid
   of any remaining issues with the ./debian directory getting out of
   sync with the kernel sources; and will allow people to make small
   tweaks to the kernel sources and have  make-kpkg reflect those
   changes.

Is there going to be a way for people to replace the changelog with
one that contains useful information in that case?  I've been doing
this by running make-kpkg configure and then editing the
debian/changelog file afterwards...

BTW, I have a set of patches you might want to consider.  I'll file
them in the BTS if you're currently reworking make-kpkg.

- Ted





Re: Is the FHS dead ?

2009-02-18 Thread Theodore Tso
On Mon, Feb 16, 2009 at 06:08:17PM +0200, Teodor wrote:
 On Mon, Feb 16, 2009 at 5:14 PM, Josselin Mouette j...@debian.org wrote:
  Le lundi 16 février 2009 à 14:20 +, Matthew Johnson a écrit :
  the FHS should certainly continue to exist and be coordinated between
  distros though. I agree that if it needs taking over we should do so in
  cooperation with the other big distros.
 
  Certainly. It's just that someone needs to start the work.
 
 There is no need to create another standard, FHS is being continued in
 the LSB project at linuxfoundation.org / freestandards.org. FHS was
 the starting point for LSB.
 Even if the LSB project has been criticized by the Debian project,
 this seems to become the de facto file hierarchy standard for
 Linux. It is not perfect, but is being adopted by the majority of
 Linux distros.

It's true that the FHS work group has been largely moribund.  It's
something I've been working on for the last couple of weeks, actually.
I asked Dan Quinlan (the former chair of the FHS workgroup) and Rusty
Russell (who did most of the changes the last time it was updated)
whether they had any objections to having the LSB working group take
it over, and they had no problem with that.  See:

http://bugs.linuxbase.org/show_bug.cgi?id=2511

So the plan is that the FHS will be updated in the context of the LSB
workgroup, since the FHS mailing list has largely been taken over by
spam, and it seemed that most of the people who were interested in it
were active LSB work group members.

That being said, if there are things that the folks on this thread
are interested in working on in terms of updating the FHS (which is
clearly badly in need of updating), I invite them to join us and make
proposals on the lsb-disc...@lists.linuxfoundation.org mailing list.
Indeed, getting more folks interested in updating the FHS would be
really great!

Regards,

- Ted

P.S.  Or feel free to submit bugs to: http://bugs.linuxfoundation.org,
component FHS.  It may take a while for us to get set up, but we will
be paying attention to them.





Re: Results for General Resolution: Lenny and resolving DFSG violations

2008-12-29 Thread Theodore Tso
On Sun, Dec 28, 2008 at 09:55:36PM -0800, Thomas Bushnell BSG wrote:
 
 I would prefer this.  But I am afraid of it, and so I would vote against
 it.  I am afraid that there are folks in the project who really don't
 care if Debian is 100% free--even as a goal.  I think that Ted Tso is
 even one of them.

Fear is a terrible thing to use as the basis of decisions and of
votes; consider that it was fear that drove many people to vote for
Proposition 8 in California.

As I said in my recent blog entry[1], I believe that 100% free is a
wonderful aspirational goal --- other things being equal.  However, I
don't believe it to be something that should be Debian's Object of
Ultimate Concern; there are other things that need to be taken into
consideration --- for example, allowing various machines owned by
Debian to be able to use their network cards might be a nice touch.

[1] http://thunk.org/tytso/blog/2008/12/28/debian-philosophy-and-people/

In other words, I believe in 100% Free as a goal; but I'm not a
fundamentalist nor a fanatic about it.

 I wish we could have in the world of GNU/Linux one, just one,
 please--just one--distribution which really took free software as of
 cardinal importance.

As others have pointed out, there is such a distribution, gNewSense; in
fact, if you look at [2], you will find that there are five others:
Ututo (the first fully free GNU/Linux distribution recognized by the
FSF), Dynebolic, Musix GNU+Linux, BLAG, and Trisquel.  So not only is
there one distribution that takes free software to be of cardinal
importance, there are six in the world already.  Does Debian really
need to be the seventh such distribution?

[2] http://www.gnu.org/links/links.html#FreeGNULinuxDistributions

 In my opinion, developers who are unwilling to abide by the Social
 Contract in their Debian work should resign.  But they don't, and this
 is what has me afraid.

That would be like saying that people who don't agree with Proposition
Eight's amendment to the California constitution should leave the
state, as opposed to working to change it.  I prefer to stay within
Debian in the hope that I can help change it into something which I
think is better; at the very least, reverting the 1.1 version of the
Social Contract, and perhaps clarifying it.  I will note that Option
1, Reaffirm the Social Contract, came in *dead* *last*:

                          Option
             1     2     3     4     5     6     7
            ===   ===   ===   ===   ===   ===   ===
Option 1     -    46    60    72    73    89   117
Option 2   281     -   160   160   171   177   224
Option 3   255    61     -   125   137   151   204
Option 4   253   121   146     -   160   166   194
Option 5   234   105   128   135     -   136   191
Option 6   220   118   134   125   134     -   180
Option 7   226   129   145   153   160   169     -

It was beaten by options 2 (281 - 46 = 235), 3 (255 - 60 = 195), 4
(253 - 72 = 181), 5 (234 - 73 = 161), 6 (220 - 89 = 131) and 7/FD (226
- 117 = 109).  Put another way, _very_ few people (by AJ's
calculation, 9.3%) were willing to put a fundamentalist
interpretation of the Social Contract ahead of releasing Lenny.

I don't think encouraging 90% of the Debian Developers to resign would
be a particularly constructive suggestion.  Fixing the Social Contract
so it reflects our common understanding of what's best for the Debian
Community, both users and developers, is IMHO a better choice than
striving to become the Seventh Fundamentalist Linux Distribution on
the FSF's approved list.

Best regards,

- Ted





Re: Results for General Resolution: Lenny and resolving DFSG violations

2008-12-29 Thread Theodore Tso
On Mon, Dec 29, 2008 at 03:02:41PM +1000, Anthony Towns wrote:
 Using the word software as the basis for the divide might be too much:
 we've already done a lot of work restricting main to DFSG-free docs, and
 I think it makes sense to keep that. Having main be a functioning bunch
 of free stuff with a minimal and decreasing amount of random non-free
 stuff we still need to support it works well, it seems to me.

I'm not convinced that leaving important parts of Debian undocumented
over doctrinal disputes about licensing terms is actually in the best
interests of users, but I recognize that's a position that people of
good will can (and have) disagreed upon.  If it were up to me, I would
have Debian work towards a system where packages could be tagged to
allow common user preferences (we won't be able to make everyone
happy) to be enforced via which packages users can see/install.

Some users are OK with GFDL documentation, others are not; some users
are OK with non-free firmware, others are not.  So why can't we tag
packages appropriately, and reflect those tags in a configuration
file, so that people who are passionate about some particular issue
can decide what tradeoffs they are willing to make with respect to
usability and/or documentation, based on how fundamentalist they want
to be with regards to the 100% Free goal/requirement?

Separating packages into separate sections to support these sorts of
policy preferences is a hack; with appropriate tagging, in the long
run we can allow users to be much more fine-grained about expressing
their preferences --- which would be in line with our goal of being a
Universal OS, I think.

 Back in the day, I tried writing a version of the SC that felt both
 inspiring and within the bounds of what we could actually meet. It looked
 like:

I like this a lot.  However, I do have a few nits...

We, the members of the Debian project, make the following pledge:
 
1. We will build a free operating system
 
   We will create and provide an integrated system of free software
   that anyone can use. We will make all our work publically available
   as free software.

Given how literalistic some members of our community can be about
interpreting Foundation Documents, the second sentence is a little
worrying.  I can easily imagine a Free Software Fanatic using the
second sentence as an argument that we must stop distributing the
non-free section, since non-free is, by definition, not Free Software.
And it could easily be argued that the work that Debian Developers do
to package non-free software, which is after all distributed on the
Debian FTP servers and via Debian mirrors, would fall under the scope
of all our work.

I'm not sure what you were trying to state by the second sentence
above; one approach might be to simply strike it from the draft.  Or
were you trying to add the constraint that any work authored by DD's
on behalf of the Debian Project should be made available under a free
software license, even if in combination with other software being
packaged, the result is non-free?

2. We will build a superior operating system
 
   We will collect and distribute the best software available, and
   strive to continually improve it by making use of the best tools
   and techniques available.

I'm worried about the first clause, because of the absolutist word
best in best software available.  Again, some literally-minded
DD's could view this as meaning that the best is the enemy of the
good, and use it as a bludgeon to say that since we have package X, we
should not have packages Y or Z, because X is the *best*.

Again, I'm not sure what you intended to add by the first clause, so
my first reaction would be to strike it and make it shorter/simpler:

We will strive to continually improve the software we collect
and distribute by making use of the best tools and techniques
available.


 I don't think the community clause is terribly well worded, but
 that's what you get when you make stuff up out of whole cloth rather
 than building on previous attempts.

It's not bad.  The one thing that I noted was that community isn't
terribly well defined.  Do we mean the user community?  The developer
community?  Upstream developers?  All of the above?  Adding an initial
phrase or sentence affirming that everyone who touches Debian in some
way (users, developers, upstream) is considered part of the community
--- and then following it with your formulation pledging that members
of the community shall be treated with respect --- would be the way I
would go.

 Anyway, given the last proposal I made [0] went nowhere, unless people
 want to come up with their own proposals, or want to second the above as
 a draft proposal to be improved and voted on, I suspect nothing much will
 change, and we'll have this discussion again in a few years when squeeze
 is looking like releasing.

I would 

Re: Results for General Resolution: Lenny and resolving DFSG violations

2008-12-29 Thread Theodore Tso
On Mon, Dec 29, 2008 at 04:38:25PM +0100, Romain Beauxis wrote:
 
 To me, the social contract is a very good compromise. It states first an 
 idealist achievement, but moderates it by some pragmatism concerning the 
 users. unproductive discussions fall into the same category, when they do 
 not end as flames or trolls.

It's a claim which has never been true.  Debian shall _remain_ 100%
free?  Remain implies that at some stage Debian had reached such a
state of Nirvana.  That has never been the case!  The disputes that we
have had at each stable release since the 1.1 revision to the Social
Contract have been precisely because some large, and vocal, set of
developers have not been willing to be pragmatic, but have instead
argued for a very literalistic reading of the Social Contract.

 That is mainly why I am against the notion of Code of Conduct.

I don't see the connection which leads you to be against a Code of
Conduct, but I will note that the Ubuntu CoC does not use any absolute
words.  It merely asks participants to:

* Be considerate
* Be respectful
* Be collaborative
* When you disagree, consult others
* When you are unsure, ask for help
* Step down considerately

These are idealistic goals --- by your own argument, what's wrong with
having them?  We need to moderate them with an understanding that,
human nature being what it is, we will occasionally fail at this
ideal; and then encourage and remind each other to keep striving for
it --- not throw people out of the project when they fail to live up
to such a goal.

Maybe you don't like the name Code of Conduct, because it implies a
certain amount of inflexibility?  If so, maybe a different name would
make you more comfortable?

 Eventually, that is also the same vision that drives me in politics:
 an ideal goal moderated by pragmatism. Not the converse.

That's my vision as well; we might disagree about how far our
pragmatism might take us, but our ideals tell us which direction to
go, even if we are far from it at the moment.  The question is how
much patience do we have, and should we have?

I do feel quite strongly that aspirational goals, if they are going
to be in Foundation Documents, must be clearly *labelled* as
aspirational goals, and not as inflexible mandates that _MUST_ be
kept.  In politics, you can have aspirational ideals such as a
chicken in every pot and two cars in every garage which get used in
campaign slogans, but you don't put such things as a MUST in a
country's constitution.

- Ted





Re: Results for General Resolution: Lenny and resolving DFSG violations

2008-12-28 Thread Theodore Tso
On Mon, Dec 29, 2008 at 12:48:24AM +, Simon Huggins wrote:
 
 I wonder how many DDs were ashamed to vote the titled Reaffirm the
 social contract lower than the choices that chose to release.
 

I'm not ashamed at all; I joined before the 1.1 revision to the Debian
Social Contract, which I objected to then, and still object to now.
If there were a GR which changed the Debian Social Contract by
relaxing the first clause to cover only __software__ running on the
host CPU, I would enthusiastically vote for such a measure.

Also see:

 http://thunk.org/tytso/blog/2008/12/28/debian-philosophy-and-people

- Ted





Re: DFSG violations in Lenny: Summarizing the choices

2008-11-09 Thread Theodore Tso
On Sat, Nov 08, 2008 at 10:24:16PM -0800, Thomas Bushnell BSG wrote:
  Neither does it (currently) contain an exception for debian.org
  machines, or very popular Dell machines with Broadcom ethernet
  firmware.  Great!  Cut them off!!  Let's see how quickly we can get
  users moving to non-official kernels and installers when the official
  ones don't work for them.  Then we can stop fighting about it.  The
  DFSG hard liners can go on using the DFSG free kernels, and everyone
  else can either move to another distribution or use an unofficially
  forked kernel package and installer.
 
 Why not just support it in non-free exactly the way we do other things?
 

Because according to you, Debian isn't allowed to ship any non-free
bits, right?  I assume that includes the installer CD-ROM itself.
And if you need non-free bits in order to download from the non-free
section of the archive, what do you do?  From what I understand, the
Lenny installer currently includes packages from the non-free archive,
and automatically enables the non-free section in order to allow those
users to win.  Is that considered OK by you?

If the proposal is to delay the release to make sure that **all**
non-free bits are moved into the non-free section, what about the
Debian installation CD itself?  If it is true that __Debian__ never
includes any non-DFSG-free bits, I would think that would have to
include the installer CD/DVD image itself, no?

- Ted





DFSG violations in Lenny: Summarizing the choices

2008-11-08 Thread Theodore Tso
On Fri, Nov 07, 2008 at 12:47:01PM +, David Given wrote:
 In which case things have changed within the past couple of years ---
 after all, the whole purpose of the Atheros HAL was to enforce those FCC
 limits. Do you have any references? Like, to an FCC statement of policy
 change? If so, it would be extremely useful to have.

There are corporate lawyers who are very much afraid that the FCC,
if it were alerted to the fact that someone had figured out how to
reverse engineer the HAL and/or the firmware to turn a WiFi unit into
a super radio that could transmit on any frequency, could prohibit
the *hardware* from being sold anywhere in the US.  The US is a
rather large market, and some of these vendors (i.e., HP, Lenovo,
Dell, etc.) sell a very large number of WiFi units in laptops.  Only
a *small* percentage of those units will ever run Linux, and a vastly
infinitesimal percentage of those will run Debian.  So weigh the
downside risk --- not being able to sell, say, iwl4965 chipsets, and
having millions and millions of pieces of silicon suddenly become
useless because the government stops allowing said units to be sold
--- against a very small number of Debian users not being able to use
their wireless unit out of the box, and it's really a no-brainer to
guess how the WiFi manufacturers will react.

So realistically, let's be honest with ourselves.  Not supporting
devices that require non-free firmware is not going to help make the
world a better place.  What it will probably do is this: users, once
they find out that a Debian install will leave various bits and
pieces of their hardware non-functional until they figure out how to
download various magic firmware components, or manually configure the
non-free repository, will probably simply switch to another
distribution, such as Fedora or Ubuntu.  At which point there will be
even *fewer* Debian users, and so Debian will have even *less*
leverage.

Now, if the majority will of Debian is that all bits distributed by
the Debian distribution must be DFSG free, even if it doesn't run on
host processor, and we should hold up the release until this can be
accomplished, that's a legitimate choice.  That choice will have
consequences; in the meantime more users will simply switch to other
distributions, and Debian can be the distribution with a tiny niche
number of users, with developers shaking their fists about how they
are Free, just as OpenBSD users can shake their fists about how they
are Secure (but have almost no users).

Another choice open to Debian is to make it easier for users to opt
into downloading firmware --- perhaps by making it very easy through
the installer to select the non-free section.  That choice also has
consequences.  For one, it won't help in the cases where the non-free
firmware is needed for the system to boot, or to access the network in
order to download the non-free .debs.  (I'm assuming for the sake of
argument that it would be considered verboten to ship non-free
firmware in the Debian installer CD-ROM.)  Fortunately for us, at the
moment I am not aware of large numbers of highly popular laptops or
servers for which non-free firmware is necessary before the system
would be able to access the network.  This could potentially happen in
the future if there are netbooks that only have wifi networking, for
example.

Another consequence of making it easy for users to add non-free to
their repositories so they can download the firmware necessary to
make their hardware useful is that a huge number of users may end up
enabling non-free just to make their hardware work, and then they may
end up installing even more non-free packages on their systems.  It's
much like the argument that the current copyright laws around
downloading music are insane because they increase disrespect for all
laws: we are training an entire generation of users that breaking
copyright law so they can download their favorite music or video
torrents is OK.

Yet another choice Debian could make is to create a new firmware
section; this would allow users to select only non-free firmware,
without accidentally installing other non-free packages.  This has
the advantage of more fine-grained control over what users might want
or not want to install on their systems.  The firmware section would
be just as non-free as the non-free section, but for people for whom
the distinction of running on the host CPU or not has meaning, it
gives them a way of allowing some non-free packages on their systems,
but not others.  For people who feel passionately that they will not
abide any non-free software, they can choose not to install from
either the firmware or non-free sections.
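
(If such a section existed, opting in would be a one-line
sources.list change; the firmware component here is, of course, the
hypothetical part:

    deb http://ftp.debian.org/debian lenny main firmware

and users who want main only, or main plus non-free, would simply not
list it.)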

The final choice which Debian could make is to ignore the problem and
punt making one of the above decisions for yet another release.  This
seems to be the path that the 

Re: DFSG violations in Lenny: Summarizing the choices

2008-11-08 Thread Theodore Tso
On Sat, Nov 08, 2008 at 03:29:44PM -0800, Thomas Bushnell BSG wrote:
 On Sat, 2008-11-08 at 14:11 -0500, Theodore Tso wrote:
  There are corporate lawyers who are very much afraid that the FCC
  could, if they were alerted to the fact that someone had figured out
  how to reverse engineer the HAL and/or the firmware to cause their
  WiFi unit to become a super radio that could transmit on any
  frequency, that the FCC could prohibit the *hardware* from being sold
  anywhere in the US.  
 
 I've heard this claim before.  Can you substantiate it in some way?

Private conversations with a representative of a company who genuinely
wants to do the right thing, but who had to battle lawyers who were
afraid of losing millions and millions of dollars.  I'm honor bound
not to release the names of the engineer(s) and companies involved,
so take that as you will.

 It seems to me that, if this is really true, then the hardware
 manufacturers have been lying to the FCC for years, claiming that the
 user cannot reprogram the card, without explaining that, in fact, it's
 just that users may not know how to, but that they can do so without any
 hardware mucking.

The FCC understands that you can't make it *impossible*.  Even before
software radios, it was understood that someone possessing the skills,
say, of an amateur radio operator might be able to add a resistor or
capacitor in parallel with an RC/LC tuning circuit, modify the length
of the antenna, etc., thus making a radio transmit outside of the
band for which it was type-certified.  A radio manufacturer is not
required to dunk the entire radio in epoxy and make it utterly
*impossible* for someone to modify the radio; on the other hand, if
all it takes is clipping a jumper or cutting a trace on a board, the
FCC does have the power to order that the radio not be sold in the
US.

So just as the GPL has never been tested on point as to whether a
program which dynamically links against a GPL'ed library is infected
by the GPL, and just as the FSF has appropriately pointed out that the
court system does not operate algorithmically, but can make decisions
based on intent --- similarly, the FCC has not ruled on point on
implementations that rely on software radios.  Most lawyers seem to
agree that documenting how to modify the firmware is roughly
equivalent to providing a trace that, if cut, would allow scanners to
listen in on cell phone frequencies --- like leaking the tech sheets
that let people modify scanners to do something which the US Congress
(rightly or wrongly) has declared to be illegal.

There seems to be some disagreement about whether security by
obscurity is sufficient for the FCC, or whether you have to implement
hard cryptographic signing to prevent non-vendor-approved firmware
from being used, but that's because there's no precedent.  Given that
it seems pretty clear that the FCC has never penalized a radio
manufacturer when a skilled ham radio operator or electrical engineer
reverse engineered an analog circuit and modified it to transmit on a
band that the equipment wasn't type-certified for, one could argue
that security by obscurity is considered permissible by the FCC.  But
until there's precedent, we won't know for sure --- just as we won't
know whether or not a program which dynamically links against a GPL
library is really bound by the GPL until a court rules on point on
the issue --- and even then we'll only know in that legal
jurisdiction.

 Regardless, the DFSG doesn't say anything about unless the FCC has an
 annoying rule.  We don't distribute non-free software in Debian.  And
 that's not some sort of choice we might make--it's a choice we have
 already made.

And as I said, I think we should let the DFSG hard-liners win.  Let's
yank all of the binaries that require firmware, and release Lenny
as-is.  If that causes some users to switch to some fork that actually
has a kernel that works for them, given their hardware, or to switch
to Ubuntu, then so be it.  At least we'll stop flaming about the
issue.

 - Ted





Re: DFSG violations in Lenny: Summarizing the choices

2008-11-08 Thread Theodore Tso
On Sat, Nov 08, 2008 at 05:05:50PM -0800, Thomas Bushnell BSG wrote:
 
 But now we have this claim that the FCC's well-understood rule about
 hardware does not apply to software: that software modifications *are*
 traceable back to the manufacturer, even though hardware modifications
 are not.  Oddly, however, in all these conversations, we've never seen
 any indication that this is really the FCC's policy.

The analogy that has been made is this: hardware modifications
assisted by manufacturer-leaked information about which jumper to cut
or which trace to remove *are* traceable back to the manufacturer,
just as a manufacturer documenting its hardware (or releasing
firmware source) enables someone to modify said firmware.  In both
cases, the manufacturer is assisting by making the information
available.  In the former case, it was via nth-generation fax/xerox
copies passed under the table by retailers, but it was still held to
be the manufacturer's problem.

So if people think that they are going to be able to get firmware in
source form so that popular wireless chips can be driven using 100%
DFSG-pure firmware, I suspect they will have a very long wait ahead of
them.  The issue is that software-controlled radios are cheaper, and
that drives the mass market, so that is what most manufacturers will
use.

 And none of this is really relevent: the DFSG and the Social Contract do
 not contain an exception for dishonest or scared hardware manufacturers,
 or stupid FCC policies.

Neither does it (currently) contain an exception for debian.org
machines, or very popular Dell machines with Broadcom ethernet
firmware.  Great!  Cut them off!!  Let's see how quickly we can get
users moving to non-official kernels and installers when the official
ones don't work for them.  Then we can stop fighting about it.  The
DFSG hard liners can go on using the DFSG free kernels, and everyone
else can either move to another distribution or use an unofficially
forked kernel package and installer.

- Ted





Re: DFSG violations in Lenny: Summarizing the choices

2008-11-08 Thread Theodore Tso
On Sun, Nov 09, 2008 at 12:21:26PM +0900, Paul Wise wrote:
 On Sun, Nov 9, 2008 at 4:11 AM, Theodore Tso [EMAIL PROTECTED] wrote:
 
  Another choice open to Debian is to make it easier for users to opt
  into downloading firmware --- perhaps by making very easy through the
  installer to select the non-free section.
 
 For machines where non-free firmware is required, lenny d-i defaults
 to adding non-free and installing that firmware. I found this out when
 installing on my laptop, which contains an Intel 3945 wireless chip.

Oooh, does that mean Debian is distributing non-free bits?  My
suggestion is that we either change the DFSG, or drop the firmware so
that the installer complies with the DFSG, like the DFSG hard-liners
want.

  - Ted





Re: dhclient-script, hooks, and changing the environment

2008-08-08 Thread Theodore Tso
On Wed, Aug 06, 2008 at 09:10:56PM -0300, martin f krafft wrote:
 Anything else? Do you know of packages that rely on this
 functionality? Do you have scripts of your own which modify the
 environment? Would you please be so kind as to explain to me what
 they do, and help me figure out whether there isn't a better way for
 them?

In the past I've used dhclient-enter-hooks.d to work around buggy
hotel networks which advertise a gateway for the default route that
is outside the local network.  I used a dhclient-enter-hooks.d script
to hack the netmask returned by the DHCP server so that the default
route would be accepted.  (Unfortunately, Windows allows the default
route to be outside the local network range of the ethernet
interface; apparently, if there is only one ethernet interface,
instead of rejecting the route or dropping packets on the floor,
Windows does the convenient-but-wrong thing and assumes the packets
should go out via the default route on that interface, even though
the default route is an invalid non-local address for it.  As a
result, trashy hotel networks have no incentive to fix their DHCP
servers, since they work just fine on Windows laptops.)
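
For the curious, the hook boils down to something like this --- a
from-memory sketch; the file name and the /24 heuristic are just
illustrations:

    # /etc/dhcp3/dhclient-enter-hooks.d/hotel-netmask-hack
    # Enter hooks are sourced by dhclient-script before it configures
    # the interface, so we can rewrite what the DHCP server sent.
    case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
        # If the offered gateway is outside the offered /24, widen
        # the netmask so the default route will be accepted.
        if [ -n "$new_routers" ] && \
           [ "$new_subnet_mask" = 255.255.255.0 ] && \
           [ "${new_routers%.*}" != "${new_ip_address%.*}" ]; then
            new_subnet_mask=255.255.0.0
        fi
        ;;
    esac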

- Ted





Re: dhclient-script, hooks, and changing the environment

2008-08-08 Thread Theodore Tso
On Fri, Aug 08, 2008 at 03:29:43PM -0300, martin f krafft wrote:
 also sprach Theodore Tso [EMAIL PROTECTED] [2008.08.08.1453 -0300]:
  In the past I've used dhclient-enter-hooks.d to work around buggy
  hotel networks which advertise a gateway for the default route
  which is outside the local network.
 
 netconf could handle this internally, and it would be fair to
 include this.
 
 So assuming you get a 192.168.0.0/24 address and the default gateway
 is 10.0.0.1, what is the best approach? Hacking the netmask seems
 awful. Adding a route that makes 10.0.0.1 link-local seems better,
 no?

Yes, adding a specific link-local route is probably better.  In
practice, the various Awful Hotel Network implementations would give
you an IP configuration that was something like this:

IP Addr: 10.0.2.67
Netmask: 255.255.255.0
Gateway: 10.0.8.1

So the simpler thing for me to do was to hack the netmask to be 255.255.0.0.

But I agree, adding an IP-specific route for the gateway out the
interface is the best thing to do.
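
Using the numbers above, that would be something like this (eth0
assumed):

    # make the bogus gateway reachable as though it were link-local,
    # then point the default route at it:
    ip route add 10.0.8.1 dev eth0
    ip route add default via 10.0.8.1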

- Ted





Re: Shouldn't tar 1.20-1 be in testing?

2008-08-04 Thread Theodore Tso
On Mon, Aug 04, 2008 at 03:43:51PM +0200, Julien Cristau wrote:
 On Mon, Aug  4, 2008 at 15:34:40 +0200, Vincent Lefevre wrote:
 
  tar 1.20-1 entered unstable on 2008-04-17, so several months before the
  freeze.
 
 The essential toolchain was frozen before the rest of the archive.

Yes, but there have been exceptions made; I'm not sure that lzma
support would be a good and sufficient reason to sway the release
team, but Julien should try sending that note to debian-release and
see what they say.

I'm guessing that perhaps since tar is probably used by d-i, the tar
maintainer needed to explicitly request that tar get manually pushed
to testing, and that didn't happen back in May/June?

- Ted





Re: Policy or best practices for debug packages?

2008-07-07 Thread Theodore Tso
On Mon, Jul 07, 2008 at 02:41:50PM +0200, Mike Hommey wrote:
  *) Do we dump everything into /usr/lib/debug, i.e.,
 /usr/lib/debug/sbin/e2fsck?   Or should we put it in
 /usr/lib/debug/pkg, i.e., /usr/lib/debug/e2fsprogs/sbin/e2fsck?
 Most packages I've seen seem to be doing the former.
 
 /usr/lib/debug/$pathoforiginalfile
 
 This is where gdb is going to look for these debug info.

Yep, that's what everyone else is doing, so that's what I did too with
e2fsprogs.
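
(For anyone following along at home, that layout falls out of the
standard detached-debug-file dance --- dh_strip automates this when
given --dbg-package, but done by hand it is roughly:

    objcopy --only-keep-debug sbin/e2fsck e2fsck.debug
    strip --strip-unneeded sbin/e2fsck
    objcopy --add-gnu-debuglink=e2fsck.debug sbin/e2fsck
    # e2fsck.debug is what ships as /usr/lib/debug/sbin/e2fsck

and gdb then follows the debuglink into /usr/lib/debug.)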

  *) Is it OK to include the -dbg information in a library's -dev package?
   Or should it be separated out?  Otherwise as more and more packages
   start creating -dbg packages, the number of packages we have in the
   debian archive could grow significantly.
 
 I think the problem is going to be with size way before becoming a problem
 with the number of packages.

I tried putting them in -dev, and lintian complains with a warning.
So I guess that while it isn't official policy yet, separate -dbg
packages seem to be the general practice.  The e2fsprogs source
package just grew an extra 7 binary packages as a result, for a total
of 20 packages, though.  :-)

  *) Red Hat includes source files in their debuginfo files, which means
   that their support people can take a core file and get instant feedback
   as to the line in the source where the crash happened.  But that also
   means that their debuginfo packages are so huge they don't get included
   on any DVD's, but have to be downloaded from somebody's home directory
   at redhat.com.  (which appears not to be published, but which is very
   easy to google for.  :-)   What do we want to do?
 
 You don't need source files in the debuginfo files to get line numbers.
 You only need line tracking information, and if you build with -g, that is
 already what you get.

True, although it means there's a bit more work: you have to actually
install the source package, and then run ./debian/rules build in
order to make sure the sources are unpacked and the patches
appropriately applied.  With Red Hat, all you have to do is unpack the
debuginfo package, and the sources that were used to build the
binaries are made available with no muss and no fuss in
/usr/lib/debug/usr/src/pkgname.  (And an obvious thing for Red Hat
to have done is to hack gdb to automatically figure out the location
of the source files, possibly by encoding it in the build-id ---
although I don't know if they have done it.)

Is this worth the bloat in packages, especially since the -dbg
packages are architecture specific and thus would be replicated N
times?  Probably not, but it's at least worth thinking about the
functionality and deciding whether we want to replicate it.

Speaking of the -g option, does anyone know off-hand whether or not
it's worth it to build with -g3 (to get cpp macro definitions into the
DWARF stubs)?

 Note that it would be better for our users if we could have a debug info
 server instead of having them install dbg packages, it could be nicer.
 Obviously, gdb would need to be able to talk to it.

Yep, agreed.  That would be nice.

I will say that, having started to build and install the -dbg
packages for e2fsprogs, being able to run gdb on installed system
binaries is *definitely* very nice.  :-)

- Ted





Re: Policy or best practices for debug packages?

2008-07-07 Thread Theodore Tso
On Mon, Jul 07, 2008 at 05:39:00PM +0200, Mike Hommey wrote:
 There are 3 kind of people who need -dbg packages.
 - Users, when they are asked to provide proper backtraces in bug reports
 - Developers, when they need to debug stuff
 - Maintainers
 
 Obviously, the latter will be able to get the sources themselves, so do
 the second, most of the time, though the debian/rules patch thing might be
 a problem, especially when you need to install cdbs or some other stuff to
 get it working (only to apply dumb patches, d'uh).

Well, maybe the dpkg-source 3.0 package formats will make this easier.
It sure would be nice if there were a standardized debian/rules
target which would unpack the source tarballs and apply any necessary
patches, such that the sources were in a state usable by gdb, though.

And if there were a way for something like apt-get source to
automatically run that rule, perhaps given an appropriate
command-line option, even better.
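
Something like this, say --- the target name is invented, and I'm
assuming a quilt-based package:

    # hypothetical debian/rules fragment; "quilt push -a" exits 2
    # when every patch is already applied, so treat that as success
    prepare-source:
    	QUILT_PATCHES=debian/patches quilt push -a || test $$? = 2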

 And she won't have a core for the
 previous crash because the default is not to core (and BTW, it's uselessly
 made difficult to override this, see #487879).

I just looked at this bug, and are you sure this isn't because you
have this configured in /etc/security/limits.conf?

- Ted





Re: Policy or best practices for debug packages?

2008-07-07 Thread Theodore Tso
On Mon, Jul 07, 2008 at 05:39:05PM -0400, Daniel Jacobowitz wrote:
 Sorry, but this is either someone's uncontributed gcc patches, or
 (more likely) hearsay.  The difference between -g (same as -g2) and
 -g3 is whether .debug_macinfo is generated - debug info for C/C++
 preprocessor macros.  It's off by default because the generated data
 is huge.

Do programs like gdb take advantage of the .debug_macinfo in a useful
way if it's there?  (I guess I should try it and see how big the dbg
packages get, and how useful it is for me in practice.)

Thanks, regards,

   - Ted





Re: Policy or best practices for debug packages?

2008-07-07 Thread Theodore Tso
On Mon, Jul 07, 2008 at 05:42:47PM -0400, Daniel Jacobowitz wrote:
 I think they do this, using debugedit.  We (CodeSourcery) do it for
 our libraries too.  It's incredibly useful - but very spoiling; every
 time I'm without the automatic debug sources and source paths I get
 grumpy about it.

WANT

Where can you find debugedit?  I did a google search, and it looks
like Gentoo has packaged it, and it looks like it might be an
auxiliary program inside the rpm source package?  Is that what you
were referring to?

 I wouldn't want them in the archive for everything, but it would be
 nice to be able to generate automatically usable source packages.
 Also debug packages without having to create them in debian/control
 and debian/rules.  That would enable build daemons to generate and
 stash the packages somewhere if we decide to make them available.

So in order to do this, we would need to hack the debian/rules file to
create a foo-dbgsrc package which contains a copy of the source tree
after a successful build (but with all of the generated binary files
stripped out).  We would also need to agree on a standardized location
where the *-dbgsrc files would install the source trees --- something
like /usr/lib/debug/usr/src, perhaps?  And then we would need to use
the debugedit tool to edit the dbg files to point at the sources in
/usr/lib/debug/usr/src (or wherever we decide to have the *-dbgsrc
packages install the source files).
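
If rpm's debugedit behaves the way I think it does, the invocation
would be along these lines (all paths illustrative):

    # rewrite the compile-time source directory recorded in the
    # DWARF data to the installed -dbgsrc location:
    debugedit -b /build/e2fsprogs-1.41.3 \
              -d /usr/lib/debug/usr/src/e2fsprogs-1.41.3 \
              /usr/lib/debug/sbin/e2fsck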

I guess depending on the package, the *-dbgsrc file could either be
architecture independent, or if some of the generated source files
contain arch-specific information, they could end up being
architecture dependent files.

And we definitely would want to serve these up in a different archive
than our standard binary packages, since this would be a *lot* of
data.  It's probably less important that they get mirrored, though,
since it's likely that far fewer users would actually need the -dbg
and -dbgsrc packages.

Am I missing something, or is that all that's necessary to bell the
cat?  :-)

- Ted





Re: Policy or best practices for debug packages?

2008-07-07 Thread Theodore Tso
On Mon, Jul 07, 2008 at 06:16:22PM -0400, Daniel Jacobowitz wrote:
 For various reasons we don't use it at work - instead we added some GCC
 command line options to relocate the debug info at compile time.  In
 the end, it comes down to the same result.

Were these private hacks to GCC?  I tried looking at the gcc info
file, and I didn't see any options to force the debug info to a
different pathname; maybe I missed it, or the info file I was looking
at was too out of date (gcc 4.1.3).

 I think /usr/src/debian/ would be traditional.  It really doesn't make
 a difference, though :-)

Yeah, the only reason why I was hesitant about that is I was concerned
that some users might already be using /usr/src/debian for their own
purposes (in violation of the FHS), and would get annoyed if we
started installing stuff there.  Hence my suggestion of burying it in
the /usr/lib/debug/usr/src hierarchy.  But I don't really care what
directory name we use, as long as we all agree on some pathname to
start.

I might start experimenting with one of my packages, just to see how
it works out.  I suppose I should send mail to the ftp-masters
directly and ask them this question, but do people think there would
be many objections if some -dbgsrc packages started appearing in the
archive as an experiment?

I could even imagine some hacks where the rules file does a diff of
the built source directory against the orig.tar.gz file, and the
-dbgsrc package contains only the diffs; the postinstall script would
then search for the orig.tar.gz file, pull it off the network, and
apply the diffs, in order to keep the size of the -dbgsrc file small.
That might answer the concerns about the size of the archive
exploding, if this were to become popular.

Regards,

- Ted





Re: How to handle Debian patches

2008-05-17 Thread Theodore Tso
On Fri, May 16, 2008 at 03:25:11PM -0700, Russ Allbery wrote:
 In fact, despite being one of the big quilt advocates in the last round of
 this discussion, I am at this point pretty much sold on using Git due to
 its merges and branch support and have started to switch my packages over.
 However, the one thing discussed on this thread is still the thing I don't
 know how to do easily in Git.  I have each logical change on its own
 branch, so I can trivially generate patches to feed to upstream with git
 diff upstream..bug/foo, but I don't know how to maintain a detailed
 description and status other than keeping a separate file with that
 information somewhere independent of the branch, or some special file
 included in the branch.

How often is a logical change more than just a single commit?
Especially in the context of packaging, the changes are usually pretty
trivial, and don't require multiple patches.

Sure, a few bugs may require some new infrastructure, or changes that
would be best done with 2-3 patches, but any more than that and you
probably want to be consulting with upstream before submitting any
changes anyway.

So normally I just keep those sorts of changes in the commit message,
where they are easily and safely bundled with each patch.
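
That way the description travels with the change; assuming Russ's
branch names (upstream, bug/foo), the patches and their descriptions
come out together with:

    # each commit message carries its own description and status,
    # and format-patch bundles them with the diffs:
    git format-patch upstream..bug/foo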

- Ted





Re: git bikeshedding (Re: triggers in dpkg, and dpkg maintenance)

2008-03-01 Thread Theodore Tso
On Mon, Feb 25, 2008 at 12:19:33PM -0300, Otavio Salvador wrote:
 Robert Collins [EMAIL PROTECTED] writes:
 
  On Sun, 2008-02-24 at 16:46 -0300, Henrique de Moraes Holschuh wrote:
  Yet, rebasing is still routinely performed in the Linux kernel
  development. 
 
  What I find interesting and rather amusing here is Linus talking
  negatively about rebase: in particular its propensity to turn tested
  code (what you actually committed) into untested code (what you
  committed + what someelse has done, in a version of a tree no human has
  ever evaluated for correctness).
 
 If people doesn't test and review the patches after rebasing, it looks
 right but everyone is suppose to test  the changes after a merging (as
 for rebasing).

I'll note that when I submit a branch, I prefer to do a rebase, and
*then* do extensive testing.  That's because for a new feature, I
generally understand it better than the upstream maintainer, and *I*
want to be the one doing the merge and testing after the fact, as
opposed to assuming the upstream will do the appropriate merge fixups
and testing.

For big projects, this is essential, and Linus in fact does *not* test
after doing a merge.  (It doesn't scale for him to test after every
single merge from his lieutenants.)

But for smaller projects, it should really be up to the submitter; I
don't think there is any one Absolutely Right Way To Do It.  If
someone wants to rebase and then test before sending a pull request, I
don't think there's anything wrong with that.  Especially if the
projects have a good regression test suite.  (Both git and e2fsprogs
have good regression tests, and that makes it *much* easier to test
after doing a rebase or a merge; basically, I'll run the full
regression test suite to make sure that nothing unanticipated has
broken, and then do explicit testing of the feature being merged or
rebased.)

BTW, because of the regression test suite, and because my general
policy for e2fsprogs is that make check must have 0 failed tests
after every single commit, rebasing to collapse and remove
development history makes it easier to satisfy the fully git
bisectable with 0 make check failures between every commit
constraint.
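
To sketch how that constraint can be checked mechanically, assuming a
git recent enough to support rebase --exec (with an older git you
would check out and test each commit by hand):

    # rebase the branch, then re-run the regression suite on every
    # rewritten commit; the rebase stops at the first failure
    git rebase --exec 'make check' upstream/master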

So as long as the person submitting the patch makes it clear that they
have tested exactly what is being requested to be pulled, there's
nothing wrong with whether or not they do the rebase right before
sending the pull request.  My preference is to do the rebase and test,
and for a smaller project such as dpkg, I can't think of any good
reason for the maintainer to force the submitters to follow one
approach or another.

Regards,

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: git bikeshedding (Re: triggers in dpkg, and dpkg maintenance)

2008-03-01 Thread Theodore Tso
On Fri, Feb 29, 2008 at 12:40:55PM +, Colin Watson wrote:
  That's why you should avoid using the branch as basis to others until
  it's clean and also avoid to make it public (without a reason) too.
 
 This makes it more difficult to ask for review while the branch is in
 progress, which is a valuable property. It is ridiculous to artificially
 avoid making branches public; a branch is a useful means of
 collaboration and we should take advantage of it as such.

It's a bad idea to base work on a feature whose code is still under
review.  Even if you keep all of the historical crap on the branch,
to be preserved for ever, it's going to cause merge difficulties for
people who base branches on a patch which is under review.  So you
really, REALLY, don't want people basing work on code which is still
being developed, since the review may well conclude, why don't you
totally refactor the code *THIS* way, which will end up breaking
everyone who depends on your new function interface anyway.

So how to solve this problem?

(a) Send patches via e-mail for review.  This actually works better,
because people can respond via e-mail much more easily than if it just
shows up in a git repository.  You can also send an explicit request
for people to review the patch when it is sent via e-mail.

(b) Put the patches on a git branch which is *guaranteed* to be
constantly rewound, and is not to be used as a base for derived
patches.  By convention the 'pu' branch in the git (and e2fsprogs)
source repository is declared to be one which is used only for people
who want to test the latest bleeding edge code, but it should not be
used as the basis of any derived or child branches.
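
A sketch of the convention in practice (the branch and topic names
are illustrative):

    # pu is periodically rebuilt from scratch and force-pushed,
    # which is exactly why nothing may ever be based on it
    git checkout pu
    git reset --hard master
    git merge topic-a topic-b     # re-apply the in-flight topics
    git push --force origin pu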

 I have never once run into this problem with other revision control
 systems in which branching and merging are common. Somehow it just never
 seems to be a real issue. I contend that dpkg is not big enough for it
 to become a real issue.

It's not a fatal issue, but in the long run, the code is more
maintainable if the code revision history is clean.  It's like having
a few goto's in the code.  Does that make the code unmaintainable?
No.  But it does make it worse.  Or think about how much effort some
of us spend to make the code gcc -Wall free of warnings.  Does not
doing it make the code fundamentally bad?  No.  Is it still worth
doing?  Many of us believe that it is worth doing, nevertheless.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: the new style mass tirage of bugs

2008-02-25 Thread Theodore Tso
On Thu, Feb 21, 2008 at 02:15:10PM +0100, Mike Hommey wrote:
 Note that also doesn't indicate how many were actually fixed. We have
 nothing that look like bugzilla's NOTABUG or INVALID.

It would be nice if we had this, actually, and it wouldn't be hard,
right?  Just define a convention for a new tag?
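
(For what it's worth, usertags could already express something like
this today.  A sketch --- the user address and the notabug tag here
are purely illustrative, not an agreed convention:

    bts user debian-qa@lists.debian.org , usertags 123456 + notabug

The missing piece is simply agreeing on the tag names project-wide.)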

   - Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: How to cope with patches sanely

2008-02-02 Thread Theodore Tso
On Sat, Feb 02, 2008 at 12:04:16PM +, Roger Leigh wrote:
 
 While the time might not be yet, DVCS systems are getting to the point
 where they could make our lives all much simpler.  Having all of
 Debian in git, where anyone can clone and hack would be (IMHO) a
 worthy goal to aim for.  Currently, there are many packages I can't
 work on--simply because I am not intimately familiar with the
 patch-system-du-jour the maintainer chose,
 upstream-tarball-in-orig.tar.gz being my greatest bane.  Having a
 single tool we all need to learn once would (again, IMHO) be useful in
 fixing this.

While I'm a big fan of DVCS systems, and in fact use git all the time
--- including using git to manage quilt series --- I don't think DVCS
systems would necessarily be right for Debian.

The reason for that is because in the long-run, we do want to get our
changes upstream, and not end up in a merge hell where Debian packages
have diverged significantly from upstream and merging changes back is
hard.  The problem with DVCS tools is that they aren't necessarily
well suited for that.  It's too easy for people to just hack a few
changes, then commit, then hack a few more changes, and commit, etc.

You can use tools such as git rebase --interactive to fold related
patches and patches which fix bugs introduced in patches, but it's
complicated, and not something a beginner DVCS user would find easy.
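
For the curious, the basic recipe looks something like this (a
sketch; the branch name is illustrative):

    # replay the branch, folding fixups into the commits they repair
    git rebase --interactive upstream/master
    # then, in the todo list, change pick to squash (or fixup) on
    # each commit that should be folded into the one before it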

If you force people to maintain patchsets, where each patch has a
description describing *what* change was made, and why, it's much,
MUCH more convenient for upstream to understand what you've done, and
why.

And if people want to use git, it's possible to take a set of git
commits and turn it into a patchset which is quilt-compatible.
But let that be up to each maintainer.  Different maintainers can use
whatever tools they want, as long as the output format is a
quilt-style patchset.  What's far more important is that maintainers
are strongly encouraged to maintain a high quality patchset which is
suitable for acceptance by upstream.  Otherwise, as upstream keeps
moving, it will be harder and harder to forward patch our changes, and
that way lies some of the headaches which the Ubuntu project has been
facing (albeit for different reasons).
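
To sketch one way of producing such a quilt-compatible patchset from
git (the paths and branch names here are illustrative, not a mandated
layout):

    # one patch per commit, descriptions and authorship included
    git format-patch -o debian/patches upstream..master
    (cd debian/patches && ls *.patch > series)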

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: How to cope with patches sanely

2008-02-02 Thread Theodore Tso
On Sat, Feb 02, 2008 at 10:26:52PM +0100, Pierre Habouzit wrote:
   Bah that's the worst reason I ever seen to not use a DVCS in Debian.
 Please look at the debian packages with patches, and please tell me how
 many come with a comment about what the patch do. Please start with the
 glibc if you don't know where to start.  Then please resume the
 discussion about how having patches series make them more commented.

I wasn't arguing that people *shouldn't* use a DSCM.  I was arguing
against the proposal that everyone be *forced* to use a DSCM, and
moreover the same DSCM.  Whether it is git or hg or bzr,
forcing everyone to use the same DSCM is a *bad* idea because it makes
it harder to do the right thing.

Yes, it is quite possible to create patches that are horrible using
quilt or dpatch.  You can write Fortran code in any language.  :-)

But it requires more advanced skills using a DSCM to create good,
upstreamable patches than it would using something simple like quilt.
So again, lest I be misunderstood, using the quilt format is good.  If
people want to use a DSCM, great; they can use many different DSCM's
in many different ways to maintain a patch queue.  We should not force
people to use a DSCM, let alone a specific DSCM, regardless of whether
it is git, hg, bzr, or arch, just because we think DSCM's are cool.
(They are, but that's not a good reason to force everyone to
standardize on a single DSCM.  :-)

And I think it is a much better idea to encourage people to spend
their time working on good, upstreamable patches, than to tell them
that they need to learn some specific DSCM. 

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: How to cope with patches sanely (Was: State of the project - input needed)

2008-01-27 Thread Theodore Tso
On Sat, Jan 26, 2008 at 11:39:32PM +0100, Pierre Habouzit wrote:
   I'm less and less sure that a git-based format is a brilliant idea. I
 like git more than a lot, but it's a poor idea to base source packages
 on them. That doesn't mean that we shouldn't be able one day to upload a
 signed git source url + sha1 signed with GPG and see DAK and other tools
 git pull from there to rebuild the source package using some normalized
 ways, but a source package should remain as simple as possible, using
 _textual_ interfaces.

Something that we should be clear about is that there's a big
difference between the *interface* and the tool.  One of the reasons
why the quilt *format* is so powerful is that it can be consumed by so
many different tools.  For example, the quilt format can also be
consumed by guilt and stgit for git users, mq for mercurial
users; not just by quilt.

Just to give one example of how you can use a quilt *format* for a set
of patches without actually using the quilt tool.  For the ext4 kernel
development, we use a svn-style central repository model, but using
git at http://repo.or.cz.  Multiple people have access to push into
this central repository; it could have just as easily have been svn or
cvs, but a number of us like the ability of being able to have
off-line access to the repository.  What we *store* in this repository
is a quilt-style set of patches, where the series files indicate the
base version of the kernel that the patches can be applied against.  

How people manipulate the quilt series of patches depends on the
developer.  Some developers use quilt, others (like me) use guilt.  It
would even be possible for people to write a set of shell scripts to
parse the quilt series if they wanted; again, it is the format that is
important, not the tool.
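
For example (a sketch; both tools consume the very same series file):

    quilt push -a     # apply the whole series with quilt
    guilt push -a     # or apply it as git commits, via guilt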

One nice thing about the quilt format is they can preserve a large
amount of state.  For example, we use the following convention in the
patch header:

one line patch summary

From: Author O' The Patch [EMAIL PROTECTED]

Detailed patch description

Signed-off-by: Theodore Ts'o [EMAIL PROTECTED]


This is at the beginning of every single quilt patch, and because of
this, we can easily import the patch into either a mercurial or git
repository, while preserving the authorship information of the patch.
So when it comes time to push patches to the upstream, it is much
easier to do so, since all of the information is right there.
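
For instance, a whole series can be imported into git in one step (a
sketch; the patches directory name is illustrative):

    # import the quilt series as git commits, preserving each
    # patch's From: author and description
    git quiltimport --patches patches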

Since we are keeping the patch series under source control, it allows
us to refine patches, and keep a historical record of the improvements
to a particular patch before it is pushed upstream.

So for the people who were concerned because quilt is unmaintained, I
wouldn't worry, because even if there is no one maintaining the
official quilt sources, there are plenty of other quilt work-alikes
that use the same format, and many of them are integrated into other
SCM's.  Andrew Morton actually maintains his own private collection of
shell scripts to parse quilt patch series for his -mm kernel tree.
Andrew's tools are what the current quilt system is modeled upon, but
Andrew is still using his own set of shell scripts.  So that's another
example of Yet Another tool that all use the same quilt series file
format.

   So maybe what we should do isn't trying to make the Debian source
 package a complex mix between a snapshot and a SCM, but rather let it be
 simple, and only a snapshot, and have ways to encode in it where to find
 (and how to use to some extent) the SCM that was used to generate it.

Yes; all a Debian source package should be is a snapshot of a series
of patches that can be applied against a source base.  The series of
patches can be maintained in an SCM, and that's a useful thing to do;
but for the purposes of what you ship in a source package, the full
SCM data itself is not strictly necessary.

Regards,

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bug#459403: libuuid1: missing depends on non-essential package passwd

2008-01-16 Thread Theodore Tso
On Sat, Jan 12, 2008 at 03:41:01AM -0800, Steve Langasek wrote:
  I don't think but we don't want to make adduser Priority: required is
  a good enough reason to add global static IDs; and passwd doesn't need
  to be made Essential just because an Essential package depends on it
  (Essential isn't closed under dependency).
 
 The use case here was that passwd would be a dependency of a pre-dependency
 of an essential package, which does promote it into the effective essential
 set.

Well, I'm happy to make a request for a static ID, or not; do we have
a consensus here whether I should do this?

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bug#459403: libuuid1: missing depends on non-essential package passwd

2008-01-11 Thread Theodore Tso
On Mon, Jan 07, 2008 at 11:34:07AM +0100, Bastian Blank wrote:
 On Mon, Jan 07, 2008 at 03:23:23AM -0500, Theodore Tso wrote:
 So, I am doing so now.  Any objections if I
  add a dependency on passwd for libuuid1?  The aternative would be to
  roll-my-own useradd/adduser functionality, but that would be a real
  PITA
 
 There are several other possibilities:
 - Move the user creation. It is necessary for uuid-runtime (uuidd), not
   libuuid.

It's actually needed in libuuid, because one of the modes in which it
can be used is with a program that is setgid libuuid.  If so, then
when the program calls libuuid, the library can optionally save the
clock sequence number in /var/lib/libuuid, and use it to detect
circumstances where the clock goes backwards.  This guarantees
uniqueness, and makes the UUID generation compliant with RFC 4122.
Without libuuid being setgid, or without the uuidd package installed,
the time-based UUID will *probably* be unique, but if the time gets
reset, and you get very, very unlucky, or if you have multiple SMP
threads generating huge numbers of time-based UUID's, you could
generate duplicate UUID's.

 What is the reason for this deamon anyway? It linearises the requests
 and limits the amount of available uuids.

So there are some programs that like to use time-based UUID's because
they tend to sort much better (particularly if you byte-reverse the
UUID) when they are used as keys in a database.  One such program
generates a *huge* number of them while initializing its application
database.  It does so in multiple threads and multiple processes
running in parallel, and it was running into problems because it could
end up generating duplicate UUID's.

The uuidd linearises the requests, yes, but in a very clever way where
each particular thread can request a block of UUID's (with contiguous
times) so it actually allows a much, MUCH larger number of UUID's to
be generated, with *significantly* less CPU overhead.  I know of only
one application program that generates this many UUID's (as in tens of
thousands per second).  Unless you use this application, it's
relatively unlikely that you need it; however, it is a matter of
correctness as well --- if you want to guarantee uniqueness on an SMP
system, and guarantee uniqueness in the face of an unreliable clock
that might go backwards, implementing the requirements of RFC 4122,
it's necessary.  By default though most applications use random
UUID's, for which none of this rigamarole is necessary.

So without uuidd installed, the amount of time to generate 100,000
UUID's is 3.2 seconds.  With uuidd installed, the amount of time to
generate 100,000 UUID's drops down to 0.091 seconds.
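
(If you want to poke at a running uuidd yourself, a quick sketch
using options from uuidd(8):

    uuidd -t           # request one time-based UUID from the daemon
    uuidd -t -n 100    # request a contiguous block of 100 of them

The bulk request is what makes the heavily multi-threaded case so
much cheaper.)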

 - Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bug#459403: libuuid1: missing depends on non-essential package passwd

2008-01-07 Thread Theodore Tso
On Sun, Jan 06, 2008 at 10:37:03AM +0100, Julien Cristau wrote:
 
 the libuuid1 postinst contains:
 groupadd -f -K GID_MIN=1 -K GID_MAX=999 libuuid
 if ! grep -q libuuid /etc/passwd; then
useradd -d /var/lib/libuuid -K UID_MIN=1 -K UID_MAX=499 -g libuuid libuuid
 fi
 mkdir -p /var/lib/libuuid
 chown libuuid:libuuid /var/lib/libuuid
 chmod 2775 /var/lib/libuuid
 
 The groupadd and useradd commands come from passwd, which is not
 Essential: yes, so a Depends is needed.
 
 Moreover, the postinst succeeds even if any of these commands fail,
 because 'set -e' is missing at the top of the script.

So e2fsprogs which is Essential: yes depends on libuuid1, so
libuuid1 is effectively Essential: yes, right?  So if I add a
dependency on passwd, it will effectively make passwd Essential:
yes, as well, and according to policy I should bring it up for
comment on debian-policy.  So, I am doing so now.  Any objections if I
add a dependency on passwd for libuuid1?  The alternative would be to
roll my own useradd/adduser functionality, but that would be a real
PITA...
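
(As an aside, a hedged sketch of what the postinst could look like
with set -e, using getent rather than grepping /etc/passwd --- this
is illustrative, not the shipped script:

    #!/bin/sh
    set -e
    if ! getent group libuuid >/dev/null; then
        groupadd -f -K GID_MIN=1 -K GID_MAX=999 libuuid
    fi
    if ! getent passwd libuuid >/dev/null; then
        useradd -d /var/lib/libuuid -K UID_MIN=1 -K UID_MAX=499 \
            -g libuuid libuuid
    fi
    mkdir -p /var/lib/libuuid
    chown libuuid:libuuid /var/lib/libuuid
    chmod 2775 /var/lib/libuuid

That still leaves the passwd dependency question open, of course.)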

Thanks, regards,

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: List of packages which should probably be Architecture: all

2008-01-03 Thread Theodore Tso
On Thu, Jan 03, 2008 at 09:50:48AM +1100, Brian May wrote:
  Raphael == Raphael Geissert [EMAIL PROTECTED] writes:
 
 Raphael Brian May [EMAIL PROTECTED]
 Raphael dar-static
 
 Raphael Theodore Y. Ts'o [EMAIL PROTECTED]
 Raphael e2fsck-static
 
 Both of these (and maybe others) are false positives.

Which would be caught if the test actually, say, used the file
command on any files installed in a .../bin directory or .../lib
directory.
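
Something like this, perhaps (a rough sketch, not an existing QA
tool):

    # unpack the package and look for real ELF objects in bin/lib
    dpkg-deb -x some-package.deb tmp
    find tmp -type f \( -path '*/bin/*' -o -path '*/lib/*' \) \
        -exec file {} + | grep ELF

Anything that shows up as ELF clearly isn't Architecture: all
material.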

BTW, I recently got a complaint from someone who was still
using a 2.4 Woody system, and had been using e2fsck-static as a way of
getting the latest e2fsprogs fixups for e2fsck, given that the woody
backports effort had stopped a while ago.  Apparently the latest glibc
uses thread local storage in its locale code, so even linking
statically against glibc will result in a binary that can't be used on
a 2.4 kernel.  Because I was a nice guy, I hacked up e2fsck-static to
build against dietlibc instead, so it would work on ancient systems.
Dropped the size of the binary from over a megabyte to about 330k, a
savings of two-thirds.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: MySql broken on older 486 and other cpuid less CPUs. Does this qualify as RC?

2007-04-06 Thread Theodore Tso
On Fri, Apr 06, 2007 at 12:36:18PM +0200, Andreas Barth wrote:
 * Francesco P. Lovergine ([EMAIL PROTECTED]) [070406 12:33]:
  On Thu, Apr 05, 2007 at 04:10:19PM -0700, Steve Langasek wrote:
   Like I said, in practical terms, if a bug like this in a major server
   package goes unnoticed until 3 days before the release, we are not 
   actually
   supporting 486.  We support the i386 architecture quite well, but it 
   seems
   only honest to admit that as a project, we don't care about 486 enough to
   even get 486-specific problems marked as RC in time to do anything about
   them for a release.
  
  That could be fixed in R1, isn't it? I see no major problems on those
  regards...
 
 Well, we can fix it - but are you sure that's the only package with an
 issue on 80486? I think we should put somewhere into Lenny that our code
 should still run on 80486, but it might not be QAed enough for all
 subarches (but we can discuss that later, don't need to reach a
 consensus now).

The fact that we can't say for sure is a good reason to say that we
don't support 486.  That is to say, we can't guarantee that it will
work, because we don't have enough people who are testing on that
platform.  

There is a difference between we won't consciously break 486, and
we'll apply patches when they are called to our attention, but we
won't hold up an entire release for it, and support.  For all that
people like to beat up on Red Hat and Fedora for their supposed lack
of quality, this is a concept which Red Hat and other commercially
supported enterprise distributions understand.  If they can't test on
a platform, they don't call it supported.  (And they do get help from
major system vendors to do a lot of very serious regression testing
before they say that it is supported on a particular platform.)  

If we are going to claim that Debian is stable enterprise system for
servers, so stable in fact that we must use an obsolete version of
glibc compared to RHEL and SLES (even though we are releasing later
than RHEL and SLES), then it would be wise for us to use at _least_ as
stringent a set of QA guarantees as Red Hat and Novell.  And if that
means that we can't say that we support the 486, then so be it.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: On maintainers not responding to bugs

2007-03-03 Thread Theodore Tso
On Fri, Mar 02, 2007 at 06:57:01PM +0100, Pierre Habouzit wrote:
   You forgot a single damn point: in debian, like in many projects, the
 one that do things is often the guy that decide things because he's
 the one there. If you put people that work 5 times more as me because
 they have the time to do that, I will obviously feel they took my
 place. I'm not sure what I would do in those cases. Obviously not
 refusing the help and people that have the time to do this, but I would
 obviously lessen my implication and work for other teams where I've a
 single damn chance to see my contribution to be compareable to the
 others.

The principle you stated obviously tends to be the case in volunteer
organizations, true.  It does not have to be the case of a paid
employee, but yes, even if the maintainer team sets the general policy
and gives direction to the employee, there will be a lot of
hour-to-hour operational control which you will be ceding to the
employee (unless you want to be one of those awful micro-managing
managers --- but that's not a path to productivity, either for the
manager or the managee!  :-)

   I would feel bad to impose my views to a person that has huges amounts
 of time to work in the team. And necessarily (because of human nature)
 a decision will happen that I would not like or would have made
 differently, and at that point I guess that I would just leave.

Well, anytime we have people working on a package with team
maintenance, there are bound to be disagreements.  If we all left the
moment a decision was made that we didn't agree with, Debian would be
empty.  

I can't read your mind, of course, but it may be that the harder
psychological hurdle is the one where a DD realizes that (say) 10
hours a week of volunteer labor is no longer enough to be one of the
primary contributors on a package team.  That is a real issue, and
perhaps maybe _the_ major issue.

   Whereas in balanced teams where every contributor has the same level
 of contribution, I would have argued my point, or tried to make the
 proposal better, or discussed it or... whichever adequate behaviour in a
 team where every single member is equal to the other.

Although this may be a great platonic ideal, the reality is that no
team is going to be completely balanced.  First of all, not everyone
_can_ contribute at the same level.  Some people have jobs that
cause them to work very long hours, for example.  (And of these, some
of them would like to be able to help Debian, and one of the ways they
might be willing and able do so is via contributing money, not time.)   

Secondly, even if assume that everyone could give the same number of
hours of contribution, the reality is that different people have
different levels of talent --- and it is still the case in the
programming world that differences in talent can account for 2+ orders
of magnitude in productivity.  So in the end, talent will probably
still dominate far more than whether someone can work 10 hours as a
volunteer versus 40 hours as a paid employee.  (This I think is one of
the ways that the example of paid versus volunteer firemen may not be
applicable to Debian.)

   Money introduces bias. OK you were talking about bug triaging, and bug
 triaging is not necessarily a big decision making place, I agree. Though
 it will depend a lot of the kind of people you want to recruit:
   * if those are already contributors they will want to take more and
 more decisions, and won't only do bug triaging: if you do bug
 triaging you begin to know packages a lot, and become skilled
 enough to take decisions, and so on. Then commits rights are
 granted, and you take more and more responsibilities. That's good,
 it's indeed what is often suggested to newcomers. Though we end up
 in the not-so-nice situation I described.

Money introduces bias; no question.  But it also does introduce a
certain amount of control.  For example, suppose we only paid people
to do bug triaging.  If they want to do more, that's great but it will
be on their own time.  Would something like this magically make all of
the problems go away?  Of course not, but I think it shows that with
the right amount of thoughtfulness, it's possible to make the benefits
outweigh the costs.

   but please, I'm not sure there is a damn single maintainer in a big
 team that will refuse help, paid or not. I don't really understand how
 that mythical maintainer in a big team that refuses help has emerged in
 the discussions, but I'd really like names here. In fact, that seems
 pretty contradictory with the very notion of a team. Of course, there is
 teams with 1 single member in it in debian, but that's not a large
 team and is out of the scope if I'm not mistaken.

Well, Josselin has been very negative about the whole concept of
paying volunteers, and given that he was asking for help, and saying
that his GNOME team was drowning under bug reports, I couldn't help
but reply that if he 

Re: On maintainers not responding to bugs

2007-03-02 Thread Theodore Tso
On Tue, Feb 27, 2007 at 10:30:39AM +0100, Josselin Mouette wrote:
 Le mardi 27 février 2007 à 09:24 +0100, Eduard Bloch a écrit :
  And how do you help a maintainer that does not admit that he needs help?
 
 I can't believe people are thinking such crap.
 
 Please show me where a current maintainer of Mozilla, KDE, GNOME, the
 glibc, the kernel, X.org or any such big group of packages said he
 didn't need help for them.
 
 YES. WE NEED HELP. NOW.
 We are *all* *COMPLETELY UNDERSTAFFED*.
 We are drowning in bug reports and are not able to answer all of them,
 especially old ones dating from the pre-teams era.
 
 Who is not acknowledging such obvious things?

So how do you help a maintainer who refuses help if it is paid?

OK.  The large teams are *COMPLETELY UNDERSTAFFED*.  Volunteer labor
is not able to keep up.  Suppose we or some outside organization like
dunc-tank raised money to pay someone who could afford to work
full-time, 40 hours a week, doing bug triage for these large projects.

Would those projects refuse help if some of the people who showed up
to keep you from drowning in bug reports just happened to be paid by
Debian or by an outside group to do this work --- work for which you
have so eloquently said it's hard to get volunteers, and which others
have said is completely unfun, tedious work?

Even if one or two people (for reasons that I don't understand) would
stop spending maybe 10-12 hours a week on top of whatever they do
during the day to earn money to feed their families because there is
now some paid help, I would think that raising money to find someone
to work 40 hours a week would be a Good Thing.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: On management

2007-03-01 Thread Theodore Tso
On Thu, Mar 01, 2007 at 03:28:50PM +0100, Turbo Fredriksson wrote:
 Quoting Josselin Mouette [EMAIL PROTECTED]:
 
  As of now, I see it as a failure of the project. But this is also
  nothing that can't be fixed. What do you people think could be done to
  bring the skills we are lacking to the project, with its current
  structure?
 
 Since I agree (well, more than agree - I think it's absolutly vital :),
 how about actually PAYING them? Get one or two professionals and pay
 them (halftime perhaps)...
 
 I have no idea how the economy looks, but would there be room for
 such a thing?

A third party organization, dunc-tank, which included a number of
prominent Debian Developers, including AJ, did pay for two of the
release managers to work full-time for approximately a month at a
time, at the end of 2006.  This was actually highly contentious with
some claiming that it delayed the release because it demotivated
them.  I know of no hard evidence to prove that the net result was
negative, and certainly during the period when the two release
managers were paid, the RC bug count did decline quite significantly,
and you can see a significant difference in the slope and sign of the
derivative of the RC bug count before and after that period where two
of the RM's were paid.  

Certainly the fact that we did pay them did cause a huge amount of
flaming on various debian mailing lists, which perhaps might have
delayed the release, but my personal opinion is that DD's being DD's,
they would have found some other topic to flame about, whether it was
license issues, or whether to expel a particular DD, or something
else...  So I believe that paying the RM's was a net positive, and there
are those who would disagree with me.  (And to be fair, I need to
disclose that I was one of the people who helped to organize
dunc-tank.)

The harder problem, though, is finding a really good project manager.
In my day job, I can tell you that, as a technical architect,
having a great project manager is like pure gold.  My project manager
defers to me on technical issues, but helps to coordinate all of the
other technical teams so that we can make all of the schedules line up
and release a coherent solution to the customer.  Not to denigrate the
efforts of the current release management team, but if we were to
augment (NOT replace!) them with a good project manager, we could make
them be far more effective.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: /foo has been mounted xx times... check forced

2007-02-21 Thread Theodore Tso
On Tue, Feb 20, 2007 at 11:36:21AM -0800, John H. Robinson, IV wrote:
 Andrei Popescu wrote:
  On Mon, 19 Feb 2007 13:29:46 +0900
  Charles Plessy [EMAIL PROTECTED] wrote:
   
   how about a I'm in a hurry boot option in GRUB, which would make the
   e2fscks skipped ?
  
  Too early. You might not know that a check is due.
 
 Perfect time: you already know you are in a hurry. It could be possible
 to use other tricks to shorten the boot cycle. I can't think of any at
 the moment, but that does not mean that they don't exist.
 
 Does XFS require fscks? Reiserfs does not. Maybe it is time to ditch
 ext3.

You don't *have* to do the periodic checks.  If you want you can
disable it using tune2fs.  tune2fs -c 0 -i 0 /dev/hdXX.  The reason
why ext3 has periodic checking is a *feature*, born out of the
recognition that hardware is not perfect, and in fact, commodity class
hardware can and does fail in various entertaining ways.  By running
e2fsck periodically, we hope to catch problems while they are small,
instead of after massive data loss.
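
(To see what a filesystem is currently set to before changing
anything, a quick sketch:

    tune2fs -l /dev/hdXX | \
        grep -E 'Mount count|Maximum mount|Check interval'

which shows both the mount-count and the time-based schedule.)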

But hey, if you know you have perfect hardware, and you do regular
backups (YOU DO REGULAR BACKUPS, **RIGHT**?), hey, feel free to
disable the periodic fsck's, or dial them back to a higher level.
(For me, since I normally use suspend to disk/ram quite a lot on my
laptop, the periodic check happens quite rarely --- except when I am
rebooting a lot due to trying out lots of different kernels, but then
I *want* to do the periodic checks just in case a kernel bug caused a
filesystem corruption problem.)

Finally, I will note that different filesystems generally get tuned to
assume different use cases.  XFS in particular fundamentally assumes
that you are using drives (i.e., RAID at high levels) in data center
conditions, and that you have a UPS to protect your system from power
failures.  (Yes, it has a journal, but the way it prevents security
breaches, when it's not sure the data block was written before the
metadata, is to zero out the data block.)

Ext3 is more often used in cheap-*ss commodity equipment or for
equipment with less-than-perfect drives (like laptop drives that tend
to get banged around a lot when people shove the laptop into their
knapsack and start walking off while the suspend-to-disk is in
process), so it has a bit more paranoia about hardware designed into
it.

Regards,

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: /foo has been mounted xx times... check forced

2007-02-21 Thread Theodore Tso
On Tue, Feb 20, 2007 at 11:14:21PM +0100, Josselin Mouette wrote:
 Le mardi 20 février 2007 à 20:55 +0100, Mike Hommey a écrit :
   Does XFS require fscks? Reiserfs does not. Maybe it is time to ditch
   ext3.
  
  ReiserFS requires as much fsck as ext3.
 
 But it is much faster.

In the worst case, when the filesystem is badly corrupted, ReiserFS
will require reading every single data block off the disk, at which
point it will look for every single block that *looks* like it might
be part of a ReiserFS b-tree, and stitch it together.  The results if
you have multiple ReiserFS filesystem images (for use by qemu, UML,
Xen, VMware, etc.) in a ReiserFS filesystem, and the filesystem is
badly corrupted, I will leave to you to imagine.  (But a scene from
your favorite Frankenstein movie might not be a bad place to
start. :-)

Also, reading every single data block from disk will almost certainly
take longer than an ext3 filesystem check, which is one of the
advantages of having a fixed inode table; ext3 knows where to start.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: /foo has been mounted xx times... check forced

2007-02-17 Thread Theodore Tso
On Fri, Feb 09, 2007 at 10:55:49AM +0100, Enrico Zini wrote:
 Right.  But would it actually be officially safe to interrupt with ^C ?
 That would give the user an opportunity to decide how in a hurry they
 are, and quickly get out of a difficult situation.
 
 If the answer is yes, ^C is officially safe, then I propose to add if
 in a hurry, interrupt with ^C  to the check forced message.

It's not a great idea to do this indefinitely, and it's a matter of
whether or not you trust the person in front of the machine not to be
in a hurry and type ^C all the time to avoid the e2fsck run.  If the
owner/administrator of the machine == the person who is normally in
front of the console during the bootup (as is the case for a laptop
and most single-owner machines), then obviously it should be up to
the owner/administrator.

At the moment, if you want ^C to interrupt the e2fsck and you want the
boot to continue, you actually have to set the following in
/etc/e2fsck.conf:

[options]
allow_cancellation = 1

See the e2fsck.conf(8) man page for more details.

Regards,

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: /foo has been mounted xx times... check forced

2007-02-08 Thread Theodore Tso
On Tue, Feb 06, 2007 at 12:50:33PM +0100, Wouter Verhelst wrote:
 On Sat, Feb 03, 2007 at 09:39:24AM +, Enrico Zini wrote:
  Hello,
  
  the feature as in the subject is nice and makes me feel safe, but
  sometimes it hits on the laptop, when booting on batteries, with people
  watching.
 
 There actually is a feature in e2fsck to double the amount of mounts
 before an fsck is done if you're running on batteries; so unless you
 boot from batteries all the time, this shouldn't happen. See #205177 and
 #242136.
 
 You do need a mounted /proc at that time, though, which may be the
 reason it's not working for you.

A mounted /proc, and, if ACPI has been built as modules, the ACPI
battery module needs to be installed, since that's how we tell
whether we are running on the AC mains or on battery...
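
(A sketch of what that check boils down to with the 2.6-era ACPI
/proc interface --- the exact path can vary by machine:

    cat /proc/acpi/ac_adapter/*/state
    # prints state: on-line when running off the mains

If that file isn't there, the battery check has nothing to go on.)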

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: update on binary upload restrictions

2007-01-28 Thread Theodore Tso
On Thu, Jan 25, 2007 at 03:44:39AM -0500, Jaldhar H. Vyas wrote:
 On Thu, 25 Jan 2007, James Troup wrote:
 
 Summary
 
 Credit where credit is due.  This is exactly the kind of informative 
 explanation I have been looking for and I hope we'll see a lot of more of 
 this sort of thing from the infrastructure teams in the future. 
 Preferably before the flames start.

Indeed, I have to second this, and state that for the next DPL
elections, I will be looking to vote for DPL's that make as part of
their campaign platform that Delegates will be appointed or removed
based on not just how long they have been working on a task, or how
good they are technically, but how well they can communicate with
others, and how well they can send out reports and rationales to
mailing lists (IRC isn't good enough!) **before** people start to
flame and start talking about GR's to replace teams that can't seem to
communicate with anyone else.  

(In fact, I'd prefer work getting delayed by 2-3 hours if that meant
getting an adequate report sent out; Lord knows Debian as a whole has
lost a heck of a lot more time than that due to the controversy caused by
the buildd restrictions.)

So again, I hope we can see a lot more reports like this, and not just
in reaction to massive flame-fests on IRC and mailing lists and key
individuals taking vacations out of sheer frustration.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bug#397939: Proposal: Packages must have a working clean target

2006-11-17 Thread Theodore Tso
On Sat, Nov 11, 2006 at 10:55:57PM -0600, Manoj Srivastava wrote:
 What you are saying, in essence, is that we have not been
  treating autoconf transitions with the care we devote to other
  transitions; and as a result people have started shipping
  intermediate files.
 
 While I recognize these past lapses, I am not sure why we
  should condone and in the future pander to them -- I am hoping that
  autotools are coming of age, and there shall be few major API changes
  in the future. Post etch, perhaps we can evaluate if overhauling our
  autotools  usage in Debian to allow treating autoconf like we do lex
  and yacc -- and building from sources -- is feasible, or not.

Why don't we wait and see if autoconf can manage to go through a half
dozen or so releases without breaking backwards compatibility, first?
And that's assuming we get some kind of commitment from the autoconf
maintainers that they care about backwards compatibility in the first
place.

I just recently had to put in special case hacks in e2fsprogs so it
could support both autoconf 2.59 and autoconf 2.60, and that's why
e2fsprogs is still shipping the generated configure script in the
upstream sources.  Fundamentally, I still don't trust autoconf to be
stable in terms of the configure.in constructs which it supports.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Debian Installer etch RC1 released

2006-11-16 Thread Theodore Tso
On Tue, Nov 14, 2006 at 04:52:57PM +0100, Andreas Barth wrote:
 
 There are good chances that Etch will contain 2.6.18, but due to some
 open bugs the Release Candidate 1 of the installer has still 2.6.17.

Because of the sysctl deprecation issue, it might be a good idea to
either consider 2.6.19, or --- probably a better solution ---
backport the sysctl undeprecation patches into 2.6.18.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: RFC: behaviour of bts show command with new BTS default behaviour

2006-11-15 Thread Theodore Tso
On Sun, Nov 12, 2006 at 01:02:06AM +, Julian Gilbey wrote:
 
 Thinking of changing the default behaviour of the devscripts bts show
 (aka bts bugs) command, and want to ask for opinions before I do so.
 
 The BTS behaviour of http://bugs.debian.org/package has recently
 changed.  It used to resolve to:
 
 http://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=package
 
 which listed all open and closed bugs in the package, without any
 version tracking being taking into consideration in the listings.
 
 It now resolves to:
 
 http://bugs.debian.org/cgi-bin/pkgreport.cgi?pkg=package;dist=unstable
 
 which lists the status of all bugs in that package in respect to the
 version(s) of the package currently in unstable.

I prefer the new behaviour, myself.  If it must be changed, can it be
made configurable via some kind of .btsrc file?

Also, can we please have this functionality for bts cache?  It takes
a *long* time to download a huge whackload of bugs, many/most of which
are already fixed in unstable, and so don't matter to me at all.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: default ext3 options

2006-11-15 Thread Theodore Tso
On Tue, Nov 14, 2006 at 12:35:30PM +, Sam Morris wrote:
  As far as I know, neither the resize_inode nore the dir_index ext3
  option can be securely added after the file system is created.
 
 According to
 http://groups.google.co.uk/group/linux.debian.devel/msg/4d987ea414438e70,
 it should be perfectly safe to add dir_index to an existing filesystem.
 However, to get the benefit of the indexing for already-created
 directories, e2fsck -D should be run after dir_index has been added;
 therefore it's probably best to just document the procedure in the release
 notes.

Correct: it's perfectly safe to add dir_index to an existing
filesystem.  You can even do it to a mounted filesystem, and any new
directories which are created and grow beyond a single block will use
the directory indexing feature.  (So yes, mkdir foo.new; mv foo/*
foo.new; rmdir foo; mv foo.new foo will work, modulo locking/race
conditions with applications trying to read/write/create/remove files
in the foo directory.)

If you want to force all directories to be optimized, you can do an
off-line (unmounted) e2fsck -fD command. 
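
A sketch of the whole sequence (the device name is illustrative):

    tune2fs -O dir_index /dev/hdXX   # safe even while mounted
    umount /dev/hdXX                 # then, offline only:
    e2fsck -fD /dev/hdXX             # rebuild and optimize old dirs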

The resize_inode feature, however, cannot be cleanly added to an
existing filesystem.  There is an ext2prepare command which can be
used to do an offline add of resize_inode to an unmounted filesystem;
it can be found in the ext2resize package.  The reason why that
functionality hasn't been integrated into e2fsprogs is because I took
one look at the source, and decided it had to be rewritten from
scratch before I would trust it with my data and before I would be
willing to be responsible for maintaining it; it's on my TODO list.
That being said, I'm not aware of anyone who has lost data using
ext2prepare.  Each user will have to decide on their own whether or
not they are comfortable using it.

Regards,

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Proposed new POSIX sh policy

2006-11-15 Thread Theodore Tso
On Wed, Nov 15, 2006 at 11:36:44AM +0100, Gabor Gombas wrote:
 I'm not talking about the local admin. Right now Debian maintainer
 scripts are not allowed to use the enable command because that is a
 bashism, and more importantly there is _no reason_ to use the enable
 command because simply saying make /bin/sh point to dash if you want to
 go faster is much more effective and easier.

 What I'm saying is if you take away the freedom of allowing /bin/sh to
 point to dash, then people who care about shell performance will be
 forced to use other means _even in scripts shipped by Debian_ - and the
 enable command is a very powerful tool to achieve that. And at that
 moment you will have exactly the same builtins aliasing different
 external commands in different scripts problem as you have now when
 allowing different shells - so you gain nothing by restricting /bin/sh
 to bash.

We don't have to require bash as the one and only /bin/sh in order to
enable maintainer scripts to use enable.  They could test to see if
they are running under bash, and only use enable if bash is being used.
If that isn't allowed by policy it should be.  The rule should just be
that the script *works* on non-bash shells (and I really like the
proposal that says a script must work on bash, dash, and a shell to be
named later), not that it be free of bashisms.
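
A sketch of what such a guarded script could look like (the
particular enable call is just a placeholder):

    # POSIX everywhere; the bash-only tweak runs only under bash
    if [ -n "$BASH_VERSION" ]; then
        enable echo    # harmless here; stands in for a real bashism
    fi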

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Why does Ubuntu have all the ideas?

2006-08-26 Thread Theodore Tso
On Thu, Aug 24, 2006 at 11:56:21AM -0500, John Goerzen wrote:
 This sort of vague anecdotal evidence has been repeated over and over.
 It may be true, but as far as I know, nobody has yet to come forth with
 reporting specific problems in Debian, only x worked out of the box in
 ubuntu but not in Debian.

OK, I have a brand-spanking new IBM/Lenovo T60p laptop with nice, fast
SATA Drives, Intel Dual Core CPU's; 1600x1200 display --- sexy
machine.  Debian stable doesn't run on it.  Ubuntu 6.06 LTS installed
out of the box on it.  So the laptop that I run when I give
presentations at conferences says Ubuntu when I fire it up, and
not Debian.

My brand-spanking new home file server with a real hardware RAID
controller (Areca) and 16 hot-swap SATA drives (6 of them currently
populated with 500 GB SATA II drives), with two dual-core Xeon chips
is running Ubuntu 6.06 LTS, because Ubuntu supported it out of
the box.  Debian stable doesn't even have a chance of supporting this
box.  I'm not sure if Debian etch will support it, since the Areca
RAID card needs an out-of-tree (although GPL) device driver, but that's
largely irrelevant, since I'm not going to run Debian unstable on a
production file server!

How many more concrete example would you like?

- Ted

P.S.  So at the moment, I'm doing my debian development work using
some crash-and-burn machines at home, and using some debian chroots
created using debootstrap.  It would be nice if I could help doing
more dogfood testing on etch, and finding and submitting bug reports
before etch shipped --- but life is short, and Ubuntu worked out of
the box on my laptop, and it would have been really difficult to figure out
how to get Debian installed onto a laptop SATA drive.  (And when I say
work, I mean including making use of a built-in wireless on a PCI
express bus that requires a driver that uses restricted firmware and a
binary-only userspace daemon.  So if people want to be pure, that's
fine, but I'm not going to give up using wireless just for ideological
purity; I just won't be using a Debian supplied kernel, and once
again, Ubuntu works out of the box.  Oh well...)


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Why does Ubuntu have all the ideas?

2006-08-26 Thread Theodore Tso
On Sat, Aug 26, 2006 at 02:16:53AM -0500, John Goerzen wrote:
  OK, I have a brand-spanking new IBM/Lenovo T60p laptop with nice, fast
  SATA Drives, Intel Dual Core CPU's; 1600x1200 display --- sexy
  machine.  Debian stable doesn't run on it.  Ubuntu 6.06 LTS installed
 
 Out of curiousity, why not?

No support for: (The * are critical)

* SATA Hard Drives (*)
* IPW3945 wireless (*)
* Intel AD1981 HD Audio (*)
* 3D Graphics support on the ATI FireGL V5200 card
(proprietary kernel module)
* Verizon 1xEV-DO

Pretty much all of the modern hardware on the T60 is completely
unsupported by Debian; and most of the above is supported out of the
box by Ubuntu.

 It sounds, though, that your problem could be solved if we revved the
 kernel in stable (and the installer) more often.  See my message on
 -project about that.

A lot of the problems would be solved with that, yes --- along with
depending on backports so you don't have to live with antique
versions of OpenOffice, firefox, etc.  Maybe the answer is that
modern kernels and modern installers should be adopted by
backports.org.

Or maybe backports should be considered a 1st class part of Debian; so
in addition to old-stable, stable, testing, and unstable, we could
add stable-useful.  The fact of the matter is that the stable
distribution today is pretty much useless for desktop users, and
useless for people who need to install on modern servers (i.e.,
anything sold in the last 6 months).

And yet, we claim that our highest goal in our social contract is to
serve our users.  Sure... and the people for whom stable isn't
useful are simply not our users; they become Ubuntu's users or
Fedora's users instead.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Why does Ubuntu have all the ideas?

2006-08-26 Thread Theodore Tso
On Sat, Aug 26, 2006 at 04:02:04PM +0200, Hendrik Sattler wrote:
 Am Samstag 26 August 2006 15:15 schrieb Theodore Tso:
  No support for: (The * are critical)
 
  * SATA Hard Drives (*)
  * Intel AD1981 HD Audio (*)
 
 This stuff did not even exist when Sarge was released. Half of
 userland would not fit this hardware, so who cares.

Umm, the people owning this laptop who choose Ubuntu instead of Debian
care.

 - installer did not read in the CDs for package lists and the GUI does not 
 even support this (or for any other means of modifying /etc/apt/sources.list)

From the menubar.  System -- Administration -- Synaptic Package Manager

Wait for the package manager to come up, then click on Settings -- Repositories

There is an Add CDROM button, and you just click on it.  

(No need to run vi, or emacs, or need to understand the
/etc/apt/sources.list format.)   Seems pretty user-friendly to me.

 - /etc/resolv.conf was not present but DHCP client complained about that

Hmm, I didn't notice this problem.  When the dhcp client started
during the install process, it created the /etc/resolv.conf file for
me, and subsequent dhcp clients updated the /etc/resolv.conf file
information automatically from the DHCP server.

 - the root has no password and you must use sudo sucks for many things as 
 the access to root is not consistent (some invocation type can use su 
 programs but those cannot work).

That's a philosophical dispute, but it's easily fixed simply by
setting a root password if you really want to use a root shell.  (Or
by just doing sudo bash, of course.)  I happen to like having a root
user with a password and to su to root, so I set up my system that
way.  However, I view that as an emacs vs. vi sort of religious
dispute.

 - X ran with the wrong resolution (typical i915 problem) and with the wrong 
 dpi setting

Can't speak to that; my ATI Firegl video worked automatically out of
the box --- with 3D accelerated graphics automatically.

 - /etc/network/interfaces listed non-existant devices and because of WPA, a 
 manual setup of this file is needed

I didn't notice that problem.

 - something useful like ifplugd was not installed and the user was
 puzzled by the fact that plugging in the network cable did not
 result in network access

I agree that it would be nice if ifplugd or laptop-net were installed
by default, but last I checked Debian didn't install either by
default, either.  So what's your point?

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Why does Ubuntu have all the ideas?

2006-08-26 Thread Theodore Tso
On Sat, Aug 26, 2006 at 03:26:12PM +, Sam Morris wrote:
 On Sat, 26 Aug 2006 16:02:04 +0200, Hendrik Sattler wrote:
  Am Samstag 26 August 2006 15:15 schrieb Theodore Tso:
  No support for: (The * are critical)
 
 * SATA Hard Drives (*)
 * Intel AD1981 HD Audio (*)
  
  This stuff did not even exist when Sarge was released. Half of userland 
  would 
  not fit this hardware, so who cares.
 
 How do other long-lived distributions handle this problem? How does one
 install RHEL 4 on such a machine?

RHEL4 has updated kernels/installers that have additional device
drivers added.  And of course it helps that Red Hat has new releases
somewhat more frequently than Debian does with its stable releases;
but that's one of the downsides of relying on an all-volunteer
engineering base.  Things get done... whenever.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Stuff the installer does which isn't done on upgrade....

2006-08-15 Thread Theodore Tso
On Tue, Aug 15, 2006 at 10:53:14AM +0200, Goswin von Brederlow wrote:
 Did they fix that? When I first looked into dir_index it was said that
 it would corrupt the directories since it would search the old linear
 dirs via hash and insert new entries by hash into linear dirs.

Who said that?  I was the one who merged the dir_index feature into
Linux 2.5/2.6, and that was never the case; it was always safe to
enable dir_index on the fly, by design.

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Stuff the installer does which isn't done on upgrade....

2006-08-14 Thread Theodore Tso
On Fri, Aug 04, 2006 at 03:02:34PM +0200, Goswin von Brederlow wrote:
 For dir_index you have to take the FS offline, tune2fs and fsck it or
 you totaly corrupt it.

Actually, that's not true.  It's perfectly safe to run tune2fs on a
mounted volume to enable the dir_index feature.  All directories from
that point on which grow beyond a single disk block will use the
hashed-tree optimization.  e2fsck -fD run off-line is only needed in
order to upgrade existing large directories to use dir_index.

(Of course, you can also do something like cd ~/Maildir; mkdir
cur.new; mv cur/* cur.new; rmdir cur; mv cur.new cur if you want, as
long as you are confident you don't screw up or confuse a currently
running mail client.)

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Stuff the installer does which isn't done on upgrade....

2006-08-14 Thread Theodore Tso
On Sat, Aug 05, 2006 at 07:58:20PM +0200, Mike Hommey wrote:
 I don't know about the installer, but all filesystems I created with
 mke2fs recently also have resize_inode, which isn't even in the tune2fs
 manpage.

It's not in the tune2fs man page because e2fsprogs doesn't currently
support adding the resize_inode feature after the fact; this is one
you have to do at mke2fs time.  (There is an ext2prepare program that
will do this, but the code has been too scary for me to just integrate
into e2fsprogs with a complete rewrite, and I haven't had time to do
this.)

The e2fsprogs-udeb package does include the mke2fs.conf file, so if
the installer is using the latest e2fsprogs-udeb, it should be
creating filesystems with the dir_index and resize_inode features.
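
(For reference, the relevant stanza of mke2fs.conf looks roughly like
this --- illustrative; consult the shipped file for the exact
contents:

    [defaults]
        base_features = sparse_super,filetype,resize_inode,dir_index
        blocksize = 4096

so any mke2fs built against it picks those features up by default.)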

- Ted


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Non-DD's in debian-legal

2006-06-12 Thread Theodore Tso
On Mon, Jun 12, 2006 at 09:35:32AM -0400, Jeremy Hankins wrote:
 This is one of the most common accusations leveled against d-l: that the
 membership of d-l is skewed and not representative of Debian as a whole.
 If that's true there's not much d-l can do about it, of course, and the
 whole process of license evaluation should perhaps be rethought.  The
 simplest solution, though, is for those who think d-l skewed to start
 participating.

 

 I think what's concerning to most (it concerns me) is that people seem
 to be _avoiding_ d-l, presumably because they see it as invalid or
 corrupted by weirdos.  That's indicative of a serious problem, because
 it means licensing issues aren't being discussed _at all_.  As saddened
 as I would be if d-l went private, if doing so is the only way to solve
 that problem it's probably a good idea.

The d-l list has a problem which is shared by many Debian mailing
lists (including debian-vote and debian-devel, and I'm sure it's not
limited to them) which is that far too many people subscribe to the
last post wins school of debate.  People don't listen, they just
assert their point of view --- back and forth, back and forth.  Foo!
Bar!  Foo!  Bar!

In addition, far too many people treat mailing lists like irc
discussions, where one-line witty repartee provides entertainment
(perhaps) but does not necessarily help bring an issue to closure.

As a result, I have deliberately avoided d-l, because I have better
things to do with my time.  If you want to reform d-l, it's not enough
to ask people to just participate.  It's going to be necessary to
enforce some cultural changes to how participants on Debian mailing
lists behave.  They need to be respectful of the other participants'
time, and not just use the excuse of free speech to justify any kind
of anti-social and self-centered behaviour.

Unfortunately, the only thing I can think of that might be useful
would be active moderation of the list, combined with a summary of
the opinions (both majority and minority) prepared by the moderator,
which, when it is due, can be archived on some
web site or wiki.  Yes, that means that d-l won't be the home to
free-spirited, free-ranging debate; instead, there might be structured
discussion that actually leads to light being shed and work being
accomplished in an efficient manner.  But it does mean that a
moderator has to be found who can declare certain discussions
ratholes, who is capable of fairly summarizing the positions being
espoused by various camps on the list, and who can hold straw polls
that are based on the participants on the list, and not on the number
of postings (which unfortunately leads to the "last post wins" abuse
and style of discourse that we see on all too many mailing lists).

If everyone participating on the list were mature and grown-up, this
wouldn't be necessary.  And I would suspect that the call to restrict
d-l to only DD's is a hope to exclude some of the more immature and
less disciplined posters.  But, as we all know, being a DD does not
guarantee social maturity, so I don't believe that is necessarily the
best way to do things.

However, I *do* believe that d-l is a cesspit, and I for one am very
glad that in its current incarnation, it is not at all binding and has
no value other than being a debating society --- a debating society that
I am very glad I can avoid, thank you very much.

- Ted





Re: [Debconf-discuss] list of valid documents for KSPs

2006-06-01 Thread Theodore Tso
On Wed, May 31, 2006 at 02:48:13PM -0500, Manoj Srivastava wrote:
 The person who I thought was Martin has apparently revealed
  that the identity documents that were presented to the key signing
  party participants were ones that did not come out of a trusted
  process.  Typically, the identity papers are produced by official
  bodies, like governments, that have international treaties in place
  to assure a minimal conformance of identity checks.

Wrong again.  There are no international treaties, at least not
general ones, that guarantee identity checks.  (There may be
specialized ones such as those that bind countries within the European
Union, but not in general.)

  Does that mean that if someone shows up at a future keysigning
  party at OLS, for example, with a Transnational Republic ID which
  has the name "Manoj Srivastava", everyone would therefore be
  entitled to demand on debian-devel that all signatures for Manoj
  Srivastava should now be revoked?
 
 I would think that if an imposter was running around, and if
  people were no longer sure that such an imposter was the one whose
  ID they had based their signatures on, HELL YES!!!

So if someone purchases a fake ID for, oh, $20 that appears to be a
government-issued ID, and successfully shows that at least one
signature was signed using said fake, apparently government-issued ID,
you'd acquiesce to someone asserting that everyone should revoke their
signatures on your key?

I didn't say this; I think you were careless editing attributions in
your note, which replied to multiple e-mails.

  Had Martin never mentioned this, it would have been a non-issue.
  There is no real damage. While signatures may have been based on a
  non-official ID, Martin did indeed own the key in question, so the
  end harm is zero. But Martin decided to publish this experiment

  A security mechanism that only works in the non-presence of
  fraudsters is no security mechanism at all.
  A KSP that depends on there being any pre-existing trust to abuse is
  *completely worthless* as a KSP whether or not that trust is abused
  or not.
 
 You just dismissed signing PKA keys by individuals.  There is
  no way that an individual with access to official records can
  determine if a particular passport is a test passport or not.

And now you are sinking deeper and deeper into paranoia.  Of course
there is no way to tell whether or not a particular passport is real.
Heck, do you know how easy it is to forge even official government
records?  In most US states, it's trivial, if you know what you are
doing, to get an official driver's license issued under a false
identity.  And it's only a little bit more work to get a passport with
a false identity, if you are willing to be a bit dishonest about
things.

If absolute trust is the only thing you will accept, then you might as
well withdraw from the Debian project, and go hide in a hole with some
paranoid survivalists in Montana.  We can't have absolute trust; it
is impossible.  And you seem to be the one demanding it, and if you
can't have it, it's "off with their signatures", or "off with their
key on the keyring"!

- Ted





Re: [Debconf-discuss] list of valid documents for KSPs

2006-05-31 Thread Theodore Tso
On Tue, May 30, 2006 at 07:49:34AM -0500, Manoj Srivastava wrote:
  What Martin Krafft showed you was,
 
 How do I know that person actually was  Martin Krafft?

So if you have no idea whether or not someone was Martin Krafft, how
can you ask everyone to revoke all signatures for Martin Krafft as you
did earlier?  That is really unreasonable.

Does that mean that if someone shows up at a future keysigning party
at OLS, for example, with a Transnational Republic ID which has the
name "Manoj Srivastava", everyone would therefore be entitled
to demand on debian-devel that all signatures for Manoj Srivastava
should now be revoked?  After all, we have no idea if anyone who might
or might not have been Manoj Srivastava might or might not have
produced identification documents that may or may not have been
false.  We don't know!

Do you see how ridiculous this is?  How irrational you are being?

Let me try to spell it out another way.  Either the entity at the
KSP who was allegedly Martin Krafft was indeed Martin Krafft, or he
was not.  It must be one or the other; you seem to be arguing things
both ways, and you don't get to do that.

If he was Martin Krafft, then he didn't carry out any attack!  No
identity was forged, and no harm was done.  Maybe he presented
identification that you wouldn't accept, but that is not intrinsically
wrong!  If the entity was indeed Martin Krafft, then that entity broke
no criminal, civil, or moral laws.

If he was not Martin Krafft, then the real Martin Krafft was not
culpable, and your argument that the real Martin Krafft should
therefore be censured in any way, shape, or form is not just.  And as
I've shown, if someone showing up with forged identity papers is
enough to demand that all signatures on a key be revoked, it would be
trivially easy for me or anyone else to arrange to have someone show
up at OLS with forged identity papers bearing your name, and carry out a
fairly devastating denial of service attack.

 I say people who try to trick me into signing a key based on
  an untrusted process of identity verification are evil doers.

And I say, as have others, that "untrusted process of identity
verification" is by definition not an absolute term.  So how can you
say that someone is an evildoer just because they present a form of
identity which happens to be untrusted by *you*?  What if someone
presents a University ID?  That isn't a government ID; does that
mean they are evil?  Quick, consign them to the Ninth Circle of Hell,
reserved for traitors and people who commit treason!  I say this is
insanity.  And obviously argument by assertion is a valid form of
argument, since you seem to use it liberally.  :-)

 A boss with no humor is like a job that's no fun.

I guess you don't see how ironic your signature line is...

- Ted





Re: [Debconf-discuss] Re: Please revoke your signatures from Martin Kraff's keys

2006-05-26 Thread Theodore Tso
On Thu, May 25, 2006 at 04:08:31PM -0400, Stephen Frost wrote:
 He didn't try to dupe people and this claim is getting rather old.
 Duping people would have actually been putting false information on the
 ID and generating a fake key and trying to get someone to sign off on
 the fake key based on completely false information.  The contents of the
 ID were accurate, as was his key, there was no duping or lying.
 Whining that he showed a non-government ID at a KSP and saying that's
 duping someone is more than a bit of a stretch, after all, I've got
 IDs issued by my company, my university, my state, my federal gov't,
 etc.  Would I be 'duping' people if I showed them my company ID?  What
 about my university ID?  Would it have garnered this reaction?  I doubt
 it.

Indeed, duping people would have been if he had passed himself off as
AJ, and managed to get people to sign a bogus key as belonging to the
DPL.  That would have been a demonstration that would have been really
obnoxious, and would justify your reaction.   

In this particular case, he did not assert incorrect information, but
rather (to use an X.509 analogy) used a Certificate signed by an
untrusted Certification Authority.  The fact that some people were
willing to trust it is about as surprising as the fact that many
people click "OK" when they see a certificate signed by a CA not in
the browser's trusted list.  But he didn't perpetrate fraud in any way.
So this is not a surprise, and it's not what I would call an
earth-shaking result.  

But nevertheless, Manoj, I think you are over-reacting.  

Chill.  Relax.  Have an alcoholic or non-alcoholic beverage of your
choice.  :-)

- Ted





Re: Bug#366780: ITP: summain -- compute and verify file checksums

2006-05-16 Thread Theodore Tso
On Fri, May 12, 2006 at 12:41:54AM +0300, Lars Wirzenius wrote:
  Apart from supporting more file formats, summain differs from the
  traditional md5sum and sha1sum utilities by providing progress
  reporting, and via convenience features such as automatic recursion
  into directories, and looking up files relative to the location of the
  checksum file, rather than the current working directory.

Have you looked at the cfv program, which is already packaged for
Debian?  There are a huge number of checksum programs out there; it
would be nice if there could be fewer of them, each with a greater
concentration of features...
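
(For context, here is a rough sketch of the incantation the
traditional tools need for recursive checksumming and verification,
which is the sort of thing these convenience features fold in:

    # Recursively checksum a tree with the classic tools...
    find . -type f -print0 | xargs -0 sha1sum > /tmp/SHA1SUMS

    # ...and verify it later; note the paths are looked up relative
    # to the current directory, not the checksum file
    sha1sum -c /tmp/SHA1SUMS

The path /tmp/SHA1SUMS is just a placeholder, of course.)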

- Ted





Re: 2.4.x Kernel, ECN And Problem Websites

2001-04-25 Thread Theodore Tso
On Wed, Apr 25, 2001 at 10:16:30PM +1000, Daniel Stone wrote:
 
 It may be a minor catch-22, but ECN is currently so broken, that only power
 users should be using it, as the rest will just continue flooding the
 netfilter list with "Netfilter breaks all my websites!". [OK, ECN isn't
 broken, the routers are, I know, but same effect. ECN breaks stuff]. So, if
 you're smart enough to know that you want ECN, and smart enough to
 understand the consequences, you should be compiling your own kernel.

Incorrect.  ECN is not broken.  The problem is that there are broken
firewalls and load balancing machines out there that incorrectly
(violating the relevant RFCs) drop packets with the ECN bits set,
when they have no business doing that.  (The RFCs indicate that those
bits should be set to zero by the sender when they were previously
undefined, but that receivers were supposed to ignore them: "Be
conservative in what you send, and liberal in what you accept.")

The vendors with broken hardware, such as the Cisco LocalDirector,
have patches available which fix the bug; they've had the bug fixes
available for the better part of a year.  The problem is
that end-customers (i.e., sites like E-Trade) are being slow to
install the patch.

As to why install with ECN enabled?  That's because ECN is important
in terms of helping the core Internet routers deal with increasing
amounts of load.  ECN stands for Explicit Congestion Notification, and
what it means is that routers can explicitly tell end-hosts to back
off because of congestion in the Internet core, as opposed to simply
dropping packets on the floor.  It improves the overall efficiency of
the network, and in the future may be important in avoiding congestive
collapse of overloaded links.
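
(For users bitten by such broken sites in the meantime, ECN can be
toggled at runtime on a 2.4 kernel without recompiling; a minimal
sketch:

    # 1 = ECN enabled, 0 = disabled
    cat /proc/sys/net/ipv4/tcp_ecn

    # Temporarily turn ECN off while lobbying the broken site to
    # apply its vendor's fix
    echo 0 > /proc/sys/net/ipv4/tcp_ecn

The setting does not persist across reboots; add "net.ipv4.tcp_ecn =
0" to /etc/sysctl.conf if you want it to stick.)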

Aside from being a real Linux kernel developer (sorry, couldn't resist
:-), I also do quite a bit of work with the Internet Engineering Task
Force, the standards body for the Internet where ECN originated, and
my colleagues in this organization, who include Jamal Hadi Salim
(one of the core Linux networking kernel developers, who also works
with the IETF), tell me that it's widely regarded that, if it weren't
for Linux, a lot of bleeding-edge protocols that may ultimately become
very important to the Internet either wouldn't have been widely
adopted, or would have been adopted much, much more slowly.

So I think it's important that Linux distributions provide an easy way
for sites to use ECN.  Whether or not ECN should be enabled by default
is a more difficult question, and really depends on what you think is
more important.  Do you enable something that will ultimately be
very beneficial to the entire Internet, even though there are some
broken sites out there willfully refusing to apply a bug fix which
Cisco and other vendors have had available for months, at the cost of
inconveniencing some users until they can figure out how to disable
ECN or lobby those sites to apply the bug fix?  Or do you take the
Microsoftian way out, and sacrifice the long-term good of the
Internet in the name of user convenience?

Ultimately, how you choose is a matter of your priorities.  But please
don't call ECN "broken".  It's not ECN's fault; it's the fault of those
web sites that refuse to update their software with a bugfix release.

- Ted