Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Glenn Maynard
On Mon, Oct 18, 2004 at 08:11:24PM -0700, John H. Robinson, IV wrote:
> > If you build with different tools, you have a different package.  "X
> > built with gcc" and "X built with icc" are very different things (just
> > as "X" and "X with x.patch and x2.patch applied" are different things).
> 
> I see your point, but I disagree entirely. If I build openssh on Solaris
> with gcc, or if I use Solaris' SUNWspro, is it a different openssh? Not
> at all. The source is still the same. The only exception I will grant
> you is code that determines the compiler being used and changes its
> actual functionality (as opposed to merely working around bugs or
> other compiler features).
> 
> This is far different from applying patches to the source.
> 
> If what you say is true, then building with gcc-3.3 would produce a
> different package than building with gcc-3.2. I think few people would
> agree with you, modulo bugs in the code/compiler/feature-set.

Consider a major, practical reason we require that packages be buildable
with free tools: so people--both Debian developers and users--can fix
the software in the future.

For example, suppose OpenSSL is built with ecc (Expensive C Compiler),
because it produces faster binaries, the Debian package is created with
it, and ends up in a stable release.  A security bug is found, and the
maintainer isn't available.  Can another developer fix this bug?  No:
you can't possibly make a stable update with a completely different
compiler, halving the speed and possibly introducing new bugs.  (Debian
is very conservative and cautious with stable updates; this is one of
the reasons many people use it.)

By the same token, users are similarly unable to exercise the level of
caution needed when making security updates on critical systems, unless
they subject themselves to whatever non-free license the compiler uses.

This is a fundamental reason it's required that packages be buildable
using free tools, and why I don't think "you can build a kind-of similar
package using free tools, but the one we're giving you can only be built
with non-free tools" is acceptable.

-- 
Glenn Maynard




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Manoj Srivastava
On Mon, 18 Oct 2004 21:59:36 +0200, Wesley W Terpstra <[EMAIL PROTECTED]> said: 

> On Mon, Oct 18, 2004 at 01:33:07PM -0500, John Hasler wrote:
>> Josselin Mouette writes:
>> > Main must be built with only packages from main.
>> 
>> Packages in main must be _buildable_ with only packages from main.

> Interesting.

> This slight difference in wording sounds to me like I would indeed
> be able to include prebuit object files, so long as the package
> could be built without them. Is that correct?

No.

> The actual text in policy is:
> * must not require a package outside of main for compilation or
>   execution
> (thus, the package must not declare a "Depends", "Recommends", or
> "Build-Depends" relationship on a non-main package)

> This wording appears to back up what you say (John).  The clause
> 'must not require' is fine with my case. Since the source files can
> be rebuilt with gcc, icc is not required. Execution is a non-issue.

Don't ignore the DFSG. Only free software goes in main.  Also,
 there is a practical reason for requiring the .deb we ship to be the
 same as (or close to) the .deb people can build on their own buildd.
 Our users need to be able to debug problems -- and if the stuff they
 build in a buildd behaves significantly differently (or displays
 bugs), it is up to us to fix them.

We should not fool people into believing that things they
 believe to be free (in main) are more capable than they would be if
 built with free software. This hurts us in the long run since the use
 of non-free software scratches an itch that might otherwise have
 provided motivation to improve the free software.

manoj
-- 
You're dead, Jim. McCoy, "Amok Time", stardate 3372.7
Manoj Srivastava   <[EMAIL PROTECTED]>  
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Manoj Srivastava
On Mon, 18 Oct 2004 18:28:01 -0700, John H Robinson, IV <[EMAIL PROTECTED]> 
said: 

> I am not subscribed to debian-legal.
> Steve Langasek wrote:
>> On Tue, Oct 19, 2004 at 02:04:42AM +0200, Wouter Verhelst wrote:
>> > On Mon, Oct 18, 2004 at 07:02:19PM -0400, Glenn Maynard wrote:
>> > > it says "the package in main must be buildable with tools in
>> > > main".
>> 
>> > That is still the case. The fact that the package in main is
>> > built using non-free tools is irrelevant -- it can be rebuilt
>> > using software only in main; it can be run using software only in
>> > main; and the difference is not noticeable except by comparing
>> > checksums, benchmarks, or to those with an intimate knowledge of
>> > compiler optimizers.
>> 
>> > A difference in optimization is not relevant to a package's
>> > freedom.
>> 
>> If compiling the program with a non-free compiler gains you users
>> who would not find the package usable otherwise, distributing
>> binaries built with such a compiler induces your users to be
>> dependent (indirectly) on non-free software.  That is a freedom
>> issue.

> I tend to agree with Wouter on this issue. The source can compile
> with gcc. Anyone with the sources and gcc can rebuild the package
> and It Works. No difference in functionality, merely a difference in
> performance.

> Note the exact words (I am assuming that Glenn copied them
> verbatim): the package in main must be buildable with tools in main

> Note what it does not say: the package in main must have been built
> only with tools in main

That is all that policy may require; my reading, however, is
 that this violates the DFSG. We would have things in main that
 require non-free components to display that behaviour, and that can't
 be readily reproduced using free tools (the maintainer is admitting
 that the behaviour of the free version is significantly different).

> This package is buildable by tools in main. It meets the letter of
> the law. The spirit seems a bit ambiguous. Good case in point: the
> m68k cross-compiled stuff, where the cross-compiler used was
> non-free. (I have not verified the accuracy of the claim that the
> cross-compiler is non-free.)

And the spirit of the DFSG is violated.

> Also, this discussion is academic, as the maintainer is going to
> split the package into two: gcc-built in main, and icc-built in
> contrib. Given the circumstances, I felt that this action was the
> best.

Quite so.

> We could fork this into a discussion of re-building all packages
> uploaded (a la source-only uploads), which neatly sidesteps the
> ``intent of buildable with tools in main'' issue entirely.

That is a different thread altogether.

manoj
-- 
"I am ... a woman ... and ... technically a parasitic uterine growth"
Sean Doran the Younger
Manoj Srivastava   <[EMAIL PROTECTED]>  
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Maintenance of User-Mode Linux packages

2004-10-18 Thread martin f krafft
also sprach Matt Zimmerman <[EMAIL PROTECTED]> [2004.10.19.0351 +0200]:
> Is anyone (other than martin f krafft) interested in
> co-maintaining some or all of the UML-oriented packages in Debian?

Does this mean I don't qualify or that you would prefer to have
a bunch of people cooperate?

I would prefer the latter, using alioth for hosting...

-- 
Please do not CC me when replying to lists; I read them!
 
 .''`. martin f. krafft <[EMAIL PROTECTED]>
: :'  :proud Debian developer, admin, and user
`. `'`
  `-  Debian - when you have better things to do than fixing a system
 
Invalid/expired PGP subkeys? Use subkeys.pgp.net as keyserver!


signature.asc
Description: Digital signature


Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread John H. Robinson, IV
I am not subscribed to debian-legal.

Glenn Maynard wrote:
> On Mon, Oct 18, 2004 at 06:28:01PM -0700, John H. Robinson, IV wrote:
> > Note the exact words (I am assuming that Glenn copied them verbatim):
> > the package in main must be buildable with tools in main
> 
> Exact words are:
> 
>  In addition, the packages in _main_
> * must not require a package outside of _main_ for compilation or
>   execution (thus, the package must not declare a "Depends",
>   "Recommends", or "Build-Depends" relationship on a non-_main_
>   package),

Right, I don't see where a build-depends on icc would be required in
this case. Now, how the maintainer is going to get it to build with icc
if available and without it if not, I'm not too certain about (mind
you, this presupposes that he was still going with the
icc-built-in-main approach). Unless he was going to use a non-standard
build environment (eg: setting an environment variable).

> If you build with different tools, you have a different package.  "X
> built with gcc" and "X built with icc" are very different things (just
> as "X" and "X with x.patch and x2.patch applied" are different things).

I see your point, but I disagree entirely. If I build openssh on Solaris
with gcc, or if I use Solaris' SUNWspro, is it a different openssh? Not
at all. The source is still the same. The only exception I will grant
you is code that determines the compiler being used and changes its
actual functionality (as opposed to merely working around bugs or other
compiler features).

This is far different from applying patches to the source.

If what you say is true, then building with gcc-3.3 would produce a
different package than building with gcc-3.2. I think few people would
agree with you, modulo bugs in the code/compiler/feature-set.

> > This package is buildable by tools in main. It meets the letter of the
> > law. The spirit seems a bit ambiguous.
> 
> I hope we all agree that the spirit is what matters; people who ignore
> the spirit and word-lawyer the letter are people to ignore.

Absolutely.

Only lawyers and their ilk spend a lot of time with letter vs spirit :)

-- 
John H. Robinson, IV  [EMAIL PROTECTED]
 http  
WARNING: I cannot be held responsible for the above, sbih.org ( )(:[
as apparently my cats have learned how to type.  spiders.html  




Re: RFC: common database policy/infrastracture

2004-10-18 Thread sean finney
On Mon, Oct 18, 2004 at 09:19:28AM +0200, Javier Fernández-Sanguino Peña wrote:
> [That should be http://people.debian.org/~seanius/policy/dbapp-policy.html, 
> BTW]

oops!

> I'm missing some "Best practice" on how to setup the database itself. That 
> is, how to setup the tables (indexes, whatever...) that the application 
> will use from the database and, maybe, even some initial data in some of 
> the tables.

i hadn't addressed that yet because it's very specific to the database
type, and i was starting from a more general perspective.  my approach
is to first decide on what the appropriate package behavior is, and then
sort out the technical details of how to get a package to behave
that way.

> One common issue is that the application depends on that in order to work 
> and it's not done automatically. Maybe the user is prompted to do it but he 
> might be unable to do so until the installation is finished. For an example 
> of this problem see #205683 (and #219696, #265735, #265878). 

that's pretty funny, but exactly the kind of stuff we're trying to
avoid.

> It might be good to provide a common mechanism to setup the database so
> that users are not asked to run an SQL script under /usr/share/XXX (usually
> doc/package/examples). Maybe even defining a common location for these
> (/usr/share/db-setup/PACKAGE/.{mysql,pgsql}?). Notice that the SQL
> script that needs to be run might differ between RDBMS.

in addition to the sql, the process of adding users or accessing the
database will differ too.  

On Mon, Oct 18, 2004 at 11:30:59AM +0100, Oliver Elphick wrote:
> If the database supports SQL transactions (as PostgreSQL does), SQL
> scripts should do everything inside a transaction, so that either all
> objects are successfully created and populated or else there is no
> change at all to the database.

good idea.
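A minimal sketch of that all-or-nothing idea, using Python's sqlite3
purely so it is runnable here (a real dbapp setup script would issue
the same BEGIN/COMMIT/ROLLBACK against the admin's chosen RDBMS, e.g.
PostgreSQL):

```python
import sqlite3

# All-or-nothing database setup: run every statement of the setup script
# inside one transaction, so a failure midway leaves no half-created
# schema behind.

db = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
try:
    db.execute("BEGIN")
    db.execute("CREATE TABLE users (name TEXT NOT NULL)")
    db.execute("INSERT INTO users VALUES ('admin')")
    db.execute("INSERT INTO users VALUES (NULL)")  # fails: NOT NULL violated
    db.execute("COMMIT")
except sqlite3.IntegrityError:
    db.execute("ROLLBACK")

# Even the CREATE TABLE was rolled back: the database is untouched.
tables = db.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
assert tables == []
```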

> It is, however, quite possible for the application installation to fail
> because of circumstances beyond the packaging system's ability to
> manage.  Therefore, the package installation scripts need to be able to
> report what further steps are needed in order for installation to be
> completed.

again, good idea.

On Mon, Oct 18, 2004 at 09:08:35AM +0100, Oliver Elphick wrote:
> > for the admin password, i agree.  for the app_user password, i think
> > most apps are storing this password in a cleartext file for the
> > application to use (php web apps, for example).  that's my opinion,
> > anyways.
> 
> That may differ per application.  I would argue that it is very bad
> security in all circumstances.

well, i suppose we could always ask the admin if they want to store
the passwords...


sean

-- 


signature.asc
Description: Digital signature


Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Manoj Srivastava
On Tue, 19 Oct 2004 00:37:45 +0200, Wouter Verhelst <[EMAIL PROTECTED]> said: 

> On Mon, Oct 18, 2004 at 07:51:00PM +0200, Josselin Mouette wrote:
>> Le lundi 18 octobre 2004 à 19:22 +0200, Wesley W. Terpstra a écrit
>> :
>> > So, when it comes time to release this and include it in a .deb,
>> > I ask myself: what would happen if I included (with the C source
>> > and ocaml compiler) some precompiled object files for i386? As
>> > long as the build target is i386, these object files could be
>> > linked in instead of using gcc to produce (slower) object
>> > files. This would mean a 2* speedup for users, which is vital in
>> > order to reach line-speed. Other platforms recompile as normal.
>> > 
>> > On the other hand, is this still open source?  Is this allowed by
>> > policy?  Can this go into main?
>> 
>> Main must be built with only packages from main.

> No, that's not true.

>  In addition, the packages in _main_
> * must not require a package outside of _main_ for
>   compilation or execution (thus, the package must not
>   declare a "Depends", "Recommends", or "Build-Depends"
>   relationship on a non-_main_ package),

> There's a difference, which is crucial. ICC may not be Free
> Software, policy does not say you must only use Free Software to
> build a package; it says you must not /require/ a package outside
> main to build it.

Policy is not the only thing we consult when deciding what
 fits into main.

> The difference is subtle, but crucial.

Reproducing this package _does_ require non-free software.  If
 I use just free software, I can't reproduce that .deb -- and thus
 that .deb may not be kept in main.


> Wesley's software can be built using software in main. It will not
> be as fast, but it will still do its job, flawlessly, without loss
> of features, with the ability to modify the software to better meet
> one's needs if so required.

Right. So the package may live in main -- as long as the .deb
 was actually produced from packages that live in main.

>> If you really want to distribute a package built with icc, you
>> should make a separate package in the contrib section, and have it
>> conflict with the package in main.

> That's also possible, of course, but not required IMO.

I think it is.  Any package in the archive should be
 reproducible if built on a buildd.

>> The GPL doesn't restrict anything you are describing, as long as
>> the source is available alongside.

> Nor does Policy.

I would not be too sure.

Software built using non-free tools is not free -- even if a
 different version _may_ be built using free software. So, a .deb
 built using non-free software should not be in the main archive.

manoj

-- 
Kiss your keyboard goodbye!
Manoj Srivastava   <[EMAIL PROTECTED]>  
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Glenn Maynard
On Tue, Oct 19, 2004 at 02:49:09AM +0200, Wesley W. Terpstra wrote:
> I suggest you try:
> dd if=/dev/urandom of=testing bs=16 count=1048576
> split -a 3 -b 1024 testing testing.part.
> find -name testing.part.\* -print0 | xargs -0 parchive a -n 16384 testing.par 

You're splitting into parts which are far too small.  Typically, a file is
split into 20-50 parts, with 5 or 6 PARs for the set (acting as wildcards),
but you can easily create 50 PARs and say "collect any 25".  It's not
designed for thousands of tiny parts; if people want to distribute
thousands of files, they typically archive them up first, and split the
archive.  Most PAR operations are IO-bound (judging by drive thrashing,
not from actual benchmarks).
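The "collect any k" behaviour can be sketched with a toy example
(simplified: a single XOR parity block covers any one missing part;
real PAR files use Reed-Solomon codes, so k parity blocks cover any k
missing parts, not just one):

```python
# Toy sketch of the "wildcard" idea: one XOR parity block can stand in
# for any ONE missing part. This is the k=1 case of what PAR's
# Reed-Solomon code does for arbitrary k.

def make_parity(parts):
    """XOR equal-length parts together into a single parity block."""
    parity = bytearray(len(parts[0]))
    for part in parts:
        for i, b in enumerate(part):
            parity[i] ^= b
    return bytes(parity)

def recover(parts, parity, missing):
    """Rebuild the one missing part from the survivors plus the parity."""
    survivors = [p for i, p in enumerate(parts) if i != missing]
    return make_parity(survivors + [parity])

parts = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(parts)
assert recover(parts, parity, 1) == b"BBBB"  # part 1 lost, rebuilt exactly
```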

> Perhaps I should make my program 'par' command-line compatible!
> OTOH, when you have so many small files it is not convenient.

I don't really understand the use of allowing thousands of tiny parts.
What's the intended end use?

-- 
Glenn Maynard




Bug#277193: ITP: tagtool -- tool to tag and rename MP3 and Ogg Vorbis files

2004-10-18 Thread Graham Wilson
Package: wnpp
Severity: wishlist

* Package name: tagtool
  Version : 0.10
  Upstream Author : Pedro Lopes <[EMAIL PROTECTED]>
* URL : http://pwp.netcabo.pt/paol/tagtool/
* License : GPL
  Description : tool to tag and rename MP3 and Ogg Vorbis files

Audio Tag Tool is a program to manage the information fields in MP3 and
Ogg Vorbis files (commonly called tags). Tag Tool can be used to edit
tags one by one, but the most useful features are mass tag and mass
rename. These are designed to tag or rename hundreds of files at once,
in any desired format.

-- 
gram




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Glenn Maynard
On Mon, Oct 18, 2004 at 06:28:01PM -0700, John H. Robinson, IV wrote:
> Note the exact words (I am assuming that Glenn copied them verbatim):
> the package in main must be buildable with tools in main

Exact words are:

 In addition, the packages in _main_
* must not require a package outside of _main_ for compilation or
  execution (thus, the package must not declare a "Depends",
  "Recommends", or "Build-Depends" relationship on a non-_main_
  package),

If you build with different tools, you have a different package.  "X
built with gcc" and "X built with icc" are very different things (just
as "X" and "X with x.patch and x2.patch applied" are different things).

> This package is buildable by tools in main. It meets the letter of the
> law. The spirit seems a bit ambiguous. Good case in point, the m68k

I hope we all agree that the spirit is what matters; people who ignore
the spirit and word-lawyer the letter are people to ignore.

-- 
Glenn Maynard





Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wesley W. Terpstra
On Tue, Oct 19, 2004 at 02:49:09AM +0200, Wesley W. Terpstra wrote:
> find -name testing.part.\* -print0 | xargs -0 parchive a -n 16384 testing.par 

After taking a look at the source code for par, I found this in rs.c:
|*| Calculations over a Galois Field, GF(8)

What does that mean? It means there are only 2^8 possible values to evaluate
the polynomial at. So, in fact, the above command will not even work, should
it terminate. However, this is not a problem with RS codes, just par.
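To make the limitation concrete, here is a small sketch of GF(2^8)
arithmetic (the reduction polynomial 0x11d is an assumption; rs.c may
use a different one). The nonzero elements are exhausted after 255
steps, which is why a code built over this field cannot address
anywhere near the 16384 parts requested above:

```python
# GF(2^8) multiplication via shift-and-add with polynomial reduction.
# 0x11d (x^8+x^4+x^3+x^2+1) is the polynomial commonly used for RS codes;
# whether rs.c uses this exact one is an assumption.
def gf_mul(a, b, poly=0x11d):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:  # reduce back into 8 bits
            a ^= poly
    return r

# 2 is a generator of this field's multiplicative group: its powers
# enumerate all 255 nonzero elements, then cycle back to 1.
seen = set()
x = 1
for _ in range(255):
    seen.add(x)
    x = gf_mul(x, 2)
assert len(seen) == 255 and x == 1  # only 255 distinct evaluation points
```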

I was also wrong about them using Berlekamp-Massey.
They use Gaussian elimination to compute the inverse.
This is complexity O(R^2 N), which is even worse (see rs.c:214), where
R = number of input files/packets
N = total number of files in the output (so N >= R)
ie: N >= R = n -> O(n^3), vs. O(n^2) for Berlekamp-Massey or O(n log n) for mine.

OTOH, since they have at most 2^8 possible 'packets' (including the source), 
that keeps the complexity from killing them. --> 'only' 2^24 operations =)
Also on the plus side, this is the complexity per number of packets, 
not the complexity of the data to be processed.

Although I haven't checked, I would speculate from the simplicity of the
code that they use the normal matrix product algorithm, which means (at
best) O(LR) where L is the total length of the data, R is the number of
files.

As I mentioned, my code has an alphabet around 2^62.
Actually, it's (2^31-1)^2-1 .. but that's almost the same.
Handling potentially so many more packets means you need a new algorithm.

Still, par is very cool and I will liberally lift usability ideas from it.
... and, of course, use it for unfair comparisons in my paper. =)

-- 
Wesley W. Terpstra




Maintenance of User-Mode Linux packages

2004-10-18 Thread Matt Zimmerman
Is anyone (other than martin f krafft) interested in co-maintaining some or
all of the UML-oriented packages in Debian?  This includes the following
source packages which I currently maintain:

- user-mode-linux
- kernel-patch-uml
- uml-utilities

Things are a bit chaotic upstream at the moment, and due to real life
concerns I have fallen behind on their maintenance.

Manoj has recently added UML support to kernel-package, and user-mode-linux
should be reworked to use that, rather than its current special-case code.

-- 
 - mdz




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wesley W. Terpstra
On Tue, Oct 19, 2004 at 02:49:09AM +0200, Wesley W. Terpstra wrote:
> I would wager that par is using the Berklekamp-Masey algorithm for decoding; 
That would be Berlekamp-Massey. Apologies to both.
I should add their names to my spell checker. =)

-- 
Wesley W. Terpstra




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread John H. Robinson, IV
I am not subscribed to debian-legal.

Steve Langasek wrote:
> On Tue, Oct 19, 2004 at 02:04:42AM +0200, Wouter Verhelst wrote:
> > On Mon, Oct 18, 2004 at 07:02:19PM -0400, Glenn Maynard wrote:
> > > it says "the package in main must be buildable with tools in main".
> 
> > That is still the case. The fact that the package in main is built using
> > non-free tools is irrelevant -- it can be rebuilt using software only in
> > main; it can be run using software only in main; and the difference is
> > not noticeable except by comparing checksums, benchmarks, or to those
> > with an intimate knowledge of compiler optimizers.
> 
> > A difference in optimization is not relevant to a package's freedom.
> 
> If compiling the program with a non-free compiler gains you users who would
> not find the package usable otherwise, distributing binaries built with
such a compiler induces your users to be dependent (indirectly) on non-free
> software.  That is a freedom issue.

I tend to agree with Wouter on this issue. The source can compile with
gcc. Anyone with the sources and gcc can rebuild the package and It
Works. No difference in functionality, merely a difference in
performance.

Note the exact words (I am assuming that Glenn copied them verbatim):
the package in main must be buildable with tools in main

Note what it does not say:
the package in main must have been built only with tools in main

This package is buildable by tools in main. It meets the letter of the
law. The spirit seems a bit ambiguous. Good case in point: the m68k
cross-compiled stuff, where the cross-compiler used was non-free. (I
have not verified the accuracy of the claim that the cross-compiler
is non-free.)

Also, this discussion is academic, as the maintainer is going to split
the package into two: gcc-built in main, and icc-built in contrib. Given
the circumstances, I felt that this action was the best.

We could fork this into a discussion of re-building all packages
uploaded (a la source-only uploads), which neatly sidesteps the
``intent of buildable with tools in main'' issue entirely.

-- 
John H. Robinson, IV  [EMAIL PROTECTED]
 http  
WARNING: I cannot be held responsible for the above, sbih.org ( )(:[
as apparently my cats have learned how to type.  spiders.html  




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wesley W. Terpstra
On Mon, Oct 18, 2004 at 07:45:39PM -0400, Glenn Maynard wrote:
> On Tue, Oct 19, 2004 at 12:59:42AM +0200, Wesley W. Terpstra wrote:
> > To which, I say, wtf!
> You're using it wrong.

Well thank goodness, because otherwise that would be really awful. :)
This gives me a great source to compare my algorithm's speed against.
Thank you again for showing me it, and straightening out my misuse.

> [instructions which I followed and worked]
> This is exactly what you describe.

Yep! That's what Reed-Solomon codes can do.
I never claimed to invent them; as I said earlier, my work was on creating
an algorithm which could decode >5GB of data broken into packets (which for
me means small packets of network size).

I suggest you try:
dd if=/dev/urandom of=testing bs=16 count=1048576
split -a 3 -b 1024 testing testing.part.
find -name testing.part.\* -print0 | xargs -0 parchive a -n 16384 testing.par 

... now, twiddle your thumbs.
After about a minute, you will see it process the first file.
Another 30s will get you the second.
You have 16382 to go.

If you strace it, you will see there are no system calls being performed at
this time, so the slowness is not due to the large directory. Also, the
processor is fully loaded.

This is only for _encoding_, which is much easier than decoding.
You are also only doing it over 16MB.
My algorithm can handle 3 orders of magnitude more data much faster.

I would wager that par is using the Berklekamp-Masey algorithm for decoding; 
this is the most popular algorithm for RS codes at the moment. 
This algorithm has time O(n^2) for n 'parts'.
My algorithm has time O(nlogn) and a not too terrible constant.

Perhaps I should make my program 'par' command-line compatible!
OTOH, when you have so many small files it is not convenient.

Thank you very very much for bringing this implementation to my attention.

-- 
Wesley W. Terpstra




Re: Xsession doesn't use umask setting from /etc/login.defs

2004-10-18 Thread Tomas Fasth
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Steve Langasek wrote:
| On Mon, Oct 18, 2004 at 02:06:16AM -0500, Branden Robinson wrote:
|> On Fri, Oct 15, 2004 at 12:06:36PM -0700, Steve Langasek wrote:
|>> environment variables, at least, are trivial to accomplish
|>> using the pam_env module.  Properly setting a umask would
|>> call for something else yet.
|
|> Would pam_umask.so be a worthwhile exercise for some
|> enterprising person?
|
| (after discussing on IRC) Considering we already have a
| pam_limits.so for ulimits, yes.
Good thinking. I looked at the source. It (pam_limits) already
(conditionally) supports things like Linux system capabilities. It
should be able to handle umask as well, preferably upstream.
I have sent the authors (redhat staff) a query.
// Tomas
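For the record, such a module would presumably be stacked the way
pam_limits already is; a sketch along these lines (the pam_umask.so
name and its umask= option are hypothetical here, mirroring pam_limits
conventions):

```
# /etc/pam.d/common-session (sketch)
session    required    pam_limits.so
session    optional    pam_umask.so umask=0027
```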
- --
Tomas Fasth <[EMAIL PROTECTED]>
GnuPG 0x9FE8D504
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.2 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org
iD8DBQFBdGVewYdzVZ/o1QQRAkwuAJ9IGMwWtNBZV6r5zfcsHe0m5WGoKgCfR1Aa
mGjyqtPZk06hr7UkwTI1GO0=
=exV9
-END PGP SIGNATURE-



Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Steve Langasek
On Tue, Oct 19, 2004 at 02:04:42AM +0200, Wouter Verhelst wrote:
> On Mon, Oct 18, 2004 at 07:02:19PM -0400, Glenn Maynard wrote:
> > You can't take the source, compile it with a proprietary compiler and
> > upload the result to main, because in order to create that package,
> > you need a non-free compiler.  The fact that you can also compile the
> > sources with a free compiler is irrelevant; non-free tools are still
> > required to create the package actually in main.  Policy doesn't say

> If that is important, then we must throw all packages which are built
> using an outdated glibc, binutils, or gcc out of the archive; because
> the tools used to build those packages are no longer in main (no,
> snapshot.debian.net doesn't count); and building something which was
> built, e.g., 4 months ago using the then-current unstable toolchain,
> will not produce the exact same package as it would today -- it would
> produce...

> > "you must be be able to build a package similar to the one in main using
> > tools in main";

> ...a package similar to the one in main.

> Your point?

In an ideal world, rebuilding the archive from scratch today would give us
an archive with byte-for-byte identical contents.  In the real world, this
is clearly impossible because nothing stays still long enough to make such
routine bootstrapping feasible -- especially on our slower archs, but even
on our fastest archs it would be a maddening exercise.

This doesn't make the *goal* of reproducible binaries irrelevant; indeed, if
pushing 11 architectures up the hill towards release at the same time
weren't already a sisyphean task, I think it would be a good idea to
actually make such a bootstrap part of the release cycle once sources are
frozen.  As it stands, we just have to draw the cutoff line somewhere more
reasonable, i.e., requiring rebuilt packages to be *functionally*
indistinguishable.

> Also, consider the fact that in the past (at least if I'm not mistaken),
> the m68k kernel maintainer used to provide m68k kernel packages built
> using a cross-compiler on his i386 system. Since the cross-compiler
> isn't in main, should those m68k kernels have been moved to the
> 'contrib' archive?

A cross-compiler and a native compiler built from the same toolchain sources
*should* give identical output for all inputs.  I'm personally willing to
assume that this is true for current gcc/binutils in the archive, and treat
such cross-compilers as another legitimate shortcut like ccache or distcc,
unless shown otherwise.  If he were using a cross-compiler that was not
kept in sync with the packages in unstable, however, I would *not* accept
the premise that the binaries are identical to those built by the current
gcc; uploading such binaries would unnecessarily complicate the debugging
process, and could even hide bugs when trying to build the sources with a
current compiler.  I would say that such binaries should not be uploaded at
all unless it can be reasonably shown that building with the official
toolchain produces the same output (which basically requires *using* that
toolchain for building, so there's no point in not uploading those binaries
directly).

> > it says "the package in main must be buildable with tools in main".

> That is still the case. The fact that the package in main is built using
> non-free tools is irrelevant -- it can be rebuilt using software only in
> main; it can be run using software only in main; and the difference is
> not noticeable except by comparing checksums, benchmarks, or to those
> with an intimate knowledge of compiler optimizers.

> A difference in optimization is not relevant to a package's freedom.

If compiling the program with a non-free compiler gains you users who would
not find the package usable otherwise, distributing binaries built with
such a compiler induces your users to be dependent (indirectly) on non-free
software.  That is a freedom issue.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Glenn Maynard
On Tue, Oct 19, 2004 at 02:04:42AM +0200, Wouter Verhelst wrote:
> On Mon, Oct 18, 2004 at 07:02:19PM -0400, Glenn Maynard wrote:
> > You can't take the source, compile it with a proprietary compiler and
> > upload the result to main, because in order to create that package,
> > you need a non-free compiler.  The fact that you can also compile the
> > sources with a free compiler is irrelevant; non-free tools are still
> > required to create the package actually in main.  Policy doesn't say
> 
> If that is important, then we must throw all packages which are built
> using an outdated glibc, binutils, or gcc out of the archive; because
> the tools used to build those packages are no longer in main (no,
> snapshot.debian.net doesn't count); and building something which was
> built, e.g., 4 months ago using the then-current unstable toolchain,
> will not produce the exact same package as it would today -- it would
> produce...

This is a nice attempt at rationalizing parts of the Debian archive
being buildable in their "optimized" form only using non-free tools.

> Your point?

My point is obvious; if you can't understand it, I'm not going to spend
much time trying to force it into you, since I doubt others are having so
much trouble.

> Also, consider the fact that in the past (at least if I'm not mistaken),
> the m68k kernel maintainer used to provide m68k kernel packages built
> using a cross-compiler on his i386 system. Since the cross-compiler
> isn't in main, should those m68k kernels have been moved to the
> 'contrib' archive?

The cross-compiler probably should have been packaged.  I don't consider
these comparable issues, anyhow, since the cross-compiler is Free; the tools
needed to build the binaries in question are not.

> > it says "the package in main must be buildable with tools in main".
> 
> That is still the case. The fact that the package in main is built using
> non-free tools is irrelevant -- it can be rebuilt using software only in
> main; it can be run using software only in main; and the difference is
> not noticeable except by comparing checksums, benchmarks, or to those
> with an intimate knowledge of compiler optimizers.

You're playing down "benchmarks" as if they're unimportant, yet the entire
purpose of building with proprietary tools is for speed--clearly the
benchmarks are important.  You can't have it both ways.

Being able to rebuild binaries in main is critically important (e.g. for fixing
security-related bugs down the line), and the only people who can do so for
this package are the people with these proprietary tools.  I suspect you'd
have difficulty getting such an update into a stable release *at all* if the
only way you could do so required halving the speed of the program (and
possibly introducing bugs) due to a changed compiler.

-- 
Glenn Maynard




Re: USB wireless

2004-10-18 Thread Brendan
On Monday 18 October 2004 19:27, Tom Kuiper wrote:

*Supposedly*
http://www.softwareandstuff.com/NET10278.html

> Does anyone know of a USB wireless device that can be used under Linux
> without too much effort?
>
> Thanks
>
> Tom
> --
> Internet:   [EMAIL PROTECTED] (137.79.89.31)
> SnailMail:  Jet Propulsion Lab 169-506, Pasadena, CA 91109
> Phone/fax:  (818) 354-5623/8895
> WWW:http://DSNra.JPL.NASA.gov/~kuiper/
> Required disclaimer: Any opinion expressed herein is my own.




New ClamAV version uploaded, testers wanted

2004-10-18 Thread Stephen Gran
Hello all,

I have uploaded 0.80 to experimental temporarily for testing purposes
(it is also on p.d.o/~sgran).  The two main concerns I have with
releasing it into the wild at this point are false positives in the jpeg
scanning code (appears to be largely the result of a bad signature, not
the engine at this point) and the upgrade path.

With 0.80, clamd's config file changed names from clamav.conf to
clamd.conf.  I _believe_ the upgrade path works correctly and preserves
user changes appropriately, but I would really appreciate some further
testing before releasing -2.

Thanks all,
-- 
 -
|   ,''`.Stephen Gran |
|  : :' :[EMAIL PROTECTED] |
|  `. `'Debian user, admin, and developer |
|`- http://www.debian.org |
 -


pgp0e9ZR4BZyz.pgp
Description: PGP signature


Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wouter Verhelst
On Mon, Oct 18, 2004 at 07:02:19PM -0400, Glenn Maynard wrote:
> You can't take the source, compile it with a proprietary compiler and
> upload the result to main, because in order to create that package,
> you need a non-free compiler.  The fact that you can also compile the
> sources with a free compiler is irrelevant; non-free tools are still
> required to create the package actually in main.  Policy doesn't say

If that is important, then we must throw all packages which are built
using an outdated glibc, binutils, or gcc out of the archive; because
the tools used to build those packages are no longer in main (no,
snapshot.debian.net doesn't count); and building something which was
built, e.g., 4 months ago using the then-current unstable toolchain,
will not produce the exact same package as it would today -- it would
produce...

> "you must be able to build a package similar to the one in main using
> tools in main";

...a package similar to the one in main.

Your point?

Also, consider the fact that in the past (at least if I'm not mistaken),
the m68k kernel maintainer used to provide m68k kernel packages built
using a cross-compiler on his i386 system. Since the cross-compiler
isn't in main, should those m68k kernels have been moved to the
'contrib' archive?

> it says "the package in main must be buildable with tools in main".

That is still the case. The fact that the package in main is built using
non-free tools is irrelevant -- it can be rebuilt using software only in
main; it can be run using software only in main; and the difference is
not noticeable except by comparing checksums, benchmarks, or to those
with an intimate knowledge of compiler optimizers.

A difference in optimization is not relevant to a package's freedom.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune


signature.asc
Description: Digital signature


Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Glenn Maynard
On Tue, Oct 19, 2004 at 12:59:42AM +0200, Wesley W. Terpstra wrote:
> To which, I say, wtf!

You're using it wrong.

> ... and yet par2 tells me I need 1909 more (I have 101).
> That means I would need 18* more information in order to recover foo.pdf!
> I have to admit, I am surprised by this. I would have expected a better
> ratio, even with very old techniques.

First off, I'm going to use PAR, since it's much simpler and in many
cases a better choice than par2.  (Par2 allows recovering files which
have internal corruption, which is useful for completing almost-complete
Bittorrented files.)

You need to split files separately for PAR to work as you want.  This
is done just about everywhere on Usenet binary groups these days.  I'm
going to use split(1); people on Usenet, unfortunately, have a habit
of using RAR.

Let's create a 16-meg file:

07:27pm [EMAIL PROTECTED]/11 [~/z] dd if=/dev/urandom of=testing bs=1048576 count=16
16+0 records in
16+0 records out

Split it into parts, not necessarily of even size:

07:29pm [EMAIL PROTECTED]/11 [~/z] split -b 1000000 testing testing.part.
07:29pm [EMAIL PROTECTED]/11 [~/z] ls -l
total 32824
-rw-r--r--  1 glenn users 16777216 Oct 18 19:28 testing
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.aa
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ab
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ac
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ad
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ae
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.af
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ag
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ah
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ai
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.aj
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ak
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.al
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.am
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.an
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ao
-rw-r--r--  1 glenn users  1000000 Oct 18 19:29 testing.part.ap
-rw-r--r--  1 glenn users   777216 Oct 18 19:29 testing.part.aq

and then:

07:31pm [EMAIL PROTECTED]/11 [~/z] parchive a -n 5 testing.par testing.part.*
...
07:32pm [EMAIL PROTECTED]/11 [~/z] ls -l testing.p??
-rw-r--r--  1 glenn users 1001558 Oct 18 19:32 testing.p01
-rw-r--r--  1 glenn users 1001558 Oct 18 19:32 testing.p02
-rw-r--r--  1 glenn users 1001558 Oct 18 19:32 testing.p03
-rw-r--r--  1 glenn users 1001558 Oct 18 19:32 testing.p04
-rw-r--r--  1 glenn users 1001558 Oct 18 19:32 testing.p05
-rw-r--r--  1 glenn users    1558 Oct 18 19:32 testing.par

Now all I need is enough parts:

07:32pm [EMAIL PROTECTED]/11 [~/z] mkdir partial
07:32pm [EMAIL PROTECTED]/11 [~/z] cp testing.part.* partial/
07:33pm [EMAIL PROTECTED]/11 [~/z] rm partial/testing.part.ah
07:33pm [EMAIL PROTECTED]/11 [~/z] du partial
16444   partial
07:33pm [EMAIL PROTECTED]/11 [~/z] cp testing.p02 partial/

and recover:

07:33pm [EMAIL PROTECTED]/11 [~/z/partial] parchive r testing.p02
Checking testing.p02
  testing.part.aa  - OK
  testing.part.ab  - OK
  testing.part.ac  - OK
  testing.part.ad  - OK
  testing.part.ae  - OK
  testing.part.af  - OK
  testing.part.ag  - OK
  testing.part.ah  - NOT FOUND
  testing.part.ai  - OK
  testing.part.aj  - OK
  testing.part.ak  - OK
  testing.part.al  - OK
  testing.part.am  - OK
  testing.part.an  - OK
  testing.part.ao  - OK
  testing.part.ap  - OK
  testing.part.aq  - OK

Restoring:
0%  10%  20%  30%  40%  50%  60%  70%  80%  90%  100%
  testing.part.ah  - RECOVERED

and join:

07:33pm [EMAIL PROTECTED]/11 [~/z/partial] cat testing.part.* > testing
07:34pm [EMAIL PROTECTED]/11 [~/z/partial] diff testing ../testing
07:34pm [EMAIL PROTECTED]/11 [~/z/partial]

Alternatively, I can work exclusively with PAR files (though there's rarely
any reason to do this), by creating enough PARs so their sum is large enough
to recreate the file:

07:35pm [EMAIL PROTECTED]/11 [~/z] parchive a -n 20 testing.par testing.part.*
07:35pm [EMAIL PROTECTED]/11 [~/z] mkdir testing2
07:36pm [EMAIL PROTECTED]/11 [~/z] cp testing.p?? testing2

07:36pm [EMAIL PROTECTED]/11 [~/z/testing2] parchive r testing.p13
07:36pm [EMAIL PROTECTED]/11 [~/z/testing2] cat testing.part.* > testing
07:37pm [EMAIL PROTECTED]/11 [~/z/testing2] diff testing ../testing
07:37pm [EMAIL PROTECTED]/11 [~/z/test

Re: forwarding bugs to other packages

2004-10-18 Thread Colin Watson
On Tue, Oct 19, 2004 at 08:14:25AM +1000, Brian May wrote:
> > "Wouter" == Wouter Verhelst <[EMAIL PROTECTED]> writes:
> >> How about merging those bugs with the bug reported against the correct
> >> package?
> 
> Wouter> That's not possible. You can only merge bugs if /all/
> Wouter> properties (tags, severity, package reported against, ...)
> Wouter> are the same.
> 
> Why is this? I can open one bug against multiple packages, so I think
> it should be possible to merge two or more bugs against different
> packages.

If you like, you can reassign both bugs to "foo,bar" and merge them.
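Spelled out as a message to the BTS control server (the bug numbers here
are invented for illustration; see the control@bugs.debian.org
documentation for the exact syntax):

```
To: control@bugs.debian.org

reassign 123456 foo,bar
reassign 654321 foo,bar
merge 123456 654321
thanks
```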

-- 
Colin Watson   [EMAIL PROTECTED]




Re: forwarding bugs to other packages

2004-10-18 Thread Colin Watson
On Tue, Oct 19, 2004 at 08:16:02AM +1000, Brian May wrote:
> > "Martin" == Martin Michlmayr <[EMAIL PROTECTED]> writes:
> Martin> * Wouter Verhelst <[EMAIL PROTECTED]> [2004-10-18 13:32]:
> >> That's not possible. You can only merge bugs if /all/ properties (tags,
> >> severity, package reported against, ...) are the same.
> 
> Martin> Just for the record, tags are an exception.  They are
> Martin> merged when you merge bugs.
> 
> What happens to the tags if the two reports are split apart again? Are
> their original values restored, or do they keep their new values?

They keep their new values.

-- 
Colin Watson   [EMAIL PROTECTED]




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Glenn Maynard
On Tue, Oct 19, 2004 at 12:37:45AM +0200, Wouter Verhelst wrote:
> On Mon, Oct 18, 2004 at 07:51:00PM +0200, Josselin Mouette wrote:
> > Le lundi 18 octobre 2004 à 19:22 +0200, Wesley W. Terpstra a écrit :
> > > So, when it comes time to release this and include it in a .deb, I ask
> > > myself: what would happen if I included (with the C source and ocaml
> > > compiler) some precompiled object files for i386? As long as the build
> > > target is i386, these object files could be linked in instead of using
> > > gcc to produce (slower) object files. This would mean a 2* speedup for
> > > users, which is vital in order to reach line-speed. Other platforms 
> > > recompile as normal.
> > > 
> > > On the other hand, is this still open source?
> > > Is this allowed by policy?
> > > Can this go into main?
> > 
> > Main must be built with only packages from main.
> 
> No, that's not true.
> 
>  In addition, the packages in _main_
> * must not require a package outside of _main_ for compilation or
>   execution (thus, the package must not declare a "Depends",
>   "Recommends", or "Build-Depends" relationship on a non-_main_
>   package),
> 
> There's a difference, which is crucial. ICC may not be Free Software,
> policy does not say you must only use Free Software to build a package;
> it says you must not /require/ a package outside main to build it.
> 
> The difference is subtle, but crucial.
> 
> Wesley's software can be built using software in main. It will not be as
> fast, but it will still do its job, flawlessly, without loss of
> features, with the ability to modify the software to better meet one's
> needs if so required.

"The package must not require a package outside of main for compilation".

You can't take the source, compile it with a proprietary compiler and
upload the result to main, because in order to create that package,
you need a non-free compiler.  The fact that you can also compile the
sources with a free compiler is irrelevant; non-free tools are still
required to create the package actually in main.  Policy doesn't say
"you must be able to build a package similar to the one in main using
tools in main"; it says "the package in main must be buildable with
tools in main".

-- 
Glenn Maynard




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wesley W. Terpstra
On Mon, Oct 18, 2004 at 05:15:20PM -0400, Glenn Maynard wrote:
> Isn't this just what PAR and PAR2 do (in conjunction with a file splitter)?

Thanks for the pointer to this project; I didn't know about it.
However, to answer your question: no.

PAR2 uses Reed-Solomon codes, my project also uses a variant of
Reed-Solomon. However, the result is completely different.

PAR2 aims to correct corruption of data (as well as loss).
Most differently, it also trades disk overhead for speed.

My program does not attempt to correct corruption (though there are
algorithms which can do so, just not efficiently). If corruption 
resilience is needed, you would have to add a CRC to each packet.

[ For those who know how Reed-Solomon works, I am applying Reed-Solomon 
over the entire file as one block -- using an alphabet of size ~2^62. 
Normally Reed-Solomon breaks the file into blocks of a fixed size and 
adds the correction to these small blocks. ]
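The "any subset of the right total size suffices" property can be sketched
with a toy Reed-Solomon erasure code over a small prime field. This is an
illustration of the principle only, not rsgt's actual implementation (which,
as described above, uses an alphabet of size ~2^62 and a fast decoder):

```python
# Toy Reed-Solomon erasure code over GF(257), illustrating that ANY
# k of the n shares recover the k data symbols.  Sketch only: real
# tools use much larger alphabets and far faster algorithms.
P = 257  # small prime modulus; symbols are integers mod 257

def encode(data, n):
    """Treat the k symbols in `data` as polynomial coefficients and
    evaluate at n distinct nonzero points; each (x, y) is one share."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(shares, k):
    """Recover the k coefficients by Lagrange interpolation from any
    k shares (extras are ignored)."""
    shares = shares[:k]
    data = [0] * k
    for i, (xi, yi) in enumerate(shares):
        num, den = [1], 1   # numerator polynomial, denominator scalar
        for j, (xj, _) in enumerate(shares):
            if j != i:
                den = den * (xi - xj) % P
                # multiply num by (x - xj)
                num = [(b - xj * a) % P
                       for a, b in zip(num + [0], [0] + num)]
        scale = yi * pow(den, P - 2, P) % P   # yi / den  (mod P)
        for d in range(k):
            data[d] = (data[d] + scale * num[d]) % P
    return data

data = [104, 105, 33, 7]                # k = 4 data symbols
shares = encode(data, 7)                # n = 7 shares
assert decode(shares[2:6], 4) == data   # any 4 of the 7 suffice
```

Under these assumptions, any k shares reconstruct the k data symbols,
which is exactly the behaviour Wesley describes for his packets.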

For example, with par2, I have 
-rw-r--r--  1 terpstra terpstra 627103 2004-10-19 00:18 foo.pdf

I ran: par2 create foo.pdf
It generated:
-rw-r--r--  1 terpstra terpstra  40600 2004-10-19 00:19 foo.pdf.par2
-rw-r--r--  1 terpstra terpstra  40980 2004-10-19 00:19 foo.pdf.vol000+01.par2
-rw-r--r--  1 terpstra terpstra  81860 2004-10-19 00:19 foo.pdf.vol001+02.par2
-rw-r--r--  1 terpstra terpstra 123120 2004-10-19 00:19 foo.pdf.vol003+04.par2
-rw-r--r--  1 terpstra terpstra 165140 2004-10-19 00:19 foo.pdf.vol007+08.par2
-rw-r--r--  1 terpstra terpstra 208680 2004-10-19 00:19 foo.pdf.vol015+16.par2
-rw-r--r--  1 terpstra terpstra 255260 2004-10-19 00:19 foo.pdf.vol031+32.par2
-rw-r--r--  1 terpstra terpstra 257540 2004-10-19 00:19 foo.pdf.vol063+38.par2

I then removed foo.pdf and tried: par2 recover foo.pdf.par2
...
You have 0 out of 2010 data blocks available.
You have 101 recovery blocks available.
Repair is not possible.
You need 1909 more recovery blocks to be able to repair.

To which, I say, wtf!
The data I have totals
du -sb .
1173540 .

That's even more than foo.pdf!
[EMAIL PROTECTED]:~/x/y$ du -bs ../foo.pdf
627103  foo.pdf

... and yet par2 tells me I need 1909 more (I have 101).
That means I would need 18* more information in order to recover foo.pdf!
I have to admit, I am surprised by this. I would have expected a better
ratio, even with very old techniques.

My program is entirely different.
Let's say it's called rsgt (I haven't decided on a name yet).

You run: rsgt 1024  foo.pdf
You then get  files all of size 1024 (regardless of the size of foo.pdf).

As long as you have enough files that the total size is the same or larger
than the size of foo.pdf, you can recover foo.pdf.

You can create nearly 2^62 different such files, and start generating them
from any starting point. This means you could essentially send a never
ending stream of (network sized) packets which people could 'tap into'.
After getting any subsequence of packets, as long as you get enough so that
their total is the file size, you are done.

This is what is different about my program: it has zero space overhead (well,
it has an 8 byte per packet header, but it doesn't depend on packet length).
What is new in terms of research is that I have an algorithm which can do
the decoding quickly.

rsgt aims not to add 'parity' as I think 'par'2 is intended to suggest.
Rather, it transforms a file into a stream of user-defined-size packets
which goes on practically forever. ANY of the packets will do to get back
the original file, as long as their combined size matches the file's.

-- 
Wesley W. Terpstra




Re: Some file-in-etc-not-marked-as-conffile RC bugs

2004-10-18 Thread Steve Langasek
On Tue, Oct 19, 2004 at 01:02:50AM +0200, Bill Allombert wrote:
> I stumbled across

> 

> It seems several packages fail to declare conffiles as such, which is
> a serious policy violation given that user change will not be preserved
> across upgrades.

> Someone with more time than I have currently should probably check them.
> A cursory look gives these as the 'worst offenders':

> gdm: /etc/gdm/factory-gdm.conf
> gnocatan-meta-server: /etc/init.d/gnocatan-meta-server
> pimd: /etc/pimd.conf 
> ratmenu: /etc/menu-methods/ratmenu
> vsftpd:  /etc/ftpusers /etc/init.d/vsftpd
> wdm: /etc/logrotate.d/wdm

> Sorry for any false positive.
> Comments about how a non-conffile-to-conffile migration should be
> implemented are welcome.

If you mean to try to preserve existing changes to these files on upgrade,
that would require a bit of nasty preinst hacking.  Even the trivial fix,
though -- getting the files marked as conffiles in the *current* version of
the packages -- is far better for our users going forward.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Some file-in-etc-not-marked-as-conffile RC bugs

2004-10-18 Thread Bill Allombert
Hello developers,

I stumbled across



It seems several packages fail to declare conffiles as such, which is
a serious policy violation given that user change will not be preserved
across upgrades.

Someone with more time than I have currently should probably check them.
A cursory look gives these as the 'worst offenders':

gdm: /etc/gdm/factory-gdm.conf
gnocatan-meta-server: /etc/init.d/gnocatan-meta-server
pimd: /etc/pimd.conf 
ratmenu: /etc/menu-methods/ratmenu
vsftpd:  /etc/ftpusers /etc/init.d/vsftpd
wdm: /etc/logrotate.d/wdm

Sorry for any false positive.
Comments about how a non-conffile-to-conffile migration should be
implemented are welcome.

Cheers,
-- 
Bill. <[EMAIL PROTECTED]>

Imagine a large red swirl here. 




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread John Hasler
I wrote:
> Packages in main must be _buildable_ with only packages from main.

Wesley W. Terpstra writes:
> This slight difference in wording sounds to me like I would indeed be
> able to include prebuilt object files, so long as the package could be
> built without them. Is that correct?

I wouldn't want to see object files in the source package.

I must be able to download your source package to a machine with only Main
installed, do 'dpkg-buildpackage', and get a usable deb.  There is no
reason why doing likewise on a machine with icc installed and CC=icc could
not also produce a usable deb given the right debian/rules and makefiles.

I _think_ it would also be acceptable for you to build the debs you upload
with CC=icc.

> At this point my question is only academic; the pure-gcc in main,
> icc-prebuilt in contrib solution seems to solve my concerns just as well.

Would anyone use the pure-gcc version?
-- 
John Hasler 
[EMAIL PROTECTED]
Elmwood, WI USA




Re: forwarding bugs to other packages

2004-10-18 Thread Wouter Verhelst
On Tue, Oct 19, 2004 at 08:14:25AM +1000, Brian May wrote:
> > "Wouter" == Wouter Verhelst <[EMAIL PROTECTED]> writes:
> Wouter> That's not possible. You can only merge bugs if /all/
> Wouter> properties (tags, severity, package reported against, ...)
> Wouter> are the same.
> 
> Why is this?

Simple. When merging a bug, you claim that the two bugs are really the
same. They're not just related, nor are they only similar; they are the
same bug -- the problems described in both bug reports result from the
same programmer error.

Because they are the same bug, they must be equal in the BTS. The same
programmer error cannot occur in two packages at the same time, unless
they're built from the same source.

Hence, the BTS insists that their parameters are the same.

> I can open one bug against multiple packages,

Yes, but in that case the claim is that there is a bug in the
communication between, or the integration of, both packages. That's not
really the same thing. Usually, when filing bugs against multiple
packages is appropriate, the error that was made is one of communication
between two programmers.

[...no other remarks...]

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune




Re: forwarding bugs to other packages

2004-10-18 Thread Brian May
> "Adrian" == Adrian 'Dagurashibanipal' von Bidder <[EMAIL PROTECTED]> 
> writes:

>> I have a number of bugs reported against my packages which are
>> actually (already reported) bugs in other packages.

Adrian> Reading the rest of the thread, I conclude that adding an
Adrian> explanation to the bug and tagging it wontfix is probably
Adrian> the best solution.

You could be right here.

You won't get informed if the bug is closed though, and the wontfix
status may imply you don't want to fix the bug.
-- 
Brian May <[EMAIL PROTECTED]>




Re: forwarding bugs to other packages

2004-10-18 Thread Brian May
> "Adeodato" == Adeodato Simó <[EMAIL PROTECTED]> writes:

Adeodato> * Bernd Eckenfels [Mon, 18 Oct 2004 12:01:32 +0200]:
>> Perhaps we need a "read this before submitting bugs against my package"
>> function in reportbugs :)

Adeodato> such functionality exists, via the
Adeodato> /usr/share/bug//presubj file. see
Adeodato> /usr/share/doc/reportbug/README.developers.

I really don't see how this is going to help, unless you want to list
known bugs in the package. This implies uploading a new copy of the
package when bugs in other packages are fixed.

It also will only work if you use reportbug. If you are just browsing
the BTS via HTTP to find a solution to your problem, you may not find
anything.
-- 
Brian May <[EMAIL PROTECTED]>




Re: forwarding bugs to other packages

2004-10-18 Thread Brian May
> "Bernd" == Bernd Eckenfels <[EMAIL PROTECTED]> writes:

Bernd> On Mon, Oct 18, 2004 at 05:54:44PM +1000, Brian May wrote:
>> I could just close the bug against my package, but this means other
>> people will encounter the same problem and report the bug against my
>> package again (as it isn't always obvious that it isn't the fault of
>> my package).

Bernd> So you do not want to reassign them to the correct package?
Bernd> I dont think thats a good idea (even when i can understand
Bernd> where are you coming from).

Like I said in my previous post, there are times when having 2
separate bug reports is a good idea.

e.g. you might think a reported bug in your package is due to a bug in
a library, so you reassign the bug report to the library.

The library maintainer decides it isn't a bug in the library, and
prematurely closes the bug report. Or maybe the library maintainer
finds a bug, and fixes it, but it was an unrelated bug.

You never get the indication that the bug report has been closed, and
the bug submitter gets totally confused and either doesn't follow up
(perhaps assuming the problem was fixed), or follows up to the wrong
person (as the bug is still assigned to the library). As such you
don't get a chance to followup and make sure the bug, initially
reported against your package, really gets fixed.

Alternatively, when you reassign the bug to the library, the library
maintainer gets fed up because he already has 10+ bug reports on the
same issue.
-- 
Brian May <[EMAIL PROTECTED]>




Re: forwarding bugs to other packages

2004-10-18 Thread Brian May
> "Martin" == Martin Michlmayr <[EMAIL PROTECTED]> writes:

Martin> * Wouter Verhelst <[EMAIL PROTECTED]> [2004-10-18 13:32]:
>> That's not possible. You can only merge bugs if /all/ properties (tags,
>> severity, package reported against, ...) are the same.

Martin> Just for the record, tags are an exception.  They are
Martin> merged when you merge bugs.

What happens to the tags if the two reports are split apart again? Are
their original values restored, or do they keep their new values?
-- 
Brian May <[EMAIL PROTECTED]>




Re: forwarding bugs to other packages

2004-10-18 Thread Brian May
> "Wouter" == Wouter Verhelst <[EMAIL PROTECTED]> writes:

Wouter> On Mon, Oct 18, 2004 at 10:19:18AM +0200, Gergely Nagy wrote:
>> > I could just close the bug against my package, but this means other
>> > people will encounter the same problem and report the bug against my
>> > package again (as it isn't always obvious that it isn't the fault of
>> > my package).
>> 
>> How about merging those bugs with the bug reported against the correct
>> package?

Wouter> That's not possible. You can only merge bugs if /all/
Wouter> properties (tags, severity, package reported against, ...)
Wouter> are the same.

Why is this? I can open one bug against multiple packages, so I think
it should be possible to merge two or more bugs against different
packages.

Still, I can think of times when merging isn't
appropriate. e.g. consider:

bug report 1 package A: Program A segfaults.
bug report 2 package B: libb segfaults.

In which case, the two bug reports may highlight different aspects of
the same bug; while they describe the same underlying bug, the reports
themselves are not the same. Bug 1 would list the details from the
user's perspective trying to run A, while bug 2 is more likely to have
extensive debugging information proving the library is at fault.

Also, if I encountered a problem with A segfaulting, I would notice
the first title, but I might miss the second title (unless I knew A
was linked against libb).

When bug report 2 is closed, the maintainer of A may want to double
check to make sure that the bug really is fixed in A (maybe bug 2
wasn't the real cause after all), before 1 gets closed.
-- 
Brian May <[EMAIL PROTECTED]>




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wouter Verhelst
On Mon, Oct 18, 2004 at 07:51:00PM +0200, Josselin Mouette wrote:
> Le lundi 18 octobre 2004 à 19:22 +0200, Wesley W. Terpstra a écrit :
> > So, when it comes time to release this and include it in a .deb, I ask
> > myself: what would happen if I included (with the C source and ocaml
> > compiler) some precompiled object files for i386? As long as the build
> > target is i386, these object files could be linked in instead of using
> > gcc to produce (slower) object files. This would mean a 2* speedup for
> > users, which is vital in order to reach line-speed. Other platforms 
> > recompile as normal.
> > 
> > On the other hand, is this still open source?
> > Is this allowed by policy?
> > Can this go into main?
> 
> Main must be built with only packages from main.

No, that's not true.

 In addition, the packages in _main_
* must not require a package outside of _main_ for compilation or
  execution (thus, the package must not declare a "Depends",
  "Recommends", or "Build-Depends" relationship on a non-_main_
  package),

There's a difference, which is crucial. ICC may not be Free Software,
policy does not say you must only use Free Software to build a package;
it says you must not /require/ a package outside main to build it.

The difference is subtle, but crucial.

Wesley's software can be built using software in main. It will not be as
fast, but it will still do its job, flawlessly, without loss of
features, with the ability to modify the software to better meet one's
needs if so required.

> If you really want to distribute a package built with icc, you should
> make a separate package in the contrib section, and have it conflict
> with the package in main.

That's also possible, of course, but not required IMO.

> The GPL doesn't restrict anything you are describing, as long as the
> source is available alongside.

Nor does Policy.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune


signature.asc
Description: Digital signature


Re: Xsession doesn't use umask setting from /etc/login.defs

2004-10-18 Thread Steve Langasek
On Mon, Oct 18, 2004 at 02:06:16AM -0500, Branden Robinson wrote:
> On Fri, Oct 15, 2004 at 12:06:36PM -0700, Steve Langasek wrote:
> > environment variables, at least, are trivial to accomplish using the
> > pam_env module.  Properly setting a umask would call for something else
> > yet.

> Would pam_umask.so be a worthwhile exercise for some enterprising person?

(after discussing on IRC) Considering we already have a pam_limits.so for
ulimits, yes.


-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Glenn Maynard
On Mon, Oct 18, 2004 at 11:49:56AM -0700, Josh Triplett wrote:
> Wesley W. Terpstra wrote:
> > I am developing a very CPU-intensive, open-source error-correcting code.
> > 
> > The intention of this code is that you can split a large (> 5GB)
> > file across multiple packets. Whenever you receive enough packets that
> > their combined size = the file size, you can decode the packets to
> > recover the file, regardless of which packets you get.
> 
> Sounds very interesting; there would be a ton of applications for that.
>  Thanks in advance for deciding to make it Open Source / Free Software.

Isn't this just what PAR and PAR2 do (in conjunction with a file splitter)?

-- 
Glenn Maynard




Re: Xsession doesn't use umask setting from /etc/login.defs

2004-10-18 Thread Tomas Fasth
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Branden Robinson wrote:
| On Fri, Oct 15, 2004 at 12:06:36PM -0700, Steve Langasek wrote:
|
|> environment variables, at least, are trivial to accomplish
|> using the pam_env module.  Properly setting a umask would call
|> for something else yet.
|
| Would pam_umask.so be a worthwhile exercise for some enterprising
| person?
May I suggest pam_logindefs.so?
| I somehow suspect that umasks predate environment variables in
| the misty early history of Unix, else the umask would've been
| made one.
I don't think so. I think it is a result of careful and thorough
system design. Environment variables are excellent for tailoring
user space activities. Umask (ulimit, nice) is kernel business,
belongs in its data structures, and is therefore manipulated only
through system calls.
// Tomas
- --
Tomas Fasth <[EMAIL PROTECTED]>
GnuPG 0x9FE8D504



Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Steve Langasek
On Mon, Oct 18, 2004 at 07:51:00PM +0200, Josselin Mouette wrote:
> On Monday, 18 October 2004, at 19:22 +0200, Wesley W. Terpstra wrote:
> > So, when it comes time to release this and include it in a .deb, I ask
> > myself: what would happen if I included (with the C source and ocaml
> > compiler) some precompiled object files for i386? As long as the build
> > target is i386, these object files could be linked in instead of using
> > gcc to produce (slower) object files. This would mean a 2* speedup for
> > users, which is vital in order to reach line-speed. Other platforms 
> > recompile as normal.

> > On the other hand, is this still open source?
> > Is this allowed by policy?
> > Can this go into main?

> Main must be built with only packages from main.

> If you really want to distribute a package built with icc, you should
> make a separate package in the contrib section, and have it conflict
> with the package in main.

> The GPL doesn't restrict anything you are describing, as long as the
> source is available alongside.

I didn't think you were right about this, but upon reflection I was
surprised to find that you are:

  The source code for a work means the preferred form of the work for
  making modifications to it.  For an executable work, complete source
  code means all the source code for all modules it contains, plus any
  associated interface definition files, plus the scripts used to
  control compilation and installation of the executable.  However, as a
  special exception, the source code distributed need not include
  anything that is normally distributed (in either source or binary
  form) with the major components (compiler, kernel, and so on) of the
  operating system on which the executable runs, unless that component
  itself accompanies the executable.

This seems to say that you can compile the GPL components of your operating
system with any binary-only compiler you want to, IFF you don't ship that
compiler with your operating system.

Still, such binaries don't belong in main; you can't claim that building
with a non-free compiler is philosophically equivalent to building with gcc
while at the same time insisting on the functional advantages of the
non-free compiler.  It's important that main be entirely self-hosting, even
when this doesn't give the best benchmark figures.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wesley W. Terpstra
On Mon, Oct 18, 2004 at 01:33:07PM -0500, John Hasler wrote:
> Josselin Mouette writes:
> > Main must be built with only packages from main.
> 
> Packages in main must be _buildable_ with only packages from main.

Interesting.

This slight difference in wording sounds to me like I would indeed be able
to include prebuilt object files, so long as the package could be built
without them. Is that correct?

The actual text in policy is:
* must not require a package outside of main for compilation or execution
(thus, the package must not declare a "Depends", "Recommends", or
"Build-Depends" relationship on a non-main package)

This wording appears to back up what you say, John.
The clause 'must not require' fits my case: since the source files can be
rebuilt with gcc, icc is not required, and execution is a non-issue.

At this point my question is only academic; the pure-gcc-in-main,
icc-prebuilt-in-contrib solution addresses my concerns just as well.

-- 
Wesley W. Terpstra




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wesley W. Terpstra
Since there's one GPL question left, I am still posting to debian-legal.
The legal question is marked ** for those who want to skip the rest.

On Mon, Oct 18, 2004 at 11:49:56AM -0700, Josh Triplett wrote:
> Whether your university owns a license or not does not really affect
> Debian.  icc cannot be included in Debian main.

No, but debian can distribute precompiled object files (legally).
The binaries I meant were the object files.

> Keep in mind that if your algorithm is as good as it sounds, it will be
> around for a long time.  Even if a GCC-compiled version can't achieve
> line-speed right now, if all it needs is a 2x speedup, normal increases
> in computer technology will provide that soon enough.

True enough, but as processors get faster, so does bandwidth.
I expect that ultimately, it will always need to be as fast as possible.

> Consider this: any package with non-free Build-Depends that aren't
> strictly required at runtime could take this approach, by shipping
> precompiled files.  For example, this has come up several times with
> Java packages that tried to just ship a (Sun/Blackdown-compiled) .jar
> file in the source package.  The answer here is the same: you can't ship
> compiled files to avoid having a non-free build-depends (and shouldn't
> ship compiled files at all, even if they were compiled with a Free
> compiler); the package should always be built from source.

That is a good argument; thank you.

> * Upload a package to main which builds using GCC.  (As a side note, you
> might check to see if GCC 3.4/3.5 produces significantly better code.)

gcc-3.3 is not an option; it ICEs.
gcc-3.4.2 is the version I was referring to.

> * Make it easy for people to rebuild using icc.  See the openoffice.org
> packages for an example; they contain support for rebuilding using a
> non-free JDK based on a flag in DEB_BUILD_OPTIONS.

That's a good idea.

> * Supply icc-built packages either on your people.debian.org site or in
> contrib; if the latter, you need to use a different package name and
> conflict with the gcc-built package in main.

Josselin Mouette <[EMAIL PROTECTED]> said:
> If you really want to distribute a package built with icc, you should
> make a separate package in the contrib section, and have it conflict
> with the package in main.

Yes, this sounds like a good plan.

Put the normal gcc-built version, rsgt, in main, where the i386 deb has:
Recommends: rsgt-icc

rsgt-icc sits in contrib, completely built by icc (not just some .o files)
Conflicts: rsgt
Provides: rsgt
Replaces: rsgt

If an i386 user (with contrib in their sources) runs 'apt-get install rsgt',
will apt then install rsgt-icc? That's what I hope to accomplish.

(PS. rsgt is not the final name)
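For concreteness, the contrib stanza being sketched might look something like
this (package names are placeholders per the PS, and the section and
description lines are illustrative, not a tested control file):

```
Package: rsgt-icc
Section: contrib/utils
Architecture: i386
Conflicts: rsgt
Provides: rsgt
Replaces: rsgt
Description: rsgt built with the non-free icc compiler
```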

**
For it to sit in contrib, would I have to include the source code in contrib
as well? Or would the fact that the source code was in main already satisfy
the GPL requirement of source availability?

Clearly, it could still sit in non-free without the source, but contrib is
more accurate, IMO. If there's no need to include the source twice, I see no
reason to place twice the load on the ftp servers.

> it is acceptable *under the GPL* to provide binaries compiled with
> non-free compilers, unless the resulting compiled binary is somehow
> derivative of a non-free work that is not an OS component. In the end, if
> people want to exercise their rights under the GPL, they will want the
> source, not the binaries, and you are supplying that source alongside the
> binaries, which satisfies the GPL.

Then I suppose it makes sense to just supply a precompiled version (with
icc) and a source tarball as upstream. The debian version would work as
covered already above.

> > PS. I will provide the source code to anyone who requests it, but not yet
> > under the GPL. Only after I publish a paper about the algorithm will the 
> > code be released under the GPL.
> 
> Keep in mind that FFTW is GPLed, so unless you have made other
> arrangements with its copyright holders, you need to refrain from
> supplying the code or binaries to anyone unless under the GPL.

Oh, that's a good point.
I withdraw my offer of a private pre-release.
You can only have a copy after I publish. ;)

Thank you for your detailed explanation and answer.

-- 
Wesley W. Terpstra




Re: Xsession doesn't use umask setting from /etc/login.defs

2004-10-18 Thread Tomas Fasth
Hello Branden,
Branden Robinson wrote:
| On Sat, Oct 16, 2004 at 01:28:31PM +0200, Tomas Fasth wrote:
|
|> What I don't understand is why you think the umask preference
|> should be applied differently depending on the type of
|> interface the user choose to initiate an interactive session
|> with.
|
|
| I don't.  Kindly stop putting words in my mouth, and re-read my
| original mail.  If you can discuss this subject without indulging
| yourself in straw-man attacks like this, please follow-up with a
| more reasonable message.
I have re-read your mail and I beg you for pardon. I was wrong.
| And, by the way: X-No-CC: I subscribe to this list; do not CC me
| on replies.
I'm very sorry but I'm not perfect. My earlier reply did not cc:
you. Could you please wait for it to happen a second time in the
same thread before complaining? You seem a bit touchy about it.
I found the following when I was googling for X-No-CC:
~ From: Stepan Kasal
~ Subject: Re: Paragraph indentation suppression
~ Date: Tue, 8 Apr 2003 10:00:55 +0200
[...]
~ PS: I'm not sending cc to the original poster, as I'm scared by
~ this: X-No-CC: If you CC me on this list, I will feed you to
~ Branden Robinson. (It seems that Karl has already been fed.)
I couldn't but smile. "Oh no, please have mercy, don't feed me to
Branden!" Poor Karl :)
| Please get an MUA that respects Mail-Copies-To:.
Thanks for the advice, but I prefer Firefox for the time being. I
may try to persuade the Mozilla people to accept a patch. Can you
give me a reference to a RFC-draft or something equivalent?
Live in peace, Tomas
- --
Tomas Fasth <[EMAIL PROTECTED]>
GnuPG 0x9FE8D504



Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread John Hasler
Josselin Mouette writes:
> Main must be built with only packages from main.

Packages in main must be _buildable_ with only packages from main.
-- 
John Hasler




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Josh Triplett
Wesley W. Terpstra wrote:
> I am developing a very CPU-intensive, open-source error-correcting code.
> 
> The intention of this code is that you can split a large (> 5GB)
> file across multiple packets. Whenever you receive enough packets that
> their combined size = the file size, you can decode the packets to
> recover the file, regardless of which packets you get.

Sounds very interesting; there would be a ton of applications for that.
 Thanks in advance for deciding to make it Open Source / Free Software.

> This means a lot of calculation over gigabytes worth of data.
> Therefore, speed is of utmost importance in this application.
> 
> The project itself includes an ocaml compiler (derived from fftw) which
> generates C code to perform various types of Fast Galois Transforms. Some
> of the output C code uses SSE2 exclusively. This C code is then compiled 
> and linked in with the other C sources that make up the application.

Interesting approach.

> Now, on to the dilemma: icc produces object files which run ~2* faster
> than the object files produced by gcc when SSE2 is used. (The non-SSE2
> versions are also faster, but not so significantly.) Both gcc and icc can 
> compile the generated C files. My University will shortly own a licence
> for icc which allows us to distribute binaries.

Whether your university owns a license or not does not really affect
Debian.  icc cannot be included in Debian main.

> So, when it comes time to release this and include it in a .deb, I ask
> myself: what would happen if I included (with the C source and ocaml
> compiler) some precompiled object files for i386? As long as the build
> target is i386, these object files could be linked in instead of using
> gcc to produce (slower) object files. This would mean a 2* speedup for
> users, which is vital in order to reach line-speed. Other platforms 
> recompile as normal.

Keep in mind that if your algorithm is as good as it sounds, it will be
around for a long time.  Even if a GCC-compiled version can't achieve
line-speed right now, if all it needs is a 2x speedup, normal increases
in computer technology will provide that soon enough.

Consider this: any package with non-free Build-Depends that aren't
strictly required at runtime could take this approach, by shipping
precompiled files.  For example, this has come up several times with
Java packages that tried to just ship a (Sun/Blackdown-compiled) .jar
file in the source package.  The answer here is the same: you can't ship
compiled files to avoid having a non-free build-depends (and shouldn't
ship compiled files at all, even if they were compiled with a Free
compiler); the package should always be built from source.

What you can do is this:

* Upload a package to main which builds using GCC.  (As a side note, you
might check to see if GCC 3.4/3.5 produces significantly better code.)

(Steps below this point are optional.)

* Make it easy for people to rebuild using icc.  See the openoffice.org
packages for an example; they contain support for rebuilding using a
non-free JDK based on a flag in DEB_BUILD_OPTIONS.

* Supply icc-built packages either on your people.debian.org site or in
contrib; if the latter, you need to use a different package name and
conflict with the gcc-built package in main.
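A hypothetical debian/rules fragment in the spirit of the DEB_BUILD_OPTIONS
approach above (the variable handling is simplified; the real openoffice.org
packaging is more involved):

```make
# Build with icc only when the builder explicitly asks for it, e.g.:
#   DEB_BUILD_OPTIONS=icc dpkg-buildpackage
ifneq (,$(findstring icc,$(DEB_BUILD_OPTIONS)))
  CC = icc
else
  CC = gcc
endif
```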

> On the other hand, is this still open source?

It is not particularly meaningful to talk about compiled binaries being
Open Source or not.  Your program would obviously be Open Source, since
it would be available under the GPL; it is acceptable *under the GPL* to
provide binaries compiled with non-free compilers, unless the resulting
compiled binary is somehow derivative of a non-free work that is not an
OS component.  In the end, if people want to exercise their rights under
the GPL, they will want the source, not the binaries, and you are
supplying that source alongside the binaries, which satisfies the GPL.

> Is this allowed by policy?
> Can this go into main?

No.  Packages in main must be built using only other packages in main;
icc is not in main, so you can't ship an icc-built package in main.

> PS. I will provide the source code to anyone who requests it, but not yet
> under the GPL. Only after I publish a paper about the algorithm will the 
> code be released under the GPL.

Keep in mind that FFTW is GPLed, so unless you have made other
arrangements with its copyright holders, you need to refrain from
supplying the code or binaries to anyone unless under the GPL.

- Josh Triplett


signature.asc
Description: OpenPGP digital signature


Kernel 2.6.x real time clock hang on Dell

2004-10-18 Thread W. Borgert
Hi,

I'm bitten by a problem that is already known to some people, but
for which I found no solution.  On a Dell server, the 2.6.7 kernel
(kernel-image-2.6.7-1-686-smp) hangs during boot at the real time
clock.  It seems this problem is only known to Debian users :-(

Is there a solution or a workaround?

"I have problem on a Dell Precision 670.  I have installed the
kernel 2.6.8 using the debian Sarge net installation.  The boot
hangs after loading the driver for the Real Time Clock."
http://www.linuxquestions.org/questions/showthread.php?s=&postid=1188223#post1188223

"...always hang on the "Real Time Clock Driver".  I figured it a
bug in the 2.6 kernel..."
http://zacbowling.com/archives/2004/10/06/debian-dell/

Thanks in advance!

Cheers, WB




Re: Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Josselin Mouette
On Monday, 18 October 2004, at 19:22 +0200, Wesley W. Terpstra wrote:
> So, when it comes time to release this and include it in a .deb, I ask
> myself: what would happen if I included (with the C source and ocaml
> compiler) some precompiled object files for i386? As long as the build
> target is i386, these object files could be linked in instead of using
> gcc to produce (slower) object files. This would mean a 2* speedup for
> users, which is vital in order to reach line-speed. Other platforms 
> recompile as normal.
> 
> On the other hand, is this still open source?
> Is this allowed by policy?
> Can this go into main?

Main must be built with only packages from main.

If you really want to distribute a package built with icc, you should
make a separate package in the contrib section, and have it conflict
with the package in main.

The GPL doesn't restrict anything you are describing, as long as the
source is available alongside.
-- 
 .''`.   Josselin Mouette/\./\
: :' :   [EMAIL PROTECTED]
`. `'[EMAIL PROTECTED]
  `-  Debian GNU/Linux -- The power of freedom


signature.asc
Description: Digitally signed message part


Reproducible, precompiled .o files: what say policy+gpl?

2004-10-18 Thread Wesley W. Terpstra
I am developing a very CPU-intensive, open-source error-correcting code.

The intention of this code is that you can split a large (> 5GB)
file across multiple packets. Whenever you receive enough packets that
their combined size = the file size, you can decode the packets to
recover the file, regardless of which packets you get.

This means a lot of calculation over gigabytes worth of data.
Therefore, speed is of utmost importance in this application.

The project itself includes an ocaml compiler (derived from fftw) which
generates C code to perform various types of Fast Galois Transforms. Some
of the output C code uses SSE2 exclusively. This C code is then compiled 
and linked in with the other C sources that make up the application.

Now, on to the dilemma: icc produces object files which run ~2* faster
than the object files produced by gcc when SSE2 is used. (The non-SSE2
versions are also faster, but not so significantly.) Both gcc and icc can 
compile the generated C files. My University will shortly own a licence
for icc which allows us to distribute binaries.

So, when it comes time to release this and include it in a .deb, I ask
myself: what would happen if I included (with the C source and ocaml
compiler) some precompiled object files for i386? As long as the build
target is i386, these object files could be linked in instead of using
gcc to produce (slower) object files. This would mean a 2* speedup for
users, which is vital in order to reach line-speed. Other platforms 
recompile as normal.

On the other hand, is this still open source?
Is this allowed by policy?
Can this go into main?

Some complaints and my answers below:

C: How do we know the object files aren't trojaned? 
A: Because I am both the upstream developer and (will be) the debian 
   maintainer, and I say they aren't.

C: You can't recompile the application without ICC, which is not free.
A: You can still rebuild it with gcc.

C: But you can't rebuild _exactly_ the same binary.
A: This is essentially *my* question: is this required by policy/gpl?
   Remember, you can always get ICC yourself. If there is a GPL problem, 
   then I think no MSVC application can be GPL either.

C: You're just too lazy to hand-optimize the assembler and include that.
A: You're right. Some of those auto-generated C files are > 64k of
   completely incomprehensible math. 
   I could include .S files instead of .o files, though, if that helps.

C: You're just too lazy to fix gcc.
A: I also wouldn't know where to begin, and I already file bugs.
   Even if I did know where to begin, gcc is not my responsibility.

C: A (security) bugfix won't get linked in.
A: A bug in the auto-generated C code is unlikely, and if there were one,
   changing the .c file makes it newer than the .o, which means gcc will
   rebuild it.

That's it!
What are the thoughts of GPL and policy experts?

PS. I will provide the source code to anyone who requests it, but not yet
under the GPL. Only after I publish a paper about the algorithm will the 
code be released under the GPL.

-- 
Wesley W. Terpstra <[EMAIL PROTECTED]>




Re: Right Way to make a configuration package

2004-10-18 Thread C. Gatzemeier
On Monday, 18 October 2004, at 02:01, Enrico Zini wrote:

> One problem with diversion could also be that the original package's
> scripts won't probably edit the diverted conffile, but would probably
> edit the file in the traditional place instead.

The same would be the case for admins and users, and their scripts, tools and
utilities, right? Having only one authoritative place for configuration is
probably less prone to confusion.

> > I suspect the only sensible way to do this is to implement multilevel
> > configuration files in all applications we want to configure at
> > install time.

Multilevel config files would sure be nice; where the apps don't support them,
it might be sufficient to provide only multilevel *defaults*: application
defaults, package defaults, system defaults (CDDs) and possibly admin
defaults. That should be possible with any type of app configuration.
Each level of authority could optionally provide a description of its
defaults, to be processed by a configuration helper such as CFG, which is
designed to never interfere with user settings, as described at
http://freedesktop.org/Software/CFG

Quickly brainstorming this, customizing a running system to a template (a CDD)
could be a two-step process: copying in the meta info, and then "resetting"
(interactively or not) to the new defaults. In new installations, the settings
could automatically default to the customization if the meta data is present
early enough.

Kind Regards,
Christian




Re: about volatile.d.o/n

2004-10-18 Thread paddy
Thomas,

On Sun, Oct 17, 2004 at 11:53:03PM -0700, Thomas Bushnell BSG wrote:
> paddy <[EMAIL PROTECTED]> writes:
> 
> > 'stable even for users who are "misusing" the system.' sounds like it
> > could turn out to be a tall order, if it is intended to have wider
> > application.
> 
> It is a tall order.  It is also one that Debian has done fairly well,
> by having very strict policies about stable.

Indeed.  Kudos to the many developers who work so hard to do this.

Stable seems to achieve this by long timescales and restricting updates
to a minimum.  There will clearly be more pressure on such mechanisms
for volatile.  Perhaps it is possible, desirable and the will of the
interested parties to pursue the same lofty standards, but that it is
not as trivial as saying "these same things should apply" is, I feel,
worth pointing out.  After all it is the existing situation that leads
us here today:  these values can sometimes be at odds with other
worthwhile values.

Indeed, if you change these parameters, can you credibly claim to be
applying the same standards?

> > I think there need to be good reasons to depart from stable, and
> > clearly, in some areas at least, there are.  This may or may not
> > be one of those areas, I don't see into it that deeply.
> 
> I want to hear the *exact* reasons.  So far, it has been things like:
> 
> "Virus scanners must be updated in order to remain useful."
> "And so, we must be able to change locale information, add new command
>  line args, or fix spelling errors in output sometimes."
> 
> The first may be true.  The second does not follow.

Agreed. It doesn't necessarily follow.

But that won't stop the policy cart being put in front of the purpose horse.

WRT spelling errors, etc:  The real case seems to me less trivial than 
the simplifications that have followed, but as a general rule I'd be happy
to agree to not fixing spellings: as part of a _coherent_ policy going 
in the same direction, it makes perfect sense.  

I see no reason why, in principle, a volatile modelled on stable with
just a shorter release cycle shouldn't be viable, and to some degree
effective.  I'm happy that there seems to be a rough consensus that
volatile should have a purpose beyond "a shorter release cycle", and 
that updates into volatile could be gated (in part) by consideration
of their relevance to that purpose: that seems better to me.  I imagine
we're all a little curious or even nervous to find out what 'sufficiently 
relevant to purpose' will turn out to be.

That really strikes me, it gets to the heart of things:

the purpose of stable is ...
the purpose of volatile is ...

In the end we will have a good answer and all will be right in the world.
Form will follow from function.

It could easily be that blindly applying all of the expectations that 
people bring with them from stable could hobble volatile unduly,
but I hope I'm just imagining that.

But I'm floating away ...

I can't yet see what puts "new command line args" in the same category
as "fixing spelling errors", unless by the former you mean gratuitously
incompatible changes to the command line interface, in which case we are
in total agreement; but that is not clear given your previous phrasing:
(IIRC) "new command line features".

But it's a long way by the rules and quicker by example, so ...

Imagine a hypothetical virus scanner, let's call it foo.

foo v.1 has no command line arguments in normal use: it takes stdin,
it outputs on stdout/stderr.  It scans for windows viruses only.

Then, just suppose, foo takes on a capability to scan for a new
Mac virus that is so resource intensive to scan for that the upstream
wisely decide to make it an option: foo -m

(in this particular case it could be argued that a separate name, 'mfoo',
might be better, but is that a good answer to the generic problem?)

Question:  Can foo v.2 take its '-m' into volatile with it ?

(I intend this as an example of a compatible command line change)

> I do not object to saying that some things must be updated to remain
> useful, and exempting them from the normal stable procedures.  But we
> should not exempt the whole *package*, but specific *changes to the
> package*.  Only those changes which "must happen for the package to
> remain useful" should be permitted.  Other ones should not.

I have considerable sympathy for what I perceive as your point of view,
as I've said elsewhere.

I'm still concerned that this area is considerably less black & white
than, for example, security fixes.  Most likely individual maintainers
will arrive at categorisations that in some way parallel those seen
in security: remote hole, local hole, etc. and the whole thing will
look less woolly.  Perhaps some generalisation of these categories
across volatile will be possible.  No doubt, something
like this already exists of which I am unaware.  (I am not unaware
of the grading of bugs for release management, I just can't see how
it wi

Re: Testing Large File Support (LFS)

2004-10-18 Thread Eduard Bloch
#include 
* Anand Kumria [Tue, Oct 19 2004, 12:53:45AM]:

> I'm just wondering if there is an automated way that we can test programs
> and/or packages to determine if they have working large file support?

I do not think this can be automated easily. Every program has a
different way of working with different files. One thing I can think
of is running whole user sessions under strace and evaluating the
strace logs later. Maybe have some modified strace tool that only logs
open calls without O_LARGEFILE, filters out open calls on libraries, etc.
Once a buggy (LFS-inept) package has been detected, it is added to a
report file, and file operations by this program are then filtered out.
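The strace-log idea might be prototyped as a small filter like the following
sketch. The regular expression and the ".so" library filter are assumptions
about strace's output format; real logs contain more call forms (openat,
opens via fopen, and so on):

```python
import re

# Flag open() calls made without O_LARGEFILE in a captured strace log,
# ignoring shared-library opens done by the dynamic linker.
OPEN_RE = re.compile(r'open\("([^"]+)",\s*([A-Z_|]+)')

def non_lfs_opens(strace_log):
    """Return paths opened without O_LARGEFILE, skipping .so files."""
    hits = []
    for m in OPEN_RE.finditer(strace_log):
        path, flags = m.group(1), m.group(2)
        if ".so" in path:
            continue  # library opens are not the program's own file I/O
        if "O_LARGEFILE" not in flags.split("|"):
            hits.append(path)
    return hits
```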

> I've stumbled onto problems in this area, in the past, with Apache and
> Apache2 (fixed upstream but won't be making sarge) and with things like
> wget (no idea about status).

A non-LFS ready wget is a shame. There is wget-cvs which is "recommended
to use". But IMO it is almost ridiculous - we release another Debian
stable without LFS support in important programs, for weak reasons.

> I'm hoping there is some automated tool we can use rather than having to
> find and then report bugs as we go.

The problem is very subtle and not easy to detect. You cannot even
rely on strings | grep fopen64 or something like that; programs could
implement part of the LFS methods but still break in some places.

Regards,
Eduard.
-- 
Anyone still really using a 4.x browser is, unfortunately, beyond help.
They have so many security holes that we could install a new browser
remotely via www.linuxtag.org, an exploit, and a little script magic.
  // Michael Kleinhenz, lt2k-ml




Re: Testing Large File Support (LFS)

2004-10-18 Thread Steve McIntyre
Anand Kumria writes:
>
>I'm just wondering if there is an automated way that we can test programs
>and/or packages to determine if they have working large file support?
>
>I've stumbled onto problems in this area, in the past, with Apache and
>Apache2 (fixed upstream but won't be making sarge) and with things like
>wget (no idea about status).
>
>Today I was hit by uw-imapd not working with >2G mailboxes.
>
>I'm hoping there is some automated tool we can use rather than having to
>find and then report bugs as we go.

Hmmm. That's easier said than done, surely? Maybe you could scan
binaries to see whether they're using the LFS versions of libc
functions, but that's not a guarantee that the program does the right
thing internally.

-- 
Steve McIntyre, Cambridge, UK.[EMAIL PROTECTED]
Into the distance, a ribbon of black
Stretched to the point of no turning back




Re: forwarding bugs to other packages

2004-10-18 Thread Daniel Burrows
On Monday 18 October 2004 06:01 am, Bernd Eckenfels wrote:
> Perhaps we need a "read this before submitting bugs against my package"
> function in reportbugs :)

  I've actually seen some packages do this.  For instance, try submitting a 
bug against mozilla-firefox..

  Daniel

-- 
/--- Daniel Burrows <[EMAIL PROTECTED]> --\
|  "Oh my god!  The entire map is written in GIBBERISH!"|
|  "Worse, my friend.  It's written in German!" -- Fluble   |
\ Evil Overlord, Inc: http://www.eviloverlord.com --/


pgpanh4N8kafG.pgp
Description: PGP signature


Re: Testing Large File Support (LFS)

2004-10-18 Thread Petter Reinholdtsen
[Anand Kumria]
> I'm hoping there is some automated tool we can use rather than
> having to find and then report bugs as we go.

Perhaps you can use 'nm binary' and check whether it is using the 64-bit
versions of the libc function calls?
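A rough sketch of that heuristic follows. The symbol list and the
`nm -D --undefined-only` invocation are assumptions, and, as noted elsewhere
in the thread, symbol names alone are a hint rather than a guarantee:

```python
import subprocess

# Look for 64-bit ("LFS") variants of common libc file calls among a
# binary's undefined dynamic symbols.
LFS_BASES = ("open", "fopen", "stat", "fstat", "lstat", "lseek", "ftruncate")

def classify(nm_output):
    """Split symbols from `nm -D` output into (LFS, non-LFS) call sets."""
    lfs, plain = set(), set()
    for line in nm_output.splitlines():
        if not line.strip():
            continue
        name = line.split()[-1].split("@")[0]  # drop tags like @GLIBC_2.0
        if name.endswith("64") and name[:-2] in LFS_BASES:
            lfs.add(name)
        elif name in LFS_BASES:
            plain.add(name)
    return lfs, plain

def lfs_symbols(binary):
    """Run nm on a real binary and classify its undefined symbols."""
    out = subprocess.run(["nm", "-D", "--undefined-only", binary],
                         capture_output=True, text=True, check=True).stdout
    return classify(out)
```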




Re: about volatile.d.o/n

2004-10-18 Thread paddy
On Sun, Oct 17, 2004 at 11:33:49AM +0200, Andreas Barth wrote:
> * Martin Schulze ([EMAIL PROTECTED]) [041017 11:20]:
> > Andreas Barth wrote:
> > > 
> > > I could however see the possiblity to add a new package "mozilla1.7",
> > > that users can optionally install. However, I also won't like it.
>  
> > Please be very careful with packages like these.  It may require a
> > new version of libfoo1 and libbar2g and libbaz0g etc. which people
> > may accidently install, which in turn can hurt them in other areas
> > and contribute "strange" bug reports.

Sometimes I could almost believe there is a libfoo :)

> As soon as it requires new versions of some libraries, this is a no-go.
> People who want it may go to backports.org or so. Perhaps we may add an
> news item on volatiles page about that then.

Andi,

I don't understand this.  What is the harm (in new versions of libraries)?

> The main word is "above all, do no harm". The default action is to not
> add something.

Regards,
Paddy




Testing Large File Support (LFS)

2004-10-18 Thread Anand Kumria

Hi,

I'm just wondering if there is an automated way that we can test programs
and/or packages to determine if they have working large file support?

I've stumbled onto problems in this area, in the past, with Apache and
Apache2 (fixed upstream but won't be making sarge) and with things like
wget (no idea about status).

Today I was hit by uw-imapd not working with >2G mailboxes.

I'm hoping there is some automated tool we can use rather than having to
find and then report bugs as we go.

Thanks,
Anand

-- 
linux.conf.au 2005   -  http://lca2005.linux.org.au/  -  Birthplace of Tux
April 18th to 23rd   -  http://lca2005.linux.org.au/  -   LINUX
Canberra, Australia  -  http://lca2005.linux.org.au/  -Get bitten!





Bug#277087: ITP: python-pychm -- Python bindings for CHMLIB

2004-10-18 Thread Carlos Z.F. Liu
Package: wnpp
Severity: wishlist

* Package name: python-pychm
  Version : 0.8.0
  Upstream Author : Rubens Ramos <[EMAIL PROTECTED]>
* URL : http://gnochm.sourceforge.net/pychm.html
* License : GPL
  Description : Python bindings for CHMLIB

 PyCHM is a package that provides bindings for Jed Wing's CHMLIB
 library.
 .
 The chm package contains four modules:
  * chm._chmlib: Low-level wrappers around the chmlib API, generated by
SWIG;
  * chm.chmlib: Low-level wrappers around the chmlib API, also generated
by SWIG;
  * chm.extra: Extra utility functions - right now, it contains
functions to perform full-text search and to extract the LCID;
  * chm.chm: High-level support for CHM archives, using the functions
from the modules above.
 .
 Homepage: http://gnochm.sourceforge.net/pychm.html


-- System Information:
Debian Release: 3.1
  APT prefers unstable
  APT policy: (500, 'unstable'), (101, 'experimental')
Architecture: i386 (i686)
Kernel: Linux 2.6.7
Locale: LANG=en_NZ.UTF-8, LC_CTYPE=zh_CN.UTF-8




Bug#277085: ITP: gnochm -- CHM file viewer for GNOME

2004-10-18 Thread Carlos Z.F. Liu
Package: wnpp
Severity: wishlist

* Package name: gnochm
  Version : 0.9.2
  Upstream Author : Rubens Ramos <[EMAIL PROTECTED]>
* URL : http://gnochm.sourceforge.net/
* License : GPL
  Description : CHM file viewer for GNOME

 Gnochm is a Compiled HTML Help (CHM) file viewer for GNOME systems.
 .
 Features are:
  * Support for external ms-its links
  * Full text search support
  * Bookmarks
  * Configurable support for HTTP links
  * Integrated with Gnome2
  * Support for multiple languages
  * Support to open multiple files at once
 .
 Homepage: http://gnochm.sourceforge.net/



-- System Information:
Debian Release: 3.1
  APT prefers unstable
  APT policy: (500, 'unstable'), (101, 'experimental')
Architecture: i386 (i686)
Kernel: Linux 2.6.7
Locale: LANG=en_NZ.UTF-8, LC_CTYPE=zh_CN.UTF-8




RFC: Removal of old cyrus-sasl (libsasl7) from sarge and sid

2004-10-18 Thread Henrique de Moraes Holschuh
As of this writing, all packages in Debian have transitioned to the newer
libsasl2 (cyrus-sasl2).  The last user of libsasl7 (sendmail-wide) is being
removed from sid at the request of its maintainer.

Cyrus-SASL 1.5 (libsasl7) has been deprecated upstream for years.

Can we remove cyrus-sasl and libsasl7 packages from sarge and sid?  Moving
them to oldlibs is the other possibility.  However, oldlibs would require
security efforts to be maintained for sarge at the very least.  libsasl7
is an occasional target for security upgrades, AND it almost always causes
trouble... there is real benefit in dropping it from the distribution.

I am still awaiting word from the SASL maintainer ([EMAIL PROTECTED]) on the
issue, but assuming he has nothing against the removal, it would be good
to know what our users think of it.

It is important to note that anything still using cyrus-sasl 1.5 is way
past due for an upgrade, and you are better off without it IMHO (I speak
as a long-time user of SASL-using apps, a long-time maintainer of
SASL-using apps, and a part-time co-maintainer of cyrus-sasl*).

-- 
  "One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie." -- The Silicon Valley Tarot
  Henrique Holschuh




Re: forwarding bugs to other packages

2004-10-18 Thread Adrian 'Dagurashibanipal' von Bidder
On Monday 18 October 2004 09.54, Brian May wrote:
> Hello,
>
> I have a number of bugs reported against my packages which are
> actually (already reported) bugs in other packages.

Reading the rest of the thread, I conclude that adding an explanation to the 
bug and tagging it wontfix is probably the best solution.

cheers
-- vbi

-- 
Oops


pgpYdxrEyBLOv.pgp
Description: PGP signature


Re: forwarding bugs to other packages

2004-10-18 Thread Peter Eisentraut
Bernd Eckenfels wrote:
> Perhaps we need a "read this before submitting bugs against my
> package" function in reportbugs :)

That already exists: /usr/share/bug/<package>.  "reportbug galeon" provides a 
reasonable example run.




Re: forwarding bugs to other packages

2004-10-18 Thread Adeodato Simó
* Bernd Eckenfels [Mon, 18 Oct 2004 12:01:32 +0200]:

> Perhaps we need a "read this before submitting bugs against my package"
> function in reportbugs :)

  such functionality exists, via the /usr/share/bug/<package>/presubj
  file. see /usr/share/doc/reportbug/README.developers.
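For concreteness, here is a sketch of how a package might ship such a file. The package name "mypackage", the "libfoo" wording, and the direct debian/ staging path are all placeholders; the real mechanism is just a presubj file installed under /usr/share/bug, as documented in reportbug's README.developers:

```shell
# Hypothetical example: ship a pre-submission note that reportbug
# will display before the user files a bug.
PKG=mypackage
mkdir -p "debian/$PKG/usr/share/bug/$PKG"
cat > "debian/$PKG/usr/share/bug/$PKG/presubj" <<'EOF'
Before reporting a bug against mypackage, please check whether the
problem is actually in libfoo; see its bug page first.
EOF
```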

-- 
Adeodato Simó
EM: asp16 [ykwim] alu.ua.es | PK: DA6AE621
 
Nobody can be exactly like me.  Sometimes even I have trouble doing it.
-- Tallulah Bankhead




Re: forwarding bugs to other packages

2004-10-18 Thread Martin Michlmayr
* Wouter Verhelst <[EMAIL PROTECTED]> [2004-10-18 13:32]:
> That's not possible. You can only merge bugs if /all/ properties (tags,
> severity, package reported against, ...) are the same.

Just for the record, tags are an exception.  They are merged when you
merge bugs.
-- 
Martin Michlmayr
http://www.cyrius.com/




Re: forwarding bugs to other packages

2004-10-18 Thread Frank Küster
Bernd Eckenfels <[EMAIL PROTECTED]> schrieb:

> On Mon, Oct 18, 2004 at 05:54:44PM +1000, Brian May wrote:
>> I could just close the bug against my package, but this means other
>> people will encounter the same problem and report the bug against my
>> package again (as it isn't always obvious that it isn't the fault of
>> my package).
>
> So you do not want to reassign them to the correct package? I dont think
> thats a good idea (even when i can understand where are you coming from).
>
> Perhaps we need a "read this before submitting bugs against my package"
> function in reportbugs :)

Unless we have that function, I fear we sometimes need to keep a copy of
a bug open in the wrong package. E.g. we have a bug which was first
reported as #247849, which was reassigned to debconf (and merged with
another one there).  After reassigning, we got #257022 and #264982.  I'd
rather keep them in our package instead of reassigning (and waiting
until the next clone comes in against tetex).

Regards, Frank
-- 
Frank Küster
Inst. f. Biochemie der Univ. Zürich
Debian Developer




Re: forwarding bugs to other packages

2004-10-18 Thread Wouter Verhelst
On Mon, Oct 18, 2004 at 10:19:18AM +0200, Gergely Nagy wrote:
> > I could just close the bug against my package, but this means other
> > people will encounter the same problem and report the bug against my
> > package again (as it isn't always obvious that it isn't the fault of
> > my package).
> 
> How about merging those bugs with the bug reported against the correct
> package?

That's not possible. You can only merge bugs if /all/ properties (tags,
severity, package reported against, ...) are the same.
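To illustrate what lining the reports up looks like in practice (the bug numbers here are made up), one would first reassign so the properties match, then merge, via a mail to control@bugs.debian.org:

```
# Body of a mail to control@bugs.debian.org:
reassign 123456 otherpackage
merge 123456 111111
thanks
```

Both commands are part of the BTS control-server interface; merge still fails if severity or other properties differ, as described above.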

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune




Re: RFC: common database policy/infrastracture

2004-10-18 Thread Oliver Elphick
On Mon, 2004-10-18 at 08:19, Javier Fernández-Sanguino Peña wrote:
> 
> I'm missing some "Best practice" on how to setup the database itself. That 
> is, how to setup the tables (indexes, whatever...) that the application 
> will use from the database and, maybe, even some initial data in some of 
> the tables.

I would suggest something like this:

1. Identify the server, database type (PostgreSQL, MySQL, Firebird,
etc.) and access method (UNIX socket, TCP/IP, TCP/IP with SSL)

2. If your package needs to create a user or database, identify the
database administrator's id and password; note that this may involve
doing "su - postgres" or similar.

3. Determine and, if necessary, create the database user which will own
your package's database and other DB objects.  If your chosen server is
remote, or the server package's policy forbids application packages to
change the authentication setup, this may require manual intervention by
a database administrator.  In that case, your package will be left
installed but not yet usable - any attempt to use it should return a
message saying what steps are needed to get it working.

4. For PostgreSQL, the preferred method of supplying a password from a
script is by creating ~HOME/.pgpass (perms=0600) and specifying the
password there as described in the PostgreSQL manual.  If
password-authenticated access to the database is required, the
installation should create this file for the duration of the
installation only; if it already exists with different contents, it
should be moved aside.  The installation script should use trap
statements to ensure that everything is put back as it was at the
termination of the script.

5. If the database does not already exist,

   a. Create the database, assigning it to the ownership of the
  chosen database user.  For PostgreSQL:

 createdb -O <database_owner> [-E <encoding>] <database>

   b. As the owner, run an SQL script (appropriate to the kind of
  database) to create the schema and populate it.  For PostgreSQL:

 psql -d <database> -f <script> -e [-h <host>] [-p <port>]
 -U <database_owner>

  or

 su - <database_owner> -c "psql -d <database> -f <script> -e
   [-h <host>] [-p <port>]"

  The latter is preferable if the system user <database_owner>
  exists, because it matches PostgreSQL's default authentication
  setup.

  At this point, database authentication may forbid the execution of
  the script; this again may need manual intervention by the
  database administrator.

6. If the database does exist,

   a.  As the owner, run any script necessary to update the database
   objects. (The PostgreSQL script command is as above; the same
   caveats apply, though one would expect that password access as
   database_owner would already be set up and would therefore
   succeed.)
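As a hedged illustration of step 4 above (the file locations follow the PostgreSQL ~/.pgpass convention, but the backup suffix and the dummy credentials are placeholders, not a prescribed interface):

```shell
# Sketch: create ~/.pgpass only for the duration of the script,
# moving any existing file aside and restoring it on exit.
PGPASS="$HOME/.pgpass"
BACKUP="$PGPASS.dpkg-bak"

[ -e "$PGPASS" ] && mv "$PGPASS" "$BACKUP"

restore_pgpass() {
    rm -f "$PGPASS"
    if [ -e "$BACKUP" ]; then
        mv "$BACKUP" "$PGPASS"
    fi
}
# Put everything back however the script terminates.
trap restore_pgpass EXIT INT TERM

# Format: hostname:port:database:username:password (placeholder values)
printf 'localhost:5432:mydb:mydb_owner:secret\n' > "$PGPASS"
chmod 0600 "$PGPASS"

# ... createdb / psql invocations would go here ...
```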



If the database supports SQL transactions (as PostgreSQL does), SQL
scripts should do everything inside a transaction, so that either all
objects are successfully created and populated or else there is no
change at all to the database.
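A minimal illustration of such a script for PostgreSQL (the table and data are invented): because everything sits between BEGIN and COMMIT, a failure in any statement rolls the whole thing back and the database is left untouched.

```shell
# Sketch: generate a schema/population script wrapped in a single
# transaction.
cat > schema.sql <<'EOF'
BEGIN;
CREATE TABLE items (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
INSERT INTO items (name) VALUES ('initial entry');
COMMIT;
EOF

# It would then be run as, e.g.:
#   psql -d <database> -f schema.sql -e -U <database_owner>
```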


> One common issue is that the application depends on that in order to work 
> and it's not done automatically. Maybe the user is prompted to do it but he 
> might be unable to do so until the installation is finished. For an example 
> of this problem see #205683 (and #219696, #265735, #265878). 

The problem there is that the prompting is being done in the preinst,
which is useless, because the files referred to do not yet exist.  That
is not specifically a database-using problem; it is simply a packaging
error.  That package should hold all the information it needs in its
preinst script, or else not attempt to do things in the preinst.

It is, however, quite possible for the application installation to fail
because of circumstances beyond the packaging system's ability to
manage.  Therefore, the package installation scripts need to be able to
report what further steps are needed in order for installation to be
completed.

> It might be good to provide a common mechanism to setup the database so
> that users are not asked to run an SQL script under /usr/share/XXX (usually
> doc/package/examples). Maybe even defining a common location for these
> (/usr/share/db-setup/PACKAGE/.{mysql,pgsql}?). Notice that the SQL
> script that needs to be run might difer between RDBMS. 

Almost certainly it will.  See above for the commands to run.

-- 
Oliver Elphick  olly@lfix.co.uk
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/A54310EA  92C8 39E7 280E 3631 3F0E  1EC0 5664 7A2F A543 10EA
 
 "Delight thyself also in the LORD; and he shall give 
  thee the desires of thine heart."  Psalms 37:4




Re: Looking for Michael Brammer (grisu) / DDTP

2004-10-18 Thread Michael Meskes
On Sun, Oct 17, 2004 at 03:47:29PM -0300, Gustavo Noronha Silva wrote:
> this here in Brasil are unable to continue the work because of technical
> problems and is unable to contact grisu.

Grisu is currently in the process of moving to a new home. I guess he
will be reachable again via email pretty soon.

Michael
-- 
Michael Meskes
Email: Michael at Fam-Meskes dot De
ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED]
Go SF 49ers! Go Rhein Fire! Use Debian GNU/Linux! Use PostgreSQL!




Re: forwarding bugs to other packages

2004-10-18 Thread Bernd Eckenfels
On Mon, Oct 18, 2004 at 05:54:44PM +1000, Brian May wrote:
> I could just close the bug against my package, but this means other
> people will encounter the same problem and report the bug against my
> package again (as it isn't always obvious that it isn't the fault of
> my package).

So you do not want to reassign them to the correct package? I don't think
that's a good idea (even if I can understand where you are coming from).

Perhaps we need a "read this before submitting bugs against my package"
function in reportbugs :)

Gruss
Bernd
-- 
  (OO)  -- [EMAIL PROTECTED] --
 ( .. )  [EMAIL PROTECTED],linux.de,debian.org}  http://www.eckes.org/
  o--o 1024D/E383CD7E  [EMAIL PROTECTED]  v:+497211603874  f:+497211606754
(OO)  When cryptography is outlawed, bayl bhgynjf jvyy unir cevinpl!




Re: Xsession doesn't use umask setting from /etc/login.defs

2004-10-18 Thread Tollef Fog Heen
* Branden Robinson 

| On Fri, Oct 15, 2004 at 12:06:36PM -0700, Steve Langasek wrote:
| > environment variables, at least, are trivial to accomplish using the
| > pam_env module.  Properly setting a umask would call for something else
| > yet.
| 
| Would pam_umask.so be a worthwhile exercise for some enterprising person?
| 
| I somehow suspect that umasks predate environment variables in the misty
| early history of Unix, else the umask would've been made one.

Give me a decent description for it (as in, what goes into the long
description in debian/control) and it's on its way to incoming.

(Yes, I've written one just now.)

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  




Re: forwarding bugs to other packages

2004-10-18 Thread Gergely Nagy
> I could just close the bug against my package, but this means other
> people will encounter the same problem and report the bug against my
> package again (as it isn't always obvious that it isn't the fault of
> my package).

How about merging those bugs with the bug reported against the correct
package? That'd result in your bug getting automatically closed when the
bug is fixed in the other package, it would probably make filtering
easier too, and the bug would normally appear on the bug page of your
package, so users would notice it and wouldn't report it again and again.

HTH,
-- 
Gergely Nagy




forwarding bugs to other packages

2004-10-18 Thread Brian May
Hello,

I have a number of bugs reported against my packages which are
actually (already reported) bugs in other packages.

I could just close the bug against my package, but this means other
people will encounter the same problem and report the bug against my
package again (as it isn't always obvious that it isn't the fault of
my package).

I want a way to filter out these reports, so when I get a list of
outstanding bugs in my packages, I only get to see real bugs in my
package.

Is there any way I can do this?

I have looked at the list of tags, and not seen anything appropriate.

I could mark the bug as forwarded upstream to "[EMAIL PROTECTED]",
where n is the real bug report, but it isn't really upstream...

Also, a method of automatically finding out when bug n is closed, so I
can close my own bug report, would be nice too.

Is anything like this possible?

Thanks.
-- 
Brian May <[EMAIL PROTECTED]>




Re: RFC: common database policy/infrastracture

2004-10-18 Thread Oliver Elphick
On Mon, 2004-10-18 at 03:23, sean finney wrote:
...
> > Even if the server is on the local machine, I am opposed to having any
> > application package alter the database access policies.  This is OK for
> 
> what exactly do you mean by altering access policies?  granting
> privileges to a new user?

As the postgresql package is delivered, it will only accept connections
where the database user name is the same as the system user name.  So,
when I am logged in as 'olly', I can only connect to PostgreSQL as the
database user 'olly'.  This means that web-based database applications
cannot work, because the connection is made by the system user
'www-data' while the user wants to run it as the database user 'olly';
that connection will be rejected.

In order to get a connection under those circumstances, the
authentication set-up for the database in question needs to be changed
to 'md5' (MD5-encrypted passwords).  This is done by altering
/etc/postgresql/pg_hba.conf. 
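For illustration, the sort of line involved looks like this (the database name, user, and network values are placeholders; the exact column layout is per the PostgreSQL pg_hba.conf documentation for the installed version):

```
# /etc/postgresql/pg_hba.conf -- allow password (md5) logins to one database:
# TYPE  DATABASE  USER  IP-ADDRESS   IP-MASK          METHOD
host    mydb      olly  127.0.0.1    255.255.255.255  md5
```

With a line like that in place, www-data can connect as 'olly' by supplying olly's (MD5-hashed) password.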

...
> for the admin password, i agree.  for the app_user password, i think
> most apps are storing this password in a cleartext file for the
> application to use (php web apps, for example).  that's my opinion,
> anyways.

That may differ per application.  I would argue that it is very bad
security in all circumstances.

-- 
Oliver Elphick  olly@lfix.co.uk
Isle of Wight  http://www.lfix.co.uk/oliver
GPG: 1024D/A54310EA  92C8 39E7 280E 3631 3F0E  1EC0 5664 7A2F A543 10EA
 
 "Delight thyself also in the LORD; and he shall give 
  thee the desires of thine heart."  Psalms 37:4




Re: about volatile.d.o/n

2004-10-18 Thread Francesco P. Lovergine
On Sun, Oct 17, 2004 at 11:33:49AM +0200, Andreas Barth wrote:
> * Martin Schulze ([EMAIL PROTECTED]) [041017 11:20]:
> > Andreas Barth wrote:
> > > * Henning Makholm ([EMAIL PROTECTED]) [041011 18:30]:
> > > > The goal should be that I, as a user, can add volatile to my
> > > > sources.list and periodically do an apt-get upgrade - without risking
> > > > to suddenly have my web browser updated to a new major release where
> > > > it starts behaving differently, all my users' preferences get out of
> > > > kilter, etc.
> 
> > > I think this is one of the most important statements - and I think it
> > > describes our policy quite well.
> > > 
> > > I could however see the possiblity to add a new package "mozilla1.7",
> > > that users can optionally install. However, I also won't like it.
>  
> > Please be very careful with packages like these.  It may require a
> > new version of libfoo1 and libbar2g and libbaz0g etc. which people
> > may accidently install, which in turn can hurt them in other areas
> > and contribute "strange" bug reports.
> 
> As soon as it requires new versions of some libraries, this is a no-go.
> People who want it may go to backports.org or so. Perhaps we may add an
> news item on volatiles page about that then.
> 
> The main word is "above all, do no harm". The default action is to not
> add something.
> 

Indeed, I think the major interest in volatile is beta-quality software
released in stable.  A major upgrade would allow use of a 'sane' version,
which in turn should not require many library changes at all.  I'm thinking
of the pre-1.0 version of mozilla in woody: upgrading to a sane 1.0
version in stable via volatile could be considered, solving many
functional problems while being a sane (and safe) possibility.
Other major upgrades (e.g. mozilla-current) are backports.org's concern.
We currently have a few widely used programs in that condition, e.g.
firefox/thunderbird (though note that those are in much better shape
than the old woody mozilla).

-- 
Francesco P. Lovergine




Re: Xsession doesn't use umask setting from /etc/login.defs

2004-10-18 Thread Jan Nieuwenhuizen
Branden Robinson writes:

When I see complaints like these

> And, by the way:
>   X-No-CC: I subscribe to this list; do not CC me on replies.
>
> Please get an MUA that respects Mail-Copies-To:.

I wonder if filing a bug report against the offending MUA would be
more efficient?  In this case, the headers say

   X-Enigmail-Version: 0.86.1.0

and

   $ apt-cache search enigmail
   mozilla-thunderbird-enigmail - Enigmail - GPG support for Mozilla Thunderbird

so we might be seeing more of these if mozilla isn't fixed.

Jan.

-- 
Jan Nieuwenhuizen <[EMAIL PROTECTED]> | GNU LilyPond - The music typesetter
http://www.xs4all.nl/~jantien   | http://www.lilypond.org




Re: RFC: common database policy/infrastracture

2004-10-18 Thread Javier Fernández-Sanguino Peña
On Sat, Oct 16, 2004 at 07:26:10PM -0400, sean finney wrote:
> applications.  i'd greatly appreciate input, especially from the current
> maintainers of database-using or database-server applications.  the draft
> is available at:
> 
> http://people.debian.org/seanius/policy/dbapp-policy.html

[That should be http://people.debian.org/~seanius/policy/dbapp-policy.html, 
BTW]

I'm missing some "Best practice" on how to setup the database itself. That 
is, how to setup the tables (indexes, whatever...) that the application 
will use from the database and, maybe, even some initial data in some of 
the tables.

One common issue is that the application depends on that in order to work 
and it's not done automatically. Maybe the user is prompted to do it but he 
might be unable to do so until the installation is finished. For an example 
of this problem see #205683 (and #219696, #265735, #265878). 

It might be good to provide a common mechanism to setup the database so
that users are not asked to run an SQL script under /usr/share/XXX (usually
doc/package/examples). Maybe even defining a common location for these
(/usr/share/db-setup/PACKAGE/.{mysql,pgsql}?). Notice that the SQL
script that needs to be run might differ between RDBMSs. 

Just my 2c

Javier


signature.asc
Description: Digital signature


Re: Xsession doesn't use umask setting from /etc/login.defs

2004-10-18 Thread Branden Robinson
On Sat, Oct 16, 2004 at 01:28:31PM +0200, Tomas Fasth wrote:
> However, umask is not an ordinary software configuration property,
> it's a process property initially inherited from init which, by the
> way, set it to 022 (I just checked the source of sysvinit in unstable).

Yes, we know that.

[...]
> What I don't understand is why you think the umask preference should be
> applied differently depending on the type of interface the user choose to
> initiate an interactive session with.

I don't.  Kindly stop putting words in my mouth, and re-read my original
mail.  If you can discuss this subject without indulging yourself in
straw-man attacks like this, please follow-up with a more reasonable
message.

And, by the way:
X-No-CC: I subscribe to this list; do not CC me on replies.

Please get an MUA that respects Mail-Copies-To:.

-- 
G. Branden Robinson|I must despise the world which does
Debian GNU/Linux   |not know that music is a higher
[EMAIL PROTECTED] |revelation than all wisdom and
http://people.debian.org/~branden/ |philosophy. -- Ludwig van Beethoven


signature.asc
Description: Digital signature


Re: Xsession doesn't use umask setting from /etc/login.defs

2004-10-18 Thread Branden Robinson
On Fri, Oct 15, 2004 at 12:06:36PM -0700, Steve Langasek wrote:
> environment variables, at least, are trivial to accomplish using the
> pam_env module.  Properly setting a umask would call for something else
> yet.

Would pam_umask.so be a worthwhile exercise for some enterprising person?

I somehow suspect that umasks predate environment variables in the misty
early history of Unix, else the umask would've been made one.
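If such a module existed, the configuration would presumably be a single line in the session stack, something like this (hypothetical, assuming the module takes an umask= argument):

```
# e.g. in /etc/pam.d/common-session:
session    optional    pam_umask.so umask=022
```

That would cover every PAM-mediated login path, graphical or not, rather than relying on each shell or Xsession script to apply /etc/login.defs.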

-- 
G. Branden Robinson|   The Bible is probably the most
Debian GNU/Linux   |   genocidal book ever written.
[EMAIL PROTECTED] |   -- Noam Chomsky
http://people.debian.org/~branden/ |


signature.asc
Description: Digital signature


Re: about volatile.d.o/n

2004-10-18 Thread Thomas Bushnell BSG
paddy <[EMAIL PROTECTED]> writes:

> 'stable even for users who are "misusing" the system.' sounds like it
> could turn out to be a tall order, if it is intended to have wider
> application.

It is a tall order.  It is also one that Debian has done fairly well,
by having very strict policies about stable.

> I think there need to be good reasons to depart from stable, and
> clearly, in some areas at least, there are.  This may or may not
> be one of those areas, I don't see into it that deeply.

I want to hear the *exact* reasons.  So far, it has been things like:

"Virus scanners must be updated in order to remain useful."
"And so, we must be able to change locale information, add new command
 line args, or fix spelling errors in output sometimes."

The first may be true.  The second does not follow.

I do not object to saying that some things must be updated to remain
useful, and exempting them from the normal stable procedures.  But we
should not exempt the whole *package*, but specific *changes to the
package*.  Only those changes which "must happen for the package to
remain useful" should be permitted.  Other ones should not.

Thomas




Virus ricevuto -- Virus received

2004-10-18 Thread info

  This message has been automatically generated to
inform you that a message you sent (or sent by a
virus that forged your email address) has been
blocked by the antivirus installed on this mail
server.
  Details of the blocked message follow:
virus found: Worm.SomeFool.P
envelope from: debian-devel@lists.debian.org
header from: debian-devel@lists.debian.org
header to: [EMAIL PROTECTED]
subject: Re: List

  Please verify the integrity of your system.




Re: Last call for expat maintainer before NMU

2004-10-18 Thread Raphael Bossek
> I'll take care of it this Monday or Tuesday.
Thanks a lot for your response!

Please consider extending the compiler flags with -pthread -D_REENTRANT.
This is required for my python-4suite package and all other
multi-threaded applications.
I'm also missing -fPIC for shared objects, but there is already a bug
report about that.

There is another issue too: what about creating a third library, with
XML_UNICODE support, called libexpatw?

--
Raphael Bossek

pgpyrke62Uys6.pgp
Description: PGP signature