Bug#1043539: project: Forwarding of @debian.org mails to gmail broken
Greetings,

* Cord Beermann (c...@debian.org) wrote:
> As listmaster i can confirm that it is a big problem to deliver Mails to
> gmail/outlook/yahoo. Yahoo Subscribers are mostly gone by now because they
> bounced a lot, for gmail it is so much that we just ignore bounces because of
> those rules.

As a maintainer of some pretty big lists ... we don't have *that* much trouble delivering to gmail, or others for that matter.

> | helgefjell.de descriptive text "v=spf1 ip4:142.132.201.35 mx ~all"
>
> so you flagged your mail has to come from that IP (or the MX) and from other
> sources it should be considered suspicious.

... but if it's DKIM-signed, then it'll generally get delivered properly.

> SRS/ARC and so on are just dirty patches that try to fix things that were
> broken before, but they will break even more things like Mail signing.

ARC doesn't break DKIM signatures (unless someone's got a very broken DKIM setup which over-signs ARC headers ... but if so, then that's on them).

Thanks,

Stephen

signature.asc
Description: PGP signature
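To make the SPF mechanics being discussed concrete, here is a toy evaluator for the quoted record. It is only a sketch: it handles just `ip4:` terms and a trailing `all` qualifier, whereas real SPF (RFC 7208) also defines `mx`, `a`, `include`, `redirect`, and more, so this is illustrative rather than a conformant implementation.

```python
import ipaddress

def spf_allows(record: str, sender_ip: str) -> str:
    """Toy evaluation of the ip4: mechanisms of a simple SPF record.

    Only 'ip4:' terms and a final qualified 'all' are handled; real SPF
    (RFC 7208) has many more mechanisms and modifiers.
    """
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:          # skip the "v=spf1" tag
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return "pass"
        elif term.endswith("all"):           # e.g. "~all" -> softfail
            return {"~": "softfail", "-": "fail", "?": "neutral"}.get(term[0], "pass")
    return "neutral"

record = "v=spf1 ip4:142.132.201.35 mx ~all"
print(spf_allows(record, "142.132.201.35"))  # the listed host passes
print(spf_allows(record, "203.0.113.7"))     # any other source softfails
```

A `~all` softfail is exactly the "from other sources it should be considered suspicious" signal described above, which is why a DKIM signature that survives forwarding matters so much.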
Bug#1043539: project: Forwarding of @debian.org mails to gmail broken
Greetings,

* Mattia Rizzolo (mat...@debian.org) wrote:
> Alternatively, I wonder if ARC nowadays is respected enough (and if
> Google cares about it)... I personally don't have any system with ARC
> under my care.

Sadly, no, they don't seem to care one bit about ARC, except possibly if it's their own ARC sigs. If someone has some idea how to get them to care about ARC, I'd love to hear about it, as I have folks on the one hand who view DKIM/DMARC as too painful to set up but who then end up with bounces from gmail due to my forwarding of messages through my server (which are being ARC-signed by it, and which pass on that the SPF check was successful when they arrived at my server)...

I'd encourage everyone running their own email servers to please get DKIM/DMARC/ARC/SPF set up. Yeah, it's annoying, but it's not actually all *that* bad to do.

Thanks,

Stephen
Bug#963699: Fwd: PostgreSQL: WolfSSL support
Greetings,

* Felix Lechner (felix.lech...@lease-up.com) wrote:
> Attached please find a WIP patch for wolfSSL support in postgresql-12.

It would really be best to have this based off of HEAD, rather than v12, if we're going to be looking at it. We certainly aren't going to add support for something new into the back-branches. Further, I'd definitely suggest seeing how this plays with the patch to add support for NSS which was posted recently to -hackers by Daniel.

Thanks,

Stephen
Bug#941495: Internal compiler errror: in decode_addr_const, at varasm.c:2864
Package: gcc-6
Version: 4:6.3.0-18+deb9u1
Severity: important

Greetings,

gcc is failing on powerpc64le-linux-gnu with an internal compiler error. This bug is preventing pgbackrest (and, likely, PostgreSQL itself as of current HEAD, though I haven't had a chance to test it myself) from being built on this platform. Per the GCC folks, gcc 7, 8 and 9 don't have this issue:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91689

The build log is available here:

https://pgdgbuild.dus.dg-i.net/job/pgbackrest-binaries/55/architecture=ppc64el,distribution=stretch/console

Also per the GCC folks, this was a duplicate of:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87647

which included a patch to fix the issue:

https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/varasm.c?r1=265341=265340=265341

No idea if that's something that could be back-patched to gcc-6, but it's at least a really small patch. It would be great to get this fixed in gcc-6 so that pgbackrest and PostgreSQL can be built on powerpc64le-linux-gnu; otherwise we'll end up having to drop support for this platform on stretch.

Thanks,

Stephen
Bug#849760: ferm cache system broken
Package: ferm
Version: 2.2-3

Greetings,

ferm allows the inclusion of other files, including files which might be outside of the /etc/ferm directory. When those files change and the user issues a 'reload', the ferm cache should be updated. That is not currently happening, because the ferm cache system assumes there is only a configuration change if a file in /etc/ferm has been changed, which is incomplete and incorrect.

When a user issues a 'reload', ferm should regenerate the cache regardless of whether it believes there are changes or not; the user asked for a reload, and checking if files in /etc/ferm have changed is insufficient. A possible alternative might be to generate the output from ferm and compare it with the cache, but even in that case it's possible the user is issuing a reload because the *kernel* rules were changed and they wish for the ferm reload to correct the running kernel. In the end, if the user asks for a reload, ferm should really just *do* it; anything else really isn't correct.

Thanks!

Stephen
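The invalidation problem described above can be sketched as follows. The function names are hypothetical, not ferm's actual code: the first check mirrors the reported behaviour (only files under the watched directory are examined, so an included file elsewhere never triggers a rebuild), while the second hashes every file the configuration actually pulls in, wherever it lives.

```python
import hashlib
import os

def dir_mtime_changed(watch_dir: str, cache_mtime: float) -> bool:
    """Flawed invalidation (roughly the behaviour described in this bug):
    only files under watch_dir are examined, so an included file that
    lives outside it never triggers a cache rebuild."""
    return any(
        os.path.getmtime(os.path.join(root, name)) > cache_mtime
        for root, _dirs, files in os.walk(watch_dir)
        for name in files
    )

def content_hash(config_files) -> str:
    """A safer basis for invalidation: hash every file the configuration
    actually includes, wherever it lives."""
    digest = hashlib.sha256()
    for path in sorted(config_files):
        with open(path, "rb") as fh:
            digest.update(fh.read())
    return digest.hexdigest()
```

Note that even the content-hash approach misses the case where the *kernel* rules were changed behind ferm's back, which is exactly why an unconditional rebuild on 'reload' is the correct behaviour.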
Bug#831234: [Pkg-postgresql-public] Bug#831234: Bug#831234: postgresql-9.5: FTBFS: Tests failures
* Christoph Berg (m...@debian.org) wrote:
> Re: Stephen Frost 2016-07-14 <20160714142721.gl4...@tamriel.snowman.net>
> > * Lucas Nussbaum (lu...@debian.org) wrote:
> > > (This might be related to the fact that I use a "user" login on my build
> > > machine)
> >
> > Yes, it is.
> >
> > We could possibly remove those tests, but I'm not really thrilled with
> > that idea. I'm not sure if it'd be at all sensible to try and write
> > something to check if the role "user" already exists.
> >
> > One thing we could do is provide an alternate expected file which has
> > the results when this test is run with a login user of "user". That
> > doesn't seem great either though.
> >
> > In any case, this should probably be sent over to the pgsql-hackers
> > list.
>
> TBH, I'm not even sure if this is a bug. "user" is so overly generic
> that it shouldn't be used as a user name in the first place.

It's certainly not a bug in PG, just an artifact of certain parts of the regression suite making assumptions about the environment. It's not really ideal that we make such assumptions, but there isn't really a very easy way to fix that. Still, it might be something to ask the -hackers list.

> If pg_regress supported output variations that would just apply to
> some parts (like patches do), we could add that, but a full _1.out
> copy of the file just for this corner case sounds like overkill. (And
> as there's no direct way to add comments to the _1.out file,
> remembering why it was created might be difficult as well.)

That is certainly true, though one might at least look at the git history for the file to understand why it was added that way.

Thanks!

Stephen
Bug#831234: [Pkg-postgresql-public] Bug#831234: postgresql-9.5: FTBFS: Tests failures
* Lucas Nussbaum (lu...@debian.org) wrote:
> (This might be related to the fact that I use a "user" login on my build
> machine)

Yes, it is.

We could possibly remove those tests, but I'm not really thrilled with that idea. I'm not sure if it'd be at all sensible to try and write something to check if the role "user" already exists.

One thing we could do is provide an alternate expected file which has the results when this test is run with a login user of "user". That doesn't seem great either though.

In any case, this should probably be sent over to the pgsql-hackers list.

Thanks!

Stephen
Bug#798773: postinst script handles comments in config file incorrectly
Christian,

* Christian Hofstaedtler (z...@debian.org) wrote:
> * Stephen Frost <sfr...@snowman.net> [151006 04:12]:
> > That said, I do think it's worthwhile to see about fixing these
> > particular install failures, and the proposed change looks like it would
> > at least do that.
>
> I agree that it would make the situation better for some users.
>
> I was going to write which users are affected by this bug, but now
> that I think more about it, I'm not sure which users are affected.
> As you have some interest, I assume you have run into the bug? How
> did your configuration look like?

Yes, I did run into this bug while upgrading the hidden PowerDNS master for postgresql.org. The configuration we have is relatively simple. My recollection of how it happened is: I installed an older version (3.4.1?) and modified /etc/powerdns/pdns.conf with a few simple changes: allow-axfr-ips, master=yes, slave=yes, etc. Then at some point down the road, I attempted to upgrade to 3.4.6 (in jessie backports), and it blew up on that error.

To fix it, I recall modifying pdns.conf *and* pdns.conf.ucf-dist. It's possible I didn't have to change both, but I think I tried changing pdns.conf first and it didn't help, so I changed pdns.conf.ucf-dist also, and that got me past the issue. Apologies for not having more detailed notes; I was a bit anxious to get it fixed. :)

I'll see about building a test jessie VM, installing the version in jessie, modifying the config, and then doing an upgrade, to see if I can reproduce the error.

> > > > Is there anything I can do to help?
> > >
> > > I'm thinking of deleting most of the code in the postinst for
> > > stretch.
> >
> > Are you thinking about simply assuming that /etc/powerdns/pdns.d is the
> > PDNSDIR ...
>
> Yes, because the files the postinst touches are meant to be the
> files that the previous version of the pdns-server package has
> shipped.

Right.

> > and anything else is up to the user to address?
>
> If the user has moved or renamed the pdns.d dir, or changed the
> include= dir to point to something else entirely, then we have no
> business of touching (and moving!) the users config files at all.

Agreed. We just need to be able to sanely detect that and act accordingly.

> As the postinst doesn't have a version check right now, it
> 1) is wrong for versions after jessie,
> 2) we have to look at all previously released binary packages to see
> the original intent and which conffiles have previously been installed.
> I haven't done that check yet.

Yeah, that doesn't sound like much fun. :)

> > > Not sure what to do about jessie. Given that this bug has existed
> > > since 2006, maybe it's not terribly important to fix in jessie.
> >
> > I disagree. Perhaps I'm being naive, but having the relatively simple
> > case, where /etc/powerdns/pdns.d is the directory and the configuration
> > has been only mildly tweaked, failure during upgrades is not a good
> > position for us to be in.
> >
> > I have to admit that I'm not up to speed on current policy, but I'm
> > happy to try and implement whatever the correct solution is. I'm sure
> > there are other packages which have include directories, is there a
> > clear "right way" to handle this?
>
> I don't think there's much policy here except for the normal "don't
> touch stuff that isn't yours" - i.e. preserve user changes,
> especially if they are to/in files that aren't installed by the
> package.

Right.

> There's some other complication - include= is the old name of the
> include directory setting; jessie's pdns does include-dir= instead. [1]
>
> > Thanks!
>
> Thank you for caring about this,

Absolutely. I like PDNS in general, and given that we're using it to run postgresql.org, we really want it to work well. :) I'm also a DD, btw, though I haven't done much Debian-related work recently. I'd be happy to help out with maintaining PowerDNS, as we're using it.

Thanks!

Stephen
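The "only act when the configuration still points at the default" detection discussed in this thread might look roughly like the sketch below. The helper names and the structure are illustrative, not the actual postinst code; the only facts taken from the thread are the default directory and the old (`include=`) versus new (`include-dir=`) setting names.

```python
import re

DEFAULT_INCLUDE_DIR = "/etc/powerdns/pdns.d"

def effective_include_dir(conf_text: str) -> str:
    """Return the include directory a pdns.conf selects, or the default.

    Handles both the old 'include=' and the newer 'include-dir=' names
    mentioned in the thread, and ignores commented-out lines. The last
    uncommented setting wins.
    """
    chosen = DEFAULT_INCLUDE_DIR
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("#"):
            continue
        m = re.match(r"(include|include-dir)\s*=\s*(\S+)", line)
        if m:
            chosen = m.group(2)
    return chosen

def safe_to_migrate(conf_text: str) -> bool:
    """A postinst should only touch files under the include dir when the
    admin hasn't pointed it somewhere else."""
    return effective_include_dir(conf_text) == DEFAULT_INCLUDE_DIR
```

With a check like this, the "user moved the pdns.d dir" case degrades to doing nothing, which is the "don't touch stuff that isn't yours" behaviour Christian describes.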
Bug#798773: postinst script handles comments in config file incorrectly
Greetings,

Any chance we could get this bug fixed? It looks like a pretty straight-forward change. Is there anything I can do to help?

Thanks!

Stephen
Bug#798773: postinst script handles comments in config file incorrectly
Christian,

Thanks for the quick reply!

* Christian Hofstaedtler (z...@debian.org) wrote:
> * Stephen Frost <sfr...@snowman.net> [151005 22:57]:
> > Any chance we could get this bug fixed? Looks like a pretty
> > straight-forward change.
>
> Only on the surface it's a straight-forward change. When you look
> closer at the postinst code, I'm quite sure the code only works when
> the config var hasn't been changed from the default.
> (Or it does work, but you end up with ucf hijacking files the
> package doesn't ship.)

Looking at the postinst code, I think I see what you mean. That said, I do think it's worthwhile to see about fixing these particular install failures, and the proposed change looks like it would at least do that.

> > Is there anything I can do to help?
>
> I'm thinking of deleting most of the code in the postinst for
> stretch.

Are you thinking about simply assuming that /etc/powerdns/pdns.d is the PDNSDIR and anything else is up to the user to address?

> Not sure what to do about jessie. Given that this bug has existed
> since 2006, maybe it's not terribly important to fix in jessie.

I disagree. Perhaps I'm being naive, but having the relatively simple case, where /etc/powerdns/pdns.d is the directory and the configuration has been only mildly tweaked, fail during upgrades is not a good position for us to be in.

I have to admit that I'm not up to speed on current policy, but I'm happy to try and implement whatever the correct solution is. I'm sure there are other packages which have include directories; is there a clear "right way" to handle this?

Thanks!

Stephen
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
Aaron,

* Aaron Zauner (a...@azet.org) wrote:
> I think we should take this discussion to an appropriate PostgreSQL
> mailing list (please feel free to include me in a thread if you start
> one). But I think it's best to close this bug for now. I agree that MD5
> needs to be replaced, but using plaintext instead is certainly no
> option.

It's already being discussed on the pgsql-hackers mailing list. You are certainly welcome to join in.

http://www.postgresql.org/message-id/flat/20150304020146.gd24...@momjian.us

Thanks!

Stephen
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
* Michael Samuel (m...@miknet.net) wrote:
> I think the direction upstream is going with SCRAM (or similar) is
> fine, but either new hashes are required or using a customized code
> base that uses MD5(password|username) where the password would normally
> be directly input is needed.

For my 2c, I'm hopeful we can use the recommended storage approach instead of keeping the current hashes (except as needed during the transition, of course).

> I don't have time to write any code, but I'm happy to review schemes
> and code (and probably will at some point anyway).

Thanks, I'll keep that in mind.

Stephen
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
* Christoph Berg (m...@debian.org) wrote:
> Re: Stephen Frost 2015-03-04 <20150304145551.gu29...@tamriel.snowman.net>
> > > Just to put the idea out there; PGSQL currently links to OpenSSL for
> > > TLS, right? TLS has support for SRP [0] [1]. This could be used for
> > > password based authenticated TLS sessions without client certificates.
> > > Might be less of a burden on users than deploying PKIX with
> > > client-certificates while still providing proper security.
> >
> > That's an excellent thought.. I wasn't aware of this. Unfortunately,
> > I'm not sure that we could make it the default in Debian as it requires
> > server-side certificates be configured and used properly (correct?) but
> > I don't see a reason to not support it and encourage its use.
>
> We have the autogenerated snakeoil certificates that we use anyway. If
> these aren't good (why?), we could put more automation in there and
> generate proper certificates. That's probably more of a
> distribution-wide topic and not just PostgreSQL, though.

They are sufficient to prevent sniffing, but not man-in-the-middle attacks, because you don't verify the server side. The 'md5' and 'password' authentication mechanisms available in PG do nothing to address that either, and the proposed changes wouldn't fix that either. SCRAM (possibly with TLS channel bindings, not sure..) would address that issue, I believe.

Thanks!

Stephen
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
Michael,

* Michael Samuel (m...@miknet.net) wrote:
> On 4 March 2015 at 15:22, Stephen Frost <sfr...@snowman.net> wrote:
> > That really just changes it back to the 'password' case though, doesn't
> > it? An attacker who can sniff the network would get the response from
> > the client and be able to use it in a replay attack just as if it was
> > the password.
>
> They can already do that if they reconnect 65k times (on average - this
> could be fixed by choosing the challenges sequentially instead of
> randomly). But yes, the intent is to make it as secure as the 'password'
> case.

I was hoping for an option which would actually improve it, not make it the same as another mechanism that already exists..

> > Sure, we could store multiple responses, but given that we don't have
> > any auto-lockout mechanism after X bad attempts or anything like that,
> > an attacker could simply continue retrying until we pick the one which
> > they sniffed.
>
> You'd have to store billions for this to be effective without lockout.

To make it better than 'password' and effective *with* a lockout, we'd still have to store quite a few random options, no? Even with a 3-strike lockout (which is a bit aggressive, imv..), wouldn't you need to store 27 or more variations to keep the probability of success inside of those 3 attempts acceptably low?

I realize that's what you were getting at with your replay comment above, but I wanted to re-state it to make sure I understood your suggestion correctly. While the PG community might be willing to pursue this approach, I doubt they'd want to seriously increase the size of pg_authid and, really, to make this work well, how many different stored hashes would be required for this to be effective at preventing an attacker who can sniff the network from getting in? We are clearly not going to store 4 billion entries, and I doubt most people would even want to store more than, say, *10*. Perhaps if we also added an auto-lockout feature (something I've wanted for quite a while anyway...) this would work out well.

> As I said, you can't make this scheme safe against a network attacker.
> They'd be able to dictionary attack the response, or just mess with the
> active connection.

I agree that there isn't much we can do for the active connection, but that's at least more likely to be noticed. If this approach isn't effective to deter a network attacker, then what are we protecting with this? If they have access to pg_authid, it's highly likely they have access to all of the data in the database also.

> > One advantage of this approach over password is that the attacker
> > wouldn't be able to get the actual password very easily and so the
> > sniffed response would only be usable for the given system, but this
> > definitely reduces the effectiveness of the challenge/response aspect,
> > to a point where I'm not really sure it's still useful.
>
> That is correct. I'm personally in favour of just using 'password' -
> at-least it's honest. Server-authenticated TLS (eg. verify the server
> certificate only) or SSH tunnelling are the best ways to secure the
> network protocol as it stands.

PG supports client-side certificate based authentication, which would be far better than any kind of password-based authentication. If password-based auth is insisted upon, then TLS to verify the server side and protect the network connection would be good; it would remove the need for the challenge/response protocol and make 'password' an acceptable option there. That doesn't mean it'd be a good default for Debian, imv, because we *don't* require server-authenticated TLS, or TLS at all, currently.

Further, I'm not convinced that 'password' there would really be all that much better than 'md5' as, as has been discussed, if you have access to pg_authid then you have access to the PG data directory. Further, at that point, you've probably got access to the backend, and with password-based auth the postmaster process will see the user's actual password.

In the end, I think we might move to support SCRAM and simply deprecate md5 in favor of that, rather than try to fix the current mechanism without breaking things, because any such fix wouldn't be a serious improvement and would just mislead users into thinking it's safe. We're currently looking at getting SCRAM support by implementing SASL, but I'm worried that we'll then create a dependency on SASL that people won't be happy with, and therefore I'm very curious about how difficult it'd be to implement proper SCRAM directly. Do you know if there is BSD-licensed code (PG is entirely BSD-licensed) that implements SCRAM?

Thanks!

Stephen
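A quick back-of-envelope check of the 3-strike numbers in this exchange, assuming (as the discussion does) that the server picks one of the N stored challenges uniformly at random on each connection attempt and the attacker has sniffed exactly one response:

```python
def replay_success_probability(n_challenges: int, attempts: int = 3) -> float:
    """Chance an attacker who sniffed one challenge/response pair gets in
    before a lockout triggers, assuming the server picks one of
    n_challenges uniformly at random on every connection attempt."""
    return 1 - (1 - 1 / n_challenges) ** attempts

for n in (10, 27, 100):
    print(f"{n:3d} stored variations -> {replay_success_probability(n):.1%}")
```

With 10 stored variations the sniffer still gets in about 27% of the time within three tries; around 27 variations brings that down to roughly 10%, which matches the figure floated above, and even 100 variations only reaches about 3%.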
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
Aaron,

* Aaron Zauner (a...@azet.org) wrote:
> Stephen Frost wrote:
> > We're currently looking at getting SCRAM support by implementing SASL,
> > but I'm worried that we'll then create a dependency on SASL that people
> > won't be happy with and therefore I'm very curious about how difficult
> > it'd be to implement proper SCRAM directly. Do you know if there is
> > BSD-licensed code (PG is entirely BSD licensed) that implements SCRAM?
>
> Just to put the idea out there; PGSQL currently links to OpenSSL for
> TLS, right? TLS has support for SRP [0] [1]. This could be used for
> password based authenticated TLS sessions without client certificates.
> Might be less of a burden on users than deploying PKIX with
> client-certificates while still providing proper security.

That's an excellent thought.. I wasn't aware of this. Unfortunately, I'm not sure that we could make it the default in Debian, as it requires server-side certificates be configured and used properly (correct?), but I don't see a reason not to support it and encourage its use.

Thanks!

Stephen
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
* Michael Samuel (m...@miknet.net) wrote:
> On 4 March 2015 at 12:03, Aaron Zauner <a...@azet.org> wrote:
> > Uh, no, using 'password' is far worse, and uniformly so, than using
> > md5. I have no idea why anyone would think it's better to store a
> > cleartext version of your password in the pg_authid data (note that
> > pg_shadow is only a view now, I replaced it long ago when I rewrote
> > the user/group system to be role-based).
>
> I was referring to the pg_hba.conf setting in my recommendation. Using
> password there does not change the stored hash, it only changes the
> network protocol.

Then it's simply a trade-off between trusting the network traffic, as the password will then be sent *in-cleartext* across the wire, and trusting the data on disk (which, as discussed, if you have access to already then you hardly need the password). PG does allow you to make that trade-off, but having a challenge/response to protect the hash of the password as it goes across the network is far more useful than trying to protect something in pg_authid, which you can only get if you've already compromised the postgres account.

> Agreed - most enterprise or cloud deployment I've been involved with
> use either PKIX or kerberos. This is a good security measure. Replacing
> MD5 would be nice as well (scrypt, bcrypt?). But I guess a debian bug
> report is the wrong place to discuss this.

Agreed that a Debian bug is the wrong place to discuss fixing password hashing. The current discussion in the community is about implementing SCRAM with SASL as an additional authentication method. You would certainly be welcome to provide any thoughts you have to the thread on pgsql-hackers.

Thanks,

Stephen
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
Aaron,

* Aaron Zauner (a...@azet.org) wrote:
> Debian ships a set of Perl scripts to configure PostgreSQL server
> configurations; these are quite outdated and are currently configuring
> authentication to use MD5 when 'password' should be used instead.

Uh, no, using 'password' is far worse, and uniformly so, than using md5. I have no idea why anyone would think it's better to store a cleartext version of your password in the pg_authid data (note that pg_shadow is only a view now; I replaced it long ago when I rewrote the user/group system to be role-based).

> http://www.openwall.com/lists/oss-security/2015/03/03/12

This isn't news, and the post linked by Michael is actually a discussion that I started 10 years ago. It's cute that atom has found it and claimed it to be a serious issue, but it simply isn't.

> I'd recommend to change this setting ASAP. Open to discuss.

Absolutely not would be the answer. There is no reason to believe that having a cleartext password is better than having a hashed representation of it. I hope someone on the OSS list corrects Michael's understanding.

The PG community has long been discussing the possibility of providing a new authentication mechanism to replace the md5 one, but anyone who actually cares about security will be using Kerberos or certificate-based authentication anyway, so it hasn't been a priority.

Thanks,

Stephen
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
* Michael Samuel (m...@miknet.net) wrote:
> - I don't recommend storing the password in cleartext
> - I *do* recommend exchanging the password in cleartext over the network

And I will continue to argue that it's far worse these days to send the password in cleartext across the wire.

> This is because the exchange network protocol is vulnerable to pass the
> hash - so somebody who has your pg_shadow but can't crack your password
> can still use the hash to login.

Where would they get the pg_authid entry from? It's not directly visible in the network traffic, because PG uses a challenge/response system with md5.

> In the thread it was pointed out that the network protocol is vulnerable
> to session hijacking. Additionally, the challenge-response protocol is
> vulnerable to extremely fast password searches. This is just another
> broken ad-hoc challenge-response protocol to be added to the heap. If
> anyone from postgres is interested in putting a network-compatible fix
> for password hashing in, feel free to contact me.

No, it isn't a great challenge/response, but it's certainly better than just forgoing all of that and sending the password in cleartext. To be clear, I *am* from the PostgreSQL community and I'd be happy to discuss any useful suggestions about providing an alternative that doesn't break the wireline protocol, because as far as I'm aware that's not possible to do. The wireline protocol is quite clear about what it requires, and we have quite a few client-side implementations to consider. Note that this is specifically why other authentication methods are available and encouraged with PG.

Thanks,

Stephen
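The "pass the hash" property being debated here can be shown directly from the md5 exchange as PostgreSQL documents it: the stored form is `'md5' + md5(password || username)` and the wire response is `'md5' + md5(inner_hash || salt)`. The sketch below is illustrative, not libpq code, but it demonstrates that the response can be computed from a stolen pg_authid entry alone, without ever knowing the password.

```python
import hashlib

def md5_hex(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def stored_hash(user: str, password: str) -> str:
    # What pre-SCRAM PostgreSQL keeps in pg_authid:
    # 'md5' + md5(password || username)
    return "md5" + md5_hex(password.encode() + user.encode())

def wire_response_from_password(user: str, password: str, salt: bytes) -> str:
    # What a client sends for the server's 4-byte connection salt.
    inner = md5_hex(password.encode() + user.encode())
    return "md5" + md5_hex(inner.encode() + salt)

def wire_response_from_stolen_hash(pg_authid_entry: str, salt: bytes) -> str:
    # "Pass the hash": the identical response, computed from the stolen
    # pg_authid entry only (strip the 'md5' prefix, append the salt).
    return "md5" + md5_hex(pg_authid_entry[3:].encode() + salt)

salt = b"\x12\x34\x56\x78"
legit = wire_response_from_password("alice", "s3cret", salt)
stolen = wire_response_from_stolen_hash(stored_hash("alice", "s3cret"), salt)
print(legit == stolen)  # True: the stored hash is password-equivalent
```

This is exactly the trade-off under discussion: the challenge/response hides the password from a network sniffer, but the stored hash itself is a usable credential for anyone who has already read pg_authid.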
Bug#779683: [Pkg-postgresql-public] Bug#779683: postgresql: pg_hba scripts (mis)configures for MD5 authentication
Michael,

* Michael Samuel (m...@miknet.net) wrote:
> On 4 March 2015 at 12:33, Stephen Frost <sfr...@snowman.net> wrote:
> > To be clear, I *am* from the PostgreSQL community and I'd be happy to
> > discuss any useful suggestions about providing an alternative that
> > doesn't break the wireline protocol, because as far as I'm aware that's
> > not possible to do. The wireline protocol is quite clear about what it
> > requires and we have quite a few client-side implementations to
> > consider.
>
> The way I'd do this is to store a strong hash (eg. bcrypt, scrypt) of
> the password pre-digested for the challenge-response protocol with a
> fixed challenge. The server sends the same challenge every time - this
> allows replays of the challenge-response protocol, but means that the
> stored hash is reasonably secured and disables pass-the-hash.

That really just changes it back to the 'password' case though, doesn't it? An attacker who can sniff the network would get the response from the client and be able to use it in a replay attack just as if it was the password.

Sure, we could store multiple responses, but given that we don't have any auto-lockout mechanism after X bad attempts or anything like that, an attacker could simply continue retrying until we pick the one which they sniffed. I realize that's what you were getting at with your replay comment above, but I wanted to re-state it to make sure I understood your suggestion correctly.

While the PG community might be willing to pursue this approach, I doubt they'd want to seriously increase the size of pg_authid and, really, to make this work well, how many different stored hashes would be required for this to be effective at preventing an attacker who can sniff the network from getting in? We are clearly not going to store 4 billion entries, and I doubt most people would even want to store more than, say, *10*. Perhaps if we also added an auto-lockout feature (something I've wanted for quite a while anyway...) this would work out well.

One advantage of this approach over 'password' is that the attacker wouldn't be able to get the actual password very easily, and so the sniffed response would only be usable for the given system, but this definitely reduces the effectiveness of the challenge/response aspect, to a point where I'm not really sure it's still useful.

Thoughts appreciated.

Thanks!

Stephen
Bug#778850: [Pkg-postgresql-public] Bug#778850: closed by Martin Pitt mp...@debian.org (Re: Bug#778850: Acknowledgement (Missing 20-column_privilege_leak.patch file in postgresql-8.4 8.4.22-0ubuntu0
* Charlie Brady (charl...@budge.apana.org.au) wrote:
> On Sun, 22 Feb 2015, Debian Bug Tracking System wrote:
> > This is an automatic notification regarding your Bug report which was
> > filed against the postgresql package:
> >
> > #778850: Missing 20-column_privilege_leak.patch file in postgresql-8.4
> > 8.4.22-0ubuntu0.10.04.1 source package
> >
> > It has been closed by Martin Pitt <mp...@debian.org>.
>
> Wouldn't it be wise to at least amend the changelog entry so that going
> forward it isn't incorrect? How does this privilege leak not affect
> Debian?
>
> I agree the patch is risky - I had a look at backporting it myself, and
> it's non-trivial. I wonder if someone familiar with the code will
> assist. I notice that RH haven't updated their postgresql84 package yet.

I wasn't aware that anyone was still concerned with 8.4... Have other patches which are relevant to 8.4 been back-patched?

As the original author of the patch for master through 9.0, I'd be happy to review a patch that someone sends me for 8.4.

Thanks!

Stephen
Bug#773249: lists.debian.org: New list: debian-dug-washington-dc
* Aaron M. Ucko (u...@debian.org) wrote:
> On behalf of the Washington, DC, US Debian developer community, I would
> like to request an official debian-dug-* list to supplant our current
> teams.debian.net list. (I'm sending a copy of this request to the list
> to solicit seconds from other members.)

It's not entirely clear if seconds are necessary, but if they are, this is seconded by me.

Thanks!

Stephen
Bug#727708: Steve Langasek Must Vote
* Lisandro Damián Nicanor Pérez Meyer (lisan...@debian.org) wrote:
> With all the due respect to Steve, considering the fact that he is a
> very involved contributor of Upstart and judging from his position on
> this subject, I also think he should step down from participating as a
> TC member in this specific issue.

I don't agree with this. I have no reason to doubt Steve's ability to do the right thing for Debian. Being a contributor to one means that he's in a position to actually understand the issues better than most. Additionally, I think this approach to voting (if you've ever been involved with X then you can't vote on it) would quickly run us out of competent people available to cast votes.

Thanks,

Stephen
Bug#705219: pg_checksystem does not respect start.conf
Package: postgresql-common Version: 141 postgresql-common provides a way to mark clusters as auto, manual, or disabled. When a cluster is marked as disabled, however, pg_checksystem will still try to verify that the cluster is valid and will attempt to do things like check the filesystem associated with the cluster. In our environment, we have shared storage which allows us to fail over between two systems, however, we intentionally keep mounted only the filesystems for those clusters which are running on the current system, to reduce the chances of two postmasters (on different systems) trying to access/use the same database files. pg_checksystem throws an error in this case though because the 'df' doesn't work against the cluster which we have marked as 'disabled'. My suggestion would be for pg_checksystem to skip checking clusters which are marked as 'disabled' in start.conf. Another option might be for it to more cleanly handle the error case from 'df'. Thanks, Stephen signature.asc Description: Digital signature
Bug#648176: postgis: pgsql2shp/shp2pgsql no longer in $PATH
Marc, * Marc Fournier (marc.fourn...@camptocamp.com) wrote: Since 1.5.3, pgsql2shp and shp2pgsql have moved from /usr/bin to /usr/lib/postgresql/X.Y/bin and have moved from the postgis package to the postgresql-X.Y-postgis package. Wow, thanks for pointing that out- it's quite busted and incorrect and not how I originally packaged postgis.. I'd also like to hear from the current maintainer why in the world these were moved..? Thanks, Stephen signature.asc Description: Digital signature
Bug#592887: Add support for Torque to OpenMPI
Package: openmpi Version: 1.4.2-3 Severity: wishlist Please add support for Torque to the OpenMPI packages.

Build-Depends: libtorque2-dev
./configure [...] --with-tm

Thanks, Stephen signature.asc Description: Digital signature
Bug#529214: Bug #529214 if_ incorrectly assumes rrd input max is interface speed
* Tom Feiner (feiner@gmail.com) wrote: As matthias mentioned in the upstream ticket you filed, the if_ plugin in the 1.3 series should do the right thing now. No, it doesn't. *.max is *still* set to 1,000,000,000, which is wrong. This is a counter since *boot* (or last overflow), not since the last second. Sure, the interface can only do 1Gb per second, but a per-second amount isn't what's being sent to rrdtool. What happens, however, is that *.max is translated to a maximum value that rrdtool will accept, and so it starts throwing away values which are larger than that. Reality is that on a 64bit kernel you're not going to get an overflow until 2^64 (or so), so as soon as you've transferred over 1Gb since booting a system, the if plugin blows up. Can you check if the latest if_ plugin solves your problem? The latest if_ plugin from the munin trunk can be found at: http://munin.projects.linpro.no/browser/trunk/plugins/node.d.linux/if_.in As long as *.max is set to what the interface per-second speed is, and *.max translates to the max value for rrdtool, it's going to be busted. rrdtool can generally handle wrap-arounds, so it might be as simple as removing the .max settings. Alternatively, have a check to figure out if it's a 32bit or 64bit machine and set the max value based on eight-times 2^32 or 2^64 respectively (since it's reported in bytes anyway by /proc/net/dev...). By the way, because of that, you've actually got around 34 seconds on a gigabit interface which is going at full gigabit speeds before you get a roll-over with 32bit counters. Over 5 minutes with a 100Mbps interface. No clue who came up with the idea to set the max value here to the interface speed, but it's just plain wrong. Thanks, Stephen signature.asc Description: Digital signature
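The rollover figures quoted above are easy to sanity-check. A minimal sketch (plain arithmetic, nothing munin-specific; the function name is mine):

```python
# Rough check of the counter-rollover arithmetic above.
# /proc/net/dev reports bytes; link speeds are quoted in bits per second.

def seconds_to_wrap(counter_bits, link_mbps):
    """Seconds until a byte counter of the given width wraps at full link speed."""
    bytes_per_second = link_mbps * 1_000_000 / 8
    return 2 ** counter_bits / bytes_per_second

print(seconds_to_wrap(32, 1000))  # ~34 s: 32bit counter, saturated gigabit link
print(seconds_to_wrap(32, 100))   # ~344 s (over 5 minutes) at 100Mbps
print(seconds_to_wrap(64, 1000))  # ~1.5e11 s: 64bit counters effectively never wrap
```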
Bug#529214: if_ incorrectly assumes rrd input max is interface speed
Package: munin Version: 1.2.6-10 Severity: normal The if_ module has problems with large values because it incorrectly tells rrdtool that the maximum size allowed is the interface speed. As the data being pulled from /proc/net/dev is the total number of bytes transferred since boot (modulo overflow), the interface speed is irrelevant. This *breaks* if_ on just about every system, which is particularly insane given that it's such a valuable plugin. On 64bit Linux-based systems (at least 2.6.22 and newer), the counters are 64bit, though obviously this change will somewhat help even systems which don't have 64bit counters. Yes, a gigabit interface could overflow a 32bit counter in around 34 seconds, but we don't need to make things worse by telling rrdtool to toss out any values over 1GB when it can go up to ~4GB. if_ should be adjusted accordingly:

echo 'down.min 0'
echo 'down.max 18446744073709551616'
echo 'up.min 0'
echo 'up.max 18446744073709551616'

With the 'max' settings provided by iwlist/ethtool removed. Perhaps with some adjustment/checking to see if the counter is 64bit or 32bit (if possible..). It might make sense to just always use the 64bit value- I believe rrdtool will figure things out correctly if an overflow happens near the max 32bit value even if the 'max' is set to the 64bit value. This has been submitted upstream at: http://munin.projects.linpro.no/ticket/686 Thanks, Stephen signature.asc Description: Digital signature
Bug#471717: regexp filter support disabled?
Package: libmapnik0.5 Version: 0.5.0-2 Greetings, In mapnik upstream r540, regex support appears to have been disabled for filters: http://trac.mapnik.org/changeset/540/trunk/include/mapnik/filter_parser.hpp This breaks things for people upgrading to 0.5 from previous versions where it was available. Worse, the changelog for the change doesn't seem to indicate any reason as to why it was commented out. Thanks, Stephen -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Bug#471405: TileCache using out-dated 'mapnik.rawdata'
Package: tilecache Version: 2.01-4 Greetings, mapnik.rawdata(im) has been deprecated in favor of im.tostring() in 0.5. As that's the version in Debian, it'd be nice to have tilecache updated accordingly. You can also use the first hunk from the patch posted here: http://openlayers.org/pipermail/tilecache/2008-January/000793.html Which is basically the same thing. The problem is in the Layers/Mapnik.py file. Thanks, Stephen signature.asc Description: Digital signature
Bug#400632: x11-common should not ship a SUID root binary
Package: x11-common Severity: serious tags 400632 -wontfix Greetings, The setuid usr/bin/X binary should not be shipped with x11-common because it's not *needed* for X11 clients. That by itself is a good enough reason. Put it in xserver-xorg-core or similar, not in x11-common. Additionally, x11-common gets pulled in on servers for things like libgd-xpm, which isn't entirely unreasonable if someone wants to generate an X pixmap on a server. One could also have, I dunno, *xterm* installed on a server for clients to use without having an X server installed on the same server. Unless xterm *requires* usr/bin/X, it shouldn't be installed as part of something xterm depends on. Thanks, Stephen signature.asc Description: Digital signature
Bug#432740: player FTBFS
Greetings, It looks like the referenced bug in python-central (#424906) was closed out over a month ago. If that's been fixed can we get a rebuild of player/stage on affected platforms? Looks like at least amd64, mips, m68k, powerpc, s390. Thanks, Stephen signature.asc Description: Digital signature
Bug#435125: mount can Recommend nfs-common
Greetings, aiui, even the 'mount' provided previously by mount would have failed if portmap (well, lockd, I'm guessing) was unavailable. Therefore, having mount only Recommend nfs-common is the appropriate solution. Having nfs-common only Recommend portmap doesn't make much sense since 'mount.nfs' principally doesn't work w/o portmap. The same is not true for 'mount', which can do the vast majority of its regular stuff w/o 'mount.nfs'. So, people who needed NFS mounts working must already have nfs-common and portmap installed, therefore there's no disappearance of features during the upgrade path. Please to be fixing. Thanks, Stephen signature.asc Description: Digital signature
Bug#429671: exim4 username
Package: tech-ctte Severity: normal User: [EMAIL PROTECTED] Usertags: Debian-exim Greetings, As outlined in bugs: #223831, #223963, #225031, #233803, #255493, #262195, not everyone agrees with the concept of having a 'namespace' for Debian-created system accounts. It would be of great benefit to have an official ruling from the tech committee on this issue. For my part (and Marc can provide his counterpart), I don't feel it's possible, or appropriate, for Debian to try to solve the username-collision problem. The best thing to do is be consistent and clean, but also flexible and allow the user to change the username if they need to. Clearly any username picked by Debian could be used by a user (especially in a multi-server arrangement with multiple Debian systems using a common username directory) and in such situations conflict is inevitable and unavoidable, even with the 'namespace' concept. Additionally, I don't feel it's appropriate for Debian to try and co-opt any part of the username 'namespace' from the user. Thanks, Stephen signature.asc Description: Digital signature
Bug#400448: libnss-ldap: Certificate verification using tls_cacertdir causes long delay
severity 400448 wishlist reassign 400448 gnutls thanks * Mitar ([EMAIL PROTECTED]) wrote: Package: libnss-ldap [...] When I configure CA directory with tls_cacertdir configuration option in /etc/libnss.conf file NSS querying (for example finger mitar) takes very long (about 20 seconds per query). With only CA file in both /etc/libnss.conf and /etc/ldap/ldap.conf it is normally fast. GNUTLS doesn't have the ability to do hash-based lookups in a directory (that I know of anyway, if it does then this should be reassigned to libldap to use it). Therefore, on every invocation all the CAs in the directory have to be loaded into GNUTLS. Other LDAP programs (ldapsearch) verify CA directory without delay. I noticed this delay only with libnss-ldap (and libpam-ldap but I have not worked on that yet so I am not sure that it is the same cause). libnss-ldap isn't doing anything particularly special with regard to TLS_CACERTDIR that I'm aware of. Your ldapsearch is probably using the more recent libldap which was compiled against openssl. That's not an option for libnss-ldap or a number of other GPL utilities. I have only default Debian CA certificates (ca-certificates) and one local self-signed for LDAP server. There's quite a few default Debian CAs. You've asked libnss-ldap, and therefore libldap, and therefore gnutls, to use all of them. If that's really your intent then you'll have to deal with the speed hit associated with it (you might consider nscd to help with that). If that's not what you want (and, really, it probably isn't, NSS lookups being rather sensitive and all...) then you shouldn't tell libnss-ldap to do that. Thanks, Stephen signature.asc Description: Digital signature
Bug#366172: libnss-ldap: abort during useradd (ber_free_buf error)
* Scott Anderson ([EMAIL PROTECTED]) wrote: Setting up nfs-common (1.0.7-12) ... Adding system user `statd' with uid 107... Adding new user `statd' (107) with group `nogroup'. useradd: /home/devel/openldap/openldap2-2.1.30/libraries/liblber/io.c:161: ber_free_buf: Assertion `((ber)-ber_opts.lbo_valid==0x2)' failed. adduser: `/usr/sbin/useradd -d /var/lib/nfs -g nogroup -s /bin/false -u 107 statd' exited from signal 6. Aborting. Can you also send us ldd /usr/sbin/useradd, and try to get a backtrace with a debug-enabled libnss-ldap package (available here: http://kenboi.snowman.net/~sfrost/libnss-ldap_251-7-debug_i386.deb) Thanks, Stephen signature.asc Description: Digital signature
Bug#396672: libnss-ldap: Fails on unreadable KerberosV cache for GSSAPI auth
* Andrew Deason ([EMAIL PROTECTED]) wrote: Suppose I want to use krb5_ccname and SASL, so I can have a host authenticate with its host principal from a keytab. However, I don't want normal users to be able to read the host principal keytab; I just want libnss-ldap to use their own kerberos credentials. If I specify krb5_ccname in /etc/libnss-ldap.conf, and the file is not readable to the user, it just fails. This patch makes libnss-ldap attempt to try authenticating again with the unchanged ccache if the modified ccache fails for whatever reason. It appears to work on a test machine. (I.e. it falls back to user credentials if the krb5_ccname credentials fail.) In general I like this idea but I'm not sure about its implementation. It strikes me as rather excessive to attempt multiple binds in this way and to cause that extra load on the server. Also, it may hide other real problems beyond permissions on the ccache. How about just attempting to open the modified ccache? If you can't open it then it's not very likely to work and you can switch to the unmodified one. Thanks! Stephen signature.asc Description: Digital signature
Bug#390926: libnss-ldap: Startup scripts take ages to perform duties
* Jan Evert van Grootheest ([EMAIL PROTECTED]) wrote: Using these settings, udevd still reports about the ldap server being unreachable. Right, that's expected, and perfectly reasonable. But the timeouts are now such that bootup is basically a snap. Each search takes about 1 second. Good. I think I agree with Stephen that changing the timeouts is a better solution. It does not introduce any complexity (which the file-base solution does). However, if somebody needs longer timeouts or more retries, they're still in a bad situation. They could adjust the defaults if they need to. But don't give too much weight to my opinion. I'm no ldap or nss expert. I know just enough to get things configured to do what I want it to. Did you, or could you, try the debs I posted? You should be able to comment out those extra config options and use the defaults w/ the shorter timeout with those. Would also like to know if you have any problems upgrading to them and whatnot... Thanks, Stephen signature.asc Description: Digital signature
Bug#390926: libnss-ldap: Startup scripts take ages to perform duties
* Jan Evert van Grootheest ([EMAIL PROTECTED]) wrote: I've tried it. I'm not sure you have... During boot udevd attempts to resolve a few groups (group scanner, group scanner, group scanner, group nvram, user tss, group tss, group fuse, group rdma, group rdma), as far as I understand the logs. Those fail. Ok. Udevd attempts to resolve 9 items. These are the logs of one such attempt:

INIT: version 2.86 booting
[...]
udevd[882]: nss_ldap: could not search LDAP server - Server is unavailable
udevd[882]: lookup_group: error resolving group 'scanner': Illegal seek
end of log -

So this results in a delay of some 204 seconds per item. For a total of about 1800 seconds, or about 30 minutes. Right, with the defaults... The idea was to reduce those. I've played with this some in preparation of 251-6. Try:

reconn_maxconntries = 2
reconn_tries = 1
reconn_sleeptime = 1
reconn_maxsleeptime = 8

Or you could try the 251-6 packages I've built w/ these defaults here: http://kenobi.snowman.net/~sfrost/libnss-ldap/ This should result in 3 attempts with a 1 second sleep between the 2nd and 3rd. I've tried changing the bind_timelimit and the timelimit in /etc/libnss-ldap.conf. Both have no influence on the behaviour. So either those do not apply to the tcp connection or this configuration file is not used. Might it be that the initrd filesystem is still in use? bind_timelimit is a parameter passed to libldap actually regarding the connection and timelimit is for a max time regarding how long to wait for search results. In nsswitch.conf I have 'files ldap' for passwd, group and shadow. So had those been present in the files, these wouldn't have been searched in ldap. This means that any system using ldap might run into those delays? With the defaults from upstream, yes. There are a couple of other options- fail immediately (bind_policy soft) or reduce the timeouts (what I've suggested above). 
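As a quick sanity check of the totals in that report (the figures come straight from the udevd logs quoted above):

```python
# Nine failing group/user lookups during boot, each taking ~204 seconds
# to time out with the upstream default reconnect policy.
items = 9
delay_per_item = 204  # seconds per failed lookup, per the logs
total_seconds = items * delay_per_item
print(total_seconds, total_seconds / 60)  # 1836 seconds, about 30 minutes
```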
Currently we try to allow the boot to come up cleanly and then switch the bind policy over to 'hard' once we have an expectation of LDAP services being available (ie: networking and whatnot). Unfortunately that turned out to be more fragile than I expected and more people run libnss-ldap on their LDAP servers than I would have expected, which causes additional problems. Thanks, Stephen signature.asc Description: Digital signature
Bug#390926: libnss-ldap: Startup scripts take ages to perform duties
* Stephen Frost ([EMAIL PROTECTED]) wrote: Right, with the defaults... The idea was to reduce those. I've played with this some in preparation of 251-6. Try:

reconn_maxconntries = 2
reconn_tries = 1
reconn_sleeptime = 1
reconn_maxsleeptime = 8

I guess I could have been clearer on these, in your libnss-ldap.conf:

nss_reconnect_tries 1
nss_reconnect_sleeptime 1
nss_reconnect_maxsleeptime 8
nss_reconnect_maxconntries 2

Should be the corresponding parameters. Or you could try the 251-6 packages I've built w/ these defaults here: http://kenobi.snowman.net/~sfrost/libnss-ldap/ Either way, please let me know how it goes and if it improves things. Thanks, Stephen signature.asc Description: Digital signature
Bug#375533: Assertion failure in libnss-ldap
* Damyan Ivanov ([EMAIL PROTECTED]) wrote: Stephen Frost -- 3.10.2006 22:31 --: It needs to be 600 if you want tight control on your LDAP directory such that everyone has to connect using a password and you don't want that password available to everyone. libnss-ldap.conf w/ mode 600 and nscd works quite well for this. Ah, I see. You're talking about the bindpw setting (I was talking about rootpw). rootpw is only for when you're doing NSS calls *as root*. If you're doing NSS calls as root then you've got access to the appropriate files already (which is why it makes sense to have a separate file for that which is only available to root). Can bindpw be also moved to a separate file? This would make fiddling with libnss-ldap.conf permissions unnecessary and as far as I can see would work for everybody. I don't see the point in moving it to another file. Either you're running nscd and it doesn't matter what libnss-ldap.conf looks like, or you're not and therefore bindpw must be available to everyone. At most you've moved the permission issue from libnss-ldap.conf to whatever the new file is. Enjoy, Stephen signature.asc Description: Digital signature
Bug#390926: libnss-ldap: Startup scripts take ages to perform duties
* Steinar H. Gunderson ([EMAIL PROTECTED]) wrote: On Tue, Oct 03, 2006 at 11:28:34PM -0700, Steve Langasek wrote: Can we just fix libnss-ldap already to use a sensible default bind policy, please? Sure, I could do that (removing the boot-time workarounds), assuming the maintainer doesn't object... I've already said, a few times now, what I'd prefer as the solution. I also haven't heard any reason why it's not a reasonable solution. Please, try reducing the timeouts such that it's sleeping (at most) 2s per NSS call (assuming a failure to connect to the servers) and see how that affects booting. I don't expect that it would be too bad but I'm not sure which is why I'd like to have it tested. CosmicRay on IRC was already doing some of this testing for me so you might try checking with him on what he discovered. I'm out of town and when I've been on he hasn't been around or I would have already. Thanks, Stephen signature.asc Description: Digital signature
Bug#375533: Assertion failure in libnss-ldap
* Damyan Ivanov ([EMAIL PROTECTED]) wrote: What I don't understand is why libnss-ldap.conf *needs* to be 0600 at all. A big warning in the file (todo) and debconf placing password in a separate file (done) should be enough, IMHO. It needs to be 600 if you want tight control on your LDAP directory such that everyone has to connect using a password and you don't want that password available to everyone. libnss-ldap.conf w/ mode 600 and nscd works quite well for this. Thanks, Stephen signature.asc Description: Digital signature
Bug#390957: diff for 251-5.3 NMU
* Steinar H. Gunderson ([EMAIL PROTECTED]) wrote: Please let me know ASAP if you have any objections; I'll be asking the RMs what's a reasonable delay for an NMU fixing problems introduced in my own NMU, but I guess I'll be able to upload sometime tomorrow. It fixes RC bugs and therefore should have the associated severity. I don't see that it being an NMU of an NMU changes that. I'd also rather have the reduced delay than the init script or associated patches. I'm not convinced the logic for working with the delays is entirely correct though (and does what it claims) so I wanted to test with some different settings to ensure correct behavior. In general, my goal would be a 1-2s delay per NSS call and then see how that affects the boot process. Or perhaps even no delay on an immediate connect() failure (due to closed port or something) provided at least 2 attempts were made. Again, it depends on how it impacts a normal boot process and that depends on the number of NSS calls made. Additionally, I don't think 'update-rc.d remove' will correctly preserve modifications done to the init process by the user, which is incorrect behaviour. Enjoy, Stephen signature.asc Description: Digital signature
Bug#375533: Assertion failure in libnss-ldap
* Damyan Ivanov ([EMAIL PROTECTED]) wrote: Right now, if I put password in /etc/libnss-ldap.conf (and therefore protect the file with 0600 permissions), only root can access ldap via nss. Others get assertions. This makes the password-along-everything setup highly unusable (to me). It is my belief that the default configuration does exactly the right thing - stores the password in a separate (and protected) file. Why then fiddle with libnss-ldap.conf's permissions at all and break things? The separate file is only for when *you* are running as root and bind'ing with the rootdn. Regular users *must* be able to connect to LDAP to do NSS lookups. If your LDAP server requires a password then you need to provide it somewhere the user can get it. If you don't want that then allow anonymous binds in the server. A workaround is to run nscd to proxy user requests through a root-owned process, and that works just fine if libnss-ldap.conf is 600. Thanks, Stephen signature.asc Description: Digital signature
Bug#387467: libnss-ldap: tls_cacertfile is being ignored
* Gabor Gombas ([EMAIL PROTECTED]) wrote: On Thu, Sep 14, 2006 at 03:02:34PM -0400, Stephen Frost wrote: Certainly possible.. If that's the case then there's nothing libnss-ldap could do about it tho and this would be an issue with libldap. What happens if the ldap.conf doesn't exist? Is that something you could test? The same: the TLS negotiation fails. Looking at the code, I think I found the bug: in ldap-nss.c, the do_ssl_options() is invoked only if either ssl on or ssl start_tls is specified in the config file. But I have neither, I simply have uri ldaps://... in libnss-ldap.conf. Erm, is there some reason you don't have 'ssl on' in your config? Playing with this I think this case is also a security hole: since libldap always reads an ldaprc file in the current directory, any user can override the CA certificate file option thus potentially making set-uid programs accept data from an untrusted LDAP server. I'm curious about this, are you sure a libldap would read an ldaprc file when run from a setuid program? Or that it'd read the current-directory ldaprc in that situation? Can you provide an strace showing this happening? This would almost certainly be an issue in libldap, of course, if it's acting as you describe. Also, the user would have to have access to more than the ldaprc file, no? Since the user couldn't control what server is being connected to without more control on the system, or control over the DNS, etc. Thanks, Stephen signature.asc Description: Digital signature
Bug#387467: libnss-ldap: tls_cacertfile is being ignored
* Gabor Gombas ([EMAIL PROTECTED]) wrote: Looking at the strace output, libnss-ldap.conf is parsed before /etc/ldap/ldap.conf. Is it possible that the parsing of /etc/ldap/ldap.conf resets the TLS options configured in libnss-ldap.conf? Certainly possible.. If that's the case then there's nothing libnss-ldap could do about it tho and this would be an issue with libldap. What happens if the ldap.conf doesn't exist? Is that something you could test? Thanks, Stephen signature.asc Description: Digital signature
Bug#383446: [DebianGIS-dev] Bug#383446: postgresql-8.1-postgis: include mktemplate_gis
severity 383446 wishlist thanks * Paolo Cavallini ([EMAIL PROTECTED]) wrote: A utility, called mktemplate_gis, was included in previous unofficial releases of postgis. It can be useful, especially for new users, because it makes a template with GIS functions in the database, and saves the user from having to do it himself. I would advise putting it back in, if possible. Thanks a lot. It's not included in upstream and I'm not in favor of adding it through Debian patches. If you'd like it included then I would strongly encourage you to convince upstream to include it. Debian packaging isn't the appropriate place to have utilities be added unless they're used by the packaging or very simple (and sometimes not even then). Thanks, Stephen
Bug#381303: UUID-based root-partition setup
Package: mdadm Version: 2.5.2-9 Severity: wishlist Greetings, Using hostname, or using the super-minor, can result in some serious problems when attempting to assemble arrays. The hostname is a very poor choice as it's not uncommon for a machine which is being upgraded (ie: most of the hardware is being swapped out except, perhaps, the disks or external raid enclosures) to have the same hostname as the machine being replaced. The super-minor is also a poor choice due to potential overlaps which can happen pretty easily. Therefore, I would strongly encourage the use of UUID and thus the use of the mdadm.conf in the initrd. There is a potential that the mdadm.conf at the time of initrd creation doesn't match what the currently running system has. This may or may *not* be incorrect, however, depending on what the user is doing or what the user intends. As there would be some danger to having an incorrect mdadm.conf in the initrd, during initrd creation the contents of the mdadm.conf should be compared to the currently running system and the user notified if they differ. Acceptable options would include:

 - Defer to the mdadm.conf with a strong warning
 - Fail the initrd creation unless an override is provided
 - Ignore the mdadm.conf with a strong warning, but provide an override

If the mdadm.conf is ignored then we can fall back to the other options which have been discussed. However, the user may *want* to change the booting root partition in which case there must exist a way to override and force mdadm.conf usage even if it differs from the currently running system. Failing the initrd may be dangerous because the user may not notice prior to reboot. Deferring to the mdadm.conf and issuing a warning may result in the warning being missed/ignored and an incorrect mdadm.conf causing problems during the initrd. 
Therefore, the 3rd option would probably be that of least surprise while still allowing the flexibility for those who know what they're doing to override the guess-and-pray fallback of hostname/super-minor. Thanks, Stephen signature.asc Description: Digital signature
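For illustration, assembling by UUID relies on an mdadm.conf ARRAY line of roughly this shape (the UUID below is made up for the example):

```
# /etc/mdadm/mdadm.conf (hypothetical example)
DEVICE partitions
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
```

mdadm then assembles /dev/md0 from whichever component devices carry that UUID in their superblocks, independent of hostname or super-minor.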
Bug#376684: Error on upgrade: Can't locate object method host via package ldap
* Julien Danjou ([EMAIL PROTECTED]) wrote: Severity: grave Not every bug in libnss-ldap is 'grave', not even ones which make NSS start having problems. I tried to upgrade to libnss-ldap 251-5 and the upgrade failed with a beautiful error: Setting up libnss-ldap (251-5) ... Can't locate object method host via package ldap (perhaps you forgot to load ldap?) at -e line 1, line 21. dpkg: error processing libnss-ldap (--configure): subprocess post-installation script returned error exit status 9 What were you upgrading from? What kind of answers did you give? This *looks* like it might be the same issue of not putting enough escapes into the perl regexp, which is filed under another bug and planned to be fixed soon. Thanks, Stephen signature.asc Description: Digital signature
Bug#376426: libnss-ldap: Can't login even as local user
* Sjoerd Simons ([EMAIL PROTECTED]) wrote: On Sun, Jul 02, 2006 at 08:00:22PM -0400, Stephen Frost wrote: * Vedran Furač ([EMAIL PROTECTED]) wrote: After upgrade to version 251 I can't login as a user in ldap and even as root which is a local user. Login process dies with SIGPIPE. It only happens without nscd. What do your configs look like, what version of login is it? Can you try and strace it, or run it under gdb and see what's happening? Do you have any other LDAP libraries installed? I'm seeing almost the same behaviour. When nscd isn't running the user can log in but after the shell is launched each command gets a SIGPIPE which kills it. Everything is at the current unstable version. A strace of the shell when it tries to run /bin/echo a: Interesting. Do both of you run IPv6 systems? It *looks* like libnss-ldap already had an open socket with the LDAP server (probably from the shell) and it thinks it can't use it for some reason and goes to close/reopen the connection but doesn't actually reopen the connection, just recreates the socket and then tries to write to it. Can you check if there's a file in /var/lib/libnss-ldap ? I doubt that has anything to do with this but who knows. Any other info you could provide would certainly be helpful... Thanks, Stephen signature.asc Description: Digital signature
Bug#376426: libnss-ldap: Can't login even as local user
* Sjoerd Simons ([EMAIL PROTECTED]) wrote: I've rebooted one of the systems with the ipv6 module blacklisted. After that it still shows exactly the same behaviour (identical trace, just the ipv6 addresses replaced by ipv4 addresses)... Hmm, ok. Can you check if there's a file in /var/lib/libnss-ldap ? I doubt that has anything to do with this but who knows. Nope, completely empty. Good, that's how it should be. I'm not sure what info would be helpful. But fwiw, it happens on both amd64 and x86 machines. We've got two ldap servers (both accessible via ipv4 and ipv6), one runs sarge the other one runs etch. Huh, alright. Downgrading the package to the testing version solves the problem. I can't pinpoint it closer as i've only noticed it quite recently when nscd was not running for some reason. Are your config files world-readable? Maybe nscd can see them since it's running as root but a regular user can't and that causes the problem... Do you see anything interesting in your logs? What if you enable 'debug' in your libnss-ldap.conf? Thanks, Stephen signature.asc Description: Digital signature
Bug#376426: libnss-ldap: Can't login even as local user
* Vedran Furač ([EMAIL PROTECTED]) wrote: Stephen Frost wrote: What do your configs look like, What are the permissions on your libnss-ldap.conf? But the problem is not in login(1), if I log in with nscd and then disable it *every* started app will crash immediately with SIGPIPE, just like Sjoerd said. Right, I gathered that you were both probably seeing the same thing. Last 200 lines from strace, I'll show complete output if necessary: http://www.inet.hr/~vfurac/libnss-ldap.strace More would be nice... I'm really curious where the original file descriptor '7' is coming from... Based on the getpeername() call it *looks* like that should be an existing connection to the LDAP server, which is then closed (perhaps because it doesn't think that connection is to the right server, or something...). Does your URI in your libnss-ldap.conf resolve to multiple different IP addresses? What if you used multiple URIs which resolve to only a single address? No ipv6 here, /var/lib/libnss-ldap/ is empty. Ok, thanks, exactly as it should be. Thanks, Stephen signature.asc Description: Digital signature
Bug#376426: libnss-ldap: Can't login even as local user
severity 376426 serious
tags +moreinfo
thanks

* Vedran Furač ([EMAIL PROTECTED]) wrote: After upgrade to version 251 I can't log in as a user in ldap and even as root which is a local user. Login process dies with SIGPIPE. It only happens without nscd. What do your configs look like, what version of login is it? Can you try and strace it, or run it under gdb and see what's happening? Do you have any other LDAP libraries installed? Thanks, Stephen signature.asc Description: Digital signature
Bug#375215: libnss-ldap hangs udev at startup
* Gasper Zejn ([EMAIL PROTECTED]) wrote: It does not want to time out. After each timeout, it just tries again and again. Please try 251-5 and see if it helps. Thanks, Stephen signature.asc Description: Digital signature
Bug#375077: initrd needs its own, static, nsswitch.conf
* Michael Biebl ([EMAIL PROTECTED]) wrote: case here? Also, have you tried waiting it out? Each request would end up taking about 2 minutes, but technically it *should* give up eventually.. I waited for something like 10min without success. But honestly this wouldn't be a proper solution anyways. Please try 251-5, I believe it'll help... Thanks, Stephen signature.asc Description: Digital signature
Bug#375215: libnss-ldap hangs udev at startup
* Gasper Zejn ([EMAIL PROTECTED]) wrote: On Saturday, 24 June 2006 at 17:04, you wrote: Do you have reasonably complete /etc/passwd and /etc/shadow files for the local accounts? Yes, I am actually using a local account for daily work. Also, have you actually waited it out? Eventually, it should time out (after about 2 minutes per request). If it doesn't then it's possible NSS itself is retrying, and you might change your nsswitch.conf to look like this:

passwd: files ldap [UNAVAIL=return]
group: files ldap [UNAVAIL=return]
shadow: files ldap [UNAVAIL=return]

It does not want to time out. After each timeout, it just tries again and again. Well, it's going to do a getpwnam()/getgrnam() for each device it wants to create. If the counter you're seeing displayed resets back to 0 periodically then it's (at least theoretically) answering separate requests... It's still the same with nsswitch modified by your suggestions. Hmm, ok. Thanks, Stephen signature.asc Description: Digital signature
Bug#375077: initrd needs its own, static, nsswitch.conf
* Michael Biebl ([EMAIL PROTECTED]) wrote: Marco d'Itri wrote: On Jun 24, Stephen Frost [EMAIL PROTECTED] wrote: It's simply not possible for libnss-ldap to provide a correct answer before networking or the slapd daemon has been started. I can see about making libnss-ldap fail faster so that the boot process isn't stopped but that's really not a terrific solution either. The usual way this is handled is that an nsswitch.conf is set up with 'files ldap' and 'files' satisfies everything till things are far enough along for libnss-ldap to be able to work. So this would be a local configuration error? As I posted in my initial bug report I already use 'files ldap', so I can't see the configuration error you mention. If it is one, I'd be interested what the correct configuration is. The error we're talking about would actually be having a user or group which udev wants to create a device for not in your /etc/passwd and /etc/group local files respectively. Do you think that might be the case here? Also, have you tried waiting it out? Each request would end up taking about 2 minutes, but technically it *should* give up eventually.. Thanks, Stephen signature.asc Description: Digital signature
Bug#375077: Bug#375215: libnss-ldap hangs udev at startup
* Gasper Zejn ([EMAIL PROTECTED]) wrote: [pid 5186] connect(7, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr(10.10.7.99)}, 16) = -1 ENETUNREACH (Network is unreachable) clearly means network is not setup properly to be able to reach LDAP server, since there's no matching route. The 'getent passwd' command also blocks, while waiting for response from unavailable LDAP server. The old libnss-ldap returned immediately. That's why I think this is a bug in libnss-ldap, not in udev. The problem is that the LDAP library doesn't come back with Network is unreachable, it comes back with LDAP_UNAVAILABLE or LDAP_SERVER_DOWN, in either case it might be a transient error (LDAP server is being restarted, temporary network hiccup, etc) and you wouldn't want to fail right away for that (if you do, configure libnss-ldap to have 'bind_policy soft'). I've been looking into a way for libnss-ldap to be able to tell if the error was 'network unreachable' but it's not as easy as one might hope. Additionally, when the local server *is* the LDAP server in question you're not going to get a 'network unreachable' but rather a 'port closed' or similar error and I'm not sure how you'd differentiate that from someone restarting the server and it being down for a second. I'm starting to think it might make sense to essentially check for 'boot-still-in-progress' and just fail requests until the system is fully booted to a point where most daemons have been started (in case the LDAP server is the local slapd) and networking should be available (if it's going to end up being available at all). This would essentially *force* anything during boot to be available via files, but I don't really think that's unreasonable. Thoughts? Thanks, Stephen signature.asc Description: Digital signature
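The 'bind_policy soft' work-around mentioned above is a one-line change in libnss-ldap's configuration. A fragment, with the URI as a placeholder and assuming 'hard' (keep reconnecting) is the compiled-in default:

```
# /etc/libnss-ldap.conf (fragment; the uri value is a placeholder)
uri ldap://ldap.example.org/
# Give up immediately when the LDAP server cannot be reached instead
# of retrying, so lookups can fall through per nsswitch.conf.
bind_policy soft
```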
Bug#375077: initrd needs its own, static, nsswitch.conf
* Marco d'Itri ([EMAIL PROTECTED]) wrote: On Jun 24, maximilian attems [EMAIL PROTECTED] wrote: udev does not even know about LDAP, it just uses the libc interface. What do you think it should do? Possibly without horrible layering violations. the errors happen when the sysvinit scripts call the udev scripts. it seems udev with libnss expects a running ldap. Huh, alright, I thought udevd was started in the initrd but apparently it's just udev that's run during initrd and udevd (which is the source of the issues) isn't run till after initrd. All udev expects is working getpwnam(3) and getgrnam(3) functions. If libnss_ldap cannot guarantee them to work at boot time then I think libnss_ldap is the problem. It's simply not possible for libnss-ldap to provide a correct answer before networking or the slapd daemon has been started. I can see about making libnss-ldap fail faster so that the boot process isn't stopped but that's really not a terrific solution either. The usual way this is handled is that an nsswitch.conf is set up with 'files ldap' and 'files' satisfies everything till things are far enough along for libnss-ldap to be able to work. This issue is exposed by udev because it's the first complex program run at boot time, but it would affect other programs too. Generally, things asking for NSS can be satisfied by 'files' until networking and other things are available. Actually, is there any particular reason why udevd might want something beyond what would be in local files (ie: system accounts)? Another possible approach would be to have a way of not installing 'ldap' as an option in the nsswitch.conf until it can be expected to be working. Actually, if that could be inverted during shutdown then we could close the 'can't unmount /usr' problem when shutting down with 'ldap' in nsswitch.conf (libnss-ldap uses the LDAP libraries which are in /usr/lib, correctly). What I've seen other distros do has been the 'fail faster' work-around. 
I can probably do that but it'd be really nice to have a good solution... Thanks, Stephen signature.asc Description: Digital signature
Bug#375215: libnss-ldap hangs udev at startup
* Gasper Zejn ([EMAIL PROTECTED]) wrote: After I installed libnss-ldap, udevd stalled at startup, waiting for libnss-ldap, which can't connect to a remote LDAP server, since networking is not yet set up at that time. This is being discussed in #375077. libnss-ldap can be configured to give up when a request comes in, so there's a work-around. In general, it's expected that requests for NSS information early on would be satisfied by 'files'. What's your nsswitch.conf look like, and what rules do you have configured for udev (specifically, do you have any rules which are looking for users/groups which aren't in your local files?)? Thanks, Stephen signature.asc Description: Digital signature
Bug#375215: libnss-ldap hangs udev at startup
* Gasper Zejn ([EMAIL PROTECTED]) wrote: My nsswitch.conf:

passwd: compat ldap
group: compat ldap
shadow: compat ldap

Other lines don't have ldap. I have no custom udev rules. Do you have reasonably complete /etc/passwd and /etc/shadow files for the local accounts? Also, have you actually waited it out? Eventually, it should time out (after about 2 minutes per request). If it doesn't then it's possible NSS itself is retrying, and you might change your nsswitch.conf to look like this:

passwd: files ldap [UNAVAIL=return]
group: files ldap [UNAVAIL=return]
shadow: files ldap [UNAVAIL=return]

Please let me know, Thanks, Stephen signature.asc Description: Digital signature
Bug#367962: Please don't ship a /lib64 symlink in the package on amd64
* Aurelien Jarno ([EMAIL PROTECTED]) wrote: The FHS is actually not very clear, as it says 64-bit libraries should be in (/usr)/lib64, whereas system libraries should be in (/usr)/lib. This is a contradiction for a pure 64-bit system. The FHS is very clear about the path to the 64bit linker, and that goes through /lib64; getting rid of that isn't an option. - I am not sure that creating the link in postinst will work. Creating it in preinst looks safer to me. I'd be a little nervous about creating it in postinst too, honestly. - If you can install files in (/usr)/lib64, the files will end up in (/usr)/lib. And dpkg won't know anything about them. dpkg -S and other tools won't work correctly. Yeah, I'm not sure it really makes sense to need to install into both... It would have been much more useful for you to include the *reasoning* behind Goswin's request rather than just your reasons for not wanting to do what he's asking. - If you have two packages providing the same files in (/usr)/lib and (/usr)/lib64, then the files will be overwritten without warning. This is IMHO not acceptable. My guess is that his intent was actually to allow *separate* packages to install into either /lib or /lib64 on a package-by-package basis. This might resolve some bugs in packages which, when they detect they're being compiled for amd64, default to installing into /lib64 instead of /lib. Personally I think that's something that just needs to be dealt with and those packages should be fixed but that's my guess as to where the question came from. It's also possible a given package wants to install some things in /lib64 (say, actual libraries) and other things in /lib (say, helper programs, ala blah-config). Could you please give me your opinion on that, so that I can make a decision? The link itself certainly can't go away. I'd be more inclined to say programs on a pure 64bit platform shouldn't install into /lib64 than to have some things installing into /lib and others into /lib64. 
Part of this comes from the concern that this will bring out other bugs in packages where having this distinction might cause overlaps as mentioned above. Thanks, Stephen signature.asc Description: Digital signature
Bug#295260: Bug reproducible?
* Christian Perrier ([EMAIL PROTECTED]) wrote: Stephen, is there any chance that you could be able to reproduce this bug (Samba AD join failing (Memory fault))? I think that we can expect upstream to look at it only if it is reproduced with the last upstream version, namely 3.0.22 as in testing currently. Ugh. I don't actually have control over the AD here at work. I might be able to convince someone to let me test this (again) but no guarantee since I'm not going to do it with my real server (damn if I'm gonna break the stupid thing again...). It seems like this is a pretty well-defined case tho.. Someone familiar with the code which handles this (Steve?) really shouldn't have too much trouble running it down, I would have thought... Thanks, Stephen signature.asc Description: Digital signature
Bug#321008: Acknowledgement (multipathd ignores prio_callout)
Greetings, It looks like now multipath is ignoring the prio_callout. This is definitely very annoying considering 'multipath', at least, *used* to work correctly with it.

===# multipath -v3 | grep prio
sda: getprio = /sbin/find_prio.sh %n (config file default)
sda: prio = 0
sdb: getprio = /sbin/find_prio.sh %n (config file default)
sdb: prio = 2
sdc: getprio = /sbin/find_prio.sh %n (config file default)
sdc: prio = 1
sdd: getprio = /sbin/find_prio.sh %n (config file default)
sdd: prio = 1
sde: getprio = /sbin/find_prio.sh %n (config file default)
sde: prio = 2
sda: prio = 0
sdb: prio = 2
sdd: prio = 1
sdc: prio = 1
sde: prio = 2

All looks correct here, except that:

===# multipath -l
sarnwal (3600a0b800016029a088a42e92bb8)
[size=108 GB][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][enabled]
 \_ 5:0:1:1 sde 8:64 [active][undef]
\_ round-robin 0 [prio=0][active]
 \_ 5:0:0:1 sdc 8:32 [active][undef]
sarndata (3600a0b800016057c176c42e92517)
[size=892 GB][features=0][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 5:0:1:0 sdd 8:48 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 5:0:0:0 sdb 8:16 [active][undef]

The wrong devices are being used, and multipath -l doesn't even show the priorities anymore (or call the find_prio.sh script at all). multipath won't change the devices around either even tho it shows that it knows what the correct priorities are. From multipath -v3 -l:

= paths list =
uuid hcildev dev_t pri dm_st chk_st vend/prod/rev
0:0:0:0 sda 8:0  0 [undef][undef] FUJITSU /MAU3036NC /0102
5:0:0:0 sdb 8:16 0 [undef][undef] IBM     /1722-600 /0520
5:0:0:1 sdc 8:32 0 [undef][undef] IBM     /1722-600 /0520
5:0:1:0 sdd 8:48 0 [undef][undef] IBM     /1722-600 /0520
5:0:1:1 sde 8:64 0 [undef][undef] IBM     /1722-600 /0520
params = 0 0 2 2 round-robin 0 1 1 8:64 1000 round-robin 0 1 1 8:32 1000
status = 1 0 0 2 2 E 0 1 0 8:64 A 0 A 0 1 0 8:32 A 0
[...]
params = 0 0 2 1 round-robin 0 1 1 8:48 1000 round-robin 0 1 1 8:16 1000
status = 1 0 0 2 1 A 0 1 0 8:48 A 0 E 0 1 0 8:16 A 0

The 'params' are, of course, wrong here because with -l multipath doesn't check the priorities. This is really getting very old; can we please have these simple things fixed, and a process put in place to make sure that they're working before a package is uploaded to the archive? Thanks, Stephen signature.asc Description: Digital signature
Bug#357098: DECLARE/FETCH completely busted for Postgres
Package: php-db
Version: 1.7.6-2
Severity: important

Greetings, DECLARE/FETCH doesn't work in PHP-DB because it is foolishly assumed that only 'select', 'explain' and 'show' return records. This is certainly not the case, as 'fetch' (like, from a cursor) can also return rows. The way to fix this is pretty straightforward. /usr/share/php/db/pgsql.php, line 1097:

} elseif (preg_match('/^\s*\(*\s*(SELECT|EXPLAIN|SHOW)\s/si', $query)) {

changes to:

} elseif (preg_match('/^\s*\(*\s*(SELECT|EXPLAIN|SHOW|FETCH)\s/si', $query)) {

Please to be fixing. Thanks, Stephen
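The effect of the one-word change above can be sketched with an equivalent POSIX ERE check (the PHP pattern's \s is approximated with a character class; the queries are made-up examples):

```shell
# Approximation of the patched pgsql.php test: statements whose first
# word is SELECT, EXPLAIN, SHOW or FETCH are treated as row-returning.
pattern='^[[:space:]]*\(*[[:space:]]*(SELECT|EXPLAIN|SHOW|FETCH)[[:space:]]'
for query in 'SELECT 1' 'FETCH 10 FROM mycursor' 'UPDATE t SET x = 1'; do
    if printf '%s\n' "$query" | grep -Eiq "$pattern"; then
        echo "returns rows: $query"
    else
        echo "no rows:      $query"
    fi
done
```

With the pre-patch pattern (no FETCH in the alternation), the cursor fetch would fall into the "no rows" branch, which is exactly the reported bug.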
Bug#354144: mdadm in initramfs-tools (instead of mdrun)
Package: mdadm
Version: 1.12.0-1
Severity: wishlist

Greetings, Please find attached 'hook.md' and 'scripts.md'. These are scripts which should be placed in the '/etc/mkinitramfs/hooks' and '/etc/mkinitramfs/scripts/local-top' directories, respectively. These will cause initrd images created by initramfs-tools to run mdadm to find/start raid arrays instead of using mdrun. One alternative to consider is not actually copying the mdadm.conf into place but instead doing

echo DEVICE partitions > ${DESTDIR}/etc/mdadm/mdadm.conf

and letting mdadm figure it out. This would probably be better (and I'll test this out) as /proc is properly mounted at that stage of the initrd. Thanks! Stephen

#!/bin/sh
# hook.md
PREREQ=""
prereqs()
{
    echo "$PREREQ"
}
case "$1" in
prereqs)
    prereqs
    exit 0
    ;;
esac

if [ ! -x /sbin/mdadm ]; then
    exit 0
fi

. /usr/share/initramfs-tools/hook-functions

mkdir ${DESTDIR}/etc/mdadm
cp /etc/mdadm/mdadm.conf ${DESTDIR}/etc/mdadm
copy_exec /sbin/mdadm /sbin

for x in md raid0 raid1 raid5 raid6; do
    manual_add_modules ${x}
done

#!/bin/sh
# scripts.md
PREREQ=""
prereqs()
{
    echo "$PREREQ"
}
case "$1" in
# get pre-requisites
prereqs)
    prereqs
    exit 0
    ;;
esac

unset raidlvl
gotraid=n

# Detect raid level
for x in /dev/hd[a-z][0-9]* /dev/sd[a-z][0-9]*; do
    if [ ! -e ${x} ]; then
        continue
    fi
    raidlvl=$(mdadm --examine ${x} 2>/dev/null | grep Level | sed -e 's/.*Raid Level : \(.*\)/\1/')
    if [ "$raidlvl" ]; then
        modprobe -q ${raidlvl} 2>/dev/null
        gotraid=y
    fi
done

[ "${gotraid}" = y ] || exit 0

/sbin/mdadm -A -s -a

signature.asc Description: Digital signature
Bug#354144: Follow-up
Greetings, Following up on the idea in the prior post, I've hacked up a script which can go in /etc/mkinitramfs/scripts/local-top/ (called 'md', attached) which will work without depending on the local mdadm.conf file. This means the initrd image will work even if the mdadm.conf on the root partition of the system becomes corrupt or out of date. This is a much better/cleaner solution than using mdrun. Note that this works with the existing 'md' hook script in the initramfs-tools package. If that is removed then the mdadm package would also need to install a 'hook' script into /etc/mkinitramfs/hooks to do what the initramfs-tools script is doing now (namely, copying the mdadm binary and the md modules into the appropriate places). Thanks! Stephen

#!/bin/sh
PREREQ=""
prereqs()
{
    echo "$PREREQ"
}
case "$1" in
# get pre-requisites
prereqs)
    prereqs
    exit 0
    ;;
esac

unset raidlvl
gotraid=n

# Detect raid level
for x in /dev/hd[a-z][0-9]* /dev/sd[a-z][0-9]*; do
    if [ ! -e ${x} ]; then
        continue
    fi
    raidlvl=$(mdadm --examine ${x} 2>/dev/null | grep Level | sed -e 's/.*Raid Level : \(.*\)/\1/')
    if [ "$raidlvl" ]; then
        modprobe -q ${raidlvl} 2>/dev/null
        gotraid=y
    fi
done

[ "${gotraid}" = y ] || exit 0

mkdir /etc/mdadm
echo DEVICE partitions > /etc/mdadm/mdadm.conf
/sbin/mdadm --examine --scan | sed -e 's: /dev/\.tmp\.: /dev/:' >> /etc/mdadm/mdadm.conf

/sbin/mdadm -A -s -a

signature.asc Description: Digital signature
Bug#351571: Hooks for pg_upgradecluster, or support to use existing cluster
Hey Martin, * Martin Pitt ([EMAIL PROTECTED]) wrote: Stephen Frost [2006-02-06 20:52 -0500]: Looks good. Is there any need to pass the username to use to connect to the database as superuser? I'm guessing no but thought I might mention it... The owner can be figured out easily with pg_lsclusters or some ls command, but indeed just passing it to the hook scripts is easier. So: old version, cluster name, new version, superuser, phase Excellent, looks good to me. This looks good; the only thing I think we need to double-check is how data is handled (ie: is the data portion a separate 'object' which needs to be filtered somehow, or is it associated with the base 'object', ie: table, that the data is from?). Assuming the above handles the data as well as the actual object itself then it should work well. The definition and content is just one object in the list, but that should be a mere implementation detail that shouldn't affect the spec wrt. hook scripts. Good, that does make things easier (and means you don't need to dump the data out to get the data parts in the list to check if they're already in the new database, etc). From my point of view, I'd think a per-database hook would probably be better. What should that look like then? The db's are certainly in place in the 'finish' phase, but in the 'init' phase there are just the usual template[01] and postgres databases. The destination databases are created in the pg_restore stage. It might be possible to pre-create empty destination databases right before calling the hooks with phase 'init' and change the hooks to be per-database. But that would become much more complicated and would involve a lot of hacking. If it's really necessary to make hooks for e.g. PostGIS work properly, then I'm willing to do that. But I'm not familiar with the steps to set up PostGIS, what needs to be done roughly? 
Basically it's two things: install the .so into the right place (this can be done before any databases are created and will be done when the package is installed) and then run a .sql script which creates all the functions and the tables; this can be done against template1 and then be propagated to the other databases through the templating. PostGIS also provides an 'upgrade' script of their own, but it expects to work from a dump file directly. I'm reasonably confident that with the init and finish hooks I'll be able to properly implement the upgrade using their script as a reference for what needs to be changed, etc. That's something which will need to be done when upgrading PostGIS alone anyway. An interesting question would be if there's some way to detect if PostGIS is installed in a given database before running the install routine on it. I guess an associated question is how template1 is handled... Since the hooks can access the old cluster, they should be able to figure that out on their own, right? Yup. That should work just fine. Is 'init' above after all databases have been created, or after each one? As I expect you understand, if template1 is created and then the init scripts run (which add PostGIS to template1) then it wouldn't be necessary to run the init script on the 'regular' databases. In that case the init scripts could be run per-cluster, provided they're run after template1 is created but before any other databases are created. Know what I mean? Yes, I think that's the question I raised above. If you could get away with just modifying template1, I would be happy. :) Yeah, I think I can just modify template1. The 'finish' script will end up having the majority of the complications in it but that's ok. Will I still be able to access the old cluster in the 'finish' script? 
I might need to (for example, if a user adds something to the spatial_ref_sys table then I may need to add it in during the 'finish' stage by looking at what was in the old cluster and comparing it to what's in the new cluster). I was thinking about this and about upgrades. It's not uncommon for an upgrade to obsolete one table such that it wouldn't exist in a new install.. I'm not sure that'd actually be a problem though. Ok, then let's add that functionality when we really need it. Sure. Thanks! Stephen signature.asc Description: Digital signature
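The hook interface agreed in the thread above (arguments: old version, cluster name, new version, superuser, phase) can be sketched as a skeleton /etc/postgresql/pg_upgradecluster.d/ script. The phase bodies here are placeholders; a real PostGIS hook would run its install SQL against template1 during 'init' and reconcile tables like spatial_ref_sys during 'finish':

```shell
# Skeleton hook following the agreed argument order. Written as a
# function for illustration; an installed hook would be a standalone
# script receiving the same five positional arguments.
upgrade_hook() {
    oldver=$1; cluster=$2; newver=$3; superuser=$4; phase=$5
    case $phase in
        init)
            # e.g. run PostGIS install SQL against template1 here
            echo "init: would prepare template1 of $newver/$cluster as $superuser"
            ;;
        finish)
            # e.g. compare old/new spatial_ref_sys contents here
            echo "finish: would reconcile $oldver -> $newver data for $cluster"
            ;;
        *)
            echo "unknown phase: $phase" >&2
            return 1
            ;;
    esac
}

upgrade_hook 8.0 main 8.1 postgres init
# prints: init: would prepare template1 of 8.1/main as postgres
```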
Bug#351571: Hooks for pg_upgradecluster, or support to use existing cluster
Hey Martin, * Martin Pitt ([EMAIL PROTECTED]) wrote: Thanks for today's IRC discussion, I'll try to summarize and create a mini-spec: Very nice, thanks. :)

-- snip ---
1. Add and ship /etc/postgresql/pg_upgradecluster.d/. Scripts in that directory will be called with the following arguments: old version, cluster name, new version, phase. phase is 'init' right after creating a virgin cluster of newversion, and 'finish' after all the data from the old version cluster has been dumped/reloaded into the new one.

Looks good. Is there any need to pass the username to use to connect to the database as superuser? I'm guessing no but thought I might mention it...

2. Change pg_upgradecluster to not restore objects which are already present in the new cluster.
- after initializing the new cluster and calling pg_upgradecluster.d scripts with phase init, record already existing objects and databases:
  pg_dump -Fc --schema-only $db | pg_restore -l > /tmpdir/ignore-$db
- determine the set of objects that need to be restored from the old cluster:
  pg_dump -Fc --schema-only $db | pg_restore -l | sortmagic > /tmpdir/restore-$db
  sortmagic is some perl code snippet which ignores all objects that occur in ignore-$db.
- add a -L /tmpdir/restore-$db argument to the pg_restore call

This looks good; the only thing I think we need to double-check is how data is handled (ie: is the data portion a separate 'object' which needs to be filtered somehow, or is it associated with the base 'object', ie: table, that the data is from?). Assuming the above handles the data as well as the actual object itself then it should work well.

- Does that per-cluster .d hook directory accommodate the needs of PostGIS? Would a per-database hook be better?

From my point of view, I'd think a per-database hook would probably be better. An interesting question would be if there's some way to detect if PostGIS is installed in a given database before running the install routine on it. 
I guess an associated question is how template1 is handled... Is 'init' above after all databases have been created, or after each one? As I expect you understand, if template1 is created and then the init scripts run (which add PostGIS to template1) then it wouldn't be necessary to run the init script on the 'regular' databases. In that case the init scripts could be run per-cluster, provided it's run after template1 is created but before any other databases are created. Know what I mean? - Is the ignore scheme sketched above enough, or do we need additional 'no dump' blacklists? I was thinking about this and about upgrades. It's not uncommon for an upgrade to obsolete one table such that it wouldn't exist in a new install.. I'm not sure that'd actually be a problem though. Thanks! Stephen signature.asc Description: Digital signature
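A minimal stand-in for the 'sortmagic' filtering step discussed in this thread, using plain text tools instead of perl (the file names and TOC lines are illustrative; real input would be pg_restore -l listings):

```shell
# Keep only listing lines from the old cluster that do not appear in
# the new cluster's 'ignore' listing. grep flags: -F fixed strings,
# -x whole-line matches, -v invert, -f read patterns from a file.
tmpdir=$(mktemp -d)
printf '%s\n' 'TABLE public mytable owner' 'FUNCTION public f() owner' \
    > "$tmpdir/old-list"
printf '%s\n' 'FUNCTION public f() owner' > "$tmpdir/ignore-db"
grep -vxFf "$tmpdir/ignore-db" "$tmpdir/old-list" > "$tmpdir/restore-db"
cat "$tmpdir/restore-db"
# prints: TABLE public mytable owner
```

The resulting file plays the role of the -L argument to pg_restore, which restores only the objects listed in it.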
Bug#351571: Hooks for pg_upgradecluster, or support to use existing cluster
Package: postgresql-common
Version: 41
Severity: wishlist

Greetings, When using pg_upgradecluster to migrate from 8.0 to 8.1 I ran into some problems. Mainly this was that I had PostGIS installed in the 8.0 database and had some tables with geometry columns which depended upon PostGIS. When pg_upgradecluster created the 8.1 cluster PostGIS wasn't installed (and there doesn't seem to be any way to have PostGIS installed between when the cluster is created and the restore is started). Therefore, I propose that pg_upgradecluster should either have a hook where things can be run between the cluster being created and the restore starting, or it should be able to be run against an existing cluster (so the user could install the necessary components). An associated fun and interesting question is how one might exclude things from the restore. I don't think this is a PostGIS-specific issue but really applies to any module where an object in the database depends on the module being installed. Thanks! Stephen signature.asc Description: Digital signature
Bug#351281: Duplicate uids not allowed, even with --non-unique
Package: passwd
Version: 4.0.14-4
Severity: normal

Greetings, Looks like something broke -o/--non-unique. useradd now refuses to create a user with a duplicate uid even when -o or --non-unique is supplied. Thanks, Stephen signature.asc Description: Digital signature
Bug#349244: Postfix device creation
Package: postfix
Version: 2.2.8-6
Severity: important

Greetings, The latest version of postfix assumes that it can create device nodes (which it tries to do for /dev/random and /dev/urandom using tar). It is not always the case that this is permitted in the environment in which postfix is installed, e.g. when installed under a Linux VServer. It would be nice for there to be a note about this change in requirements for Postfix's chroot creation. Additionally, in order to allow the admin to deal with the situation, the Postfix init.d should first check if the devices exist and, if so, not try to create them. It should also survive not being able to create them (at the moment it fails out rather nastily). Thanks! Stephen signature.asc Description: Digital signature
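The requested init.d behaviour can be sketched as a defensive setup step. This is an illustration, not Postfix's actual script; the function name is made up, and the device numbers are the standard Linux ones (char 1,8 for /dev/random and 1,9 for /dev/urandom):

```shell
# Skip device creation when the nodes already exist (e.g. the admin
# created them by hand inside a VServer guest), and do not abort the
# whole script when mknod is not permitted.
ensure_chroot_devices() {
    chroot_dir=$1
    for spec in "random 8" "urandom 9"; do
        set -- $spec    # $1 = name, $2 = minor number
        node="$chroot_dir/dev/$1"
        if [ -e "$node" ]; then
            continue    # already provided, leave it alone
        fi
        mknod "$node" c 1 "$2" 2>/dev/null ||
            echo "warning: could not create $node, continuing" >&2
    done
    return 0
}

# Demo against a scratch directory standing in for the chroot:
demo=$(mktemp -d)
mkdir -p "$demo/dev"
touch "$demo/dev/random"    # stand-in for an admin-created node
ensure_chroot_devices "$demo"
```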
Bug#346623: Intention to NMU
* Luk Claes ([EMAIL PROTECTED]) wrote: Attached the patch for the version I intend to upload. Please respond if you don't want this NMU to happen, if you are working yourself on a patch or if you think that the attached patch won't work. Geez, seems like we just did this. Oh well, looks alright to me, though I'm amused that configure/et al. are changing yet again. Would you be interested in taking over maintaining this? Thanks, Stephen signature.asc Description: Digital signature
Bug#335120: Intention to NMU
* Luk Claes ([EMAIL PROTECTED]) wrote: Attached the patch for the version I intend to upload. Please respond if you don't want this NMU to happen, if you are working yourself on a patch or if you think that the attached patch won't work. I don't see any obvious reason why it won't work. I trust you've tested it yourself some at least. I do wonder if the specific references to 1.9 are actually necessary anymore though. Thanks, Stephen signature.asc Description: Digital signature
Bug#342455: tech-ctte: Ownership and permissions of device mapper block devices
* Bastian Blank ([EMAIL PROTECTED]) wrote: Is there some reason you can't implement your personally preferred policy of root.root 600 on just your own system? Is there some reason for projecting your personal policies incompletely onto an arbitrary subset of debian's users? Hu? 10 people are an arbitrary subset? I expect it's more than 10. I know that I'm one of the folks following along here and trying to understand why you can't manage to do what every other block-device-creating maintainer does. I never said that they should run as root. [...] Many tools have additional checks to never do anything as root. Now you have just another user with the same rights. You're contradicting yourself here. Disk block devices have a specific, standard permission setup in Debian. Packages which create disk block devices need to follow this standard. There really isn't anything else to discuss. I don't particularly care that you don't like amanda, you're wrong to think that making it be 600 is any more secure by default, no one seems to be jumping to bolster your claims, and depending on a check to make sure one isn't running as root to enforce security sounds like a rather serious problem to begin with. Consider that there are *many* other users whom it would be bad to run as mistakenly (someone in the shadow group? Or the Postgres group on your primary database server?). Thanks, Stephen signature.asc Description: Digital signature
Bug#342369: [GENERAL] PostgreSQL 8.1.0 RHEL / Debian incompatible
* Stephan Szabo ([EMAIL PROTECTED]) wrote: On Tue, 13 Dec 2005, Anand Kumria wrote: On Mon, Dec 12, 2005 at 09:41:47AM +0100, Richard van den Berg wrote: Tom Lane wrote: You've got that 100% backwards: you should be complaining to Debian that it's not their business to editorialize on the default setting. [...] If Tom could present an actual reason why it shouldn't be enabled, I'm sure Martin (Pitt) would be interested. But Stephen Frost and Peter Eisentraut as well as others seem to be suggesting that the Debian default is sane. I think what Peter said is probably correct: it defaults to double because some compilers don't support 64bit integers, or it'd be lots slower because the architecture doesn't support it and so there's a lot of overhead. In and of itself it's a good option. However, choosing that option means that Debian is saying that compatibility of data files with default compiled PostgreSQL is not one of its primary concerns, which is a reasonable statement, but it's still not the community's problem when people can't move data to it. Honestly, in the end I think the default should be changed. It could fall back to double with a warning (if it doesn't already) if the compiler doesn't support 64bit integers. It sure seems like the general feeling is that, given the choice, 64bit integer timestamps are preferred. As the situations where you wouldn't want to use 64bit integer timestamps are a lot smaller than the cases where you would, it's more sensible to have the default be to use them if possible. Of course, another thought would be to have the rpms default to having it enabled since the rpms provided on postgresql.org are for architectures which should deal quite happily with 64bit integers (i686 and x86_64, though I don't actually see any rpms for x86_64, just the directories, kind of odd). I'm reasonably confident that most rpm-based architectures supported today support 64bit integers quite well, in fact... 
This really *should* be backwards, funny enough; Debian with support for things like m68k (which doesn't have hardware 64bit integer support, afaik) could be argued to use the default while rpm-based distros could probably move to 64bit integer timestamps without any concerns over using inefficient datatypes for the architectures those rpms would likely be used on. I don't think the Debian default should be changed though. If, say, an m68k user actually complained about the default not being the right option for them then I'd say we should consider having configure options be different for those architectures and not that we should move everyone to using doubles. Thanks, Stephen signature.asc Description: Digital signature
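The precision trade-off being argued over can be made concrete with a quick sketch (plain Python, nothing PostgreSQL-specific): float8 timestamps count seconds from 2000-01-01 in a C double, so the further a date is from 2000, the coarser the representable step becomes, while 64bit integer timestamps keep exact microseconds across the whole supported range.

```python
# Illustration of the float8-vs-int64 timestamp trade-off discussed above.
# A double has a 52-bit mantissa, so once the second count passes 2^34
# (~540 years from the 2000 epoch), the gap between adjacent doubles
# exceeds 2 microseconds and microsecond arithmetic silently rounds away.
sec = 600 * 365.25 * 86400.0      # ~year 2600, seconds since 2000 as a double

# Adding one microsecond is lost entirely at this magnitude:
assert sec + 0.000001 == sec

# The same instant as a 64bit integer microsecond count stays exact:
usec = int(sec) * 1_000_000
assert usec + 1 != usec
```

For dates within a couple of centuries of 2000 the double representation is fine, which is why the choice only bites when data files move between installations built with different settings.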
Bug#278810: ITP: slony-i
Greetings, Kind of amusing that you decided to do this now, I've just recently been working with Slony upstream to get things cleaned up so Debian packages can be easily built from Slony. I've been working from CVS but I've been told that the fixes will be back-patched into the stable branch and released with the next stable release. A couple comments: Probably best to use Build-Depends-Indep in this case since building the docs adds a lot of dependencies that aren't otherwise necessary. This is what I had: Build-Depends-Indep: postgresql-autodoc, libjpeg-progs, groff, gs-common, netpbm, imagemagick, opensp, openjade, docbook-utils Build-Depends: postgresql-server-dev-8.0, libpq-dev, cdbs, flex, bison According to upstream though, technically flex and bison shouldn't be necessary when building from a release (I was building from CVS). You should be able to remove the flex/bison parts of your clean:: too, really. Was there some reason you needed to rebuild them? Additionally, we need to figure out the best way to build both 8.0 debs and 8.1 debs. I like that you split it out into different debs, I hadn't gotten to that yet myself but had planned to. I'm not sure slony1-bin should Recommend postgresql-8.0-slony1 (a Suggests would be better, imv). Additionally, ntp-server should be a Recommends in postgresql-8.0-slony1, not slony1-bin. Here's what I have for my configure arguments (based on CVS, where these actually all work now...): DEB_CONFIGURE_EXTRA_FLAGS += --with-pgsharedir=/usr/share/slony1 DEB_CONFIGURE_EXTRA_FLAGS += --with-pglibdir=/usr/share/slony1 DEB_CONFIGURE_EXTRA_FLAGS += --with-pgpkglibdir=/usr/lib/postgresql/8.0/lib DEB_CONFIGURE_EXTRA_FLAGS += --with-perltools DEB_CONFIGURE_EXTRA_FLAGS += --with-docs --with-docdir=/usr/share/doc/slony1/ DEB_CONFIGURE_EXTRA_FLAGS += --disable-rpath I've also got DEB_AUTO_UPDATE_AUTOCONF = true, but again that shouldn't be necessary when building from a release. 
A couple of other things to note about the above: --disable-rpath doesn't work except in recent CVS. --pgsharedir, --pgpkglibdir, and --pglibdir have to be overridden because the configure script checks for things which aren't there in those directories (unless you've got the actual postgresql server installed, which really isn't an option on a buildd). Also, make install in the docs directory ignores the alternate root (again, except for in recent CVS). --pglibdir is actually still broken in CVS, talking w/ X-Fade in #slony on OPN about getting that fixed (or, even better, changing tools/altperl to use something else like --perlsharedir for the install path for slon-tools.pm). Thanks for working on this, I really should have posted an update when I started working on this stuff again last month, sorry about that. Thanks again! Stephen
Bug#96694: wnpp: majordomo/majordomo2
* Drew Scott Daniels ([EMAIL PROTECTED]) wrote: I'm just wondering if Majordomo/Majordomo2 would still be a useful package in Debian given that there's things like mailman, fml, ecartis and potentially other alternatives. Erm, yes, of course it'd be a useful package to have in Debian. Stephen
Bug#330187: tar 1.15.1 fails creating multivol archive (long filenames)
Package: tar Version: 1.15.1-2 Severity: normal Tags: patch Greetings, When creating a multivolume tar archive, if the file which ends up being split across two tapes has a filename longer than 100 characters, tar exits with a fatal error. Reading through this thread: http://lists.gnu.org/archive/html/bug-tar/2005-04/msg00012.html It appears that this isn't really an error, and it appears to have been changed to a warning in upstream cvs: http://savannah.gnu.org/cgi-bin/viewcvs/tar/tar/src/buffer.c.diff?r1=1.81&r2=1.82 In fact, according to the thread, it seems that the file would end up with the correct name anyway in the end. Please apply the patch from the thread (or the second half of the patch applied to CVS) to reduce this from a fatal error to a warning. This is seriously hampering my backups. Thanks, Stephen -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
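For background on where the 100-character limit in this report comes from: the classic tar header stores the member name in a fixed 100-byte field, and GNU tar handles anything longer by emitting an extra 'L' (longname) record before the real header, which appears to be the case the multivolume code was treating as fatal. A quick sketch with Python's tarfile module (illustrative only, not tar's own code) shows the longname mechanism round-tripping:

```python
# Demonstrate that a >100-character name survives a GNU-format archive
# via the longname ('L') record, even though the fixed header field
# only holds 100 bytes.
import io
import tarfile

long_name = "d/" * 60 + "file"          # 124 characters, over the limit

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.GNU_FORMAT) as tf:
    tf.addfile(tarfile.TarInfo(long_name))  # zero-length member is enough

buf.seek(0)
with tarfile.open(fileobj=buf) as tf:
    assert tf.getnames() == [long_name]     # full name recovered
```

The warning-vs-error distinction in the patched buffer.c is exactly about whether hitting this extension mid-volume should abort the run.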
Bug#321498: New versions available
* Russ Allbery ([EMAIL PROTECTED]) wrote: Stephen Frost [EMAIL PROTECTED] writes: I just compiled Debian's 4.1p1 ssh w/ Simon's latest gssapi-keyx patch, and everything appears to have worked reasonably well, so, please update the packages to the more recent versions... The Kerberos patches have now been incorporated into the main Debian openssh package, so the openssh-krb5 package will hopefully be going away rather than moving to the latest version. Please give the current openssh packages in unstable a try and see if they do everything you need. I'm trying to see if openssh-krb5 is going to need one final security release or if it can just be retired at this point. (And also want to make sure that the basic openssh packages now cover everything.) This is kind of amusing. I was the one who pushed getting Simon's gssapi-keyx patch into the main Debian openssh package. :) Yes, the new packages work quite nicely. The only exception to that is that when using a recent release of OpenSSH (so this would apply to openssh-krb5 if it was ever updated) libpam-krb5 is unable to generate the host tickets in the appropriate spot. My understanding is that someone (dilinger I think) is working on improving libpam-krb5 and hopefully fixing this issue. A workaround for this issue is to just ask users to kinit after they log in. Not exactly perfect but certainly a workable solution till libpam-krb5 gets fixed. Thanks, Stephen
Bug#329226: udevstart breaks when device not responding
Package: udev Version: 0.070-2 Severity: Important Greetings, When installing udev for the first time on a system which has access to snapshots of other partitions (on my SAN), and those snapshots were disabled (which means IO failures when trying to read them), udev failed to install rather badly. Populating the new /dev filesystem temporarily mounted on /tmp/udev.vrFxvY/... ... [587655.084649] SCSI error : 5 0 0 0 return code = 0x802 [587655.084682] sdc: Current: sense key: Hardware Error [587655.084713] vendor ASC=0x84 ASCQ=0x0ASC=0x84 ASCQ=0x0 [587655.084750] end_request: I/O error, dev sdc, sector 6 ... root 8075 0.0 0.0 2944 1424 pts/1S+ 11:19 0:00 \ /bin/sh -e /var/lib/dpkg/info/udev.postinst configure root 8111 0.2 0.0 2200 1144 pts/1S+ 11:19 0:00 \ /sbin/udevstart root 8480 0.0 0.0 1820 444 pts/1D+ 11:19 0:00 \ /sbin/vol_id --export /tmp/udev.gzcWTS/.tmp-8-32 ... dpkg: error processing udev (--configure): subprocess post-installation script returned error exit status 1 ... I enabled the snapshots for the moment to get udev installed. I'm curious as to what would happen if I rebooted while the snapshots were disabled though. I do need udev, unfortunately, because multipath depends on it. I'm pretty sure this is a situation udev needs to be able to handle though. Thanks, Stephen
Bug#329226: udevstart breaks when device not responding
* Marco d'Itri ([EMAIL PROTECTED]) wrote: On Sep 20, Stephen Frost [EMAIL PROTECTED] wrote: dpkg: error processing udev (--configure): subprocess post-installation script returned error exit status 1 Did you try to kill some process? I can't see how an error could be propagated from vol_id to postinst. Nope, I didn't kill any processes. It took it a while to fail. In case it gets lost in irc: 12:10 Snow-Man Md: It's in the bug log 12:11 Snow-Man Md: Basically, vol_id was failing. 12:11 Snow-Man 'cause it was trying to read the device, and the device was, like, 'uh, no.' 12:11 Snow-Man I didn't kill off any processes, no... 12:12 Snow-Man Md: Well, alright, vol_id was the process which was running when I was seeing stuff in dmesg show up (the stuff that's in the bug log) 12:12 Snow-Man Md: It's possible there was some other issue making udevstart fail.. 12:13 Snow-Man Md: But when I enabled the snapshots, I saw vol_id go past the first set of things it was being run on. 12:13 Snow-Man (--export /tmp/udev.gzcWTS/.tmp-8-32 went to, like, 8-16, etc, iirc) 12:14 Snow-Man It looked like udevstart was running vol_id multiple times, and checking if it worked or not each time, and if it didn't, was exiting unhappily. It seems to me that vol_id errors should probably be non-fatal, but I'm not entirely sure what it's doing. In this instance I would still want the appropriate scripts to be run for the device (to properly set up the multipath stuff for the device) even if the device can't be read at the moment. Thanks, Stephen
Bug#327562: autolog should not depends on debhelper in Depends but in Build-depends
severity 327562 normal done hahahahhahaha Enjoy, Stephen
Bug#275472: Kerberos keyex in ssh
Greetings, I'd like to follow up on the idea of maintaining the key exchange patch as a local Debian patch to openssh. The current key exchange patch does not introduce any new config options, is much smaller than the older GSSAPI patches, and patches cleanly against current Debian sources (4.1p1-6). Debian 4.1p1-6+keyex also plays nicely with current ssh-krb5 (I've yet to run into any problems running a mixed environment). The current keyex patch is available here: http://www.sxw.org.uk/computing/patches/openssh-4.0p1-gssapikex.patch (From: http://www.sxw.org.uk/computing/patches/openssh.html) Many thanks, Stephen
Bug#321498: New versions available
Package: ssh-krb5 Version: 3.8.1p1-5 Severity: wishlist Greetings, I just compiled Debian's 4.1p1 ssh w/ Simon's latest gssapi-keyx patch, and everything appears to have worked reasonably well, so, please update the packages to the more recent versions... Thanks, Stephen
Bug#320997: started too early during boot
Package: multipath-tools Version: 0.4.2.4-2 Severity: important Greetings, multipathd is currently set up to run before modules are loaded. This doesn't make much sense to me, and causes problems for any setup which uses modules (and not initrd), which seems like it'd be a pretty common setup. Please change multipathd to start after modules are loaded (but before other partitions are mounted), following the example set by mdadm-raid and lvm. Thanks, Stephen
Bug#320996: new version (0.4.4) available
Package: multipath-tools Version: 0.4.2.4-2 Severity: wishlist Please upgrade to 0.4.4, it seems to have a number of improvements. Thanks, Stephen
Bug#321002: Breakage due to /var dependency
Package: multipath-tools Version: 0.4.2.4-2 Severity: important Greetings, multipathd needs to be run prior to 'mountall.sh', which handles the second-pass filesystem mounting. This means that it can't depend on /var being available. Unfortunately, it appears that it's expecting to be able to use /var/cache/multipathd as a ramdisk and when it's unable to (because /var/cache doesn't exist), it dies. This is a rather serious problem as it's needed for booting (I believe...). An interesting alternative would be to consider just running multipath during boot and then running multipathd later on. This appears to be what the initrd setup w/ multipath is doing actually. Thanks, Stephen
Bug#321008: multipathd ignores prio_callout
Package: multipath-tools Version: 0.4.2.4-2 Severity: normal Greetings, For some reason that I haven't been able to identify as yet, multipathd appears to just outright ignore the prio_callout setting. Looking in /var/cache/multipathd, it's empty, so perhaps that's the problem. There are no errors in the logs though. multipath directly works fine though. ie: default_prio_callout /sbin/path_prio.sh %d [This is correct...] sauron:/etc/rcS.d# multipath -v2 -S create: sarndata (3600a0b800016057c176c42e92517) [size=892 GB][features=0][hwhandler=0] \_ round-robin 0 [first] \_ 1:0:0:0 sdb 8:16[ready ] \_ round-robin 0 \_ 1:0:1:0 sdd 8:48[ready ] create: sarnwal (3600a0b800016029a088a42e92bb8) [size=108 GB][features=0][hwhandler=0] \_ round-robin 0 \_ 1:0:0:1 sdc 8:32[ready ] \_ round-robin 0 [first] \_ 1:0:1:1 sde 8:64[ready ] [This is incorrect...] sauron:/etc/rcS.d# /etc/init.d/multipath-tools start Starting multipath daemon: multipathd. sauron:/etc/rcS.d# multipath -v2 -S -l sarnwal (3600a0b800016029a088a42e92bb8) [size=108 GB][features=0][hwhandler=0] \_ round-robin 0 [enabled][first] \_ 1:0:0:1 sdc 8:32[ready ][active] \_ round-robin 0 [enabled] \_ 1:0:1:1 sde 8:64[ready ][active] sarndata (3600a0b800016057c176c42e92517) [size=892 GB][features=0][hwhandler=0] \_ round-robin 0 [active][first] \_ 1:0:0:0 sdb 8:16[ready ][active] \_ round-robin 0 [enabled] \_ 1:0:1:0 sdd 8:48[ready ][active] As you may notice, in the 'correct' setup, sdb and sde are marked as 'first', and my SAN picks up that the correct controllers are being used for those LUs. In the 'incorrect' setup, sdc and sdb are marked as 'first', which ends up having both LUs accessed via the same controller, a less than optimal solution. Additionally, these really shouldn't ever be different, aiui, they should both be making use of path_prio.sh, which I've configured to return the correct values (as can be seen by the fact that multipath picks up the correct solution). Thanks, Stephen
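For readers unfamiliar with the callout interface in the report above: multipath runs the configured prio_callout command with %d expanded to a path device name and reads an integer priority from its stdout, then groups paths so the highest-priority group comes 'first'. A minimal sketch of the mapping path_prio.sh would implement (the device-to-priority table here is illustrative, taken from the 'correct' multipath -S output quoted in the mail, not from the actual site script):

```python
# Sketch of a prio callout's decision: return a higher integer for
# paths that sit on each LU's owning controller, so multipath puts
# them in the 'first' path group.
def path_prio(dev):
    owning_controller_paths = {"sdb", "sde"}  # per the 'correct' output above
    return 2 if dev in owning_controller_paths else 1
```

The whole bug is that multipathd appears to skip this lookup entirely and falls back to its default grouping, while the multipath binary honors it.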
Bug#319817: ftp.debian.org: Please remove openmosix and kernel-patch-openmosix
* Jeroen van Wolffelaar ([EMAIL PROTECTED]) wrote: Can you please comment on the below? The OpenMosix packages were intentionally kept out of sarge because upstream's not terribly interested in the 2.4 patches and they're working on a 2.6 version. The 2.6 version is expected to be somewhat better in a number of regards, and is available in an alpha-state currently. The kernel patch itself might actually be workable enough for unstable but I'm afraid the userspace is pretty much missing still and some things have been moved to userspace that are reasonably important (such as load balancing across the cluster). I'd rather it not be removed from unstable (the only thing it's in, after all), but mainly because I'd rather not have to go through the NEW queue again when I feel the 2.6 patches/userspace are suitable to package for unstable. I suppose if Micah just can't stand it I could package and upload the 2.6 stuff soon, but I'd rather wait till it's a bit more developed/stable. Micah, it's appreciated when making such type of requests to X-Debbugs-Cc the maintainer in question. Aww, but then I'd get to comment on it and I might ruin his little vendetta/pity party. Thanks, Stephen
Bug#304350: Always ask for root passowrd twice, even on critical priority installs?
* Christian Perrier ([EMAIL PROTECTED]) wrote: critical Questions that you really, really need to see (or else). Strictly speaking, the first password question pertains to the critical priority, because it does not have any reasonable default. Well, actually, not so much. If you really would like to be picky about it, have the root password default to 'debian'. In a reasonably secure environment this is fine (and allows for someone to run around and install a bunch of machines quickly and then have a script which changes the password after the machine has rebooted w/ ssh running, etc). The confirmation question has a reasonable default or, to say this another way, is not strictly necessary to be able to continue and not break anything. About this I would disagree, and would agree w/ Manoj's argument. When you don't show the password back to the user then you *don't* have a reasonable default because it's not at all clear that what the user typed in is what the user *intended* to type in. Your 'reasonable default' argument only holds if you assume the user is perfect, and that's generally not a good thing to assume. :) (and not in -devel, at least for the first round) Now that's kind of bizarre. You're planning to go to -devel *after* having gone to the technical committee? I guess you're not actually expecting the technical committee to make a ruling on it, or you're just going to ignore it if it's one you don't like? Thanks, Stephen
Bug#309983: libnss-ldap: wml/perl segfaults
* Cyril Chaboisseau ([EMAIL PROTECTED]) wrote: when I try to build an html file with wml (which is just a Perl script), it segfaults $ cat 1.wml <h1>title</h1> $ wml -o 1.html 1.wml ** WML:Break: Error in Pass 2 (status=139, rc=0). Just to note, this does *not* happen on i386, so far as I've been able to tell. [EMAIL PROTECTED]:/home/sfrost cat 1.wml <h1>title</h1> [EMAIL PROTECTED]:/home/sfrost wml -o 1.html 1.wml [EMAIL PROTECTED]:/home/sfrost l 1.wml -rw-r--r-- 1 sfrost sfrost 16 May 20 18:55 1.wml So whatever the problem is, it doesn't appear to manifest itself on i386. Can we get some other libnss-ldap users to test on other archs? Other amd64 users, maybe some alpha users, etc? Thanks, Stephen
Bug#306258: libnss-ldap libpam-ldap need to be linked against same lib
* Frank Lichtenheld ([EMAIL PROTECTED]) wrote: On Mon, Apr 25, 2005 at 10:00:33AM -0400, Stephen Frost wrote: Just following up for those playing along at home. libnss-ldap and libpam-ldap need to be linked against the same ldap (either 'ldap' or 'ldap_r'). I thought I had done this for both, but apparently not. Linking against ldap_r fixed an issue in nss-ldap previously, so my intent is to change libpam-ldap to also link against ldap_r (like libnss-ldap). I hope to upload a fixed package this evening. Ignore my previous mail, I confused the upload date. What has happened to that upload? Did you just have no time or is there a problem with it that needs to be fixed? It got a bit more complicated. Basically, libldap2 is bad for shipping two different libraries in one package. NSS sucks because when using libnss-ldap and an LDAP-using application it's possible both of these (conflicting) libraries can end up being loaded into memory. The end solution as discussed with Steve Langasek (our illustrious RM) is to: a) recompile libpam-ldap against ldap_r and upload (will happen soon) b) rebuild libldap2, remove 'libldap' and replace it with a symlink to 'libldap_r', which has the same ABI. Thus, there will be only one LDAP library left on the system which everything will link against, hopefully avoiding the situation where two different LDAP libraries are loaded into memory. Let me know if you can think of any reason why this might be a bad idea. :) Thanks, Stephen
Bug#306987: New upstream supports 2.6 kernels
* Marco Amadori ([EMAIL PROTECTED]) wrote: Package: kernel-patch-openmosix Version: any Severity: wishlist Tags: fixed-upstream As announced here : http://sourceforge.net/forum/forum.php?forum_id=460497 openMosix now supports 2.6 kernels. You can find the patches that should apply to kernel sources here : http://openmosix.snarc.org/wiki/GetSourceByPatch I'm testing them on amd64 right now, late for sarge I think :-( Uh, are there any userland utilities for it though? Is it actually usable? Thanks for the pointer, Stephen
Bug#306836: postgresql-common postgresql-client-8.0 w/o postgresql-8.0 (server)
Package: postgresql-common Version: 7 Severity: Important Tags: +experimental Greetings, Looks like if postgresql-common and postgresql-client-8.0 are installed but postgresql-8.0 isn't then /etc/postgresql-common/user_clusters never gets populated and you get an 'Invalid PostgreSQL cluster version' when you try to run psql. It should be possible to install postgresql-client-8.0 and postgresql-common and be able to then connect to a PostgreSQL server on a remote machine w/o having to configure things. Not sure the best way to fix this, perhaps if the file is empty just pick the highest-version number in /usr/lib/postgresql? Thanks, Stephen
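The "pick the highest version" fallback suggested above is simple enough to sketch. This is a hypothetical helper, not code from postgresql-common; it only assumes the /usr/lib/postgresql/<version> directory layout the mail refers to:

```python
# Hedged sketch of the proposed fallback: when user_clusters yields no
# mapping, scan the versioned install directories and take the newest,
# so a client-only machine can still exec the right psql wrapper.
import os

def newest_pg_version(libdir="/usr/lib/postgresql"):
    versions = [d for d in os.listdir(libdir)
                if os.path.isdir(os.path.join(libdir, d))]
    # Sort numerically on the dotted version so "7.4" < "8.0" < "10".
    versions.sort(key=lambda v: [int(p) for p in v.split(".")])
    return versions[-1] if versions else None
```

Something along these lines would let psql default sensibly without a configured cluster, while an explicit user_clusters entry could still override it.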
Bug#306258: libnss-ldap libpam-ldap need to be linked against same lib
reassign 306258 libpam-ldap retitle 306258 libpam-ldap needs to be linked against ldap_r thanks Greetings, Just following up for those playing along at home. libnss-ldap and libpam-ldap need to be linked against the same ldap (either 'ldap' or 'ldap_r'). I thought I had done this for both, but apparently not. Linking against ldap_r fixed an issue in nss-ldap previously, so my intent is to change libpam-ldap to also link against ldap_r (like libnss-ldap). I hope to upload a fixed package this evening. Also, doing this fixed the problem for the bug submitter (we discussed it on IRC and he tested having libpam-ldap linked against ldap_r and that fixed his problem with the libnss-ldap package in unstable). Thanks, Stephen
Bug#303197: Status?
Greetings, What's the status on the new upload? I need a working ulogd, like, today, so if you're not uploading a new version very soon I'm gonna have to build one myself. Do you have packages around anywhere yet with the newer version? Thanks, Stephen
Bug#305427: initgroups() not called by pg_ctlcluster
Package: postgresql-common Version: 6 Severity: important Tags: experimental Greetings, pg_ctlcluster has a rather serious flaw- it doesn't, and apparently can't easily from Perl (amazing as that is...), call initgroups(). This means that if you want to have Postgres use PAM and pam_unix and /etc/shadow to authenticate users, it can't. This is because Perl doesn't call initgroups() after the setuid() change and so postmaster never ends up with shadow permissions, even if it's in the shadow group in /etc/group. This will break anyone who's currently using PostgreSQL w/ pam_unix, a rather ugly setup but not at all uncommon. A quick hack that I did was to just add '. 42;' to the $( = $) = blah line. I suppose you could fix this by doing a getgrnam and then doing the $( = $) stuff for the appropriate groups. Somewhat ugly but then I don't believe perl gives you any other options. It'd be really nice to have this fixed... :) Thanks, Stephen
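For reference, the ordering the bug is about looks like this in a language with a real initgroups() binding. This is a sketch, not pg_ctlcluster's actual code; Python's os module is used here only because it wraps the relevant libc calls directly:

```python
# Sketch of correct privilege dropping: supplementary groups must be
# initialized while still root, BEFORE setuid() throws away the right
# to change them. Skipping initgroups() is exactly why the postmaster
# in this report never picked up its 'shadow' supplementary group.
import os
import pwd

def drop_privileges(username):
    pw = pwd.getpwnam(username)
    os.initgroups(username, pw.pw_gid)  # all supplementary groups from /etc/group
    os.setgid(pw.pw_gid)                # primary group
    os.setuid(pw.pw_uid)                # last: this is irreversible
```

The '. 42;' hack in the report is the Perl equivalent of appending the shadow group's gid to the process group list by hand, which this call sequence does properly.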
Bug#278810: Bug#305287: ITP: slony1 -- Slony-I is a master to multiple slaves replication system with cascading and failover.
* Tim Goodaire ([EMAIL PROTECTED]) wrote: I haven't been able to find an ITP for this. I've found an RFP for it though (278810). Is this what you're referring to? Yes. Also, my ITP bug (305287) has already been closed on me. Apparently I Yes, I closed it since it was a duplicate WNPP bug. was supposed to change the title of the RFP bug to ITP, but I've been unable to find anything in the Debian maintainer's documentation that would indicate that this is what you're supposed to do. Is this just common practice or something? This is my first attempt at packaging something for Debian. The developer's reference (http://www.debian.org/doc/developers-reference/ch-pkgs.en.html#s-newpackage) would lead you to http://www.debian.org/devel/wnpp/ which outlines how to use WNPP, specifically under Removing entries there's: RFP If you are going to package this, retitle the bug report to replace "RFP" with "ITP", in order for other people to know the program is already being packaged, and set yourself as the owner of the bug. Then package the software, upload it and close this bug once the package has been installed. Of course, it'd be good to *read* the RFP bug before retitling it, etc, which would have provided you with the information I wrote about in my prior email- specifically that there's a number of other people working on slony packaging already and there's specific and good reasons why it hasn't already been uploaded to the archive. I don't particularly care who ends up maintaining the package but it's more than a little annoying to have someone not read the documentation, prior bug reports, or apparently even look for prior bugs and then be bitched out by what I'm guessing was your boss on IRC for pointing out to you the existing bug report and why it hadn't been uploaded yet. 
As an additional tidbit- it'd probably be best to wait till the 8.0 debs are in Debian before putting the slony packages in to avoid what will probably be a great deal of ugliness in the transition from one packaging methodology to another in the main Postgres packaging. The 8.0 debs are already in experimental, they're mainly waiting for sarge to be released before going into sid because of the libpq SONAME bump. Thanks, Stephen
Bug#278810: Bug#305287: ITP: slony1 -- Slony-I is a master to multiple slaves replication system with cascading and failover.
* Tim Goodaire ([EMAIL PROTECTED]) wrote: Package: wnpp Severity: wishlist Owner: Tim Goodaire [EMAIL PROTECTED] * Package name: slony1 Version : 1.1.1 What 1.1.1 version? 1.1.0 isn't even out yet- it's in Beta. Additionally, there's already an RFP bug on it which you could have changed the title on, etc, and talked to the people who've commented on that bug already. Additionally, the CVS version of it's already been packaged actually, there's some packages at http://kenobi.snowman.net/~sfrost/slony. I've got some better and more recent ones than that (which I think are a fair bit better than what's in the Slony CVS upstream debian/ directory which I personally don't think should even exist) but Slony upstream asked that we not put a CVS release into a released version of Debian and that means holding off on actually uploading the package until Slony 1.1 is officially released or filing annoying RC bugs and whatnot against a CVS upload to keep it out of sarge. Thanks, Stephen