Re: amd64 uploads
On Sun, Apr 09, 2006 at 08:16:07PM +0200, Pierre Habouzit wrote:
> ok, done.
>
> I'd also like to alert debian-python about #351149 and #351150. Those
> were fixed in a quite short period, but Iustin did not find any
> sponsor. I don't know how hard he searched, but at least his packages
> looked clean and well followed. He deserves better ;)

It could be true that I didn't look very hard - I moved to a different
country in the last month and haven't really settled down yet, so to
speak, so I wasn't able to pursue this as it should have been.

> maybe someone from debian-python can step up as a sponsor ? I offered
> him to do so, but I'm not *that* python interested, and someone more
> involved with python would surely be better.

That would be great, indeed.

Thanks,
Iustin Pop
Re: Bug#192101: We need gnucash in stable
Hello,

After digging to see why gnucash fails the tests (as I use it daily and
wouldn't like to see it go away...), I found these:

On Sat, Sep 27, 2003 at 04:03:19PM -0500, Steve Langasek wrote:
> Quite so. Red Hat is a build environment where 'make check' succeeds,
> though the bug is latent. Debian i386 is a build environment where
> 'make check' succeeds, though the bug is latent. On other Debian
> architectures, the bug causes a build failure, due to differences in
> the output of the PRNG. Eliminating the randomness, it's probably
> possible to construct valid test input that fails on all platforms,
> Red Hat included. So either the test itself is wrong for allowing
> these input values, or there's a bug in gnucash.

Playing with the random generator has shown this to be true. Introducing
additional calls to it (i.e. extracting more values, at random places)
has produced both successful runs (with 400 tests) and failures after
130 tests.

> Note that at any point, this bug could be effectively downgraded by
> removing the call to 'make check' from debian/rules, if it was decided
> that the impact of this bug on the program's viability is not
> significant. That's a decision for the maintainer to make, however;
> it's certainly justified to be wary of *any* failures where financial
> software is concerned.

The test that fails does the following, N times:

  - create a random query of up to 4 terms, which are (randomly)
    combined using operators from the set QUERY_AND, QUERY_OR,
    QUERY_NAND, QUERY_NOR, QUERY_XOR;
  - transform the query to another format and back, and check that we
    get the same query.

Now, the 244th test (in the original setup), which fails, has 4 terms
combined such that the query, broken down into its internal form, ends
up with more than 5000 sub-terms. Other failing tests also have a large
number of sub-terms (>4000, etc.).
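The failing check is essentially a randomized round-trip ("transform,
transform back, compare") property test. Here is a minimal sketch of
the same idea in Python; all names and the encoding are hypothetical
stand-ins for illustration, not gnucash's actual query API:

```python
import random

# Hypothetical operator set, mirroring the names in the gnucash test.
OPS = ["AND", "OR", "NAND", "NOR", "XOR"]

def random_query(max_terms=3, rng=random):
    """Build a random query of up to max_terms terms joined by random ops."""
    n = rng.randint(1, max_terms)
    query = "term0"
    for i in range(1, n):
        query = (rng.choice(OPS), query, "term%d" % i)
    return query

def encode(q):
    """Transform the query to 'another format' (here: nested prefix lists)."""
    if isinstance(q, str):
        return ["T", q]
    op, left, right = q
    return ["OP", op, encode(left), encode(right)]

def decode(e):
    """Transform back to the original representation."""
    if e[0] == "T":
        return e[1]
    return (e[1], decode(e[2]), decode(e[3]))

def run_round_trip(iterations=1000, max_terms=3, seed=42):
    """Run the randomized round-trip check; a correct transform never fails."""
    rng = random.Random(seed)
    for _ in range(iterations):
        q = random_query(max_terms, rng)
        assert decode(encode(q)) == q, "round-trip mismatch: %r" % (q,)
    return iterations

print(run_round_trip())  # prints the number of successful iterations
```

The gnucash failure pattern (fine for small queries, broken once the
expanded form crosses thousands of sub-terms) is exactly the kind of
bug this sort of test surfaces only on some inputs, which is why the
PRNG differences across platforms mattered.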
However, there were also queries with 4 terms which, broken down,
amounted to a small number of sub-terms, and those succeeded.

Reducing the maximum number of terms in the test from 4 to 3 allowed me
to run more than 120,000 iterations of the test (at which point I ran
out of memory; the test program was using 400MB).

I'd say that if we are not making searches in gnucash using 4 or more
terms combined strangely, and gnucash itself does not construct such
queries, it *could* be ok - e.g. no data or functionality loss.
However, this is clearly a bug in gnucash - they should either fix the
transformation of queries with large numbers of sub-terms or limit the
maximum number of terms allowed.

Hopefully this info will be of use to someone.

Regards,
Iustin Pop
Bug#466583: ITP: ganeti-instance-debian-etch -- instance OS definition for ganeti
Package: wnpp
Severity: wishlist
Owner: Iustin Pop <[EMAIL PROTECTED]>

* Package name    : ganeti-instance-debian-etch
  Version         : 0.4
  Upstream Author : Google Inc. <[EMAIL PROTECTED]>
* URL             : http://code.google.com/p/ganeti
* License         : GPL
  Programming Lang: Shell
  Description     : etch instance OS definition for ganeti

Ganeti is a virtual server cluster management software tool built on
top of the Xen virtual machine monitor and other Open Source software.
After setting it up, it will provide you with an automated environment
to manage highly available virtual machine instances.
.
This package provides an OS definition for ganeti that allows
installation of Debian Etch instances via debootstrap.

-- System Information:
Debian Release: lenny/sid
  APT prefers unstable
  APT policy: (500, 'unstable'), (500, 'stable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 2.6.24-teal (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Bug#488547: ITP: python-mox -- a mock object framework for Python
Package: wnpp
Severity: wishlist
Owner: Iustin Pop <[EMAIL PROTECTED]>

* Package name    : python-mox
  Version         : 0.5.0
  Upstream Author : Google Inc.
* URL             : http://code.google.com/p/pymox/
* License         : Apache-2.0
  Programming Lang: Python
  Description     : a mock object framework for Python

Mox is a mock object framework for Python, based on EasyMock, a Java
mock object framework. Mox will make mock objects for you, so you don't
have to create your own. It mocks the public/protected interfaces of
Python objects. You set up your mock objects' expected behaviour using
a domain-specific language (DSL), which makes it easy to use,
understand, and refactor.

-- System Information:
Debian Release: lenny/sid
  APT prefers unstable
  APT policy: (500, 'unstable'), (500, 'stable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 2.6.25.8-teal (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
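Mox's record/replay DSL is its own API, but the core idea described
above (a stand-in object whose expected behaviour is declared up front
and whose interactions are verified afterwards) can be sketched with
the standard library's unittest.mock; this is an analogy, not Mox's
actual interface:

```python
from unittest import mock

# Code under test: depends on some client object with a get() method.
def fetch_greeting(client):
    return client.get("/greeting").upper()

# Create a mock instead of a real client, and declare its behaviour.
client = mock.Mock()
client.get.return_value = "hello"

result = fetch_greeting(client)

# Verify the interaction happened exactly as expected.
client.get.assert_called_once_with("/greeting")
print(result)  # HELLO
```

The difference in Mox is that expectations are recorded first and then
the mock is switched to replay mode (ReplayAll/VerifyAll), but the
test-isolation purpose is the same.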
Bug#489282: ITP: ratproxy -- web application security audit tool
Package: wnpp
Severity: wishlist
Owner: Iustin Pop <[EMAIL PROTECTED]>

* Package name    : ratproxy
  Version         : 1.51
  Upstream Author : Michal Zalewski <[EMAIL PROTECTED]>
* URL             : http://code.google.com/p/ratproxy/
* License         : Apache-2.0
  Programming Lang: C
  Description     : web application security audit tool

Ratproxy is a semi-automated, largely passive web application security
audit tool. It is optimized for accurate and sensitive detection, and
automatic annotation, of potential problems and security-relevant
design patterns, based on observation of existing, user-initiated
traffic in complex web 2.0 environments. It detects and prioritizes
broad classes of security problems, such as dynamic cross-site trust
model considerations, script inclusion issues, content serving
problems, insufficient XSRF and XSS defenses, and much more.

-- System Information:
Debian Release: lenny/sid
  APT prefers unstable
  APT policy: (500, 'unstable'), (500, 'testing'), (500, 'stable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 2.6.25.8-teal (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Bug#562658: ITP: ganeti-htools -- Cluster tools for Ganeti
Package: wnpp
Severity: wishlist
Owner: Iustin Pop

* Package name    : ganeti-htools
  Version         : 0.2.0
  Upstream Author : Iustin Pop
* URL             : http://code.google.com/p/ganeti/
* License         : GPL2
  Programming Lang: Haskell
  Description     : Cluster tools for Ganeti

These are additional tools used for enhanced allocation and capacity
calculation on Ganeti clusters.
Re: quilt 3.0 source format and dpkg-source/dpkg-buildpackage
On Mon, Dec 28, 2009 at 01:14:46AM +0100, Norbert Preining wrote:
> Can someone of the proposers of this (nice? stupid? rubbish?) format
> explain to me please why on earth:
> - git-buildpackage
> - dpkg-buildpackage
> - and in fact at the bottom dpkg-source
> fuck around in my git repository, applying patches, just for building
> a source package?

Sorry to hear about your bad experience. I use the same workflow,
git-buildpackage + 3.0 (quilt), and I have had no problems so far.

> If someone is so kind and tells me how that should work:
>
> $ git-buildpackage -us -uc -S
> ... ok, new quilt 3.0 source package has been built
> $ git status
> ... peng, all patches applied, but I don't WANT them applied!!!

Are you using --git-export-dir? It seems not, and that you build the
package in-place. I use this snippet in my gbp.conf:

  [git-buildpackage]
  export-dir = ../build-area/

which never runs anything in my git dir, so it's always pristine.

> $ quilt pop -a
> ... blabla cannot find bla bla...
> $ git status
> ... still a pain

Maybe git reset --hard + removal of the .pc directory.

> Ok, it might be that some people enjoy working permanently in that
> format, but then, how to create a new patch? quilt new does not work:
> $ quilt new
> ... bummer, there is now ./patches in my git repository

My .quiltrc includes this:

  QUILT_PATCHES=debian/patches

So it uses the right directory.

> I don't know what big advantages there really are, I have seen the
> announcements again and again and haven't seen any compelling reason
> in them. The only reason is that it is just plain counter-intuitive
> to work with.
>
> Well, anyway, I converted one package to quilt 3.0, and I will
> convert it back. I don't care for it.

Again, sorry to hear about this experience - in my case, after reading
the wiki page, it was painless. And the new format seems cleaner - no
more quilt-specific stuff in debian/rules, and a nice debian.tar.gz
instead of a diff.
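For completeness, the two settings discussed in this thread, written
out as complete files (the file locations are my own setup; gbp also
honours a per-package debian/gbp.conf):

```
# ~/.gbp.conf
[git-buildpackage]
export-dir = ../build-area/

# ~/.quiltrc
QUILT_PATCHES=debian/patches
```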
regards,
iustin
Re: quilt 3.0 source format and dpkg-source/dpkg-buildpackage
On Mon, Dec 28, 2009 at 01:38:43AM +0100, Norbert Preining wrote:
> On Mo, 28 Dez 2009, Iustin Pop wrote:
> > Sorry to hear about your bad experience. I use the same workflow,
> > git-buildpackage + 3.0 (quilt) and I have no problems so far.
>
> Good for you.
>
> > Are you using --git-export-dir? It seems not, and that you build
> > the package in-place.
>
> No, and it is nowhere mentioned on the wiki page.
>
> Mind that git-buildpackage with the normal 1.0 source format does NOT
> pollute the git repository, so my expectation is that the 3.0 format
> does the same, but alas, it doesn't.

As others have remarked, the working copy is polluted with 1.0 too, and
you would need to run debian/rules clean to get back to a pristine
state.

> > My .quiltrc includes this:
> >
> > QUILT_PATCHES=debian/patches
>
> That is wrong, because I do other projects where I don't have my
> patches in debian/patches ...
>
> Is a DD expected to only use quilt in that mode? Arggg.

Well, I personally use it only in that mode (since I only use it for
deb packaging), and for me it's fine. I saw other people saying one
could selectively choose this, which is good.

> > Again, sorry to hear this experience - in my case, after reading
> > the wiki page, it was a painless experience. And the new format
> > seems
>
> Well, because you had the gbp.conf stuff already in place, and the
> .quiltrc, but nothing of that is mentioned in the Wiki.

I will try to get it into the Wiki later this week (unless someone else
beats me to it).

> > cleaner - no longer quilt-specific stuff in debian/rules, and a
> > nice debian.tar.gz instead of a diff.
>
> Beh, I disagree, the 3 different lines in debian/rules are NOT bad by
> themselves, it shows that *something* is changed. And a nice
> debian.tar.gz, what does it give you? Do you look at the files and
> enjoy their artistic beauty? I don't care what they look like, I
> upload them, and as long as the tools can work with them, that is
> fine.
> Well, de gustibus non disputandum est.

Just to clarify: not for my packages, but for other people's packages!
When downloading the sources, it's much easier for me to look at the
debian.tar.gz than to read the diff and try to understand how it would
apply.

Furthermore, by standardising on quilt patches, I hope that we will
move away from directly patching the upstream source in the debian
diff.gz, which I find very sloppy work.

So yes, while I used quilt in my packages before and thus 3.0 (quilt)
is just a small (but welcome) improvement for me, I hope that if most
maintainers move to this format, we'll have much cleaner packages, and
things like patch-tracker.debian.org will be able to work better with
them.

regards,
iustin
Bug#489842: ITP: protobuf -- flexible and efficient mechanism for serializing structured data
Package: wnpp
Severity: wishlist
Owner: Iustin Pop <[EMAIL PROTECTED]>

* Package name    : protobuf
  Version         : 2.0.0~beta
  Upstream Author : Google Inc
* URL             : http://code.google.com/p/protobuf/
* License         : Apache 2.0
  Programming Lang: C++/Python/Java
  Description     : flexible and efficient mechanism for serializing structured data

Protocol buffers are a flexible, efficient, automated mechanism for
serializing structured data - similar to XML, but smaller, faster, and
simpler. You define how you want your data to be structured once, then
you can use special generated source code to easily write and read your
structured data to and from a variety of data streams, using a variety
of languages. You can even update your data structure without breaking
deployed programs that are compiled against the "old" format.
.
Google uses Protocol Buffers for almost all of its internal RPC
protocols and file formats.

-- System Information:
Debian Release: lenny/sid
  APT prefers unstable
  APT policy: (500, 'unstable'), (500, 'testing'), (500, 'stable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 2.6.25.8-teal (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
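The "define your data structure once" workflow in the description can
be made concrete with a small, hypothetical .proto definition (message
and field names invented for illustration; proto2 syntax, matching the
2.0.x era):

```
// example.proto -- compiled by protoc into C++, Java, or Python
// serialization code.
message Person {
  required string name  = 1;
  required int32  id    = 2;
  optional string email = 3;
}

// New optional fields can be added later without breaking programs
// compiled against the "old" format, as the description notes.
```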
Bug#440359: ITP: ganeti -- cluster-based virtualization management software
Package: wnpp
Severity: wishlist
Owner: Iustin Pop <[EMAIL PROTECTED]>

* Package name    : ganeti
  Version         : 1.2~b1
  Upstream Author : Google Inc. <[EMAIL PROTECTED]>
* URL             : http://code.google.com/p/ganeti/
* License         : GPL
  Programming Lang: Python
  Description     : cluster-based virtualization management software

Ganeti is a virtual server management software tool built on top of the
Xen virtual machine monitor and other Open Source software. Once
installed, the tool will take over the management of the virtual
instances (Xen DomU), e.g. disk creation and management, operating
system installation for these instances (in co-operation with
OS-specific install scripts), and startup, shutdown, and failover
between physical systems. It has been designed to facilitate cluster
management of virtual servers and to provide fast and simple recovery
after physical failures, using commodity hardware.

-- System Information:
Debian Release: lenny/sid
  APT prefers unstable
  APT policy: (500, 'unstable'), (500, 'stable'), (1, 'experimental')
Architecture: amd64 (x86_64)
Kernel: Linux 2.6.22.5-teal (SMP w/2 CPU cores)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash
Re: where is /etc/hosts supposed to come from?
On Thu, Dec 31, 2009 at 04:12:30PM +0100, Jeremiah Foster wrote:
> On Dec 31, 2009, at 15:04, Vincent Lefevre wrote:
> > On 2009-12-31 14:10:46 +0100, Mike Hommey wrote:
> >> On Thu, Dec 31, 2009 at 02:02:36PM +0100, Vincent Lefevre wrote:
> >>> POSIX says:
>
> Have we resolved where the canonical hostname is going to reside or
> does reside?
>
> Debian's policy manual[0] states that `hostname --fqdn` is where this
> information should be gathered from. Is that the canonical method?

This is a personal opinion, but having the canonical name rely on
“hostname --fqdn” is not a favourite of mine: hostname needs a working,
functioning resolver (e.g. it talks to your nameservers if /etc/hosts
doesn't already contain your hostname/IP). IMHVO, this is a brittle
setup, and the FQDN should be available directly on the host, without
external dependencies.

Which is why I personally think the machine name (the one that the
kernel knows) should hold the canonical name.

Just my opinion, no need to flame - I know I lost this argument many
times already.

regards,
iustin
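The distinction being argued here can be illustrated with Python's
standard library: one call returns the kernel's node name with no
lookups at all, while the other goes through the resolver (and so can
block or give different answers depending on /etc/hosts and DNS). A
small sketch:

```python
import socket

# The kernel-level node name: no resolver involved, always available.
# This is what `hostname` (without --fqdn) prints.
kernel_name = socket.gethostname()

# The FQDN: may consult /etc/hosts and/or your nameservers, so it
# depends on name-service configuration being correct and reachable.
fqdn = socket.getfqdn()

print(kernel_name, fqdn)
```

On a machine with a broken resolver, the first call still works
instantly; the second is exactly the brittleness described above.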
Re: md5sums files
On Wed, Mar 03, 2010 at 08:34:27AM +0000, Philipp Kern wrote:
> On 2010-03-03, Neil Williams wrote:
> > Changing to SHA won't help. I'm for ditching all md5sums from
> > packages. It's not a lot of disc space gained, but it does give a
> > false sense of security, or 'insurance' if you want to avoid the
> > more formal meaning of 'security'.
>
> Please don't. It's not about security. It's about being able to
> detect corruption. Also it is very helpful when recovering from ext4
> root FS corruption after a sudden power loss. Sure, you cannot
> guarantee that the md5 store isn't corrupted too, but if it isn't
> then debsums is helpful.

Very much agreed. Please do not remove the md5sums - even better, I'm
all for requiring md5sums (the cost of doing so is, I think,
insignificant) because they are very helpful for the above purpose.

iustin
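The corruption-detection use case is easy to demonstrate: record a
digest per file, and a debsums-style walk later flags any file whose
contents changed. A minimal sketch of the idea (not the actual debsums
implementation):

```python
import hashlib
import os
import tempfile

def md5_of(path, bufsize=65536):
    """Compute a file's md5 in streamed chunks, like debsums does."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def check(md5sums):
    """md5sums: {path: recorded_digest}; return the corrupted paths."""
    return [p for p, digest in md5sums.items() if md5_of(p) != digest]

# Demo: record a digest, corrupt the file, detect the mismatch.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "payload")
    with open(path, "wb") as f:
        f.write(b"original contents\n")
    recorded = {path: md5_of(path)}
    assert check(recorded) == []        # intact file passes
    with open(path, "wb") as f:
        f.write(b"bit-rotted contents\n")
    assert check(recorded) == [path]    # corruption is detected
```

As the thread says, this guards against accidental corruption, not
against a malicious attacker (who could rewrite the digest store too).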
Re: debian/watch problem due to http://code.google.com download page's link format change
On Sat, May 15, 2010 at 06:48:41PM +0900, Osamu Aoki wrote:
> Hi,
>
> On Sat, May 15, 2010 at 05:16:08PM +0800, Asias He wrote:
> > Hi, All
> >
> > Recently, code.google.com changed the download page link format.
> > As a result, the old debian/watch file in packages whose upstream
> > source code is hosted on code.google.com no longer works.
> >
> > Take the ibus project for example:
> > $ cat ibus/debian/watch
> > version=3
> > http://code.google.com/p/ibus/downloads/list \
> >   http://ibus.googlecode.com/files/ibus-([0-9].*)\.tar\.gz
> >
> > The new download page contains a "detail" link; one has to follow
> > that "detail" link in order to find the real download url,
> > something like this:
> > http://code.google.com/p/ibus/downloads/list
> > http://code.google.com/p/ibus/downloads/detail?name=ibus-1.3.3.tar.gz&can=2&q=
> > http://ibus.googlecode.com/files/ibus-1.3.3.tar.gz
> >
> > I believe this problem affects all the packages whose upstream
> > source code is hosted on code.google.com.
> >
> > Is there an easy way to deal with this?
>
> I guess we need to generalize the situation on sf.net to other
> popular download sites. This data is used mainly by the uscan
> program.
>
> When the watch file has a URL matching the Perl regexp
> "^http://sf\.net/", the uscan program substitutes it with
> "http://qa.debian.org/watch/sf.php/" and then applies this rule. The
> URL redirector service at http://qa.debian.org/ is designed to offer
> a stable redirect service to the desired file for a watch file having
> "http://sf.net/project/tar-name-(.+)\.tar\.gz". This solves issues
> related to the periodically changing URLs there.
>
> So if someone implements a similar URL redirector service, we can
> have stable links.
>
> Until then, we need to keep up each watch file manually.

I just checked and there's no open bug against code.google.com to
restore the old, plain style links.
Note they do have a direct download link, but now it's obfuscated
behind a javascript "onclick" method.

Wouldn't it be better to ask and see if they're willing to either
restore the old link or remove the javascript bit, instead of going
ahead with a workaround?

regards,
iustin
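The reason the javascript "onclick" obfuscation breaks uscan is that
uscan simply matches hrefs on the page against the watch file's
regexp; a javascript handler produces no href to match. A toy
illustration of that matching step (the HTML fragment and pattern are
invented for this sketch, not the real download page):

```python
import re

# A made-up fragment of a downloads page: one plain href that a
# regexp matcher can see, and one javascript-only pseudo-link.
page = '''
<a href="//ibus.googlecode.com/files/ibus-1.3.3.tar.gz">ibus-1.3.3.tar.gz</a>
<div onclick="startDownload()">ibus-1.3.2.tar.gz</div>
'''

# A watch-file-style pattern, capturing the upstream version.
pattern = re.compile(r'href="[^"]*/ibus-([0-9][^"]*)\.tar\.gz"')

versions = pattern.findall(page)
print(versions)  # ['1.3.3'] -- the onclick pseudo-link is invisible
```

This is why the eventual fix was simply to put plain href links back
on the page while keeping the button-like styling.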
Re: Recent changes in dpkg
On Wed, May 26, 2010 at 10:43:36PM +0200, Bernd Zeimetz wrote:
> On 05/24/2010 11:05 AM, Raphael Hertzog wrote:
> > * The plan concerning dpkg-source and the default source format has
> >   been clarified. In the long term, the default format will
> >   disappear and debian/source/format will become mandatory. The
> >   lintian tag missing-debian-source-format[2] will help us track
> >   that.
>
> Which will force developers to touch most of the packages in the
> archive just to realize this? Sorry, but that's insane. You should
> not try to force people into migrating to some new format because
> *you* think it is better. This is not a decision which should be made
> by the dpkg maintainers - instead it needs to be discussed with the
> developers and maintainers. While the new format provides some
> advantages when it comes to the handling of patches, the 1.0 format
> is still much more flexible to use - for example it does not require
> an existing tarball to build a package, which is very useful for
> testing. You know that there are a lot of arguments against the 3.0
> format out there, so please do not enforce such changes without
> discussing them first.

I think you're misreading the announcement. What will change is that
declaring the format (either 1.0 or 3.0 in whatever variant) will be
required, not migrating to the new formats.

regards,
iustin
Re: Recent changes in dpkg
On Wed, May 26, 2010 at 10:34:32PM +0100, Neil Williams wrote:
> On Wed, 26 May 2010 22:59:25 +0200 Iustin Pop wrote:
> > On Wed, May 26, 2010 at 10:43:36PM +0200, Bernd Zeimetz wrote:
> > > On 05/24/2010 11:05 AM, Raphael Hertzog wrote:
> > > > * The plan concerning dpkg-source and the default source format
> > > >   has been clarified. In the long term, the default format will
> > > >   disappear and debian/source/format will become mandatory. The
> > > >   lintian tag missing-debian-source-format[2] will help us
> > > >   track that.
> > >
> > > Which will force developers to touch most of the packages in the
> > > archive just to realize this? Sorry, but that's insane. [...]
> >
> > I think you're misreading the announcement. What will change is
> > that declaring the format (either 1.0 or 3.0 in whatever variant)
> > will be required, not migrating to the new formats.
>
> Declaring a format mandates touching every single package because the
> vast majority of packages are currently dpkg source format 1.0 ONLY
> because debian/source/format does NOT exist. […]

I was only responding to Bernd's email, which sounded like he misread
the change. Whether the actual change is good or not is another issue,
on which I disagree (but not very strongly, i.e.
I could live with it):

> I think the announcement is wrong, we cannot ever expect every single
> package to be touched for any single change. We don't even do that
> when libc changes SONAME - that only affects compiled packages; this
> theoretically affects all source packages, which means huge numbers
> of rebuilds and transitions.

Agreed.

> There is nothing wrong with a source package that glides through
> several stable releases without needing a rebuild, especially if it
> only builds an Arch:all binary package. As long as it is bug free, an
> ancient standards version alone is not sufficient reason to change
> anything in the package or make any upload just for the sake of
> making an upload.

But here I disagree. A couple of stable releases is, let's say, 4
years? In the last four years, there have been significant changes
(advancements?) in the state of Debian packaging. As such, most, if
not all, nontrivial packages would be improved if they were brought up
to date.

> debian/source/format cannot become mandatory without causing every
> single source package to be modified. For what? Just to add 6 bytes?

Mandatory? I agree it shouldn't be mandatory. I would rather propose a
'W' lintian tag, nothing more, and one which would only fire if the
last changelog date is after the date this proposal goes live.

iustin
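For reference, the declaration under discussion really is just a
one-line file; a sketch of what adding it looks like (done here in a
scratch directory rather than a real source package):

```shell
#!/bin/sh
# Declaring the source format explicitly: a one-line file under
# debian/source/. In a real package this would be committed alongside
# the rest of the packaging.
set -e
pkg=$(mktemp -d)
mkdir -p "$pkg/debian/source"
printf '3.0 (quilt)\n' > "$pkg/debian/source/format"
cat "$pkg/debian/source/format"
```

The valid values are the format names dpkg-source knows, e.g.
"1.0", "3.0 (quilt)", or "3.0 (native)".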
Re: Recent changes in dpkg
On Thu, May 27, 2010 at 07:54:03AM +0100, Neil Williams wrote:
> On Wed, 26 May 2010 23:44:52 +0200 Iustin Pop wrote:
> > > There is nothing wrong with a source package that glides through
> > > several stable releases without needing a rebuild, especially if
> > > it only builds an Arch:all binary package. As long as it is bug
> > > free, an ancient standards version alone is not sufficient reason
> > > to change anything in the package or make any upload just for the
> > > sake of making an upload.
> >
> > But here I disagree. A couple of stable releases is, let's say, 4
> > years? In the last four years, there have been significant changes
> > (advancements?) in the state of Debian packaging. As such, most, if
> > not all, nontrivial packages would be improved if they were brought
> > up to date.
>
> I can think of a few perl modules that won't need another upload
> until Perl6 is not only released but sufficiently stable that Perl5
> is to be removed. That doesn't look like it will happen within a
> couple of stable releases, if at all. (It will take us longer to
> transition from Perl5 than it did from libgtk1.2, and that took more
> than two stable releases.) Other packages affected could be data
> packages etc.

Data packages are a good point, to which I reply: how will they take
advantage of new compression formats?

> After Squeeze is released, I'll have half a dozen or more packages
> that will be the same version in oldstable through to unstable, and
> none of those currently have any bugs or lintian warnings other than
> an old/ancient standards version or similarly minor issues. None of
> those would give any benefits *to users* by being "updated" with
> respect to the packaging.

To users? Probably not. But to fellow developers? Do those packages
already have Vcs-* fields so that I can retrieve them easily with
debcheckout? Do the patches already come in DEP-3 format, so that
tracking where they originate is easy for automated tools?
I agree that we don't *have* to update the packages. All I'm saying is
that, to me, it seems the world of packaging standards is not standing
still, and not doing an update once per release seems a bit (just a
bit) strange to me. But I understand your point, and I'm not saying it
is a wrong one. I'm just trying to express why I believe doing a
rebuild per release helps more than it hurts.

regards,
iustin
FYI: code.google.com downloads & uscan - fixed
Hi all,

[resend since my original mail from Aug 26th got lost]

As promised during Debconf, I went and talked with the code.google.com
people about the changes to the download list which broke uscan. For
the record, the rationale for the changes was (quoted with permission):

> Background: We used to have the filename be a link that downloaded
> the file, and everything else on that table row took the user to the
> download detail page. Users liked having one-click to download the
> file, but a lot of users had trouble finding how to get to the
> download detail page because they were just always downloading the
> file. So, we changed the file name to go to the download detail page
> too, and we added the download button that you see now as the new
> one-click way to get the file.
>
> It doesn't have to be a button, it just needs to make the user think
> that clicking it will download the file. And, I don't want it to be
> download because that will be too wordy and use valuable horizontal
> space and be inconsistent with our other list views.

After some discussion on how the actual links should be presented so
that uscan can find them while still keeping the desired UI behaviour,
the links are now back in, and the DEHS data for projects hosted on
code.google.com should soon be fine. I have to note that the
code.google.com people were very understanding once I explained how
uscan works and what purpose it serves - so thanks!

And while now I don't have to convert my remaining packages, I will
have some uscan rules to undo…

Note: I checked my packages and the download page is already fixed. I
don't know if the changes will propagate to all projects soon or
whether it will take some time, but if it still doesn't work in a few
days, please let me know.

regards,
iustin
Re: FYI: code.google.com downloads & uscan - fixed
On Fri, Sep 03, 2010 at 09:10:02PM +0200, David Paleino wrote:
> On Fri, 3 Sep 2010 20:12:28 +0200, Iustin Pop wrote:
> > After some discussion on how the actual links should be presented
> > so that uscan can find them while still keeping the desired UI
> > behaviour, the links are now back in and the DEHS data for projects
> > hosted on code.google.com should soon be fine. I have to note that
> > the code.google.com people were very understanding once I explained
> > how uscan works and what purpose it serves - so thanks!
>
> Can we expect this to be stable in future?
> During the short lifetime of the redirector -- it started in May --
> Google changed their links once. And the redirector was born because
> they changed it before.

Just wondering, are you sure the change you refer to was not the fix
itself? The fix went live around 26th August.

> I have no problem in shutting down googlecode.debian.net -- apart
> from the fact that people using it are forced to go back to
> code.google.com -- but I'd like not to touch my debian/watch files
> every two months :)

I got a very good response from the developers once I got to them and
managed to explain what uscan is, its purpose, and what it needs to
work. Also, the href removal was unintended. So my personal feeling is
that it should be fine.

Given these, and the fact that there are enough DDs (or people involved
with Debian in other forms) who work for or have contacts at Google, I
think it's safe to say that me or someone else could keep the link with
these developers. It's also possible to use the public bug tracker, but
that will be slower, as you have seen (unfortunately).

Of course, the future is unknown, so I can't *promise* it won't change
ever again, or that we'll always be able to keep it working with uscan…

regards,
iustin
Re: FYI: code.google.com downloads & uscan - fixed
On Fri, Sep 03, 2010 at 10:02:15PM +0200, David Paleino wrote:
> On Fri, 3 Sep 2010 21:37:21 +0200, Iustin Pop wrote:
> > On Fri, Sep 03, 2010 at 09:10:02PM +0200, David Paleino wrote:
> > > On Fri, 3 Sep 2010 20:12:28 +0200, Iustin Pop wrote:
> > > > After some discussion on how the actual links should be
> > > > presented so that uscan can find them while still keeping the
> > > > desired UI behaviour, the links are now back in and the DEHS
> > > > data for projects hosted on code.google.com should soon be
> > > > fine. I have to note that the code.google.com people were very
> > > > understanding once I explained how uscan works and what purpose
> > > > it serves - so thanks!
> > >
> > > Can we expect this to be stable in future?
> > > During the short lifetime of the redirector -- it started in May
> > > -- Google changed their links once. And the redirector was born
> > > because they changed it before.
> >
> > Just wondering, are you sure the change you refer to was not the
> > fix itself? The fix went live around 26th August.
>
> Nope, the last one (which you read on debian-qa, probably) was the
> fix itself. It changed once at the beginning (when uscan first broke,
> and when I made the redirector), and then another time during the
> redirector's life.

Ah, I see. Interesting.

> > > I have no problem in shutting down googlecode.debian.net -- apart
> > > from the fact that people using it are forced to go back to
> > > code.google.com -- but I'd like not to touch my debian/watch
> > > files every two months :)
> >
> > I got a very good response from the developers once I got to them
> > and managed to explain what uscan is/its purpose, and what it needs
> > to work. Also, the href removal was unintended. So my personal
> > feeling around this is that it should be fine.
>
> ACK.
>
> I'm going to shut it down one of these days, after announcing it
> somewhere. I'm kind of busy right now :)

I think it would be best if you leave it running for a bit longer.
The problem is that many packages (I believe) have already changed their URLs to use the redirector, and shutting it down now means that they'll have to (hurry to) upload in the middle of the freeze :( So if it doesn't cost anything, please leave it up a while longer. My announcement was more along the lines "if you haven't converted, no need to convert anymore", rather than "please convert back". regards, iustin signature.asc Description: Digital signature
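For readers not familiar with uscan: what it needs from a download page is simply href links it can match against the pattern in a package's debian/watch file. A hypothetical watch file for a code.google.com project looked roughly like this (project name and exact pattern are illustrative, not taken from this thread):

```
version=3
http://code.google.com/p/PROJECT/downloads/list \
  .*/PROJECT-([\d.]+)\.tar\.gz
```

When the page stopped exposing plain href links, patterns like this silently stopped matching — which is what the redirector worked around, and what the fix discussed above restored.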
Re: Backports service becoming official
On Tue, Sep 07, 2010 at 07:46:56AM +0200, Lucas Nussbaum wrote: > Now that backports are becoming official, I think that it is the right > time to reconsider the maintenance model of backports. I would > personally prefer if we had the same rules of packages ownership as for > normal packages ("normal" backport maintainer = maintainer of the > package in unstable). > > Of course, that doesn't remove the possibility for people to upload NMU > backports when the maintainer is not responsive/interested in providing > a backport. But then the normal rules of NMUs should apply (in > particular, the NMUer must not change the Maintainer field, and should > monitor the bugs of the package). Good idea, I like it. iustin
Re: Backports service becoming official
On Tue, Sep 07, 2010 at 08:35:05PM +0200, Gerfried Fuchs wrote: > I really would like to see us trying to work together more effectively > instead of objecting to things right ahead without even knowing wether > it is such a big relevant deal to make a fuzz about. IMHO it isn't, far > from it. Well said. Especially since the people objecting to this seem (note seem) to be the people who don't actually do backports… regards, iustin (who is very happy to care about backports too)
Re: Bits from the Security Team (for those that care about bits)
On Sun, Jan 23, 2011 at 11:32:07PM +0100, Thijs Kinkhorst wrote: > Hi! > > In the weekend 14-16 January 2011, the Debian Security Team convened in > Linux Hotel, Essen. We discussed many things, a lot of security work was done > and of course the necessary socialising wasn't forgotten. We'd like to thank > the Linux Hotel for again receiving us in such a great way! […] > * README.test > > Although many packages include a test suite that is run after package build, > there are packages that do not have such a suite, or not one that can be > run as part of the build process. It was proposed to standardise on a > README.test file, analogous to README.source, describing to others than the > regular maintainer how the package's functionality can properly be tested. > This is something we would like to see discussed and implemented for the > Wheezy development cycle. This is a very good idea, but I think it could be taken two steps further. These are just some ideas I have but did not explore in depth, so take them with a grain of salt. First, tests run during a package build are good, but they do not ensure, for example, that the package as installed is working OK. I've been thinking that (also) providing tests to be run after the package is installed (and not on the build results) would be most useful in ensuring that both the build process and the packaging is correct. Second, README.test are designed for human consumption, whereas a standardisation of how to invoke the tests would allow for much more automation. E.g. piuparts would not only be able to test that the install succeeds, but the automated tests also work. Of course, these would be useful only for some classes of packages, but for those they would be of much help. I have something like this in one package of mine, and it gives me a lot of confidence while doing packaging changes. thanks, iustin signature.asc Description: Digital signature
Re: Bits from the Security Team (for those that care about bits)
On Mon, Jan 24, 2011 at 10:52:54AM +0800, Paul Wise wrote: > On Mon, Jan 24, 2011 at 7:19 AM, Iustin Pop wrote: > > > First, tests run during a package build are good, but they do not > > ensure, for example, that the package as installed is working OK. I've > > been thinking that (also) providing tests to be run after the package is > > installed (and not on the build results) would be most useful in > > ensuring that both the build process and the packaging is correct. > > Debian has definitely needed this for a long time. > > I'm thinking that these automated post-install tests are something > that all distributions could benefit from and probably we should push > them upstream. > > Automated post-install testing would be great, but it cannot apply in > all cases and should be complemented by README.test. I think both > approaches are needed. Agreed—I wasn't suggesting that README.test is not useful, just that there is potential for more in this area, at least for some class of packages. > For example: > > libwww-topica-perl: > […] > warzone2100: > […] Good examples, indeed. regards, iustin signature.asc Description: Digital signature
Re: Bits from the Security Team (for those that care about bits)
On Sun, Jan 23, 2011 at 06:45:56PM -0500, Michael Hanke wrote: > On Mon, Jan 24, 2011 at 12:19:32AM +0100, Iustin Pop wrote: > > First, tests run during a package build are good, but they do not > > ensure, for example, that the package as installed is working OK. I've > > been thinking that (also) providing tests to be run after the package is > > installed (and not on the build results) would be most useful in > > ensuring that both the build process and the packaging is correct. > > > > Second, README.test are designed for human consumption, whereas a > > standardisation of how to invoke the tests would allow for much more > > automation. E.g. piuparts would not only be able to test that the > > install succeeds, but the automated tests also work. > > Exactly. In the NeuroDebian team we started playing around with more > comprehensive testing -- both regarding single packages, but also > integration tests involving multiple packages. We started composing a > SPEC for a testing framework, but we haven't gotten very far, yet. What > we have is here > > http://neuro.debian.net/proj_debtest.html > > and here > > > http://git.debian.org/?p=pkg-exppsy/neurodebian.git;a=blob_plain;f=sandbox/proposal_regressiontestframwork.moin > > If somebody is interested in working on this topic, we'd be glad to join > forces. > > Originally, we wanted to develop the SPEC a little further, but since > the topic came up, I figured it might be better to add these pointers > now. Thanks for sharing. Your proposal seems to focus on a higher level, e.g. group-based testing, resource and scheduling, etc. IMHO what would be a sufficient first step would be much simpler: - being able to know if a package does offer build & post-install tests - how to run such tests - for post-install tests, what are the dependencies (Test-Depends? ;-) This would allow a maintainer to implement an automatic test of his packages whenever doing a new upload (which is my personal issue :). 
A framework like your proposed DebTest can then build upon such basic functionality to provide coordinated, archive-wide or package-set-wide running of tests. A few comments on your proposal: - “Metainformation: duration”: how do you standardise CPU/disk/etc. performance to get a useful metric here? - assess resources/performance: in general, across architectures/platforms and varied CPU speeds, I think it will be hard to quantify the performance and even resources needed for a test suite - some structured output: given the variety of test suites, this might be very hard to achieve; in my experience, the best that can be hoped for across heterogeneous software is a count of pass/fail, and log files should be left for human investigation in case of failures regards, iustin signature.asc Description: Digital signature
Re: autopkgtest (was Re: Bits from the Security Team (for those that care about bits))
On Mon, Jan 24, 2011 at 10:37:26AM +0100, Holger Levsen wrote: > Hi, > > On Montag, 24. Januar 2011, Iustin Pop wrote: > > Second, README.test are designed for human consumption, whereas a > > standardisation of how to invoke the tests would allow for much more > > automation. E.g. piuparts would not only be able to test that the > > install succeeds, but the automated tests also work. > > Package: autopkgtest > Description: automatic as-installed testing for Debian packages > autopkgtest runs tests on binary packages. The tests are run on the > package as installed on a testbed system (which may be found via a > virtualisation or containment system). The tests are expected to be > supplied in the corresponding Debian source package. > . > See adt-run(1) and /usr/share/doc/autopkgtest. > Use of adt-virt-xenlvm requires the autopkgtest-xenlvm package too; > Use of the pre-cooked adt-testreport-onepackage script requires curl. > > > I'm happy that you like piuparts, but it's not the best tool for > everything :-) As I said, "E.g." - for example. Thanks for mentioning autopkgtest, I didn't know about it. iustin signature.asc Description: Digital signature
Re: package testing, autopkgtest, and all that
On Thu, Jan 27, 2011 at 02:45:57PM +, Ian Jackson wrote: > Stefano Zacchiroli writes ("package testing, autopkgtest, and all that"): > > Regarding this specific point (tests run on packages as if they were > > installed), IIRC Ian Jackson worked a bit on the matter, producing some > > code (autopkgtest---as mentioned elsewhere in this thread) and a > > specification of the interaction among tests and packages. Ian: would > > you mind summarizing the status of that effort and comment on whether, > > in your opinion, people interested on this topic should continue from > > there or start over? > > Sure. autopkgtest (the codebase) isn't very big but it contains > several interlocking parts. The key parts are: > > * A specification which allows a source package to declare that it >contains tests, and how those tests need to be run. This >specification was discussed extensively on debian-devel at the >time and a copy is in the autopkgtest package, but I'll follow up >this email with a copy of it. > > * Machinery to interpret those declarations, and: > - build the package (if needed) > - install the package(s) needed for the runtime tests > - run the tests (if any) and collect the results > > * Some surrounding ad-hoc shell scripts and crontab code to: > - select a package to test > - run the test > - send the results in a fairly raw form to a webserver host > - make some notes about how the test went for the benefit of the >selection algorithm > > * A standardised interface to a virtualisation/snapshot testbed, with >three implementations: Xen VMs and LVM snapshots; chroot; or >simply running things on the actual host. > > All of this seemed to work reasonably well. The 1.2.0 in the archive > is essentially identical to my bzr head so all the autopkgtest code is > out there. Excellent. I've read your followup email and the spec seems very good (for my purposes). 
> The problems are that: > […] As it seems to me, right now this is most useful for individual maintainers to declare, and run, their own tests to ensure the built packages are fine. A good start, I'd say. > > Having a specification and some code to run the tests early on in the > > Wheezy release cycle would be amazing, as it will enable maintainers to > > add tests to their packages during the expected package updates for > > Wheezy. > > Absolutely. > > If someone would like to set up a machine running these tests and > perhaps do some of the qa.debian.org integration, I would be > delighted. I think even without a full archive integration, having this spec publicised a bit among developers would be useful; I know I'm looking forward to adapting my own packages. thanks, iustin signature.asc Description: Digital signature
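For reference, the declaration side of the spec discussed above is quite small. From memory (details may differ from the actual spec shipped in autopkgtest 1.2.0): a source package adds a debian/tests/control file naming each test and its dependencies, plus one executable per test, roughly like this, where "smoke" and "mypackage" are placeholders:

```
# debian/tests/control
Tests: smoke
Depends: @

# debian/tests/smoke (executable; non-zero exit means failure)
#!/bin/sh
set -e
mypackage --version
```

The `@` in Depends means "the binary packages built from this source", so the test runs against the installed packages rather than the build tree — exactly the post-install testing discussed earlier in the thread.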
Re: package testing, autopkgtest, and all that
On Fri, Jan 28, 2011 at 03:19:32PM +, Ian Jackson wrote: > Iustin Pop writes ("Re: package testing, autopkgtest, and all that"): > > I think even without a full archive integration, having this spec > > publicised a bit among developers would be useful; I know I'm looking > > forward to adapting my own packages. > > Perhaps I should be trying to get the spec referred to in useful > places ? My belief is that yes, that's a good idea. But I'm a bit surprised that other people didn't chime in on this thread. regards, iustin
Re: The future of m-a and dkms
On Sun, Feb 13, 2011 at 06:00:10PM -0500, Michael Gilbert wrote: > On Sun, 13 Feb 2011 23:52:22 +0100 Christoph Anton Mitterer wrote: > > > On Sun, 2011-02-13 at 23:21 +0100, Patrick Matthäi wrote: > > > since we have got a stable release with dkms now, I am asking myself, if > > > it is still necessary to support module-assistant. > > > dkms is IMHO the better system and maintaining two different systems for > > > kernel modules is a bit bloated. > > With dkms, can you also create packages of the modules? > > > > At least I found it always very useful, to create modules with m-a, or > > via make-kpkg, and provide them via local archives to all my Debian > > boxes. Can save quite some compilation time, and one doesn't need kernel > > header + build packages etc. on all nodes. > > Yes, there is the "mkdeb" command-line option, but I suppose that > doesn't get as much testing as it should. With my sysadmin hat on, compilation on servers is a *very* big no-no, so if mkdeb doesn't work or if it doesn't provide nice modules, then m-a should stay in. I know that right now, when backporting stuff at work, we have to drop the DKMS stuff and write our own packaging since DKMS doesn't play nicely with multiple kernel versions, embedding the kernel *and* package version in the final module version, etc. Things might have changed recently, but last time I looked DKMS was only good for desktops, and not as a reliable package-building method. Of course, I might have wrong information, so clarifications welcome. regards, iustin signature.asc Description: Digital signature
Re: The future of m-a and dkms
On Mon, Feb 14, 2011 at 02:43:23PM +, Ben Hutchings wrote: > On Mon, Feb 14, 2011 at 12:19:37PM +0100, Klaus Ethgen wrote: > > Hello, > > > > Am So den 13. Feb 2011 um 23:21 schrieb Patrick Matthäi: > > > since we have got a stable release with dkms now, I am asking myself, if > > > it is still necessary to support module-assistant. > > > dkms is IMHO the better system and maintaining two different systems for > > > kernel modules is a bit bloated. > > > > Well, dkms might be a good system for workstations, but on servers where > > you want to have reliable systems and security first you do not want > > dkms ever. > > DKMS was developed by Dell originally to support servers (as they > did not sell any other systems running Linux until recently). That doesn't mean it's a good solution for servers, just that it can be used on servers too. With bad results, see below. > > With m-a it was and is possible to create nice debian packages for > > custom modules which can be installed on all systems getting all the > > same modules. With dkms that is not possible. More over you need to have > > a full gcc suite on all servers where you have custom modules. That is > > not acceptable. > [...] > > This is not true. You can use 'dkms mkdeb' to build module packages > elsewhere. As others have said in this thread (and from my experience too), you can't use dkms mkdeb to build and install separate packages for two kernel versions but same module. That is because it uses only the package name & version in the module paths, hence it doesn't support nice upgrades. iustin signature.asc Description: Digital signature
Small transition: protobuf 2.4.0
Hi all, I've uploaded to experimental the new protobuf version (2.4.0a) and this brings as always a SONAME increase. It also has a new experimental backend (C++-based) for the Python language bindings, hence the heads-up. Here is the list of packages that are affected by this (assuming my grep-dctrl foo is good): Build-depends: chromium-browser drizzle hbase mozc mumble protobuf-c sawzall Depends: python-protobuf.socketrpc Maintainers/uploaders of the respective packages are CCed. I would appreciate feedback on whether the new version is good for your packages, and/or if you need time for this transition; assuming nothing breaks, I'll probably upload this to unstable in a couple of weeks. thanks, iustin
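For the curious, the reverse-dependency scan alluded to above ("my grep-dctrl foo") boils down to matching a package name against the Build-Depends fields of a Sources index. A minimal sketch of the same idea in Python — the sample data below is made up; the real scan would use grep-dctrl (or this script) against the full archive index:

```python
# Inlined, made-up excerpt of a Debian Sources index (RFC-822-style
# paragraphs separated by blank lines).
SAMPLE_SOURCES = """\
Package: mumble
Build-Depends: debhelper, libprotobuf-dev, libssl-dev

Package: hello
Build-Depends: debhelper
"""

def rdeps(sources_text, needle):
    """Return names of source packages whose Build-Depends mention `needle`."""
    hits = []
    for paragraph in sources_text.split("\n\n"):
        # Parse "Field: value" lines into a dict (simple fields only,
        # no continuation lines -- enough for this sketch).
        fields = dict(
            line.split(": ", 1)
            for line in paragraph.splitlines()
            if ": " in line
        )
        if needle in fields.get("Build-Depends", ""):
            hits.append(fields["Package"])
    return hits

print(rdeps(SAMPLE_SOURCES, "protobuf"))  # → ['mumble']
```

This deliberately ignores multi-line fields and version constraints, which grep-dctrl handles properly.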
Re: Small transition: protobuf 2.4.0
On Sun, Feb 20, 2011 at 08:39:38PM +0100, Jan Dittberner wrote: > On Sun, Feb 20, 2011 at 07:01:55PM +0100, Iustin Pop wrote: > > Depends: > > python-protobuf.socketrpc > > I maintain this in DPMT. > > The python-protobuf.socketrpc module works fine with python-protobuf from > experimental, the whole test suite works without any issues. Thanks for the confirmation. For Python especially, please read the newly added README.Debian and check to see if your test suite works both with the C++ backend and the pure Python one; the upstream release notes mention that the C++ one will/might become the default in the future. regards, iustin
Re: Bug#519339: ITP: tmux -- an alternative to screen, licensed under 3-BSD
On Thu, Mar 12, 2009 at 11:59:00PM +0100, Carsten Hey wrote: > On Thu, Mar 12, 2009 at 11:17:02PM +0100, Guus Sliepen wrote: > > On Thu, Mar 12, 2009 at 10:37:41PM +0100, Karl Ferdinand Ebert wrote: > > > - a clearly-defined client-server model: windows are independent > > > entities which may be attached simultaneously to multiple sessions > > > and viewed from multiple clients (terminals), as well as moved > > > freely between sessions within the same tmux server; > > > > I do not really see anything here that screen can't do... > > GNU screen can't move one window from one session to another or attach > one window to two session. Maybe not, but since I can view a window with two clients anyway, what advantage does "attaching one window to two sessions" bring? What is the intended use-case? regards, iustin -- To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Re: Bug#522454: ITP: protobuf -- Protocol Buffers are a way of encoding structured data in an efficient yet extensible format.
On Fri, Apr 03, 2009 at 11:25:22PM +0200, Sune Vuorela wrote: > On Friday 03 April 2009 23:02:28 Ehren Kret wrote: > > Package: wnpp > > Severity: wishlist > > Owner: Ehren Kret > > > > > > * Package name: protobuf > > Version : 2.0.3 > > Upstream Author : Kenton Varda , et al. > > * URL : http://code.google.com/p/protobuf/ > > http://packages.qa.debian.org/p/protobuf.html ? > > Though it looks like it could kind of need a hand. The maintainer (i.e. me) is just a little (or more) swamped with other stuff and was waiting for the new python policy changes to settle down before packaging the new upstream version. thanks, iustin
Re: lilo about to be dropped?
On Mon, Apr 06, 2009 at 11:42:42AM -0500, William Pitcock wrote: > On Mon, 2009-04-06 at 18:19 +0200, Vincent Zweije wrote: > > On Mon, Apr 06, 2009 at 06:06:38PM +0200, Mike Hommey wrote: > > > > || On Mon, Apr 06, 2009 at 08:02:04PM +0400, Dmitry E. Oboukhov > > wrote: > > > > || > I use lilo, I like lilo. > > || > I don't like grub because it has unlogically config, unlogically > > || > behavior, strange reconfig-system. I don't like the programs with > > || > perverse intellect. Grub is not unixway. > > || > > || Which is more perverse to read a kernel? > > || - reading actual files from actual filesystems > > || - reading hardcoded blocks on the device > > > > I think this question should be: > > > > Which is more perverse to read without a kernel? > > > > The answer could still fall either way. > > No, the answer is always the second one. Err, why? I've seen grub failing more often, and heard way more reports of this, than of lilo. Please explain why you say so. The grub installer also used to read the blockdevice while the filesystem was mounted, which is never the right answer. It has always seemed hackish to me, duplicating fs functionality (and not always correctly, e.g. related to journal replaying on ext3/xfs). A simple block list is just that. iustin
Re: lilo about to be dropped?
On Tue, Apr 07, 2009 at 07:36:40AM +0200, Mike Hommey wrote: > On Mon, Apr 06, 2009 at 11:10:25PM +0200, Iustin Pop wrote: > > On Mon, Apr 06, 2009 at 11:42:42AM -0500, William Pitcock wrote: > > > On Mon, 2009-04-06 at 18:19 +0200, Vincent Zweije wrote: > > > > On Mon, Apr 06, 2009 at 06:06:38PM +0200, Mike Hommey wrote: > > > > > > > > || On Mon, Apr 06, 2009 at 08:02:04PM +0400, Dmitry E. Oboukhov > > > > wrote: > > > > > > > > || > I use lilo, I like lilo. > > > > || > I don't like grub because it has unlogically config, unlogically > > > > || > behavior, strange reconfig-system. I don't like the programs with > > > > || > perverse intellect. Grub is not unixway. > > > > || > > > > || Which is more perverse to read a kernel? > > > > || - reading actual files from actual filesystems > > > > || - reading hardcoded blocks on the device > > > > > > > > I think this question should be: > > > > > > > > Which is more perverse to read without a kernel? > > > > > > > > The answer could still fall either way. > > > > > > No, the answer is always the second one. > > > > Err, why? I've seen grub failing more often, and heard way more report > > of this, than of lilo. Please explain why you say so. > > > > The grub installer also used to read the blockdevice while the > > filesystem was mounted, which is never the right answer. It has always > > seemed hackish to me, duplicating fs functionality (and not always > > correctly, e.g. related to journal replaying on ext3/xfs). > > > > A simple block list is just that. > > Run update-initramfs -u without running lilo. Oh, you boot on the old > initramfs. Now remove the old initramfs and put some other files in > /boot. Then you're likely to not be able to boot at all. That sure is > better. Are you complaining that lilo allows one to shot oneself in the foot? Because that how it looks like. My point is that in controlled environments, lilo looks to be more stable. 
Not for desktop usage, not for update-initramfs usage without knowing what needs to be done. Again, my point is (like the grand-grand-parent), that the answer differs by application. grub is not always better. regards, iustin
Re: Yes, we have bugs
On Wed, Apr 15, 2009 at 09:57:25PM +0200, Tollef Fog Heen wrote: > ]] Luca Niccoli > > | But what if I don't need hotplugging? Why should I bear hal flaws if I > | don't need its features? > > A machine without USB or PCI is not a particularly common sight those > days. Heck, even machines without SATA are becoming uncommon. My desktop machine has USB, PCI, SATA, eSATA, PCIe and Firewire, and it runs happily, supports hotplugging devices, *without* hal. hal, AFAIK, is useful for users who don't want to customize systems; once you start customizing, hal and similar tools get in the way more than they help. Just my opinion. Everyone is of course entitled to disagree. regards, iustin
Re: deprecating /usr as a standalone filesystem?
On Tue, May 05, 2009 at 05:36:02PM +0200, Marco d'Itri wrote: > I have been told by upstream maintainers of one of my packages and by > prominent developers of other distributions that supporting a standalone > /usr is too much work and no other distribution worth mentioning does it > (not Ubuntu, not Fedora, not SuSE). > > I know that Debian supports this, but I also know that maintaning > forever large changes to packages for no real gain sucks. > > So, does anybody still see reasons to continue supporting a standalone > /usr? > If you do, please provide a detailed real-world use case. > A partial list of invalid reasons is: > - "I heard that this was popular in 1998" > - "it's a longstanding tradition to support this" > - "it's really useful on my 386 SX with a 40 MB hard disk" Scenario A, desktop - / on non-LVM, fixed size, as recovery from a broken LVM setup is way harder if / is on LVM - /usr on LVM, as it can grow significantly, and having it on LVM is much more flexible Scenario B, laptop/netbook - / non-encrypted, small, asks for passphrase to boot - everything else on dm-crypt+lvm This seems a very weird proposition to me. Separate /usr works, and is more flexible: needed space for / is < 500MB, needed space for /usr is > 4GB. regards, iustin
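As a concrete illustration of Scenario A, the corresponding /etc/fstab would look roughly like this (device and volume-group names are purely illustrative):

```
# / small, on a plain partition, so rescue does not depend on LVM working
/dev/sda1            /     ext3  defaults  0  1
# /usr on LVM, so it can be grown (lvextend + resize2fs) when needed
/dev/mapper/vg0-usr  /usr  ext3  defaults  0  2
```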
Re: deprecating /usr as a standalone filesystem?
On Wed, May 06, 2009 at 02:56:20PM +0200, Stefano Zacchiroli wrote: > In particular, from the replies to my question the picture I get is > that everybody is using ad hoc solutions to implement what some people > are pretending to be properly supported by Debian. I found it not > defendable, maybe it's just me, maybe it's just bad marketing. > > Of the two one: > > - We decide that mounting /usr remotely is a Debian goal. > > If we do so, the mechanisms to make it work should not be as ad hoc > as this thread as hinted. We should provide a package explicitly > made to make this workflow tenable and point our users to it. > > - We decide that if you want to mount /usr remotely you are on your > own. > > If we do so, we should stop using "mount /usr remotely" as an > argument for keeping /usr as a single filesystem. What about the (many) arguments made here about the *other* reasons to have /usr a separate filesystem? regards, iustin -- To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Re: RFC: DEP-14: Recommended layout for Git packaging repositories
On Tue, Nov 11, 2014 at 10:26:24PM +0100, Raphael Hertzog wrote: > Hello, > > following the initial discussion we had in August > (https://lists.debian.org/debian-devel/2014/08/thrd2.html#00499), I have > written a first draft of the Debian Enhancement Proposal that I suggested. > It's now online at http://dep.debian.net/deps/dep14 and also attached > below so that you can easily reply and comment. > > I have left one question where I have had conflicting feedback > and I'm not sure what's best. Thus I will welcome a larger set of > opinions on this specific question (search for "QUESTION" in the > text). […] > Packaging branches and tags > === > > Packaging branches should be named according to the codename of the > target distribution. In the case of Debian, that means for example > `debian/sid`, `debian/jessie`, `debian/experimental`, > `debian/wheezy`, `debian/wheezy-backports`, etc. We specifically avoid > "suite" names because those tend to evolve over time ("stable" becomes > "oldstable" and so on). > > The Git repository listed in debian/control's `Vcs-Git` field should > usually have its HEAD point to the branch corresponding to the > distribution where new upstream versions are usually sent. For Debian, > it will usually be `debian/sid` (or sometimes `debian/experimental`). I find this paragraph confusing. With gbp, this is where new Debian developments are made, and new upstream versions are sent to upstream/xxx. Or do you mean something else here? > QUESTION: some people have argued to use debian/master as the latest > packaging targets sometimes sid and sometimes experimental. Should we > standardize on this? Or should we explicitly allow this as an alternative? Interesting. Assuming a normal Debian package that has just a few backports (as opposed to every sid release being backported), and which imports only upstream tarballs/snapshots (not the whole history), I expect that a high proportion of the commits would happen on this branch. 
In which case, why not make it 'master', without debian/ ? Is it (only) in order to cleanly support multiple vendors? thanks, iustin
Re: RFC: DEP-14: Recommended layout for Git packaging repositories
On Wed, Nov 12, 2014 at 09:21:56AM +0100, Raphael Hertzog wrote: > Hi, > > On Tue, 11 Nov 2014, Iustin Pop wrote: > > > Packaging branches and tags > > > === > > > > > > Packaging branches should be named according to the codename of the > > > target distribution. In the case of Debian, that means for example > > > `debian/sid`, `debian/jessie`, `debian/experimental`, > > > `debian/wheezy`, `debian/wheezy-backports`, etc. We specifically avoid > > > "suite" names because those tend to evolve over time ("stable" becomes > > > "oldstable" and so on). > > > > > > The Git repository listed in debian/control's `Vcs-Git` field should > > > usually have its HEAD point to the branch corresponding to the > > > distribution where new upstream versions are usually sent. For Debian, > > > it will usually be `debian/sid` (or sometimes `debian/experimental`). > > > > I find this paragraph confusing. With gbp, this is where new Debian > > developments are made, and new upstream versions are sent to > > upstream/xxx. Or do you mean something else here? > > Is it clearer if I rewrite it this way ? > > « The Git repository listed in debian/control's `Vcs-Git` field should > usually have its HEAD point to the branch where new upstream versions > are being packaged. For Debian, it will usually be `debian/sid` (or > sometimes `debian/experimental`) » Yes, (to me) that is much more clear. > > Interesting. Assuming a normal Debian package that has just a few > > backports (as opposed to every sid release being backported), and which > > imports only upstream tarballs/snapshots (not the whole history), I > > expect that a high proportion of the commits would happen on this > > branch. In which case, why not make it 'master', without debian/ ? Is it > > (only) in order to cleanly support multiple vendors? > > Henrique answered to that. The non-prefixed namespace is dedicated > to upstream development. Ack. Thank you, iustin signature.asc Description: Digital signature
Re: Moving /tmp to tmpfs is fine
On Fri, May 25, 2012 at 08:14:10PM +0300, Serge wrote: > 2012/5/25 Neil Williams wrote: > > Different hardware -> different software selection. > > I don't understand your point. I could understand it if we were choosing > among benefits that most users get from /tmp being on disk and /tmp being > on tmpfs. But there're NO benefits in having /tmp on tmpfs. It works (not > works better, just works somehow) only on systems with a lot of RAM. This is plain wrong. NO benefits for tmpfs? "just works somehow"? Whatever other arguments you had, the statement above tells me you only look at _your_ use case and dismiss all others, or that you don't understand the different behaviours of fsync() (with enough memory, that is) on tmpfs, HDDs and SSDs. And no, "I really can't think of any popular application" is not a valid discussion point. iustin, happily using /tmp on tmpfs for many, many years, and configuring it as such on all Debian machines he installs (of various roles). Archive: http://lists.debian.org/20120525202638.ga19...@teal.hq.k1024.org
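To make the fsync() point above concrete, here is a quick sketch (my own illustration, not a rigorous benchmark) that times repeated write-plus-fsync cycles; this is the pattern where tmpfs and rotating disks differ most, since tmpfs returns from fsync almost immediately while a HDD has to wait for the platter:

```python
import os
import tempfile
import time

def fsync_latency(directory, writes=50, size=4096):
    """Total time for `writes` write+fsync cycles of `size` bytes in `directory`."""
    payload = b"x" * size
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        start = time.monotonic()
        for _ in range(writes):
            os.write(fd, payload)
            os.fsync(fd)  # near-instant on tmpfs; each call blocks on I/O on a HDD
        return time.monotonic() - start
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    # Compare e.g. /tmp (when on tmpfs) against a directory on a rotating disk.
    print("50 write+fsync cycles in /tmp: %.4fs" % fsync_latency("/tmp"))
```

Run it once with /tmp on tmpfs and once pointed at a HDD-backed directory; the gap is typically orders of magnitude for fsync-heavy workloads, which is the benefit the quoted benchmarks missed.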
Re: Moving /tmp to tmpfs is fine
On Sun, May 27, 2012 at 05:39:21AM +0300, Serge wrote: > 2012/5/25 Iustin Pop wrote: > > > And no, "I really can't think of any popular application" is not a valid > > discussion point. > > But there're already popular applications and usecases that break because > of that. It can render the system unstable because of heavy swap usage. > So there must be some strong point to still use it despite those problems. > There must be some very popular application, that do not break because > of that feature and even becomes better. There's a difference between "tmpfs is bad" and "the defaults for tmpfs are bad". > Otherwise, if this feature causes problems to some applications and no > benefits to others, what's the point in using it? There are benefits, but your broken benchmarks don't show it. > > This is plain wrong. NO benefits for tmpfs? "just works somehow"? > > Ok, I must be missing some obvious benefit. Please, help me and name it. > But real one not theoretical. Some real (and popular, since we're talking > about defaults) application that becomes faster (or better in any other > way) because of /tmp being on tmpfs. All the tests showed tmpfs being no > better than ext3 so far. Your tests are wrong, as Adam Borowski very nicely explained. > > you only look at _your_ use case and dismiss all others, or that you > > don't understand the different behaviours of fsync() (with enough memory, > > that is) on tmpfs, HDDs and SSDs. > > I don't dismiss them. But we talk about *defaults*. And I don't know > any real applications, heavily fsync()ing files in /tmp, that people are > expected to use by default. Can you name some? Which people? You can't overgeneralise. I agree that the default sizes of tmpfs are likely wrong. But that doesn't mean, as you claim, that tmpfs itself is wrong. > > iustin, happily using /tmp on tmpfs since many, many years ago > > Heh... A lot of people use it. 
I guess most of them have seen "/tmp", then > they were seeing "tmpfs", and decided that "tmpfs is the fs for /tmp", it > seemed natural to them. They never really thought about whether it's a good or > bad idea, or that there may be some better ideas. It was just natural to > use it. I appreciate this attack. It helps your point very much to paint people who argue for tmpfs as clueless. > And when I say, "hey, that's a bad idea", I hurt them (I'm sorry for that). > They start arguing that "it's not as bad as you say, look, not everything > is broken, many programs still work". They can't say why it's better than > using disk because they never thought about that. It was just "natural"... Serge, I very much appreciate the fact that you're trying to make a better experience for Debian users. But I don't appreciate the fact that, in your overzealous attitude, you: - come up with faulty benchmarks, which show that you misunderstand what the bottlenecks are - make assumptions that people are using tmpfs because they are ignorant - claim that people are using tmpfs only because they have overpowered hardware - etc. Honestly, other people in this thread have made the point against tmpfs much better than you; I would suggest you tone down your position a bit, and stop assuming ignorance on other people's part. I will stop replying to this thread, because I don't have much to add; there are pros and cons to both solutions, but personally I'm still surprised that people don't see the advantage of tmpfs on machines that are not short on memory. regards, iustin -- To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/20120527132036.ga19...@teal.hq.k1024.org
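The fsync() behaviour difference argued about in this thread is easy to demonstrate with a small sketch. The function below (Python purely for illustration; the directory is a parameter, so it can be pointed at a tmpfs-backed /tmp and at a disk-backed directory for comparison) times a loop of small synchronous writes — on tmpfs each fsync() is nearly free, while on a rotating disk each one waits for the data to reach the platter:

```python
import os
import tempfile
import time


def timed_fsync_writes(directory, iterations=200):
    """Write and fsync a small block `iterations` times; return seconds elapsed.

    On tmpfs the fsync() calls are nearly free; on a HDD each one is a
    full synchronous flush, so the elapsed time differs dramatically.
    """
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        start = time.monotonic()
        for _ in range(iterations):
            os.write(fd, b"x" * 4096)
            os.fsync(fd)
        return time.monotonic() - start
    finally:
        os.close(fd)
        os.unlink(path)


if __name__ == "__main__":
    print("/tmp:", timed_fsync_writes("/tmp"))
```

Running it against /tmp on tmpfs versus a directory on a spinning disk makes the "different behaviours of fsync()" point concrete, without relying on any application-level benchmark.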
Trivia: very old files on incoming.debian.org
Hi all, Not sure where else to send this… but while checking on a small upload I did, I saw some very old files on incoming. If you check sorted by date, http://incoming.debian.org/?C=M;O=A, you'll see some very old things (1997, 2003, 2007, etc…). Their size is not big, but it seems… unclean not to remove failed uploads (which I presume they are) after a while. regards, iustin signature.asc Description: Digital signature
Re: Trivia: very old files on incoming.debian.org
On Tue, Nov 08, 2011 at 09:50:27PM +, Adam D. Barratt wrote: > On Wed, 2011-11-09 at 06:34 +0900, Iustin Pop wrote: > > Not sure where else to send this… but while checking for a small upload > > I did I saw some very old files on incoming. If you check sorted by > > date, http://incoming.debian.org/?C=M;O=A, you'll see some very old > > things (1997, 2003, 2007, etc…). > > > > Their size is not big, but seems… unclean to not remove failed uploads > > (which I presume they are) after a while. > > They're not. They're part of (or at least associated with) very recent > uploads, and their being there is a Good Thing[tm], as it means > incoming.d.o contains source for the binaries provided there. > > The 1997 file, for instance, is xloadimage_4.1.orig.tar.gz, sitting > alongside xloadimage_4.1-16.3_*.deb for several architectures. Aaaah, I see. Thanks for the explanation! I should have realised this is the case from the fact that only orig.tar.gzs have such old dates. (which also means: we can still build upstream software from more than one decade ago? nice! even if patched…) > [Also note that the public HTTP-exported view of incoming.d.o has been > little more than a link tree for some time now (since the introduction > of "install-direct-from-unchecked-to-projectb" iirc) rather than the > "accepted files not yet in the archive" it once was.] Ah, I'm not familiar with that change, so thanks again for the information. iustin signature.asc Description: Digital signature
Package mailing lists (was: bits from the DPL for September 2011)
On Sun, Oct 09, 2011 at 03:48:35PM +0200, Stefano Zacchiroli wrote: > - I've made the "private email aliases considered harmful" point [10], > in a somehow unrelated thread. I ask you to watch out for interactions > in Debian that could happen only through private email addresses. > There are some cases where they are warranted (e.g. security or > privacy concerns), but having regular activities of a team going > through private email aliases harms us in so many ways. Please point > me to project areas that could benefit from improvements on this > front, ... unless you can just go ahead and fix the issue! Sorry for reviving an old email. To what extent do you think this should apply - even at the individual package level? I ask this because of the following: recently I had a 1-1 discussion with a co-maintainer of one of my packages, which went between our personal emails. I quite disliked this (since it will be buried in our mailboxes), but email conversations seem simpler than going through the BTS for all discussions. On the other hand, http://wiki.debian.org/Alioth/PackagingProject discourages requesting Alioth projects for smaller packages, so in that sense it encourages people to contact the maintainers directly via their emails, instead of using archived, indexable lists. Could/should Debian make it easier for each package to have its own mailing list (i.e. making it easier to have "1-person team maintenance")? Or is the BTS enough? (I don't think so, since it doesn't have a simple canonical entry point for all packages.) regards, iustin signature.asc Description: Digital signature
Re: from / to /usr/: a summary
On Thu, Dec 08, 2011 at 08:16:57PM +0100, Marco d'Itri wrote: > On Dec 07, Stephan Seitz wrote: > > > If this is the future way and the way the developers want to go, then > > the way will succeed in time, but as Goswin said, it will take time. > > > > The admins who think the new way is bad will not change their > > systems. New admins may think otherwise, and if the old server will > > be replaced, they change the system to the new way. > I do not think you understand the issue well: we cannot accommodate > everybody, this is not just a matter of local policy. > Two issues are being discussed here: > - mounting /usr in the initramfs > - symlinking /bin, /sbin and /lib to their /usr counterparts[1] > > The first feature sooner or later will be needed to support some major > software: this is not just about udev, it's going to be required soon by > major desktop/gnome components. Please excuse me if I misunderstand things, but there was no information about this in the previous thread: why would gnome _ever_ care when /usr is mounted? This seems like a violation of proper layering, if the desktop software cares about such minute details of the boot sequence. Am I just misunderstanding things? (I hope so!) iustin signature.asc Description: Digital signature
Re: from / to /usr/: a summary
On Sun, Dec 25, 2011 at 12:08:57PM +, Philipp Kern wrote: > On 2011-12-25, Stephan Seitz wrote: > > All admins I know have at least some servers with custom kernels (in the > > past it was said, to build your firewall/server kernels without module > > support, so that no rootkit module could be loaded). > > No longer needed. See /proc/sys/kernel/modules_disabled. That's not equivalent - an attacker that can load modules can also remove the init script that sets this variable to 1 and reboot the machine. For proper safeguarding you still want no module support in the kernel at all. regards, iustin -- To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/20111226103810.ga1...@teal.hq.k1024.org
Re: from / to /usr/: a summary
On Mon, Dec 26, 2011 at 04:42:45PM +0600, Andrey Rahmatullin wrote: > On Mon, Dec 26, 2011 at 11:38:10AM +0100, Iustin Pop wrote: > > > > All admins I know have at least some servers with custom kernels (in the > > > > past it was said, to build your firewall/server kernels without module > > > > support, so that no rootkit module could be loaded). > > > > > > No longer needed. See /proc/sys/kernel/modules_disabled. > > > > That's not equivalent - an attacker that can load modules can also > > remove the init script that sets this variable to 1 and reboot the > > machine. > Why can't the same attacker replace the kernel? On Mon, Dec 26, 2011 at 12:01:43PM +0100, Philipp Kern wrote: > > For proper safeguarding you still want no module support in the kernel > > at all. > > Sorry, but what kind of argumentation is that? If the admin doesn't notice > reboots and/or file tampering, I could just replace the kernel with my > modified > one and reboot. Now of course you could increase your paranoia and boot the > kernel from an immutable disc. But then I'd just load all relevant modules in > the initramfs and set modules_disabled there instead of doing custom built > kernels just to get rid of modules. For both of you: for virtualised environments where the kernel is loaded from the hypervisor. Yes, doing the initrd from the hypervisor helps too, but the problem with that setting is that it defaults to 0 and has to be switched to 1. Whereas a kernel with no module support defaults to 0. regards, iustin signature.asc Description: Digital signature
Re: [Long] UEFI support
On Mon, Jan 09, 2012 at 04:29:12PM +, Tanguy Ortolo wrote: > Wookey, 2012-01-09 15:04+0100: > > I assume evyone here is aware of mjg's useful posts about the issue of > > key-management in UEFI secure boot? > > > > We need to do one of: > > > > * get our bootloaders signed by something like the 'linuxfoundation key' > > if such a thing gets widely installed, > > * explain to users how to get the 'debian key' installed > > * explain to users how to turn off secure boot. > > * Get manufacturers to put the Debian key in machines for sale (or > > just make them with Debian(or a deriviative) pre-installed. > > Just as a reminder, we must be aware that GRUB images are generated > locally on each host. Thus every user would have to have the secret key > to sign their boot loader image. Hmm, I might misunderstand this, but wouldn't just the grub binary need to be signed? And this binary then would parse the grub.cfg file and allow various kernels to boot. regards, iustin -- To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/20120109185753.ga4...@teal.hq.k1024.org
Bug#664466: ITP: shelltestrunner -- test command-line programs or arbitrary shell commands
Package: wnpp Severity: wishlist Owner: Iustin Pop * Package name: shelltestrunner Version : 1.2.1 Upstream Author : Simon Michael * URL : http://joyful.com/shelltestrunner * License : GPLv3 Programming Lang: Haskell Description : test command-line programs or arbitrary shell commands shelltestrunner is a cross-platform tool for testing command-line programs (or arbitrary shell commands). It reads simple declarative tests specifying a command, some input, and the expected output, error output and exit status. Tests can be run selectively, in parallel, with a timeout, in color, and/or with differences highlighted. -- To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/20120317222033.30496.97179.report...@ruru.hq.k1024.org
Bug#665911: ITP: haskell-ekg -- remote monitoring of Haskell processes over HTTP
Package: wnpp Severity: wishlist Owner: Iustin Pop * Package name: haskell-ekg Version : 0.3.0.3 Upstream Author : Johan Tibell * URL : https://github.com/tibbe/ekg * License : BSD Programming Lang: Haskell Description : remote monitoring of Haskell processes over HTTP The ekg library lets you remotely monitor a running (Haskell) process over HTTP. It provides a simple way to integrate a monitoring server into any application. signature.asc Description: Digital signature
Re: Go (golang) packaging, part 2
On Tue, Jan 29, 2013 at 09:44:43AM +0100, Wouter Verhelst wrote: > On Tue, Jan 29, 2013 at 08:25:38AM +0100, Michael Stapelberg wrote: > > Hi Hilko, > > > > Hilko Bengen writes: > > > This is a pity for those of us who don't really subscribe to "get > > > everything from github as needed" model of distributing software. > > Yes, but at the same time, it makes Go much more consistent across > > multiple platforms. > > "consistency across multiple platforms" has been claimed as a benefit > for allowing "gem update --system" to replace half of the ruby binary > package, amongst other things. It wasn't a good argument then, and it > isn't a good argument now. > > > We should tackle one issue at a time. I suppose in > > the future, upstream’s take on library distribution might change. For > > now I agree with upstream on this — not introducing another source of > > errors/mistakes for the end user (version problems involving not only go > > get, but also a Debian version of some library) seems like a good idea. > > The problem with having a language-specific "get software" command is > that it introduces yet another way to get at software. There are many > reasons why that's a bad idea, including, but not limited to: > - most config management systems support standard packages, but not > language-specific "get software" commands, making maintenance of > multiple systems with config management harder if there aren't > any distribution packages for the things you want/need to have. > - It's yet another command to learn for a sysadmin. > - It makes it harder for the go program to declare a dependency on > non-go software, or vice versa > > So there are real and significant benefits to be had by actually trying > to do this right, meaning, "this will have to do" (as opposed to "this > will have to do /for now/, but we'll tackle doing it better once this > bit works right") would be a pity. 
I would add one thing here: Haskell/GHC also (currently) doesn't create shared libraries, and instead builds the program statically, but the Debian Haskell group still tries to package the development libraries as well as it can, for all the reasons above (which are very good reasons, IMHO). So, take this as an example of another language which doesn't do shared linking but for which libraries are still packaged in Debian. regards, iustin -- To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org Archive: http://lists.debian.org/20130129115540.ga13...@teal.hq.k1024.org
Re: [Lennart Poettering] Re: A few observations about systemd
On Tue, Jul 19, 2011 at 03:59:13PM +0000, Uoti Urpala wrote: > Wouter Verhelst debian.org> writes: > > IMAO, a statement of (paraphrased) 'portability is for weenies' isn't > > Keeping portability in mind is a good thing especially if you're doing > something > that is easily implementable with common interfaces. However in some cases > additional portability has very real costs, and it's by no means given that > the > balance should go in the favor of portability. … and neither should the balance go in favour of dropping portability. In my experience, programs written with portability in mind are much more resilient to breakage, and thus over time they survive bit-rot much better. Whenever I see a program that is explicitly non-portable, I tend to discount it in favour of portable alternatives, because it means: - the author has considered multiple alternatives, and doesn't rely on Linux kernel x.y.z especially or glibc version n - if a given feature is deprecated, the program might still work by falling back to another feature (re. epoll-vs-poll), possibly in a degraded mode but still work - and many other considerations So, while you have said very clearly in this thread "portability should be amongst the last considerations", understand that not everyone shares your point of view. regards, iustin signature.asc Description: Digital signature
Re: A few observations about systemd
On Sun, Jul 24, 2011 at 01:17:50PM -0700, Russ Allbery wrote: > Wouter Verhelst writes: > > > No. systemd wants to throw out init scripts, because they are shell > > scripts, and Shell Scripts Are Bad!!!1!! oh noes. > > I don't get that impression. Rather, I think both systemd and upstart > want to significantly reduce the involvement of shell scripts in the boot > process for the same reason that I'd love to have the time to eliminate a > lot of them from Debian package maintainer scripts: using a (rather quirky > with interesting portability issues) Turing-complete programming language > is only a good idea when you're doing things that require that power. The > rest of the time, it's much harder to analyze, much less adaptable (you're > often duplicating work in every separate script because helper systems > lag, and if there's a bug, you have to fix it everywhere instead of in one > place), more complicated, and gives people enough freedom to get > themselves into trouble. Much less adaptable? Config files are fixed and non-extendable by themselves. A shell script, because it's Turing-complete, I regard as very adaptable. > Falling back on such a language when you have to do something really > complicated is a good thing to support, but most of the time, you really > want to use a simple, declarative language that says what you're trying to > do and then implement the tools and support to accomplish that in one, > well-tested place. I have to disagree here. In my experience, most of the time such simple languages are not enough (down the road), and people will either start extending and extending them (at which point they're not simple anymore), or replace the solution entirely. Turing complete → Config file → Turing complete → and so on… I think that this cycle will go on and on, until we find a superior solution to both. regards, iustin signature.asc Description: Digital signature
Re: Providing official virtualisation images of Debian
On Fri, Jul 29, 2011 at 08:23:57PM +0400, Michael Tokarev wrote: > 29.07.2011 18:02, Aaron Toponce wrote: > > On Tue, Jul 26, 2011 at 12:27:09AM +0200, Moritz Mühlenhoff wrote: > >> What virtualisation solutions should be supported? > > > > Open Virtualization Format (OVF) is the only format that should need to be > > supported. VirtualBox, VMWare, RHEV, AbiCloud, Citrix XenConvert, and > > OpenNode all support the format. > > The problem with OVF is that it does not define actual disk image > format, it merely describes the VM for a management layer (like > libvirt), but makes no big effort to standardize disk format. > > So it's basically useless in this context - it's kinda trivial to > provide the management stuff, the more important bit is the disk > content. I wouldn't put it that strongly :) While it's true that OVF is 'weak' with respect to disk formats, one can choose and support a few widely-used formats to cover enough of the space. This is exactly what we're doing now in the Ganeti project - after some consideration, supporting just raw, vmdk and the qcow variants seems to have wide-enough use to cover most interoperability issues. Free tools to convert to/from VHD are what I think is still missing. I think that for Debian's purposes, offering one of the above formats (most likely vmdk) should be good enough, at least for a start. regards, iustin signature.asc Description: Digital signature
Re: Integrating Emdebian Grip into Debian
On Mon, Aug 08, 2011 at 05:43:55PM +0200, Tollef Fog Heen wrote: > ]] Neil Williams > > | The discussions have resulted in quite a few points which I've now put > | on the wiki: > | > | http://wiki.debian.org/EmdebianIntegration > | > | Please refer to the wiki before raising possible technical problems. > > It seems like nobody from DSA or the security team has been involved in > this process so far? > > also, the point: > > «Documentation will be required for maintainers who may not currently >know that their package has already been released as part of Emdebian >Grip 1.0 (Lenny) and Emdebian Grip 2.0 (Squeeze). This documentation >will need to cover the changes made within the package and how to >deal with bug reports which refer to the Emdebian Grip version.» > > I don't really think it's reasonable to suddenly increase the workload > of maintainers massively by making tracking bugs in emdebian their > problem. While your point is of good nature, why do you think this would increase the workload “massively”? Also, “and how to deal with bug reports which refer to the Emdebian version” might simply be (or not) how to forward bugs to the right team. regards, iustin signature.asc Description: Digital signature
Re: Introduction of a "lock" group
On Mon, Aug 15, 2011 at 04:11:49PM +0100, Roger Leigh wrote: > Hi folks, > > Fedora has moved to having /var/lock (now /run/lock) owned by > root:lock 0775 rather than root:root 01777. This has the advantage > of making a system directory writable only by root or setgid lock > programs, rather than the whole world. However, due to the > potential for privilege escalation¹² it may be desirable to adopt > what has been done subsequently in Fedora: > /var/lock root:root 0755 > /var/lock/lockdev root:lock 0775 > /var/lock/subsys root:root 0755 Hi, If /var/lock won't be 1777 anymore, where should applications then store application-specific lock files (e.g. for synchronisation between daemons) if they can't/won't run as setgid lock? Is the intention that the init script creates a /var/lock/$NAME directory, chgrp's it to the right GIDs and only then starts the daemons? thanks, iustin signature.asc Description: Digital signature
Re: Introduction of a "lock" group
On Mon, Aug 15, 2011 at 06:00:50PM +0100, Roger Leigh wrote: > On Mon, Aug 15, 2011 at 05:35:54PM +0100, Colin Watson wrote: > > On Mon, Aug 15, 2011 at 04:11:49PM +0100, Roger Leigh wrote: > > > Are there any other downsides we need to consider? One issue is the > > > existence of badly broken programs³, which make stupid assumptions > > > about lockfiles. > > > > What about programs that need to write lock files which are already > > setgid something else? I don't have an example off the top of my head, > > but it would surprise me if there were none of these. > > IIRC Fedora have a setgid lock locking helper for this, which lockdev > uses internally. I'd need to check the details on a Fedora VM. IIRC > it checks if you have write perms on the device being locked, and so > individual programs don't need to be setgid lock unless they are not > using liblockdev. The use of an external helper means this is significantly slower than an open(…, O_CREAT) + flock(). While this works for some workloads, it doesn't for all. As per my other question: /var/lock (or /run/lock) was a good solution for transient, "cheap" locks used for coordination between cooperating programs. It would be ideal if we had a recipe for this after the permissions change. thanks, iustin signature.asc Description: Digital signature
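The cheap open(…, O_CREAT) + flock() pattern referred to above boils down to a couple of syscalls and no helper process. A minimal sketch (Python for brevity; the lock path is illustrative — a real daemon would use whatever directory the new permissions scheme designates):

```python
import fcntl
import os


def acquire_lock(path):
    """Take an exclusive advisory lock via open(O_CREAT) + flock().

    Cheap: two syscalls, no external helper process. The lock is tied to
    the open file description, so it is released automatically when the
    descriptor is closed or the process exits - no stale-lock cleanup.
    """
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        # Someone else holds the lock; don't leak the descriptor.
        os.close(fd)
        raise
    return fd


def release_lock(fd):
    """Release the advisory lock and close the descriptor."""
    fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
```

This is exactly the kind of coordination that a world-writable (sticky) lock directory made trivial, and that an external setgid helper makes comparatively expensive.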
Re: /usr/share/doc/ files and gzip/xz/no compression
On Mon, Aug 15, 2011 at 11:59:07PM +0200, Andreas Barth wrote: > * Lars Wirzenius (l...@liw.fi) [110815 23:27]: > > On Mon, Aug 15, 2011 at 11:04:51PM +0200, Carsten Hey wrote: > > > * Lars Wirzenius [2011-08-15 18:33 +0100]: > > > > raw gz xz > > > > 584 163 134 file sizes (MiB) > > > > 0 421 450 savings compared to raw (MiB) > > > > -421 0 29 savings compared to current gz (MiB) > > > In other words, it's 130 MiB against xz's 134 MiB. I'll leave it to > > others to decide if it's a significant difference. > > bzip2 is definitely a more conservative choice than xz. If it's > smaller, then it's superior to xz. AFAIK, bzip2 has much worse decompression performance than xz: I have taken dpkg's changelog, concatenated it to itself 10 times (11MB size), and: gzip: 0.377s, down to 2.7MB gunzip: 0.077s bzip2: 1.45s, down to 1.8M bunzip2: 0.420s xz: 4.4s(!), down to 204K(!) xz -d: 0.035s So here bzip2 is an order of magnitude slower at decompression. I've repeated the test on incompressible data (/dev/urandom), 10MB, and the numbers are even worse for bzip2: gzip: 0.410s / 0.060s bzip2: 2.400s / 0.960s xz: 4.040s / 0.027s So while xz is costly for compression, it's faster than even gzip for decompression. bzip2's cost for decompression (huge!) is what kept me personally from using it seriously before xz appeared. There is also information on Wikipedia about various compression benchmarks, but IMHO if we want to switch from gzip then bzip2 doesn't make sense for /usr/share/doc. regards, iustin signature.asc Description: Digital signature
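The pattern behind those numbers — xz winning dramatically on highly repetitive input — can be reproduced in-process with standard-library codecs, as a stand-in for the gzip/bzip2/xz command-line runs above (the sample data is synthetic, mimicking a changelog concatenated to itself many times):

```python
import bz2
import gzip
import lzma

# Synthetic stand-in for "dpkg's changelog concatenated to itself":
# one 36-byte line repeated 10000 times, ~350 KiB of repetitive text.
data = b"dpkg (1.16.0) unstable; urgency=low\n" * 10000

# One-shot compression with each codec's default settings.
sizes = {
    "gzip": len(gzip.compress(data)),
    "bzip2": len(bz2.compress(data)),
    "xz": len(lzma.compress(data)),
}

for name, size in sizes.items():
    print(f"{name:5s} {len(data)} -> {size} bytes")
```

The exact ratios differ from the changelog test above (defaults, not `-9`, and a cruder input), but the shape of the result is the same: all three codecs crush repetitive text, with xz typically far ahead thanks to its much larger dictionary.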
Re: /usr/share/doc/ files and gzip/xz/no compression
On Sun, Aug 21, 2011 at 04:03:57PM +0100, Colin Watson wrote: > On Sat, Aug 20, 2011 at 08:30:24PM +0100, Darren Salt wrote: > > It's worth mentioning that man-db has had xz support since March last year > > (upstream). This is available in testing. > > Although I'd also like to mention that I expect that it would take > rather longer for mandb to process /usr/share/man if all of the manual > pages there were xz-compressed rather than gzipped, as it would have to > exec an external command for every page rather than using a library. I > support xz-compressed pages because you sent me a reasonable patch and > it might occasionally help somebody, but I don't recommend it for global > use in Debian. Since liblzmaX exists, would it be a simple matter of using it in order to make mandb handle such compressed man pages without having to fork? regards, iustin signature.asc Description: Digital signature
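For illustration, in-process xz decompression — the link-against-liblzma-instead-of-forking idea raised above — looks like this using Python's `lzma` module (man-db itself is C and would bind liblzma directly; this is only a sketch of the concept, with a hypothetical helper name):

```python
import lzma


def read_xz_page(path):
    """Decompress an xz-compressed file in-process, without exec'ing xz."""
    with lzma.open(path, "rt", encoding="utf-8") as f:
        return f.read()
```

The point is that the decompression happens inside the calling process via the library, so a tool scanning thousands of pages pays no fork/exec cost per file.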
Re: /usr/share/doc/ files and gzip/xz/no compression
On Mon, Aug 22, 2011 at 09:27:25AM +0100, Colin Watson wrote: > On Sun, Aug 21, 2011 at 05:39:08PM +0200, Iustin Pop wrote: > > Since liblzmaX exists, would it be a simple matter of using it in order > > to make mandb handle such compressed man pages without having to fork? > > I don't want to add more linkage, especially in light of Adam's point > about xz not being worth it for most manual pages anyway. Indeed, with that context, agreed. iustin signature.asc Description: Digital signature
Re: debian github organization ?
On 2015-04-17 10:54:43, Russell Stuart wrote: > Github has all but > annihilated SourceForge in the hosting market place, and the stand-out > change is its UI. That is in spite of SourceForge's impressive mirror > network and SourceForge being VCS agnostic. I think the VCS agnosticism is actually detrimental in this context. It's much easier for the user when every repo is using the same VCS. And consistency makes it very easy, for example, to refer to commits across projects, to standardise pull/clone workflows, etc. regards, iustin signature.asc Description: Digital signature
Re: patch-tracker down?
On 2014-05-29 17:19:50, Paul Wise wrote: > On Thu, May 29, 2014 at 4:54 PM, Jörg Frings-Fürst wrote: > > > are you looking for new volunteers? > > I'm not a Debian sysadmin but yes, new volunteers are needed for > patch-tracker.d.o. > > Requirements are that you need to be a Debian member, but you can help > with the code if you are not. > > Some things that people who are interested in patch-tracker.d.o need to do: > > Clone the git repo: > > https://anonscm.debian.org/gitweb/?p=users/seanius/patch-tracker.git […] Resurrecting this year-old thread. I guess the patch tracker is still down, right? I cloned the git repo and it's possible to bring this up on a jessie host with only one small extra dependency that is not packaged (backports.lzma python module, which is small enough to be packaged in unstable and become a backport to jessie then; I don't know what's the policy though for debian.org hosts). Browsing a few random packages seems to work correctly, although without an extensive test suite I don't know how well the code works for various packaging types. So from the point of view of "fix the code for wheezy^Wjessie" this is rather trivial. Are people interested in having patch-tracker back (I know I am ☺)? If so, I'll try to take this over and start by contacting the debian-admin list. PS: Long-term, the current codebase needs some redoing - e.g. it uses cheetah as templating engine, and that's not available under Python 3, etc. regards, iustin signature.asc Description: Digital signature
Re: patch-tracker down?
On 2015-05-03 15:31:28, Stefano Zacchiroli wrote: > On Sun, May 03, 2015 at 02:50:17PM +0200, Iustin Pop wrote: > > PS: Long-term, the current codebase needs some redoing - e.g. it uses > > cheetah as templating engine, and that's not available under Python 3, > > etc. > > I very much agree. This is why, from an idea coming from Lucas, I've > included in the scope of the upcoming GSoC work on Debsources [1] the > following: > >2) a "patch tracker" web app to publish details about the source code >differences that Debian packages carry with respect to upstream >releases of the same software. >[...] >implement on top of Debsources a new web app, similar to the (now >defunct) patch tracker > > [1]: > https://wiki.debian.org/SummerOfCode2015/Projects/Debsources%20as%20a%20Platform > > From an ecosystem point of view, Debsources already have both packed and > unpackages source packages, updated daily via push. On top of it adding > a tiny web skin that presents the patches is not much of a work --- in > comparison with the general infrastructure overhead of having a source > mirror, unpackaging it, etc etc. This is why I'm interested in giving a > try to recasting the old patch-tracker.d.o on top of sources.d.n (to > that end I've temporarily booked patches.d.n), rather than deploying > them as separate services. Makes a lot of sense, indeed. > OTOH, the patch-tracker.d.o code base already exists, and Debsources is > not yet a *.d.o service (mostly due to my lack of energy in doing the > migration, nothing else). So YMMV on what is the best course of action > for having back a web app exposing Debian patches w.r.t. upstream. My only goal is to have a patch tracker - I'm less concerned about how it is implemented, except that: > In terms of technologies, Debsources is both Python 2 and 3 (although > currently still deployed on Python 2), and Flask based. 
If you, or > anyone else, is motivated more on working on these technologies than in > modernizing the old patch-tracker.d.o code base, please come and talk to > me. To be honest, I'm not interested in working on any Python code base long term. If I were to take on the old patch tracker, my second step after bringing the service up would be to rewrite it (at least the web application) in a different language. I mentioned Cheetah/Python 2 only in the context that it can't stay like this for too long. So: - we could have the current patch tracker resurrected easily as a stopgap measure; not sure what the policies are around debian.org services, but from the point of view of just the patch tracker, probably about one weekend of effort; or: - we could ignore the current patch tracker, since the GSoC will implement the same functionality anyway, just presenting the patches should be rather simple, and we're not in a hurry. I'm fine either way… What do people think? thanks, iustin signature.asc Description: Digital signature
Re: patch-tracker down?
On 2015-05-03 18:32:03, Stefano Zacchiroli wrote: > Thanks for your prompt answer, Iustin, > > On Sun, May 03, 2015 at 04:16:26PM +0200, Iustin Pop wrote: > > So: > > > > - we could have the current patch tracker resurrected easily as a stop > > gap measure; not sure what the policies are around debian.org > > services, but from the point of view of just the patch tracker, > > probably about one weekend effort; or: > > This, absolutely. > > I just wanted to mention that something related to patch tracker is > brewing around Debsources, especially for those that might be planning > significant refactoring on the patch-tracker.d.o code base. Ah, understood, very good point. I'm happy that this won't be a long-term project, especially if it's going to be replaced by a better/cleaner/more modern code base. > But GSoC > outcomes are hardly predictable, so I'm not ready to commit to the > availability of something by the end of the summer. Ack. > If it is easy to resurrect patch-tracker.d.o as is, and you're willing > to work on it, by all means go for it. OK, I'll proceed then. thanks! iustin signature.asc Description: Digital signature
Bug#787184: ITP: haskell-js-jquery -- bundles the minified jQuery code into a Haskell package
Package: wnpp Severity: wishlist Owner: Iustin Pop * Package name: haskell-js-jquery Version : 1.11.3-1 Upstream Author : Neil Mitchell * URL : https://github.com/ndmitchell/js-jquery * License : BSD Programming Lang: Haskell Description : bundles the minified jQuery code into a Haskell package This package bundles the minified jQuery code into a Haskell package, so it can be depended upon by Cabal packages. The first three components of the version number match the upstream jQuery version. The Haskell library is designed to meet the redistribution requirements of downstream users, and to reduce duplication of bundled code in Debian. Will be maintained as part of the debian-haskell team. signature.asc Description: Digital signature
Bug#787185: ITP: haskell-js-flot -- bundles the minified Flot code into a Haskell package
Package: wnpp Severity: wishlist Owner: Iustin Pop * Package name: haskell-js-flot Version : 0.8.3 Upstream Author : Neil Mitchell * URL : https://github.com/ndmitchell/js-flot * License : BSD Programming Lang: Haskell Description : bundles the minified Flot code into a Haskell package This package bundles the minified Flot code (a jQuery plotting library) into a Haskell package, so it can be depended upon by Cabal packages. The first three components of the version number match the upstream Flot version. The package is designed to meet the redistribution requirements of downstream users, and to reduce the number of duplicate copies of embedded code. Will be maintained as part of the debian-haskell team. signature.asc Description: Digital signature
Re: I resigned in 2004
On 2018-11-10 15:31:31, Mattia Rizzolo wrote: > Also, indeed I'm not an HRM person, […] And especially because of that, thank you very much for your work. > I don't feel any "guilt" here, sorry. And neither should you. thanks, iustin signature.asc Description: PGP signature
Re: Do we want to Require or Recommend DH
On 2019-05-13 17:58:47, Thomas Goirand wrote: > On 5/13/19 3:57 PM, Marco d'Itri wrote: > > On May 13, Sam Hartman wrote: > > > >> As promised, I'd like to start a discussion on whether we want to > >> recommend using the dh command from debhelper as our preferred build > >> system. > > I have already asked this last time, but nobody answered. > > I use debhelper in all of my packages but I have never switched to dh: > > why should I bother? > > "Everybody is doing this" is not much of an argument. > > > > Would dh really make a debian/rules file like these simpler to > > understand? Can somebody try to win me over with a patch? :-) > > > > https://salsa.debian.org/md/inn2/blob/master/debian/rules > > Without looking much, without checking if the package even builds, > here's a possible result: > > https://salsa.debian.org/zigo/inn2/blob/master/debian/rules > > Admittedly, I haven't understood all of the hacks you did (what's the > $(no_package) thing for?). > > It's only 15 lines shorter, but that's not the point. The point is that > it only declares things you are not doing like everyone else. > > Now, I have another example, which is quite the opposite one of what you > gave as example: > > https://salsa.debian.org/openstack-team/debian/openstack-debian-images/blob/debian/stein/debian/rules > > Why would one want to switch that one to something else? The package, > basically, consists of a shell script and a man page only. The > minimalism of this package doesn't require an over-engineered dh > sequencer, does it? I'm happy the way the package is, and I don't think > I'd switch to the dh sequencer *UNLESS* someone has a better argument > than "it's new", or "debian/rules will be smaller", or even "it's going > to evolve without you even noticing it" (which is more scary than > anything else, which is IMO one of the defects of the dh sequencer). This example is indeed interesting, but IMO for the opposite reason. 
The last commit on this file was to fix #853907 which is about the intricacies of exactly which targets to call in which sequence for the package type. Which dh avoids, because it has logic to do things rather than require the human to write the exact code needed - since we don't have (or didn't have at that time) good enough pre-upload tests. So in this case, wouldn't dh have completely avoided the bug? Very side note: why is that package a binary package instead of arch-indep, if it contains only a man page? regards, iustin signature.asc Description: PGP signature
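For readers not familiar with the sequencer, the fully converted debian/rules that the thread argues about is, in its minimal canonical form, just this (override targets would be added only where a package genuinely deviates from the defaults; everything else, including which targets to call in which order, is dh's job):

```make
#!/usr/bin/make -f
%:
	dh $@

# Overrides are added only where needed, e.g. (hypothetical target):
#override_dh_auto_test:
#	./run-my-tests
```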
Re: @debian.org mail
On 2019-06-04 17:51:56, Graham Inggs wrote: > Hi > > On 2019/06/03 10:40, Daniel Lange wrote: > > To do better, we should really offer SMTP submission/IMAP services for > > @debian.org as soon as possible and - after a grace period - publish a > > mx -all SPF record. > > I would certainly make use of SMTP for sending @debian.org email. I can't > see the advantage of IMAP over forwarding though, would you explain how you > see it working, or who would use it? +1 on both counts.
Re: Hyphens in man pages
On 2023-10-15 16:08:32, Wookey wrote: > I think you can consider me representative of the typical maintainer > whose interaction with *roff languages almost entirely takes the > form: 'Oh bloody hell I really ought to write a man page for this > because upstream is too youthful to have done so - now how the hell > does roff/nroff/groff work again' (no I'm not sure which it is I'm > actually using, nor how any of this machinery really works, nor where > to look for good practice, so I mostly copy existing stuff and DDG for > answers, which is less than ideal when it comes to details like this). At least you're not lazy. I am, so what I did many times is to add a Build-Depends on pandoc and write the man page in rst or md. I think that's a worse solution (pandoc is really heavy), but at least I don't have to go back to *roff. regards, iustin
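A sketch of what this looks like in a dh-based package, assuming Build-Depends: pandoc and a Markdown source kept under debian/ (the file and tool names here are illustrative, not taken from any specific package):

```make
# Hypothetical debian/rules fragment: generate the man page from
# Markdown at build time; the execute_before_* hook targets need a
# reasonably recent debhelper.
execute_before_dh_installman:
	pandoc --standalone --to man debian/mytool.1.md --output debian/mytool.1
```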
Re: Validating tarballs against git repositories
On 2024-03-30 08:02:04, Gioele Barabucci wrote: > Now it is time to take a step forward: > > 1. new upstream release; > 2. the DD/DM merges the upstream release VCS into the Debian VCS; > 3. the buildd is notified of the new release; > 4. the buildd creates and uploads the non-reviewed-in-practice blobs "source > deb" and "binary deb" to unstable. > > This change would have three advantages: I think everyone fully agrees this is a good thing, no need to list the advantages. The problem is that this requires functionality testing to be fully automated via autopkgtest, and moved away from the "update changelog, build package, test locally, test some more, upload" workflow. Give me good Salsa support for autopkgtest + lintian + piuparts, with easy setup (so that I just have to toggle one checkbox), and I'm happy. Or even better, integrate all that testing with Salsa (I don't know if it has "CI tests must pass before merging"), and block tagging on the tagged version having been successfully tested. And yes, this should be uniform across all packages stored on Salsa, so as not to diverge in how the testing is done. iustin
Re: Validating tarballs against git repositories
On 2024-03-30 11:47:56, Luca Boccassi wrote: > On Sat, 30 Mar 2024 at 09:57, Iustin Pop wrote: > > > > On 2024-03-30 08:02:04, Gioele Barabucci wrote: > > > Now it is time to take a step forward: > > > > > > 1. new upstream release; > > > 2. the DD/DM merges the upstream release VCS into the Debian VCS; > > > 3. the buildd is notified of the new release; > > > 4. the buildd creates and uploads the non-reviewed-in-practice blobs > > > "source > > > deb" and "binary deb" to unstable. > > > > > > This change would have three advantages: > > > > I think everyone fully agrees this is a good thing, no need to list the > > advantages. > > > > The problem is that this requires functionality testing to be fully > > automated via autopkgtest, and moved off the "update changelog, build > > package, test locally, test some more, upload". > > > > Give me good Salsa support for autopkgtest + lintian + piuparts, and > > easy support (so that I just have to toggle one checkbox), and I'm > > happy. Or even better, integrate all that testing with Salsa (I don't > > know if it has "CI tests must pass before merging"), and block tagging > > on the tagged version having been successfully tested. > > This is all already implemented by Salsa CI? You just need to include > the yml and enable the CI in the settings I will be the first to admit I'm not up to date on the latest Salsa news, but see, what you mention - "include the yml" - is exactly what I don't want. If maintainers need to include a yaml file, it means it can vary between projects, which means it can either have bugs or be hijacked. In my view, there should be no freedom here, just one setting - "enable tag2upload with automated autopkg testing" - and all packages would behave mostly the same way. But there are 2 KiB single-binary packages as well as 2 GB packages building 25 binaries, so maybe this is too wide a scope. I just learned about tag2upload, need to look into that. 
(I'm still processing this whole story, and I fear the fallout/impact in terms of how development is regarded will be extremely high.) regards, iustin
Re: Validating tarballs against git repositories
On 2024-03-31 00:58:49, Andrey Rakhmatullin wrote: > On Sat, Mar 30, 2024 at 10:56:40AM +0100, Iustin Pop wrote: > > > Now it is time to take a step forward: > > > > > > 1. new upstream release; > > > 2. the DD/DM merges the upstream release VCS into the Debian VCS; > > > 3. the buildd is notified of the new release; > > > 4. the buildd creates and uploads the non-reviewed-in-practice blobs > > > "source > > > deb" and "binary deb" to unstable. > > > > > > This change would have three advantages: > > > > I think everyone fully agrees this is a good thing, no need to list the > > advantages. > > > > The problem is that this requires functionality testing to be fully > > automated via autopkgtest, and moved off the "update changelog, build > > package, test locally, test some more, upload". > Do you mean this theoretical workflow will not have a step of the > maintainer actually looking at the package and running it locally, or > running any building or linting locally before pushing the changes? > Then yeah, looking at some questions in the past years I understand that > some people are already doing that, powered by Salsa CI (I can think of > several possible reasons for that workflow but it still frustrates me). Not that it necessarily won't have that step; the question is how to integrate the testing into the tag signing/pushing step. I.e. before moving archive-wide to "sign tag + push", there should be a standard for how this is all tested for a package. Maybe there is and I'm not aware of it; my Debian activities are very low-key (but I try to keep up with mailing lists). > > Give me good Salsa support for autopkgtest + lintian + piuparts, and > > easy support (so that I just have to toggle one checkbox), and I'm > > happy. Or even better, integrate all that testing with Salsa (I don't > > know if it has "CI tests must pass before merging"), and block tagging > > on the tagged version having been successfully tested. 
> AFAIK the currently suggested way of enabling that is putting > "recipes/debian.yml@salsa-ci-team/pipeline" into "CI/CD configuration > file" in the salsa settings (no idea where is the page that tells that or > how to find it even knowing it exists). Aha, see, this I didn't know. On my list to test once archive is unblocked and I have time for packaging. regards, iustin
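For reference, the two variants mentioned in this thread look roughly like this (assuming the salsa-ci-team pipeline layout as of this discussion):

```yaml
# Variant 1: set the project's "CI/CD configuration file" setting
# (Settings -> CI/CD -> General pipelines) to:
#
#     recipes/debian.yml@salsa-ci-team/pipeline
#
# Variant 2: commit a debian/salsa-ci.yml that includes the shared
# pipeline:
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml
```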
Re: xz backdoor
On 2024-03-31 10:47:57, Luca Boccassi wrote: > On Sun, 31 Mar 2024 at 08:39, Bastian Blank wrote: > > > > On Sun, Mar 31, 2024 at 12:05:54PM +0500, Andrey Rakhmatullin wrote: > > > On Sat, Mar 30, 2024 at 11:22:33PM -0300, Santiago Ruano Rincón wrote: > > > > As others have said, the best solution is to rely on HSMs for handling > > > > the cryptographic material. > > > Aren't these answers to different questions? > > > Not all attacks are about stealing the key or using it to sign unintended > > > things. > > > > Also a HSM does only allow to control access to the cryptographic > > material. But it asserts no control over what is actually signed. > > > > So an attacker needs to wait until you ask the HSM whether it is okay to sign > > something. > > > > Bastian > > This is true as in the default configuration you get asked for the > yubikey pin only once per "session", and then it's cached > transparently and there's no GUI feedback when the token is used (the > light on it blinks, but noticing that requires having it in line of > sight at all times). However, it's already better than nothing as it > means such an attack must be "online", and run in the same "session" > as the active user, so perfect should definitely not be the enemy of > good here IMHO. Also, iirc this can be configured to always ask for > the pin on each signature, although this could get burdensome. But > given the very low price of yubikeys (or similar tokens), and how well > and seamless they work these days, I think the offer of buying any DD > that doesn't have one such a token is one that we should take up and > make it happen. Jumping in late in the HSM thread, but I'm not sure I understand the exact setup people propose. Option 1: Moving keys to one yubikey, while keeping the original key material "safe" offline. How do you know the "safe offline" material is safe and hasn't been copied? Option 2: Generate keys on the yubikey and have them never leave the secure enclave. 
That means having 2 yubikeys per developer, and ensuring you keep track of _two_ keys, but it does ensure there's a physical binding to the key. Are there other options? And which option is proposed? I have quite a few yubikeys, but I haven't migrated to use them since it's not clear to me what is a good, and recommended, workflow. I'm relatively against option 1, since the "safe offline" key material somehow doesn't appeal to me. regards, iustin
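For the record, the two options sketched above correspond to different gpg workflows; a rough transcript (illustrative, not a recommendation; YOURKEYID is a placeholder):

```text
# Option 1: move an existing subkey to the token. Note the on-disk
# private key remains and has to be secured or destroyed by you:
$ gpg --edit-key YOURKEYID
gpg> key 1          # select the signing subkey
gpg> keytocard      # move it onto the yubikey
gpg> save

# Option 2: generate the subkey on the token itself, so the private
# part never exists outside the secure element -- and therefore can
# never be backed up; a second token with its own subkey is the usual
# fallback:
$ gpg --edit-key YOURKEYID
gpg> addcardkey     # create a new subkey directly on the card
```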
Re: Validating tarballs against git repositories
On 2024-03-31 08:03:40, Gioele Barabucci wrote: > On 30/03/24 20:43, Iustin Pop wrote: > > On 2024-03-30 11:47:56, Luca Boccassi wrote: > > > On Sat, 30 Mar 2024 at 09:57, Iustin Pop wrote: > > > > Give me good Salsa support for autopkgtest + lintian + piuparts, and > > > > easy support (so that I just have to toggle one checkbox), and I'm > > > > happy. Or even better, integrate all that testing with Salsa (I don't > > > > know if it has "CI tests must pass before merging"), and block tagging > > > > on the tagged version having been successfully tested. > > > > > > This is all already implemented by Salsa CI? You just need to include > > > the yml and enable the CI in the settings > > > > I will be the first to admit I'm not up to date on latest Salsa news, > > but see, what you mention - "include the yml" - is exactly what I don't > > want. > > Salsa CI is enabled by default for all projects in the debian/ namespace > <https://salsa.debian.org/debian/>. > > Adding a yml file or changing the CI settings to reference the Salsa CI > pipeline is needed only for projects in team- or maintainer-specific > repositories, or when the dev wants to enable additional tests (or > configure/block the default tests). That sounds good, but are you sure that all /debian/ projects get it? I chose one random package of mine, https://salsa.debian.org/debian/python-pyxattr, and on the home page I see "Setup CI/CD" (implying it's disabled), and under build, I see nothing enabled. Is there a howto somewhere? Happy to read/follow. iustin
Re: xz backdoor
On 2024-03-31 22:23:10, Arto Jantunen wrote: > Didier 'OdyX' Raboud writes: > > > On Sunday, 31 March 2024, 14:37:08 CEST, Pierre-Elliott Bécue wrote: > >> I would object against creating a PGP key on the HSM itself. Not having > >> the proper control on the key is room for disaster as soon as you lose > >> it or it dies. > > > > For subkeys, isn't that a benefit rather than a disadvantage? > > > > You lose the key, or it gets destroyed / unusable; good, you get a new > > subkey > > instead of reusing the existing one on a different HSM. > > For the authentication and signing subkeys this is indeed true. For the > encryption subkey significantly less so (as things encrypted against > that key then become impossible to decrypt). > > Personally I have generated the signing and authentication subkeys on > the HSM itself (and thus at least in theory they cannot leave the HSM), > and the encryption subkey I have generated on an airgapped system and > stored on the HSM after making a couple of backups. I am really confused now about how all this works. How can you generate parts of a key (i.e. subkeys) on the HSM (well, yubikey), and the other parts locally? Looking forward to having up-to-date documentation once the dust settles. I have enough yubikeys which are only used for 2FA. (Well, and I'd need an airgapped, separate system, which I don't have) thanks, iustin
How to cleanup Ubuntu bugs for my Debian packages?
Hi all, Asking for people who have experience as Debian developers and who are annoyed by the Ubuntu bug count on the Debian QA page. These bugs are trivial/minor, but still, I'd like to clean them up. Let's take for example https://bugs.launchpad.net/ubuntu/+source/python-pylibacl/+bug/1876350. I have a Launchpad account, and the project itself lists my Launchpad account as "maintainer", but on the bug itself I can't mark it as "Won't fix", only as "Invalid". Which tells me that I'm missing _some_ rights in Ubuntu itself… I also have https://bugs.launchpad.net/ubuntu/+source/python-mox/+bug/852095, where again I can't mark it "Won't fix" (it refers to distributions 8 years old)… So it seems to be the same problem. Any hints? thanks, iustin
Re: How to cleanup Ubuntu bugs for my Debian packages?
On 2020-11-29 14:06:31, Mattia Rizzolo wrote: > Hello all, > > For the sake of disclosure, I'm also an Ubuntu Developer. > > On Sun, Nov 29, 2020 at 12:58:53PM +0100, Iustin Pop wrote: > > I have a Launchpad account, and the project itself lists my Launchpad > > account as "maintainer", but on the bug itself I can't mark it as "Won't > > fix", only as "Invalid". Which tells me that I'm missing _some_ rights > > in Ubuntu itself… > > Indeed, there are some statuses that are restricted to the so-called "bug > supervisor", which is a team defined in the project settings in Launchpad. > In the case of Ubuntu itself, that is the ~ubuntu-bugcontrol team > https://launchpad.net/~ubuntu-bugcontrol . Users who are part of that > team have access to all the features of the bug tracker. > > Another common limitation imposed on users that are not bug supervisors is that they cannot re-open an already closed bug, or target a > series. Understood, thanks. So I would have to ping someone in that group if I want cleanup in the future, right? Since it is not possible to delegate the bug supervisor role per (source) package. > > I also have > > https://bugs.launchpad.net/ubuntu/+source/python-mox/+bug/852095, where > > again I can't mark "Won't fix" (it refers to distributions 8 years old)… > > I've marked that as wontfix. > > However, in the case of > https://bugs.launchpad.net/ubuntu/+source/python-pylibacl/+bug/1876350 > "invalid" is the correct status, not "wontfix". Thanks, my dashboard will be clean, much appreciated :) > BTW, in case you missed it, if you are going to fix bugs reported on > Launchpad, you can use "LP: #x" in the (debian) changelog and close > them, similarly to what you do to close bugs against the BTS. Ack, will keep in mind. Also, thanks for the very fast response! iustin
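That mechanism looks like this in practice; a hypothetical debian/changelog entry (the version, wording, bug number and date below are all made up for illustration):

```text
python-mox (0.7.8-2) unstable; urgency=medium

  * Fix the upgrade path from older releases (LP: #NNNNNN)

 -- Iustin Pop <email address>  Sun, 29 Nov 2020 14:30:00 +0100
```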
Re: Firmware - what are we going to do about it?
On 2022-04-23 22:48:03, Paul van der Vlis wrote: > On 23-04-2022 at 16:10, Andrey Rahmatullin wrote: > > On Sat, Apr 23, 2022 at 03:13:29PM +0200, Paul van der Vlis wrote: > > > > I see several possible options that the images team can choose from > > > > here. > > > > However, several of these options could undermine the principles of > > > > Debian. We > > > > don't want to make fundamental changes like that without the clear > > > > backing of > > > > the wider project. That's why I'm writing this... > > > > > > I have an idea for an extra option: > > > > > > 6. Put the closed source firmware somewhere in the Debian images, but > > > never > > > install closed source firmware by default. "No" should be the default. > > That's the option 3 more or less. > > Option 3 says to publish two sets of images. > And it says nothing about defaults. > > > 3. We could stop pretending that the non-free images are unofficial, and > maybe move them alongside the normal free images so they're published > together. This would make them easier to find for people that need them, but > is likely to cause users to question why we still make any images without > firmware if they're otherwise identical. > > > > > to put "non-free" into sources.list should also be a non-default choice, > > > even when you install closed source firmware. > > No, that's a bad idea, which is one of the main reasons for the option 5. > > The idea is not to promote closed source firmware in any way. Have it > available, but only for the people who really want it. Uh. Have you actually read the start of the thread? Most machines nowadays *need* it. It's not about wanting, but about the fact that firmware is needed, and the more barriers you put in front of people, the more people will just go with Ubuntu or other alternatives. Making Debian hard to use is a very short-sighted view of how to promote free software - it works in the very short term only. regards, iustin
Re: default firewall utility changes for Debian 11 bullseye
On 2019-12-19 12:29:59, Roberto C. Sánchez wrote: > Hi Arturo! > > I know that this discussion took place some months ago, but I am just > now getting around to catching up on some old threads :-) Same here :) > On Tue, Jul 30, 2019 at 01:52:30PM +0200, Arturo Borrero Gonzalez wrote: > > > 2) introduce firewalld as the default firewalling wrapper in Debian, at > > > least in > > > desktop related tasksel tasks. > > > > > > > There are some mixed feelings about this. However I couldn't find any strong > > opinion against either. > > > > What I would do regarding this is (just a suggestion): > > * raise priority of firewalld > > * document in-wiki what defaults are, and how to move away from them > > * include some documentation bits in other firewalling wrappers on how to > > deal > > with this default, i.e what needs to be changed in the system for ufw to > > work > > without interferences (disable firewalld?) > > > I like the idea of documenting this all in a wiki. Yes, please. I was also bitten by the nftables migration when moving to buster, for some of my home-grown firewall scripts (running just fine for 10+ years; now I'm looking forward to migrating to nft), so having this documented would be very welcome, to see what the alternatives are. iustin
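As a starting point, the nft equivalent of a typical small home-grown iptables script is fairly compact; a minimal sketch (illustrative only, not a recommended policy):

```text
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        ct state invalid drop
        iif "lo" accept
        meta l4proto { icmp, ipv6-icmp } accept
        tcp dport 22 accept
    }
}
```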
Re: Bits from the Stable Release Managers
On 2016-11-27 20:42:26, Adam D. Barratt wrote: >* The bug should be of severity "important" or higher Quick question: assuming all the other conditions are met (minimal patch, clean debdiff, etc.), this seems to discourage normal bug fixing. Is that intentional (i.e. there must be significant breakage), or more about "we don't want random bugs fixed"? Just curious. thanks, iustin signature.asc Description: PGP signature
Re: Bits from the Stable Release Managers
On 2016-11-29 13:08:54, Adam D. Barratt wrote: > On 2016-11-28 23:07, Iustin Pop wrote: > > On 2016-11-27 20:42:26, Adam D. Barratt wrote: > > >* The bug should be of severity "important" or higher > > > > Quick question: assuming all the other conditions are met (minimal > > patch, > > clean debdiff, etc.), this seems to discourage normal bugs fixing. Is > > that intentional (i.e. there must be significant breakage), or more > > about "we don't want random bugs fixed"? > > Further to Julien's answer, it's also worth noting that until a few years > ago the bar was in fact higher - essentially requiring the bug to be RC. Ack. Thanks both for the answer! iustin signature.asc Description: PGP signature
Bug#854422: ITP: multitime -- an extension to time which runs a command multiple times and gives detailed stats
Package: wnpp Severity: wishlist Owner: Iustin Pop * Package name: multitime Version : 1.3 Upstream Author : Laurence Tratt * URL : http://tratt.net/laurie/src/multitime/ * License : BSD Programming Lang: C Description : a time-like tool which does multiple runs Unix's time utility is a simple and often effective way of measuring how long a command takes to run ("wall time"). Unfortunately, running a command once can give misleading timings: the process may create a cache on its first execution, running faster subsequently; other processes may cause the command to be starved of CPU or IO time; etc. It is common to see people run time several times and take whichever values they feel most comfortable with. Inevitably, this causes problems. multitime is, in essence, a simple extension to time which runs a command multiple times and prints the timing means, standard deviations, mins, medians, and maxes having done so. This can give a much better understanding of the command's performance.
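The core idea is simple enough to sketch in a few lines of shell; this is purely illustrative, not the actual multitime implementation (which is written in C and is far more careful, also reporting user/sys time, medians and standard deviations):

```shell
#!/bin/sh
# Rough sketch of the multitime idea: run a command several times and
# summarize the wall-clock timings. Assumes GNU date (for %N).
cmd=${1:-true}
runs=${2:-5}
summary=$(
    for _ in $(seq "$runs"); do
        start=$(date +%s.%N)
        $cmd
        end=$(date +%s.%N)
        echo "$start $end"
    done | awk '{
        t = $2 - $1
        sum += t
        if (NR == 1 || t < min) min = t
        if (NR == 1 || t > max) max = t
    } END {
        printf "runs=%d mean=%.6f min=%.6f max=%.6f", NR, sum / NR, min, max
    }'
)
echo "$summary"
```

Invoked as e.g. `sh sketch.sh ls 10`, it prints one summary line; the point being that a mean over many runs is far more trustworthy than a single time invocation.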
Re: A proposal for improving transparency of the FTP NEW process
On 2018-03-02 13:51:24, Gert Wollny wrote: > On Friday, 02.03.2018 at 14:15 +0200, Lars Wirzenius wrote: > > > > > > Counter proposal: let's work on ways in which uploaders can make it > > easy and quick for ftp masters to review packages in NEW. The idea > > should be, in my opinion, that any package that requires more than a > > day of work to review should be rejected by default. > > How do you want to achieve this with a source package that has 13k+ > source files and where upstream does not provide a standard license > header for each file? I.e. there is some license text and it needs to > be quoted, but licensecheck doesn't detect the license or doesn't > detect the copyright entry, so one has to manually inspect many files > to get it right. > > Do you really want to reject these packages outright from Debian, even > though they follow the DFSG? How do you (we) know the package is indeed DFSG-compliant, if there is no license information? If upstream cannot be bothered to provide headers, how do we know the code is indeed licenced under the claimed licence? Etc. Note: I haven't looked at the package. Maybe I misunderstand the situation…
Re: Non-free RFCs in stretch
On 2017-03-05 12:41:18, Ben Finney wrote: > Sebastiaan Couwenberg writes: > > I'd like to see a compromise in the DFSG like #4 for standards to > > allow their inclusion in Debian when their license at least allows > > modification when changing the name or namespace for schemas and the > > like. > > Since that does not describe the license granted in these documents, I > don't see why you raise it. > > On the contrary, I would like to see the license granted in these > documents changed to conform to the DFSG, and then they can be included > without violating or changing our social contract. I have to say I lean more to the practicality side here, and I don't really see a need or reason to have standards documents under the "free to modify" clause. Could you try to explain to me why one would need the same liberties for standards documents as for source code? confused, iustin signature.asc Description: PGP signature
Re: Defaulting to i686 for the Debian i386 architecture
On 2015-09-28 22:14:44, Ben Hutchings wrote: > We propose to drop support for i386 processors older than 686-class in > the current release cycle. This would include folding libc6-i686 into > libc6, changing the default target for gcc, and changing the 586 kernel > flavour to 686 (non-PAE). > > Since the 686-class, introduced with the Pentium Pro, is now almost 20 > years old, we believe there are few Debian systems still running that > have 586-class or hybrid processors. Not true! My trusty Soekris 5501 has an AMD Geode LX, which (AFAIK) is still only i586. It seems it is missing one instruction (NOPL) from being a full 686 chip :) Anyway, what I mean to say: thanks for finally motivating me to upgrade. This move definitely makes sense, but I wouldn't be so sure that there are no actual production systems running on a 586-only CPU. And thanks for your work! iustin signature.asc Description: Digital signature
Re: git, debian/ tags, dgit - namespace proposal
On 2015-11-15 21:26:52, Ben Hutchings wrote: > On Sun, 2015-11-15 at 21:10 +, Ian Jackson wrote: > [...] > > So this message is: > > > > * A request for anyone to say if they know of a reason I shouldn't do > > this. > [...] > > Deliberately creating identifiers that differ only by case seems > gratuitously confusing. Further, it may cause difficulties for someone > who is packaging for some other operating system and tries to clone a > dgit repository onto a case-insensitive filesystem (which git generally > does support). +1, differing only by case is indeed very confusing. regards, iustin signature.asc Description: PGP signature
Re: git, debian/ tags, dgit - namespace proposal [and 1 more messages]
On 2015-11-16 13:27:33, Ian Jackson wrote: > [resending because my MUA messed up] > > Ben Hutchings writes ("Re: git, debian/ tags, dgit - namespace > proposal"): > > Deliberately creating identifiers that differ only by case seems > > gratuitously confusing. > > I acknowledge that this is a downside of my proposal. However, it is > IMO important that the tag names I choose are relatively prominent. > > A major part of the audience is people who don't know much about > Debian packaging, or even necessarily much about git workflows, let > alone about how Debian uses git. > > If such a user sees two similarly named tags, they will at least know > that something is confusing. They can look at the contents of each, > or type ignorant questions into a search engine, and hopefully find > the answer. So the very obvious nature of the confusion is an > advantage. I would say that's a long shot; depends on your users, of course, but in general the plan "let's make X confusing so that people will pay more attention to it and learn its intricacies" is more likely to backfire, IMHO. regards, iustin signature.asc Description: PGP signature
Re: support for merged /usr in Debian
On 2016-01-01 13:39:35, Adam Borowski wrote: > On Fri, Jan 01, 2016 at 12:23:20PM +0100, Ansgar Burchardt wrote: > > m...@linux.it (Marco d'Itri) writes: > > > Thanks to my conversion program in usrmerge there is no need for a flag > > > day, archive rebuilds or similar complexity and we can even continue to > > > support unmerged systems. > > > > Is there any use case that requires supporting unmerged systems? > > I don't think so. You already need the / filesystem, and with today storage > sizes, if you can hold that, you can hold the whole system, period. Even on > any embedded that can run Debian. > > The last time I've seen a split done due to small / was Maemo ten years ago. > And guess what? They didn't use / vs /usr but hacked something where both / > and /usr were on the small mmc while big /opt hold most of the files, with > symlinks from /usr. That's because their needs were different from those of > Ken Thompson in 1971. > > A reasonable and often important split is keeping /+/usr apart from a box's > main purpose, be it /home, /srv or /var/lib/postgresql -- but in any case > both / and /usr will be on the same filesystem. > > Thus, I'd say /usr is pointless on any machine we can reasonably support. I respectfully disagree. Having / contain basically only /etc means we finally have a full separation between configuration (/), binaries (/usr) and state (/var), which opens up some interesting options in the field of large-scale virtualisation. regards, iustin signature.asc Description: PGP signature
Re: [Fwd: Re: [DNG] FW: support for merged /usr in Debian]
On 2016-01-03 15:59:37, Svante Signell wrote: > Hi, > > This message was not intended to be sent to a debian-* mailing list by > the author. However, since it is (in my opinion) of large interest I > got the permission to forward it to debian-devel. Hopefully, also some > of the debian-devel subscribers will appreciate it too. Wow, here I was thinking this would be some informed opinion, but: "Oh, there are tools with which you can periscope into initramfs, but have you ever really looked at everything in an initramfs." Wait, what? Yes, I have many times unpacked the simple cpio archive that an initramfs is, and I have looked at its entire contents. It's not black magic. This is just more FUD being spread by somebody who doesn't want to change their ways, at all. iustin, who wasted 1 minute on this signature.asc Description: PGP signature
Re: support for merged /usr in Debian
On 2016-01-03 17:03:02, Simon McVittie wrote: > […] For > instance, /bin -> /usr/bin is needed because otherwise #!/bin/sh would > stop working, […] This brings to mind—I wonder if the performance impact of having /bin/sh be read through two indirections (/bin/sh → /usr/bin/sh → /usr/bin/{dash, bash, etc.}) is non-zero and if it could be reliably measured. This is not an argument against UsrMerge, I'm very much for it; I'm just curious. iustin signature.asc Description: PGP signature
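[Out of curiosity, here is one rough way such a measurement could be sketched from a shell. All paths and the iteration count below are made up for illustration; it relies on GNU date's %N and only times stat() on a synthetic two-hop symlink chain, not the kernel's actual path walk during execve(), so treat it as a toy, not a benchmark.]

```shell
# Build a two-level symlink chain (standing in for
# /bin/sh -> /usr/bin/sh -> dash) and compare lookup time
# against the real file. Paths are illustrative.
set -e
dir=$(mktemp -d)
printf '#!/bin/sh\n' > "$dir/real"
chmod +x "$dir/real"
ln -s "$dir/real" "$dir/hop1"
ln -s "$dir/hop1" "$dir/hop2"

# Time N stat() calls on each path; any difference is the cost
# of following the two extra symlinks (plus noise).
for target in real hop2; do
    start=$(date +%s%N)
    i=0
    while [ "$i" -lt 1000 ]; do
        stat "$dir/$target" > /dev/null
        i=$((i + 1))
    done
    end=$(date +%s%N)
    echo "$target: $(( (end - start) / 1000000 )) ms"
done
rm -rf "$dir"
```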
Re: support for merged /usr in Debian
On 2016-01-03 12:59:01, Tom H wrote: > On Sat, Jan 2, 2016 at 6:42 PM, Geert Stappers wrote: > I don't like usr-merge because it goes against my historical > expectation that "/{,s}bin" be separate from their /usr namesakes and > contain binaries required for boot. OK, so adjust your historical expectation. It's not a technical issue, it's simply a matter of expectations, which have no reason to stay the same forever. regards, iustin signature.asc Description: PGP signature
Re: support for merged /usr in Debian
On 2016-01-03 22:22:16, Tom H wrote: > On Sun, Jan 3, 2016 at 6:17 PM, Iustin Pop wrote: > > On 2016-01-03 12:59:01, Tom H wrote: > >> > >> I don't like usr-merge because it goes against my historical > >> expectation that "/{,s}bin" be separate from their /usr namesakes and > >> contain binaries required for boot. > > > > OK, so adjust your historical expectation. It's not a technical issue, > > it's simply a matter of expectations, which have no reason to stay the > > same for ever. > > Did you read the next para of my email?! Yes. Instead of trying to think of initramfs as the new / or any other way of thinking that keeps the distinction, I'm suggesting to simply drop this distinction/expectation. You didn't give any reasons except historical expectation, and I don't think that's a needed one (given that you can't properly make a distinction between what-is-boot and what isn't). regards, iustin signature.asc Description: PGP signature
Re: support for merged /usr in Debian
On 2016-01-04 12:03:07, Marc Haber wrote: > On Sun, 3 Jan 2016 19:15:18 +0100, m...@linux.it (Marco d'Itri) wrote: > >Anyway, if you think that the merged /usr scheme is about systemd then > >you are automatically disqualified from taking part in this discussion > >because you are not understanding the basic underlying issues. > > As friendly as always. Friendly? Maybe not. But correct? Yes. iustin
Re: support for merged /usr in Debian
On 2016-01-04 23:41:40, Eric Valette wrote: > On 04/01/2016 20:43, Michael Biebl wrote: > > >an initramfs is not mandatory as long as you don't have /usr on a > >separate partition. > >No initramfs + split /usr is not supported and has been broken for a while. > > Did you actually test it? It works for me TM on fairly simple setup... That's the key point - "on a fairly simple setup". You don't know when it will break: at some point the setup becomes complex enough that a library you actually need lives in /usr instead of /, and then you can't boot anymore without going into rescue/emergency mode. The whole point of UsrMerge is to _guarantee_ that booting works on _all_ setups, and to reduce the maintenance burden at the same time. regards, iustin signature.asc Description: PGP signature
How to properly remove obsolete init script?
Hi and sorry for what is a basic question, but I can't find a clear answer. It seems that dpkg-maintscript-helper is the tool to remove conffiles, but it doesn't seem to have any special handling for init scripts. Which means that simply calling "rm_conffile …" will just remove the /etc/init.d/ script, leaving potential symlinks in /etc/rc?.d around. The snippets inserted by dh_installinit in postrm will only call "update-rc.d xxx remove" in case of purge, so they won't be called during a package upgrade. Any smarter solution than just adding code to postinst/configure with the old version to call update-rc.d remove? thanks, iustin signature.asc Description: PGP signature
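[For the record, the "adding code to postinst/configure" approach could look roughly like the sketch below. The package name "foo" and the version "1.2-3" (meant as the last version that shipped the init script) are placeholders, not from any real package; the guard on $2 keeps fresh installs from running the cleanup.]

```shell
#!/bin/sh
# Hypothetical postinst fragment to drop leftover /etc/rc?.d
# symlinks after rm_conffile has deleted /etc/init.d/foo.
set -e

cleanup_init_links() {
    # $1 = maintscript action, $2 = previously configured version
    case "$1" in
        configure)
            # Only on upgrade from a version that still shipped the
            # init script; on fresh installs $2 is empty and there
            # is nothing to clean up.
            if [ -n "$2" ] && dpkg --compare-versions "$2" lt "1.2-3"; then
                update-rc.d foo remove >/dev/null
            fi
            ;;
    esac
}

cleanup_init_links "$@"
```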
dpkg-genchanges warning for -dbgsym packages
Hi all, I'm very glad for the automatic debug packages, but I wonder if I'm doing something wrong since I get this warning: dpkg-genchanges: warning: package foo-dbgsym listed in files list but not in control info The package is generated, so this mostly seems like a harmless warning. I checked but I don't see any bug about it, and searching the internet turns up nothing. Thoughts? thanks, iustin signature.asc Description: PGP signature
Interaction between dpkg's directory removal and dpkg-maintscript-helper
Hi, I'm not sure if this is a bug or if I'm doing something wrong, looking for advice. If one wants to deprecate a conffile located in an otherwise empty directory (let's say /etc/foo/bar), the following happens at new package install time: - before files unpacking, new-preinst is called; from dpkg-maintscript-helper manual, at this stage the conffile is renamed to bar.dpkg-remove (assuming not changed by user) - new package files are unpacked; at this point, the new package doesn't own the conffile anymore, so dpkg doesn't have any tracked file under /etc/foo, so it tries to remove the directory; however this fails due to the /etc/foo/bar.dpkg-remove - the new package is configured, and the postinst call to dpkg-maintscript-helper now removes /etc/foo/bar.dpkg-remove, but leaves /etc/foo as an unowned, dangling directory At least, this is my understanding from reading policy and from piuparts output. I can work around it by manually removing the directory (if empty), but that seems somewhat out of place. Thoughts? thanks, iustin signature.asc Description: PGP signature
Re: Interaction between dpkg's directory removal and dpkg-maintscript-helper
On 2016-05-06 11:07:57, Mike Hommey wrote: > On Fri, May 06, 2016 at 02:16:32AM +0200, Iustin Pop wrote: > > Hi, > > > > I'm not sure if this is a bug or if I'm doing something wrong, looking > > for advice. If one wants to deprecate a conffile locate in an otherwise > > empty directory (let's say /etc/foo/bar), the following happens at new > > package install time: > > > > - before files unpacking, new-preinst is called; from > > dpkg-maintscript-helper manual, at this stage the conffile is renamed > > to bar.dpkg-remove (assuming not changed by user) > > - new package files are unpacked; at this point, the new package doesn't > > own the conffile anymore, so dpkg doesn't have any tracked file under > > /etc/foo, so it tries to remove the directory; however this fails due > > to the /etc/foo/bar.dpkg-remove > > - the new package is configured, and the postinst call to > > dpkg-maintscript-helper now remove /etc/foo/bar.dpkg-remove, but > > leaves /etc/foo as an unowned, dangling directory > > > > At least, this is my understanding from reading policy and from piuparts > > output. I can workaround it by manually removing the directory (if > > empty), but seems somewhat out-of-place. Thoughts? > > See also: bug 815969. I see, thanks for the fast reply. rmdir it is then… iustin
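[For reference, the workaround boils down to an unconditional rmdir from postinst; GNU rmdir's --ignore-fail-on-non-empty makes that safe even if the admin has dropped extra files into the directory. A small sketch of the behaviour on a scratch directory, with made-up paths:]

```shell
# rmdir --ignore-fail-on-non-empty removes a directory when it is
# empty and succeeds silently otherwise, so it can be called
# unconditionally from postinst after rm_conffile has run.
set -e
d=$(mktemp -d)
mkdir "$d/empty" "$d/full"
touch "$d/full/keep"

rmdir --ignore-fail-on-non-empty "$d/empty" "$d/full"

[ ! -d "$d/empty" ]   # empty directory was removed
[ -d "$d/full" ]      # non-empty one left alone, no error
```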