Re: what file's mtime is the current time?
Dan Jacobson [EMAIL PROTECTED] writes: I could do $ touch file to make a file with the current time as its mtime, but I think one already exists. Is /proc/1 the best choice? If you really, really, really need to do this, /proc/self would be much better, since you don't have rights to init's /proc entry in a chroot. MfG Goswin
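A minimal sketch of the suggestion. Note that how procfs stamps these inodes is kernel-dependent, so treat this as a probe rather than a guarantee:

```python
import os
import time

# Read the modification time of /proc/self, which the post suggests
# as a ready-made "file whose mtime is (roughly) now". Whether it
# tracks the current time depends on the kernel's procfs behavior.
mtime = os.stat("/proc/self").st_mtime
print(mtime, time.time())
```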
Re: regarding the (gcc) bug no 206715
Petrisor Marian [EMAIL PROTECTED] writes: I was wondering, did the standards change between gcc 2.95 and gcc 1:3.3.1-0pre0? Since I managed to compile the kernel source with the older version of gcc, I think they did!? So, how can I now write multiple instructions in asm such that gcc 1:3.3.1-0pre0 will accept them? I am not in front of my Debian machine right now to check this, but I don't think the guys who maintain the kernel will just remove the asm files. So, will I be able to compile kernel 2.4.21 with gcc 1:3.3.1-0pre0? I don't want to flame you or anybody else, but I think there is a bug in gcc 1:3.3.1-0pre0! The standard has indeed changed, or rather gcc adheres more closely to the standard. All recent kernels compile though; update and look at how they do it. MfG Goswin
Re: dak changes (names, version control, mail headers)
Joe Smith [EMAIL PROTECTED] writes: James Troup [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Hi, I've just updated ftp-master.debian.org to use a new version of dak which no longer uses the silly names at all. You killed katie, jennifer, et al.? Poor britney will feel so alone! After all the ranting he finally gave in to 'Debian Women Please Pass Away [EMAIL PROTECTED]' and removed some women from the project. A sad, sad day indeed. Debian has lost all sex appeal. :) MfG Goswin -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: GCC 4.1 now the default GCC version for etch
Lennart Sorensen [EMAIL PROTECTED] writes: On Sun, Jun 18, 2006 at 11:40:03PM +0200, Wouter Verhelst wrote: On Sun, Jun 18, 2006 at 02:24:35PM -0700, Blars Blarson wrote: (amd64 is only faster in 64-bit mode because of the poorly designed x86 32-bit instruction set.) x86 32-bit instruction set and designed in one sentence? Hah. How about the fact that it has more registers available in 64-bit mode. People always said the x86 didn't have enough registers, after all. Using SSE for floating point rather than the awful stack-based x87 probably helps too. Len Sorensen Exactly. Isn't the x86_64 instruction set basically the same as ia32, just with a few extra opcodes and more registers? Any general fault in the instruction set should still remain. MfG Goswin
Re: GCC 4.1 now the default GCC version for etch
Lennart Sorensen [EMAIL PROTECTED] writes: On Tue, Jun 20, 2006 at 12:32:18PM +0200, Goswin von Brederlow wrote: Exactly. Isn't the x86_64 instruction set basically the same as ia32, just with a few extra opcodes and more registers? Any general fault in the instruction set should still remain. x86 processors have multiple modes with different instructions and registers in each mode. AMD decided to have long mode remove some old features and in some cases replace them with new ones. But the changes in the instruction set are minimal. With a bit of care you can use the same inline asm code for ia32 and x86_64, for example. They didn't fix any fundamental flaws in the 386 instruction set, just dropped some later add-ons like MMX. That is what I meant. MfG Goswin
Nothing NEW in Debian
Hi, a momentous occasion, never seen before in a million years (or 5 or so): there is nothing NEW[1] in Debian. :) MfG Goswin [1] http://ftp-master.debian.org/new.html
Re: A panoply of errors [Re: xcdroast: Who to blame? Joerg!]
Don Armstrong d...@debian.org writes: On Tue, 03 Mar 2009, Joerg Schilling wrote: As I mentioned before, the attacks have been initated by Eduard Bloch Likely incorrect. who is no longer active in Debian. Completely incorrect. It would be a nice gesture if Debian would through him out. Morally incorrect. As he is currently already in a suspended status, Totally incorrect. this would be something that is not hard to do by Debian Naively incorrect. but it would show a sign of will. Gramatically incorrect. Don Armstrong And the award for the most incorrects goes to... MfG Goswin
Re: Switching /bin/sh to dash without dash essential
Neil McGovern ne...@debian.org writes: On Sun, Jul 26, 2009 at 11:34:01AM +0200, Cyril Brulebois wrote: Goswin von Brederlow goswin-...@web.de (24/07/2009): Give me the freedom to choose. It looks like we just reached the "Linux is about choice" Goswin point. Goswin's law: As the length of a Debian discussion increases, the chance of Goswin mentioning that Linux is about choice tends towards 1. Neil On what are you basing your observation? Seems to me you are extrapolating from one thread to the universe. MfG Goswin
Re: Debian, universal operating system?
Amaya am...@debian.org writes: Anything else to confirm? :) Was there really a lack of male geeks at the Geeks Dancing BoF? Has Debian-women become too successful? MfG Goswin
Re: Switching /bin/sh to dash without dash essential
Neil McGovern ne...@debian.org writes: On Tue, Jul 28, 2009 at 03:51:41PM +0200, Goswin von Brederlow wrote: On what are you basing your observation? Seems to me you are extrapolating from one thread to the universe. Well, I don't normally reply to green ink emails, but 87slhp3nwo@informatik.uni-tuebingen.de 87iqia308d@frosties.localdomain 87iqhijasa@frosties.localdomain 87fxck1ut0@frosties.localdomain 87tz0zn52h@frosties.localdomain may help. Neil ps: there really is no need to cc: me. -- Maulkin Damned Inselaffen. Oh, wait, that's me. Wow, 3 threads over 3 years where I used the word choice. Yes, indeed, that is a solid basis to extrapolate. :) MfG Goswin
Re: archive rebuilds wrt Lucas' victory
On Mon, Apr 15, 2013 at 12:30:43AM +0200, Adam Borowski wrote: Too bad, I see what seems to be most of the time being spent in dpkg installing dependencies -- how could this be avoided? One idea would be to reformat as btrfs (+eatmydata) and find some kind of a tree of packages with similar build-depends, snapshotting nodes of the tree to quickly reset to a wanted state -- but I guess you guys have some kind of a solution already.

I think snapshotting is a good idea there. Or rather forking the filesystem. Say you have 2 packages:

Package: A
Build-Depends: X, Y

Package: B
Build-Depends: X, Z

You would start with the bare build chroot and install X. Then you create snapshots SA and SB from that. In SA you install Y and in SB you install Z. Now both packages can be built. BUT:

- Easy with 2 packages. But how do you do that with 3?
- Y and Z may both depend on W. So initially we should have installed X and W.
- Package C may Build-Conflicts: X but depend on most of the stuff X depends on. So taking the filesystem with X installed and purging X will be faster than starting from scratch.
- Doing multiple apt/dpkg runs is more expensive than a combined one. A single run will save startup time and triggers.
- Could we install packages without running triggers and only trigger them at the end of each chain? Or somewhere in the middle?
- There will be multiple ways to build the tree. We might install U first and then V, or V first and then U. Also we might have to install V in multiple branches when V cannot be installed in a common root. Unless we install V in a common root and then uninstall V again for a subtree. This probably needs a heuristic for how long installing (or uninstalling) a package takes. Package size will be a major factor, but postinst scripts can take a long time to run (update-texmf anyone?).
- Build chroots, even as snapshots, take space. You can only have so many of them in parallel. A depth-first traversal would be best there.
Building packages against locally built packages (instead of the existing official ones) gives a better test. But that would require a more breadth-first ordering. Some compromise between the two would be needed.

Note: With multiple cores it is better to run multiple builds in parallel, given enough RAM, than trying to build a single package on multiple cores. Most packages don't support parallel building (even if they could) and some break if you force the issue. Now if you have multiple builds that are based on the same snapshot, then common header files and libraries will be cached only once. So cache locality (and therefore efficiency) should increase.

So I'm looking forward to someone taking up this idea and implementing an algorithm that will sort sources into a tree structure, installing and snapshotting / forking the filesystem at each node, optimized to reduce the number of dpkg runs, the snapshots needed, and the installing of the same package in multiple branches. MfG Goswin

Archive: http://lists.debian.org/20130416092943.GA23900@frosties
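The grouping idea in the mail above (fork the base chroot once per shared build-dependency) can be sketched roughly as follows. The package names and the greedy most-shared-dependency heuristic are invented for illustration; a real tool would read the Sources index and drive btrfs or LVM snapshots instead of an in-memory tree:

```python
from collections import Counter

# Toy build-dependency sets, extending the A/B example from the mail.
PACKAGES = {
    "A": {"X", "Y"},
    "B": {"X", "Z"},
    "C": {"X", "W"},
    "D": {"Q"},
}

def build_tree(pkgs, installed=frozenset()):
    """Greedily group packages under their most-shared missing dep.

    Each node stands for a snapshot of the chroot with `installed`
    present; every child forks the snapshot and installs one more
    package. Returns (installed, children, buildable), where
    `buildable` lists packages fully satisfied at this node.
    """
    buildable = [p for p, deps in pkgs.items() if deps <= installed]
    remaining = {p: d for p, d in pkgs.items() if not d <= installed}
    children = []
    while remaining:
        # Pick the dependency shared by the most remaining packages.
        counts = Counter(d for deps in remaining.values()
                         for d in deps - installed)
        dep, _ = counts.most_common(1)[0]
        group = {p: d for p, d in remaining.items() if dep in d}
        for p in group:
            del remaining[p]
        children.append(build_tree(group, installed | {dep}))
    return (installed, children, buildable)

def n_installs(node):
    """Total dpkg install operations: one per edge in the tree."""
    _, kids, _ = node
    return len(kids) + sum(n_installs(k) for k in kids)

tree = build_tree(PACKAGES)
naive = sum(len(deps) for deps in PACKAGES.values())
print(n_installs(tree), "installs with snapshots vs", naive, "naive")
```

On this toy set, X is installed once for three packages, so the tree needs 5 installs where per-package chroots would need 7. The open questions from the mail (purge edges, trigger batching, snapshot space) are exactly what this sketch leaves out.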
Re: archive rebuilds wrt Lucas' victory
On Tue, Apr 16, 2013 at 10:22:20PM +0200, Adam Borowski wrote: On Tue, Apr 16, 2013 at 11:29:43AM +0200, Goswin von Brederlow wrote: On Mon, Apr 15, 2013 at 12:30:43AM +0200, Adam Borowski wrote: Too bad, I see what seems to be most of the time being spent in dpkg installing dependencies -- how could this be avoided? One idea would be to reformat as btrfs (+eatmydata) and find some kind of a tree of packages with similar build-depends, snapshotting nodes of the tree to quickly reset to a wanted state

I think snapshotting is a good idea there. Or rather forking the filesystem. Say you have 2 packages: Package: A Build-Depends: X, Y Package: B Build-Depends: X, Z You would start with the bare build chroot and install X. Then you create snapshots SA and SB from that. In SA you install Y and in SB you install Z. Now both packages can be built.

So you would include intermediate states as nodes in the graph as well? Interesting -- this could indeed optimize cases like that, at the cost of making the problem a good deal harder algorithmically.

BUT: - Easy with 2 packages. But how do you do that with 3?

You mean, an algorithmic challenge? In our kind of crowd? That's the fun stuff!

Anything exponential will not work. And for O(n^c) the c must be rather small to still find a solution in time. Fun stuff indeed. After all, if we search 4 weeks for an optimal solution we might as well just build everything like now and be quicker.

If we reduce the problem by two simplifications:

* we can snapshot only before building a package (no intermediate states)

I don't think that is a good simplification. The chance that for two packages like A and B there is a third package C that only Build-Depends on X is rather small. And then you wouldn't get a common node for A and B.

* the cost of purging a package is the same as installing it

That somewhat fixes what the first one broke, since now B can be a child of A by purging Y and installing Z. Still wasteful though.
It makes the graph a lot smaller though. A solution is to find a minimal spanning tree, possibly with a constraint on the tree's height. And with the graph not being malicious, I have a hunch the tree would behave nicely, not requiring too many snapshots (random graphs tend to produce short trees). Luckily the minimal spanning tree is well researched. :) Note: the graph would be directed and should have weights according to the (estimated) complexity of installing/purging a package. The full problem may take a bit more thinking.

- Y and Z may both depend on W. So initially we should have installed X and W.
- Package C may Build-Conflicts: X but depend on most of the stuff X depends on. So taking the filesystem with X installed and purging X will be faster than starting from scratch.

I.e., edges that purge should have a lesser cost than edges that install.

- Doing multiple apt/dpkg runs is more expensive than a combined one. A single run will save startup time and triggers.

Again, parameters for the edge cost function.

- Could we install packages without running triggers and only trigger them at the end of each chain? Or somewhere in the middle?

Could be worth looking at. Not sure how many triggers can work incrementally, and how many rebuild everything every time like man-db. Of course, this is moot if we snapshot only before package build.

Even if a trigger is incremental we don't lose anything if we delay running it. It will do more work in that later run, but not more than the individual runs put together. On the other hand non-incremental triggers will add up to a lot more.

- There will be multiple ways to build the tree. We might install U first and then V, or V first and then U. Also we might have to install V in multiple branches when V cannot be installed in a common root. Unless we install V in a common root and then uninstall V again for a subtree. This probably needs a heuristic for how long installing (or uninstalling) a package takes.
Package size will be a major factor but postinst scripts can take a long time to run (update-texmf anyone?).

What about something akin to: log(size) + size/X ? For smallish packages, dpkg churn is the dominant factor; for big ones it's actual I/O and registering individual files.

That would be something to tune experimentally I guess. Possible factors I can see would be:

- fixed cost for apt/dpkg startup time (or cost relative to the number of installed packages/files, i.e. database size)
- number of dpkg runs needed for a set of packages (multiplier for the first)
- package size in bytes and number of files
- cost for triggers
- cost for preinst/postinst/prerm/postrm scripts

The last two would be harder to get. The rest all comes from the Packages files.

- Build chroots, even as snapshots, take space. You can only have so many of them
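The proposed edge weight, log(size) + size/X, could look like this. The constant X and the purge discount are placeholders, to be tuned experimentally as the mail suggests:

```python
import math

# Placeholder tuning constant: bytes of payload that cost roughly as
# much as one unit of fixed dpkg churn. An assumption, not measured.
X = 1 << 20  # 1 MiB

def install_cost(size_bytes, purge=False):
    """Heuristic edge weight for the build-dependency graph.

    log(size) models per-package dpkg overhead (dominant for small
    packages), size/X models unpack I/O and file registration
    (dominant for big ones). Purge edges get a discount, since the
    discussion wants purging to cost less than installing; the 0.5
    factor is an arbitrary stand-in.
    """
    cost = math.log(max(size_bytes, 1)) + size_bytes / X
    return 0.5 * cost if purge else cost

print(install_cost(10_000), install_cost(100 << 20))
```

With weights like these on install and purge edges, the minimal-spanning-tree search from earlier in the thread has something concrete to minimize.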