Re: Firmware GR result - what happens next?
On Mon, Oct 03, 2022 at 02:47:33PM +0200, Pascal Hambourg wrote:
> Not even replace "stable/updates" with "stable-security" during the upgrade
> from buster to bullseye ?

Hmm, I don't recall, but I suppose doing it just wasn't very memorable. At least you would have gotten an error fetching the list if you didn't do it; not adding a new entry, on the other hand, will not produce one.

-- 
Len Sorensen
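For context, the change being discussed is the rename of the security suite between the two releases; a plain substitution of the release name in sources.list does not produce the new entry:

```
# buster:
deb http://security.debian.org/debian-security buster/updates main

# bullseye (renamed suite -- search-and-replace of the release name
# alone would leave an invalid "bullseye/updates" line):
deb http://security.debian.org/debian-security bullseye-security main
```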
Re: Firmware GR result - what happens next?
On Sun, Oct 02, 2022 at 08:21:31PM +0100, Steve McIntyre wrote:
> Two things:
>
> 1. I'm worried what bugs we might expose by having packages be in two
>    components at once.
> 2. I really don't like the idea of leaving two different
>    configurations in the wild; it'll confuse people and is more
>    likely to cause issues in the future IMHO.
>
> Plus, as Shengjing Zhu points out: we already expect people to manage
> the sources.list anyway on upgrades.

People who just have 'stable' in their sources.list haven't had to do anything. I can't think of ever having had to add anything, only change the release name. This will get missed, and it will get missed a lot.

-- 
Len Sorensen
Re: partman, growlight, discoverable partitions, and fun
On Mon, Sep 27, 2021 at 03:18:48PM +0200, John Paul Adrian Glaubitz wrote:
> Whether a tool that was developed new from scratch is automatically better is
> not a given. The burden of proof is on the person trying to introduce the new
> software, not on the people maintaining the current set of software.
>
> And claiming that parted is in pure maintenance mode is not true either. It
> has a paid developer working on the project and is receiving updates and
> improvements.
>
> Whether growlight is better and more suitable for Debian needs to be
> technically proven, not just by arguing that it’s the newer project.

I would have thought that if libparted was missing 1 or 2 features, it would make more sense to add those features than to write a new tool duplicating most of the functionality. Well, unless working with the maintainers of libparted is impossible. There are a few projects like that, but I don't remember ever seeing complaints about libparted.

Now you have two tools, both missing some features. Hardly an improvement.

-- 
Len Sorensen
Re: Y2038 - best way forward in Debian?
On Tue, Feb 04, 2020 at 09:38:46AM -0500, Stefan Monnier wrote:
> > * 32-bit ABIs/arches are more awkward. glibc will continue *by
> >   default* to use 32-bit time_t to keep compatibility with existing
> >   code. This will *not* be safe as we approach 2038.
> >
> > * 32-bit ABIs/arches *can* be told to use 64-bit time_t from glibc
> >   upwards, but this will of course affect the ABI. Embedded uses of
> >   time_t in libraries will change size, etc. This *will* be safe for
> >   2038.
>
> And that's chosen at build time (i.e. when compiling glibc)?
> Why not provide different entry points for time-manipulating functions,
> so a single build can support both applications using 32bit time and
> applications using 64bit time?
>
> E.g.
>
>     struct time32_t ...
>     struct time64_t ...
>
>     double difftime32 (time32_t time1, time32_t time0);
>     double difftime64 (time64_t time1, time64_t time0);
>
> and in the time.h have
>
>     #if TOO_OLD_TO_LIVE_PAST_2038
>     typedef time32_t time_t;
>     ...
>     #else
>     typedef time64_t time_t;
>     ...
>     #endif

I agree. Why should this be any different than 64-bit file support? Transparent on 64-bit architectures, while 32-bit code gets to pick the variant it wants at compile time, glibc supports both, and eventually just about everything switches. You could even eventually add warnings when any program calls the 32-bit version of the functions.

-- 
Len Sorensen
Re: Bug#929752: Changing quote signs in GPL allowed? [Was: Bug#929752: installation-guide: left quotes in gpl.xml are not correctly rendered in pdf ]
On Tue, Aug 06, 2019 at 08:30:56PM +0200, Holger Wansing wrote:
> I was about to commit these changes, however it came to my mind if such
> changes to the GPL are allowed?
>
> At least the English variant of the GPL is 'official' and is not to be
> changed, so what about changing the quoting signs into entities?

gpl.xml isn't official. It isn't one of the files from the FSF. There is a gpl-2.0.dbk[1] version available, which in fact uses quote markup while every other file format uses ` and '. So at least there is precedent for using tags instead. After all, if you had decided to use the DocBook file as your source text, you would have gotten the desired quotes.

[1] https://www.gnu.org/licenses/old-licenses/gpl-2.0.dbk

-- 
Len Sorensen
Re: Anyone using stretch/buster/sid on ARMv4t ?
On Mon, Nov 20, 2017 at 09:37:10AM +, Lars Brinkhoff wrote:
> My StrongARM-based Netwinder machine has been lying dormant for a while,
> but I was planning to bring it back up. It's ARMv4 without Thumb.

Does a Netwinder have enough RAM these days to run the installer (or much of anything, really)?

-- 
Len Sorensen
Re: Please add lzip support in the repository
On Mon, Jul 03, 2017 at 12:38:59PM +0100, Thomas Pircher wrote:
> Hi Maria,
>
> in the example you mentioned upstream have added xz to the set of archives
> they distribute their source in. Currently[1] the GNU Octave source code is
> being distributed as .gz, .lz and .xz tarballs.
>
> I don't get it; what exactly is the problem when upstream distributes their
> source in multiple formats, including .xz and .lz, among others?
>
> Thomas
>
> [1] https://ftp.gnu.org/gnu/octave/

Looking at the timestamps, it appears that starting with 4.2.0 only gz and lz were provided, and that was again the case for 4.2.1; then in the middle of June this year (some 7 months after the 4.2.0 release) someone went and added xz archives as well, probably because they used to have them and someone asked to keep having them.

So they used to be gz and xz only, then went to gz and lz only, and then later had xz added back, so they now have three types. Seems good in the end. No idea what compression options were used, but the lz certainly looks a good chunk smaller than the xz for those archives.

-- 
Len Sorensen
Re: armel after Stretch (was: Summary of the ARM ports BoF at DC16)
On Wed, Dec 14, 2016 at 06:40:22PM +0100, Wouter Verhelst wrote:
> On Wed, Dec 07, 2016 at 08:50:40PM +0900, Roger Shimizu wrote:
> [...asking for armel to be retained...]
>
> One way in which the need to keep armel around would be reduced is if we
> could somehow upgrade from armel machines to armhf ones, without
> requiring a reinstall.
>
> After all, armel has been around longer than armhf has, which means that
> there may be some machines out in the wild that were installed (and
> upgraded) when armel existed but armhf did not yet (or at least, was not
> stable yet). Some of those machines might be armv7 machines that would
> be perfectly capable of running the armhf port, except that it wasn't
> around yet when they were first installed, and switching to armhf
> without reinstalling isn't possible.
>
> I once did try to do a similar migration on my Thecus (from arm to
> armel, rather than armel to armhf), but that failed miserably; and since
> I hadn't installed the firmware update to be able to access the console
> so as to figure out what went wrong, that essentially bricked the
> machine.
>
> If there was a supported and tested way to upgrade older armel
> installations on hardware that actually works with armhf, then those
> machines wouldn't need to be able to run armel anymore, and part of this
> problem would go away...

I actually highly doubt there are that many armv7 boxes running armel. armhf was a nice performance improvement and worth the hassle of a reinstall if you had such a box in the first place. I think most armel systems are probably armv5, often the Marvell chips. Not sure if anyone is running it on original Raspberry Pi (not 2 or 3) systems, or if those all run Raspbian armhf instead.

-- 
Len Sorensen
Re: Porter roll call for Debian Stretch
On Sat, Oct 01, 2016 at 01:17:54PM +0100, Ben Hutchings wrote:
> This is not at all true. My experience is that IBM doesn't even
> build-test 32-bit configurations, as evidenced by several stable
> updates causing FTBFS in Debian.
>
> Which are very different from the Power Macs and similar platforms that
> most Debian powerpc users care about.

Well, a number of those embedded systems are in fact using Debian powerpc. They do care. The Power Macs, on the other hand, I would think are hardly used anymore. But that may just be my impression, because I never cared about them in the first place; they weren't relevant to me. :)

I would not be surprised if there are more embedded Freescale-based powerpc systems running Debian out there than Power Macs, but maybe I am underestimating how many Power Mac users are still out there running Debian.

-- 
Len Sorensen
Re: debootstrap and cdebootstrap vs systemd
On Fri, Nov 07, 2014 at 08:30:32PM +1100, csir...@yahoo.com.au wrote:
> Apologies in advance. You really hit a nerve here.
>
> Kernel 3.7 was released December 2012. The Debian project created a
> dependency on this for the default init system roughly 15 months later.
> Which is fine, and perfectly understandable. It makes sense. I don't
> want to argue that.

Well, I know wheezy runs fine on a 3.0 kernel. Not sure how much further back you can go. Of course that was, as far as I can tell, released around August of 2011, so only another year and a bit longer.

> But please don't make light of the situation for those who can't
>
>   apt-get install hardware-redesign beg-silicon-vendor-for-updates \
>     port-and-re-validate-custom-undocumented-modules \
>     go-back-in-time-and-teach-hardware-engineers-linux-kernel-lifecycle

If udev decides to stop supporting kernels without some useful recent feature, do you expect Debian to keep patching the code to support older kernels that even Debian has no intention of using in new releases? What would be the point of that?

> 3.7 is less than 2 years ago even today; apparently even that is a blip
> in many embedded hardware solutions' life-cycle. Some manufacturing
> sectors are still selling m68k and Z80 CPUs. For SoCs though, it seems
> the tradition is: fork a particular Linux kernel release, mangle it
> beyond recognition, throw it over the wall and then act like customers
> are speaking an alien language if they ever ask for updates.
>
> "Don't accept old kernels" is almost equivalent to telling many
> unrelated businesses in a particular ecosystem to burn their
> investments and start again from scratch, just because the SoC and/or
> board vendors have a broken business model. And that's hard to explain
> to business people and even hardware engineers: that a
> chip/board/subsystem is unsupported even though supply guarantees
> stretch out to the year 2020 and beyond.

Well, if you don't try to explain it, you will be stuck with the problems forever.

Where I work we make it clear to the supplier that support for the chip has to be mainlined into Linus's tree, or we don't want to deal with the chip. We know what it is like to deal with a vendor kernel that isn't maintained, and we don't want to do that. We want nothing to do with SDKs or BSPs either. They are not useful for long-term maintenance of a product. They are harmful.

> And for all I know, perhaps these businesses deserve everything that
> happens to them, who knows.

Sounds fair to me. They are doing things wrong and hurting their customers.

-- 
Len Sorensen

-- 
To UNSUBSCRIBE, email to debian-devel-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: https://lists.debian.org/20141107145016.gr24...@csclub.uwaterloo.ca
Re: [Pkg-xfce-devel] Reverting to GNOME for jessie's default desktop
On Mon, Aug 11, 2014 at 11:15:15AM +0200, Thomas Weber wrote:
> Not sure why you'd want to go for third world countries, but let's look
> at Germany (Aldi is one of the two biggest discounters here):
> http://www.presseportal.de/pm/112096/2653870/aldi-senkt-preise-fuer-fischprodukte-oel-und-smoothies
>
>   CD-R blanks (80 minutes, spindle of 50): 5.99 Euro
>   DVD+R blanks (spindle of 20): 3.99 Euro
>
> That is 0.12 EUR per CD-R and 0.20 EUR per DVD.

My local computer store has $8.99 for 50 DVD-R and $16.99 for 50 CD-R. Of course they also have 100 CD-R for $18.88 and 100 DVD-R for $24.88, so who knows. The price seems pretty similar depending on what you buy and how many.

As for GNOME as a default: unless it can have sane defaults where it behaves the way the vast majority of computer users are used to a desktop working, I don't think it is a usable desktop. That means it needs buttons on windows that people expect to see, where they expect to see them, and things behaving as they expect them to behave. Would Debian be willing to give GNOME 3 different defaults than upstream, in the interest of actually being usable to new users who are used to other operating systems and desktops?

-- 
Len Sorensen
Re: [Pkg-xfce-devel] Reverting to GNOME for jessie's default desktop
On Mon, Aug 11, 2014 at 05:34:04PM +0200, Matthias Urlichs wrote:
> You mean left vs. right side?

Or even showing them at all (certainly the last time I bothered to look at GNOME 3, it seemed to think buttons on windows were mostly to be avoided).

> People who are so afraid of new stuff to learn that they won't even
> figure out how to close a window are not Gnome's (or XFCE's, for that
> matter) target audience. If you want that, install KDE and tell it to
> use one of the let's-mimic-Windows/MacOS themes.

Xfce is perfectly usable to most people by default. All I personally expect from a window manager is:

- Be able to launch programs (ideally using Alt+F2)
- Be able to resize the window using the edge of the window
- Have a maximize/restore button
- Have a minimize button
- Have a close button

(The last three should also show up when I hit Alt+Space, because I have used that keystroke on many systems for over 20 years to do exactly that.)

That's it. I don't need any more than that. GNOME 3 failed that out of the box. It seems Microsoft is willing to accept that they fucked up on Windows 8 and is backing down, restoring what people really want in the next version. I wonder if the GNOME UI designers will ever be willing to admit they screwed up and back down. Adding new ideas is fine, but not at the expense of existing features and behaviour. You have to let people continue to use things until they get used to the new things, if they ever do. You can't just force people to switch the way they work.

-- 
Len Sorensen
Re: [Pkg-xfce-devel] Reverting to GNOME for jessie's default desktop
On Mon, Aug 11, 2014 at 07:42:41PM +0200, David Weinehall wrote:
> Available in GNOME 3.
>
> Available in GNOME 3.
>
> Not enabled by default (if I remember correctly), but possible to
> enable using gnome-tweak-tool.

I shouldn't have to know that. And I am pretty sure that when GNOME 3 appeared in sid, it wasn't available.

> Not enabled by default (if I remember correctly), but possible to
> enable using gnome-tweak-tool.

I will somewhat agree that one is hardly ever used, since I just Alt+Tab to the other window I want.

> Available in GNOME 3. Alt+space brings up the window menu in GNOME 3.
>
> So, sounds like GNOME 3 provides/can provide everything you seem to
> expect from a window manager.

Trying to navigate the horrible menu system to find where to configure things was highly unpleasant too. It made Windows 8 seem sane. I just believe the default when you install and log in the first time should be something that makes sense to your typical average user, and I don't think GNOME 3 by default does that. It can be tweaked to do so now (I don't think it could initially), but the typical user won't know how to do that. The defaults are bad.

-- 
Len Sorensen
Re: Roll call for porters of architectures in sid and testing (Status update)
On Fri, Sep 20, 2013 at 11:19:24AM -0400, Federico Sologuren wrote:
> i have a HP Visualize B2000 that i managed to install last night from
> iso distribution that i found after a lot of looking. at this point
> only terminal is working. will keep reading to get debian up and
> running. i would like to get involved. will need some additional
> information on what is needed and what skills are required. what does
> DD/DM stand for?

DD = Debian Developer
DM = Debian Maintainer

Unless I remember wrong, of course. :)

-- 
Len Sorensen
Re: Roll call for porters of architectures in sid and testing (Status update)
On Thu, Sep 19, 2013 at 10:38:29AM +0200, Niels Thykier wrote:
> Here is a little status update on the mails we have received so far.
> First off, thanks to all the porters who have already replied!
>
> So far, *no one* has stepped up to back the following architectures:
>
>   hurd-i386
>   ia64
>   mips
>   mipsel
>   s390x
>
> I have pinged some people and #d-hurd, so this will hopefully be
> amended soon. Remember that the *deadline is 1st of October*.
>
> In the list above, I excluded:
>
>   amd64 and i386: requirement for porters is waived
>   s390: Being removed from testing during the Jessie cycle
>         (Agreement made during the Wheezy release cycle)
>
> The following table shows the porters for each architecture in
> *unstable* that I have data on so far:
>
>   armel: Wookey (DD)
>   armhf: Jeremiah Foster (!DD, but NM?), Wookey (DD)
>   kfreebsd-amd64: Christoph Egger (DD), Axel Beckert (DD),
>                   Petr Salinger (!DD), Robert Millan (DD)
>   kfreebsd-i386: Christoph Egger (DD), Axel Beckert (DD),
>                  Petr Salinger (!DD), Robert Millan (DD)
>   powerpc: Geoff Levand (!DD), Roger Leigh (DD)
>   sparc: Axel Beckert (DD), Rainer Herbst (!DD)
>
> If you are missing from this list above, then I have missed your email.
> Please follow up to this mail with a message-ID (or resend it,
> whichever you prefer).

Message-ID: 20130904160124.gt12...@csclub.uwaterloo.ca

Sent September 4th.

-- 
Len Sorensen
Re: Plat'Home OpenBlocks discounted for Debian member and FLOSS Developer
On Wed, Apr 17, 2013 at 08:53:24AM +0900, Nobuhiro Iwamatsu wrote:
> Plat'Home, one of the DebConf 13 sponsors, will sell the OpenBlocks
> small ARM micro servers at a discount to Debian members and FLOSS
> developers.
> https://openblocks.plathome.com/form/obs_verification/input.html
>
> Two types of machine will be sold:
>
> * OpenBlocks A6 / $250
>   ARM (Kirkwood) 600 MHz CPU and 512 MB onboard memory.
>   http://openblocks.plathome.com/products/a6/
>
> * OpenBlocks AX3 / $450
>   Dual-core ARM (Armada XP) 1.33 GHz CPU and 1 GB onboard memory.
>   http://openblocks.plathome.com/products/ax3/
>
> If you are interested, please apply via the following URL:
> https://openblocks.plathome.com/form/obs_verification/input.html

You have to provide proof that you are active in a given community? That's silly. And the company name is mandatory, which seems odd for what is often a community of volunteers doing things at home in their spare time. When I bought my i.MX53 QSB, Freescale just asked what I was planning to use it for, but had no problem selling developer boards to random interested people. It certainly helped in getting Debian armhf going, and I believe the initial work on Raspbian was also done on the i.MX53 QSB, because the boards were easy to get, had decent performance, and were recommended by other Debian community members working on ARM.

It also seems rather expensive compared to the other ARM systems I have with similar features and performance. My prediction is that with that price and the information required to order one, you won't get much interest. That's a shame. It does look pretty, though.

-- 
Len Sorensen
Re: enabling wheezy-backports by default (Re: Backports integrated into the main archive
On Tue, Mar 26, 2013 at 02:16:51PM +0900, Hideki Yamane wrote:
> Hi,
>
> On Mon, 18 Mar 2013 13:59:02 +0100 Gerfried Fuchs <rho...@debian.org> wrote:
> > == For Users ==
> > What exactly does that mean for you? For users of wheezy, the
> > sources.list entry will be different; a simple substitute of squeeze
> > for wheezy won't work. The new format is:
> >
> >   deb http://ftp.debian.org/debian/ wheezy-backports main
>
> So, is there a plan to enable it by default? Most desktop users aren't
> used to editing system files, and enabling it by default could solve
> some problems for them, IMO. Or is there any harm?

Stable stops being stable? I suppose not, given that you have to explicitly ask for the backports version with the default priorities, as far as I know.

-- 
Len Sorensen
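The reason this is safe with the default priorities is the pinning: the backports suite is published with ButAutomaticUpgrades, so its packages sit at APT priority 100 and are only installed when requested by name. A sketch of the intended usage:

```
# /etc/apt/sources.list
deb http://ftp.debian.org/debian/ wheezy-backports main

# Nothing is pulled from backports unless explicitly requested, e.g.:
#   apt-get -t wheezy-backports install <package>
```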
Re: [RFC] Putting the date back into utsname::version
On Fri, Mar 22, 2013 at 01:20:01AM +, Jeremy Stanley wrote:
> Another alternative, not represented, is epoch seconds. Takes as many
> 7-bit printable characters to display (at least for the next few
> hundred years) as an ISO-8601 date with separators but provides much
> greater precision... and it's still trivially sortable. Can also be
> converted (on Debian and other GNU platforms) to your current locale
> with date -d@1234567890

I would rather have something readable than that level of precision. Does Debian update the kernel multiple times in a day? Having to remember which tool to use to convert an epoch value is just annoying, and you have to remember that it is an epoch value in the first place.

-- 
Len Sorensen
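For reference, the conversion Jeremy mentions is a one-liner with GNU date (as shipped on Debian), which at least makes the epoch form tolerable once you remember the tool:

```shell
# Epoch seconds to a sortable ISO-8601 timestamp in UTC (GNU date):
date -u -d @1234567890 +%Y-%m-%dT%H:%M:%SZ
# → 2009-02-13T23:31:30Z
```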
Re: [RFC] Putting the date back into utsname::version
On Thu, Mar 21, 2013 at 06:07:26PM -0700, Russ Allbery wrote:
> Ben Hutchings <b...@decadent.org.uk> writes:
> > Here are examples of the old, new and possible alternative formats
> > using likely maximum-length components:
> >
> >   old: #1 SMP PREEMPT RT Tue Mar 21 23:12:08 GMT 2023 [46]
> >   new: #1 SMP PREEMPT RT Debian 9.99~rc99-9~experimental.9 [51]
> >   alt: #1 SMP PREEMPT RT 2023-02-21 Debian 9.99~rc99-9~experimental.9 [62]
> >   alt: #1 SMP PREEMPT RT Debian 9.99~rc99-9~experimental.9 (2023-02-21) [64]
> >
> > We could perhaps shorten 'experimental' to 'exp', which would leave
> > stable security updates with the longest version strings and allow for:
> >
> >   alt: #1 SMP PREEMPT RT Tue Mar 21 2023 Debian 9.99.99-9codename9 [59]
> >   alt: #1 SMP PREEMPT RT Debian 9.99.99-9codename9 (Tue Mar 21 2023) [61]
> >
> > Would anyone like to argue in favour of any particular alternative?
>
> I will at least make a plea for ISO dates rather than the specific date
> format in the last two examples. I think my favorite is the last
> example, with an ISO date (2023-03-21). Shortening experimental to exp
> seems like a good idea anyway.

ISO dates are certainly the best. Who really cares that it was a Tuesday, especially since "Tue" is English, not universal. Never mind that "Mar" is also a language problem.

-- 
Len Sorensen
Re: bugs.debian.org: something's wrong...
On Thu, Mar 21, 2013 at 12:17:16PM +0800, Paul Wise wrote:
> On Thu, Mar 21, 2013 at 3:58 AM, Jeremy Stanley wrote:
> > The bigger concern is that this is a web bug, whether it wants to be
> > or not. Whoever hosts any of the images being included knows the IP
> > address and time of every visitor to a bug report.
>
> Only if you use a web browser that is obeying other people's web
> servers instead of obeying you the user. I suggest that everyone in
> this thread should fix their web browser so that it obeys them, as I
> have done.

If your browser downloads an image, then the server hosting the image at the very least knows your IP. The statement was entirely correct.

-- 
Len Sorensen
Re: arm64 Debian/Ubuntu port image available
On Wed, Feb 27, 2013 at 06:38:55PM -0300, Cláudio Sampaio wrote:
> Is there any device with Aarch64 on sale? I couldn't find any, only
> some mentions from Calxeda. Would you mind to provide suggestions of
> any seller which sells through the internet?

There are none for sale yet. I believe some prototype chips exist, as does an emulator from ARM.

-- 
Len Sorensen
Re: Go (golang) packaging, part 2
On Fri, Feb 01, 2013 at 10:00:32AM +, Jon Dowland wrote:
> As a Haskell developer, I find cabal much more convenient than nothing,
> in the situation where the library I want is not packaged by Debian
> yet. If I want my haskell libraries and programs to reach a wide
> audience, I need to learn Cabal anyway.

If you are writing libraries to add to the language, then I don't consider you a normal developer using the language.

> And yet they do, and so we need to manage it.

And certainly saying "We will package things for distribution using the package and installation system we have" is managing it. Upstreams can whine all they want about us not using their install system, and they will be wrong. Upstream can do what they want if they think there is a need, but if they don't consider that a lot of people don't want yet another system, then that's their problem, and they do not need to be catered to by everyone else. If they want their stuff used, they have to make it accessible in a normal manner that fits in with whatever a given distribution does.

> Remember that Debian does not just provide a package management system:
> it also provides repositories and dictates what goes in them according
> to the DFSG. Whilst adding new repositories is relatively simple for
> users (and growing in popularity for upstreams), installing bare .deb
> files is still not a very smooth process (although massively improved
> by e.g. gdebi these days)

You generally don't have to, because things are in the Debian archives already.

> From an upstream POV, they want their software in the hands of end
> users. They don't want to have to learn and build a myriad of different
> package types (.deb, .rpm, etc.) and crucially neither do we. In many
> cases they don't want to have to wait for a distro to package their
> software for them either.

So what? A lot of us want stable systems that work and are consistent.

They don't have to package everything; they just have to make it possible to get the sources, build them, and allow them to be packaged and distributed in a consistent manner (unlike Oracle's Java these days, for example). If you want bleeding edge, then you are not a normal user, and you certainly aren't a system administrator who wants to keep a controlled system they can reproduce. I know dpkg --get-selections will tell me all the software installed on the system, so I can do the same on another one. If yet another package manager gets involved, I have to know about it and do something different to handle that. That's not a good thing.

> In the Go case, their users are people who might have a shell/web
> account but not admin access on a shared host somewhere, running god
> knows what distro and version, hence having a self-contained fat binary
> that is guaranteed to run wherever libc is meets their goals.

That's a different goal than running a nice Debian system.

-- 
Len Sorensen
Re: Go (golang) packaging, part 2
On Fri, Feb 01, 2013 at 10:45:33AM -0800, Clint Byrum wrote:
> Excerpts from Chow Loong Jin's message of 2013-01-29 19:15:01 -0800:
> > Having multiple package managers which don't know about each other on
> > a system is evil™ (but in some cases, can be managed properly).
>
> Robert Collins did a nice write-up on this very subject not long ago:
> http://rbtcollins.wordpress.com/2012/08/27/why-platform-specific-package-systems-exist-and-wont-go-away/

And I think the best part is comment 19 (from September 15, 2012). Much better than the article itself.

-- 
Len Sorensen
Re: Go (golang) packaging, part 2
On Fri, Feb 01, 2013 at 12:38:16PM -0800, Russ Allbery wrote:
> I hope that's not generally true, because that would be horribly
> depressing. I don't believe that's true of the Perl community in
> general. It's certainly not true of the C or Java community! Not all C
> libraries are distributed from one central site and they certainly
> don't expect you to use a central package installation system.

I personally consider Java a bad joke that won't go away.

> Speak for yourself. I've been a system administrator for twenty years,
> and sometimes I have to deploy bleeding-edge code in order to
> accomplish a particular task.

You can do that in ways that also give you a reproducible system. If I want something newer than what Debian provides, then I will make the .deb myself. I want everything installed consistently.

> Using Debian packages is a *means*, not an *end*. Sometimes in these
> discussions I think people lose sight of the fact that, at the end of
> the day, the goal is not to construct an elegantly consistent system
> composed of theoretically pure components. That's a *preference*, but
> there's something that system is supposed to be *doing*, and we will do
> what we need to do in order to make the system functional.

I like my system to stay working and maintainable. I still have one system that was installed with Debian 2.1, upgraded ever since, and it is still doing fine. You don't generally get there by taking shortcuts that seem convenient now but are a bad idea in the long term. I very much find that doing it right to begin with saves a lot of hassle and time in the long run. Not trying to circumvent dpkg and apt is the best way to do that. dpkg and apt help you more than any other packaging system I have ever seen. There is no point trying to bypass them.

> Different solutions have different tradeoffs. Obviously, I think Debian
> packages are in a particularly sweet spot among those tradeoffs or I
> wouldn't invest this much time in Debian, but they aren't perfect.
> There are still tradeoffs. (For example, Debian packages are often
> useless for research computing environments where it is absolutely
> mandatory that multiple versions of any given piece of software be
> co-installable and user-choosable.)

Making a Debian package is generally very easy, so if you need something on your system, make a package for it. Then it's simple to deploy to many systems.

> Indeed. But it's a tradeoff. One frequently does not have the luxury of
> appending to this paragraph "...and therefore I will never install
> anything with a different package manager." Sometimes it's the most
> expedient way of getting something done. Sometimes people aren't as
> deft with turning unpackaged software into Debian packages as you and I
> are.

But it's so easy (not like rpm and such, which tend to be more work). For CPAN there is even dh-make-perl. The solution then is to make equivalent scripts for other languages. The solution is NOT to use some other package installation system.

-- 
Len Sorensen
Re: Go (golang) packaging, part 2
On Wed, Jan 30, 2013 at 09:16:59PM +, Thorsten Glaser wrote: Meh, it’s evil, period. Absolutely. As a user I have a nice package management system that I know how to use and which works well. I don't need another one. It is not the job of a language developer to invent yet another bloody package distribution and installation system. Just because Windows doesn't have a decent way to handle software installations doesn't mean other systems don't know how to do it well. That would be nice but would not need integration, just replacement of cpan with a small script ;-) And, of course, having all of CPAN packaged properly. Which comes to… I think having the useful bits done is fine, and dh-make-perl is pretty good for the few times you want to try something that isn't already packaged (and probably just long enough to find out it wasn't worth using in the first place). … something like that. Not quite. I don’t think it’s worth the pain; it may be manageable for Perl, and maybe PEAR, and with even bigger pain maybe even PyPI, but not for Ruby and similarly hostile upstreams. Much as I like PHP, I really hate PEAR. I don't want another add-on system to manage. My co-developer (on the MirBSD side) benz has written a script that almost automates creation of a port (source package) from/for CPAN: http://www.slideshare.net/bsiegert/painless-perl-ports-with-cpan2port (It looks like even MacPorts has adopted it!) Of course, it needs some manual review (and someone’d have to convert its output to Debian source packages, or merge it into the already existing dh-make-perl, which also somewhat worked when I tried it), but it would make achieving this goal possible (and make running dpkg require 128 MiB of RAM or so to fit the list of packages into it, I guess, but even those Amigas have that). Poor old Amigas. Not that any of mine have enough CPU to run Linux. :( -- Len Sorensen
Re: CD1 without a network mirror isn't sufficient to install a full desktop environment
On Tue, Sep 11, 2012 at 01:52:44PM +0200, Josselin Mouette wrote: Just because these people are noisy doesn’t make them numerous. Furthermore, Debian (and Ubuntu too IIRC) makes “GNOME classic” available right from the login manager, with the default installation. Not considering gnome-panel 3.x a continuation of the existing environment is purely bad faith. Well, as a user, gnome-panel 3.x is NOT a continuation of GNOME. When GNOME 3 hit unstable, I switched to something else. I couldn't find anything, nor make it do any of the basic things I expect my window manager to do, so it is gone. Useless piece of shit. -- Len Sorensen
Re: CD1 without a network mirror isn't sufficient to install a full desktop environment
On Tue, Sep 11, 2012 at 03:23:09PM +0200, Josselin Mouette wrote: You can’t be serious. Xfce is way more different from GNOME 2 than GNOME 3 classic is. Well, if GNOME 3 classic were the default, then fine. But GNOME 3 with the new panel as default is really not acceptable and just plain mean to users. If those users are satisfied with Xfce, that’s very good for them, and I agree Xfce is of very good quality. However, it still lacks a number of features and for many use cases, you need pieces of GNOME with it. Well, it certainly doesn't lack the basic features I use that GNOME 3 with the new panel is missing. I would not qualify most of the whiners as “fellow contributors”. Perhaps not, but I think a pretty large chunk would qualify. -- Len Sorensen
Re: CD1 without a network mirror isn't sufficient to install a full desktop environment
On Tue, Sep 11, 2012 at 04:47:34PM +0200, Josselin Mouette wrote: It is the same codebase, and has the same functionality. It is the same source code tree, with a bunch of code completely changed, and it certainly does not have the same functionality (although it may slowly be gaining back some of what was missing in 3.0). Please give one serious example. The two features that I know to be gone in the panel are the ability to change the color easily (ha, ha) and absolute positioning for applets (which was useless and buggy anyway). Unless you talk about gnome-shell, which is an entirely different piece of software. Quite honestly, as a user, I don't care what GNOME names each piece of the UI. I consider it all GNOME. If I can't find how to maximize a window, how to log out, or much of anything else in the first 5 minutes of use, then it isn't usable. Of course this is probably getting off topic for debian-boot and almost for debian-devel. -- Len Sorensen
Re: smaller than 0 but not negative (Re: question about Conflicts:
On Thu, May 03, 2012 at 06:14:38PM +0200, Tollef Fog Heen wrote: So you don't support for instance 1.0~git20120503, then? (the git snapshot from today of what will become 1.0) They would probably call it 1.0_git20120503, since apparently they don't believe in the _ field separator. -- Len Sorensen
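For reference, dpkg sorts `~` before everything else, including the end of the version string, which is exactly what makes `1.0~git20120503` an earlier version than the final `1.0`, while a `1.0_git20120503` would sort after it. GNU coreutils' version sort follows the same `~` rule, so the ordering can be sketched even without dpkg installed (a minimal illustration, not the full dpkg comparison algorithm):

```shell
# '~' sorts before anything, even before the end of the string, so a
# ~git snapshot orders before the release it will eventually become.
printf '1.0\n1.0~git20120503\n' | sort -V
```

On a Debian system, dpkg itself confirms the same ordering with `dpkg --compare-versions 1.0~git20120503 lt 1.0`.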
Re: [buildd-tools-devel] new buildd dependency resolution breaks self depends?
On Tue, Mar 29, 2011 at 06:58:36PM +0200, Wesley W. Terpstra wrote: I hope what you're telling me is true, because it will save me a lot of work! :) What I don't understand about your explanation: once the new all+i386 .debs hit unstable, won't the buildds see the new 'all' package in unstable and thus want to install it in preference to the old 'any' package even after it is removed from the Packages file? The 'all' package will still be uninstallable since it depends on the missing 'any' packages. While I can fix the problem at hand by removing the mlton 'all' package for an upload, I see a more troublesome problem on the horizon: The basis, runtime, and compiler packages should all be at the same version to compile correctly. The basis package is an 'all' package which includes the cross-platform bits of the runtime library. The runtime and compiler are 'any' packages with compiled object code. If the Build-Depends lists 'mlton-compiler' (ie: after I resolve the current problem), any future uploads will see that it has these versions available:
mlton-compiler (= old-version) depends on runtime
mlton-runtime (= old-version) depends on basis
mlton-basis (= new-version)
... which I believe means that the old-version mlton-compiler package will be uninstallable since the old version of the basis in unstable is hidden by the new version. Have I understood this problem correctly? Does mlton-basis depend on mlton-runtime or mlton-compiler to build? If the answer is yes, then most likely these should not be three separate source packages. If no, then why doesn't it just work, or is the problem a previous version causing a mess? I hate circular build dependencies. :) -- Len Sorensen
Re: [buildd-tools-devel] new buildd dependency resolution breaks self depends?
On Tue, Mar 29, 2011 at 07:59:12PM +0200, Wesley W. Terpstra wrote: It's all one source package. I split up the binaries because: 1) about 60% of the package could be in an 'all' package. 2) the runtime components for different architectures can be installed side-by-side... thus enabling cross-compilation. Oh OK, so there is no build dependency issue at all then (since no one would be dumb enough to make a package that build-depends on one of its own binaries, would they?). According to Kurt, there is no problem. It's all in my head. :) Oh good. -- Len Sorensen
Re: apt: cron.daily necessary?
On Tue, Sep 08, 2009 at 01:42:28PM +0200, Hans-J. Ullrich wrote: I would like to discuss and suggest the following thing: On my 64-bit notebook I am using anacron and (of course) apt. The apt package includes the file /etc/cron.daily/apt, which contains some lines that start a find process. This find process initiated by apt (and I hope I am right about the initiation source) causes a lot of hard drive activity for several minutes after boot, which makes the computer rather slow during that time. Of course, it is one of the processes started by anacron. IMO this is an annoying situation for notebook users, especially when you just want to start up, do some things quickly, and then shut down again - just as many notebook users do! My suggestions for this problem are these: 1. Delete /etc/cron.daily/apt manually. O.k., this can be easily done, but how necessary is this file at all? 2. If this file is not very necessary, do not put /etc/cron.daily/apt into the apt package; maybe it should be put into some other package (for example cron-apt) or, another option, into a standalone package. 3. Put this file into cron.monthly or cron.weekly, or let it be started manually somehow (this third option was just a thought). What do you think? Is there a way and a chance to improve things? Any feedback will be very welcome. Create the file /etc/apt/apt.conf.d/disable_periodic_apt containing: APT::Periodic::Enable=0 Problem solved. -- Len Sorensen
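As a sketch of the suggested fix (the file is written to /tmp here purely for illustration; on a real system it would go in /etc/apt/apt.conf.d/, and the filename is the poster's choice, not anything apt mandates). Note that inside an apt.conf-style file the value is conventionally written in the quoted `Name "value";` form, while the bare `Name=value` form is the syntax used with apt's `-o` command-line option:

```shell
# Hypothetical location for illustration; a real system would use
# /etc/apt/apt.conf.d/disable_periodic_apt instead of /tmp.
cat > /tmp/disable_periodic_apt <<'EOF'
// Turn off all periodic work done by /etc/cron.daily/apt.
APT::Periodic::Enable "0";
EOF
```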
Re: DFSG violations: non-free but no contrib
On Thu, Oct 30, 2008 at 09:01:18PM -0700, Thomas Bushnell BSG wrote: On Thu, 2008-10-30 at 16:33 -0400, Lennart Sorensen wrote: So if any of the hardware that requires non-free firmware to operate and currently works in etch was to not work with Lenny, then that's completely unacceptable? If that's the case, then there is no way EVER to make Debian comply with the DFSG, since you aren't going to get free firmware for all those devices. Um, yes there is. We could do the same thing we do with codecs, file formats, and all the rest--in the absence of support with free software, we don't support it in Debian. That's what I said. Those people who insist there can't be any regressions are simply kidding themselves. Debian never should have supported that hardware. -- Len Sorensen
Re: DFSG violations: non-free but no contrib
On Thu, Oct 30, 2008 at 03:33:49PM +0100, Pierre Habouzit wrote: For the sake of 10 binary firmwares, you want to make the whole of Debian depend upon non-free? Wow, what an achievement. No, please, we don't accept regressions as a solution. So if any of the hardware that requires non-free firmware to operate and currently works in etch was to not work with Lenny, then that's completely unacceptable? If that's the case, then there is no way EVER to make Debian comply with the DFSG, since you aren't going to get free firmware for all those devices. Maintaining support for most of that hardware through the use of non-free can be done. Maintaining support for all that hardware through main only is impossible and unrealistic. So a demand of no regressions is just insane. Debian shouldn't ever have worked with that hardware in the first place in the case of main-only installs. -- Len Sorensen
Re: [DRAFT] resolving DFSG violations
On Mon, Oct 27, 2008 at 12:26:31PM -0700, Jeff Carr wrote: True, I certainly feel like that at times with the opencores project I've been trying to maintain. On the other hand, I sure know that I know a pile more than you do or we wouldn't be having this discussion :) I have a different theory. Gee whiz. You're not getting it. The firmware is a binary blob. You can distribute the source but you can't synthesize it. So, in the debian-installer, you can't include it according to this insane policy. You could synthesize it if you had the tools for it. Debian's policy is not insane. It is consistent. Any hardware maker that wants their hardware to work with free software could use an EEPROM to store the firmware within the device, so that there is nothing non-free that has to be distributed. That is what Debian is concerned with. If the firmware is embedded in the device, then it has nothing to do with Debian anymore, and it is entirely up to the user whether they care about how the hardware they buy is made. Those who do care can simply avoid that type of hardware (or at least try to). But the opencores case is the easy case; hybrid chips don't even have source. The firmware blob is often generated when you fabricate the chip and changes with the physical board layout. You guys just don't understand the issues here. There isn't some nefarious intent; you have little flash chips holding these bits all over your machine right now. You just don't know it. And now, because someone is giving you the luxury of actually loading them via software (with GPL software no less) you seem to be all ticked off. You seem to want to stick your head in the sand and pretend this doesn't exist. If they use flash chips, then it doesn't affect Debian, because the flash chip already contains what is needed for the device to work. Debian doesn't have to have anything to do with updating them, and hence there is no distribution of non-free to worry about.
And no, it's not about telling users "this is all free"; that's a lie at this level anyway. None of it is free. Whether you load it from /lib/firmware/ or it's already stored on your motherboard doesn't change anything. It just makes Debian look ridiculous. The message should be: "There are some firmware blobs for some hardware for which there is no known way to generate source code, no way to compile such code if we had it, and no way to figure out how we would even write a compiler for it. This firmware is also hidden in flash for most of the chips in your machine. Some modern devices let the OS load this code into the chip, and then we are able to write fully GPL drivers for the device." Debian's focus is on free software, not free hardware designs (although we love those too). It does make a difference. Debian makes no promise about the freeness of the user's hardware, since Debian did not provide it. Debian promises that everything they provide is free. That really isn't very hard to understand. Debian policy is only concerned with the software Debian is distributing. If Debian didn't provide it as part of the distribution, then Debian's policy has nothing to do with it. Hence your motherboard and its BIOS and other firmware in flash have nothing to do with Debian's policies at all. If your hardware requires closed-source firmware to operate, then at best Debian can distribute that in non-free, and using it during the install will be slightly tricky (but not that hard; I have done so and it wasn't that big a deal). It just meant I had to personally accept that I was about to use one piece of non-free code to make that particular system work, and that it was my choice, not Debian's, that made my system contain a piece of non-free code. That is how it should be with Debian. -- Len Sorensen
Re: [DRAFT] resolving DFSG violations
On Mon, Oct 27, 2008 at 12:29:58PM -0700, Jeff Carr wrote: Hardly perfectly readable - I put up code there too :) Oh well. Some people write ugly Perl code, some write ugly VHDL. Not the language's or the tools' fault, just bad programmers. Which is often not the case on cheap devices (often USB) because of cost, space, power, etc. for another chip. I know. So I can either pay more (if I can find someone that still makes a proper complete device), or I can not use that device, or I can accept that to use it I must install some file from non-free. It should not be supported by Debian main. -- Len Sorensen
Re: RC bug for 10 packages
On Sat, Oct 25, 2008 at 06:37:23PM +0200, Francis Tyers wrote: Heh, actually there is an ambiguity there... I should have spotted it. What I mean is: Add the specific dependency in each language package to apertium and not to libpcre3. The language packages already depend on apertium, but we'll just make this specific with a substvar; the patch by Miguel probably explains it better. So if you were to recompile apertium after installing a different version of libpcre3, would it become incompatible with the previously built language packages? Just curious. -- Len Sorensen
Re: [DRAFT] resolving DFSG violations
On Sat, Oct 25, 2008 at 06:46:14AM -0700, Jeff Carr wrote: I'm willing to stake my reputation on betting you are _not_ a firmware engineer. You are totally wrong if you think all firmware blobs can be replaced by human-readable source. There is hardware which, to function, will always, for the lifetime of the equipment, require a firmware blob to operate. This will always be the case; there will never be a human-readable version. It will never be possible to compile it (even with non-free compilers) from source code. There seems to be the belief that there is some scary bogeyman at the end of this tunnel; some deliberately evil firmware engineer who refuses to release the source for the blob. This is hardly the case. In fact, the exact opposite is true; the most free pieces of hardware in the world require a firmware blob! A good example: try out the PCI core from opencores.org. Even in that case, where you have the logic for the actual chip, you still have no choice but to distribute a firmware blob anyway. I would expect anything on opencores.org to be perfectly readable VHDL code, which is the preferred format for manipulating it. So what was your point again? Besides, FPGAs can work with EEPROMs, so no binary blob has to be distributed with the OS to work with the device. Going and flapping around and irritating hardware engineers with totally impossible requests ("Give us your PSoC firmware source code or you suck! Thanks, the Debian project.") makes us look like a bunch of clueless and irrational software engineers. You think there must be some magic way; well, there is not. For some firmware it does make sense, for others it does not. I doubt anyone reading this uses coreboot, which means that the first instruction anyone ran today was a binary-only firmware blob. Where is all your concern about that? Doubly annoying is that that firmware is actually x86 code and it is possible to get source code that can be compiled with gcc.
That would actually be fruitful and practical. Yes, the BIOS doesn't include source code, but there is also no need for Debian to distribute the BIOS code in main for Debian to be able to install and run on my system. This whole debate is about Debian having to ship said firmware, not about whether hardware needs firmware or not. That is a different debate, but not one that directly involves the Debian distribution. So much as closed-source binaries and firmware on flash chips in RAID controllers may be annoying, they do not in any way affect the freeness of the code _distributed_ by Debian. -- Len Sorensen
Re: [DRAFT] resolving DFSG violations
On Sun, Oct 26, 2008 at 06:38:53PM -0700, Jeff Carr wrote: Because that's how the hardware works. If you are making a widget and you need an FPGA or a hybrid chip of any sort, then you generate a binary blob using the chip manufacturer's tools. But you provide input to the tool, usually VHDL code or similar. That would be the source, and you can provide that. That is the preferred format for editing. We use plenty of FPGAs at work, and I have seen how they are programmed, and yes, I have seen what the source looks like. It is certainly human-readable source code. If you think otherwise, then you don't know how FPGAs and CPLDs work. The tool doesn't have a magic button labeled 'make the chip do what I am thinking of now'. So, no matter how good you intend on being, how much you love free software, you don't have any choice. Again, this is ironic since things like the opencores project are the most free hardware of all and are not given credit for it in this thread. And opencores.org distributes source, not binary blobs. Gee whiz. -- Len Sorensen
Re: Bug reports of DFSG violations are tagged “lenny-ignore”?
On Fri, Oct 24, 2008 at 10:20:43AM -0200, Henrique de Moraes Holschuh wrote: I can deal with a two-step install process for my video cards, as long as it is properly documented. But I would seriously recommend that we produce easy-to-use non-free installer disks to go along with the Debian installer disks, not because of the GPUs, but because of the network cards. If this would slow down the release too much, make these AFTER the release; there is no reason why we can't release non-free a bit later. I recently installed a new server with Etch which uses a bnx2 network chip. This requires firmware from non-free. It wasn't particularly hard to install the firmware package on another box, copy the resulting firmware file to a USB stick, mount it from vt2, and place the file in the firmware directory so that the installer could load the network driver. A slight bit of work, but not hard. I do not expect Debian to produce an installer with non-free code included. After all, non-free is not part of Debian as far as I understand things. Documenting the required steps would be nice, of course. -- Len Sorensen
Re: Bug reports of DFSG violations are tagged “lenny-ignore”?
On Tue, Oct 21, 2008 at 04:50:23PM -0500, William Pitcock wrote: In the kernel itself, yes. Provided that: * the kernel framework for loading firmware is used for drivers depending on non-free firmware, and * that firmware is available in non-free via firmware-nonfree What if the firmware has a license on it that doesn't permit redistribution in non-free? Then what? In fact, once I have time, I intend to start pushing patches upstream to make this happen. But this is going to take another kernel release cycle... if we intend to release Lenny with 2.6.26, then this is not an option. Well, if 2.6.27 in fact fixes a large amount of the firmware problem, and happens to be a long-term-support kernel which is going to be used by many distributions, then perhaps releasing Lenny with 2.6.26 is the wrong choice and should be reconsidered. For hardware where this is an unacceptable solution, rewriting the driver to not use the firmware may still be possible. Sometimes. Certainly some hardware doesn't do anything without its firmware. Perhaps alternative firmware could be written, although often there isn't any documentation around to do that. -- Len Sorensen
Bug#502959: general: raff.debian.org uses non-free software
On Tue, Oct 21, 2008 at 11:41:14AM +0200, Aurelien Jarno wrote: Package: general Severity: serious Justification: DFSG raff.debian.org uses a Compaq Smart 5i RAID card. A flash memory is used to store the firmware. While the firmware is freely downloadable (as in beer) on the HP website [1], we don't have the corresponding source code. I suggest that someone works with HP to get the corresponding source code. Until we find a solution, I recommend we simply shut down the machine. [1] http://h2.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=encc=usprodTypeId=329290prodSeriesId=374803prodNameId=266599swEnvOID=4004swLang=8mode=2taskId=135swItem=MTX-3d1aaa0b48c04b628789e598d3 By that stupid definition, the BIOS in your machine should be the first target. If the firmware is in a flash chip on the card, then the card works fine as shipped and needs no firmware to be included with Debian for the card to work. This is rather different from, say, ivtv, bnx2, and similar, which require the firmware to be loaded by Linux on every boot and hence provided by Linux to the hardware. -- Len Sorensen
Bug#502959: general: raff.debian.org uses non-free software
On Tue, Oct 21, 2008 at 01:30:42PM +0200, Romain Beauxis wrote: Agreed, though it does not restrain us from asking for free firmware. If I recall well, one of the origins of the GNU foundation was the fact that having free drivers allowed one to actually *fix* issues he may have with his *own* hardware. The very same reasoning can then apply to binary firmware. Having driver source code lets you fix the drivers and port them to other operating systems and architectures. Having firmware source makes no difference to that problem as long as the firmware is working. If it isn't working, you would probably know soon enough and return the defective hardware. So, yes, this is a brand new issue that comes from the new way of designing hardware. But that doesn't mean we should give up and remain behind the line that was drawn 20 (or so) years ago. We now should also ask for open source firmware for the very same reason that this huge effort toward free drivers was made. If we did it for drivers, there's no reason we can't succeed for firmware. Except that the firmware is just a way to implement the board logic and has nothing to do with deciding which system you can use the hardware in. The drivers do control which system you can use the hardware with. Free, open firmware is a nice goal, but a significantly less important one than open drivers (or at least specifications), which allow you to choose the OS and software to use that hardware with. -- Len Sorensen
Bug#496771: Deb AMD64 eats huge amounts of memory (and babies?) because of badly built libs
On Wed, Aug 27, 2008 at 12:50:45PM +0200, Gustaf R??ntil?? wrote: Package: general Version: AMD64 This is basically a Debian AMD64 version of the bug report for Ubuntu AMD64, bug 24691 [1]. The problem is (or seems to be) that a lot of libraries are built with alignment above 2**3. Most of these cases are actually 2**20 in Ubuntu AMD64 and 2**21 in Debian AMD64. In other words, 1 and 2 MB respectively! I often see loose and vague arguments such as "if 99MB of that is shared, the calculator is really only 'using' 1MB of ram -- and that's fine" [2]. It's not fine. And it's certainly incorrect. Just because a library is shared doesn't mean it's fine that it consumes megabytes(!) of memory in vain. Especially libraries that are shared between 1 process. Now, I can't figure out why such huge amounts of memory are hogged on my computer. But I need to restart X about once a week. If I don't, my 4 GB of RAM is quickly filled and my 4 GB of swap starts to work (hurray, 15 second delay when changing virtual desktop). It's been like this since I bought this machine (soon 2 years ago), and I frequently update my X driver (-radeon, -radeonhd, fglrx, etc), so I doubt they are to blame for stealing my memory, even though it could've been a good guess. memstat reports lots and lots of libraries which consume slightly more than 2 MB each:
$ memstat | grep '\.so' | grep -v PID
gives me 551 lines, and just skimming the result, easily 90% of them are slightly more than 2 MB (2**21 + small stuff). If these libraries could be built with 2**3 (8 byte) alignment instead of 2**21 (2 megabytes), I assume, just like the discussion in [1], that at least some memory wouldn't be wasted in vain. So how many libraries (on my system) are built with 2**21 alignment?
In /lib:
$ for file in `\ls *.so.*` ; do if objdump -x $file | grep -q -e '2\*\*21' ; then echo $file ; fi ; done | wc -l
99
In /usr/lib:
$ for file in `\ls *.so.*` ; do if objdump -x $file | grep -q -e '2\*\*21' ; then echo $file ; fi ; done | wc -l
2777
In /usr/lib/*:
$ for file in `\ls */*.so.*` ; do if objdump -x $file | grep -q -e '2\*\*21' ; then echo $file ; fi ; done | wc -l
396
99 + 2777 + 396 = 3272. Quite a lot of libraries. Loading them all would require roughly 7 GB. Remember: loading, not using. And these are just the ones on my system; it's not even close to all the libraries in Debian. Just looking at how much memory the pidgin-specific plugins consume is frightening:
$ memstat | grep -E '(purple|pidgin)+.*\.so.*'
returns 88 libraries, ALL consuming slightly more than 2 MB. How many of them are shared with any program other than pidgin? I'd say none. But that's just a guess. So this means at least 200 MB of memory usage for pidgin alone? Could this really be the case?!
$ ps aux | grep pidgin
gustaf 30432 0.2 1.7 631196 68532 ? S Aug26 2:36 pidgin
Oh yeah, 631196 kB virtual and 68532 kB resident. I say this again: some people argue "but most of that 616 MB is shared so it doesn't matter". It matters, because Linux prefers to swap it to make room for IO buffers, and when things are being swapped, holy moses, Ctrl+Alt+Backspace is thy saviour. Pidgin consuming 616 MB of virtual memory is just.. well.. messed up. To put it lightly. It could be reasonable to see the rest of the memory hoggers on my system, to make it clear that this really is a big problem. Please note that pidgin is only in 8th place!
This is 'top' sorted by 'M' (memory usage):
31095 gustaf 20 0 1314m 700m  35m R 30 17.8 252:06.36 firefox-bin
32197 gustaf 20 0 1006m 439m  39m S 12 11.2 167:03.50 epiphany-browse
30134 root   20 0  625m 256m  14m S  8  6.5  99:41.51 Xorg
 2680 gustaf 20 0  479m 103m  25m S  0  2.6   4:07.31 banshee-1
30305 gustaf 20 0  438m  82m  17m S  0  2.1   0:31.87 /usr/lib/ontv/o
 1621 gustaf 20 0  304m  76m  14m S  0  1.9   0:42.91 gnome-terminal
 6070 clamav 20 0 92524  76m  380 S  0  1.9   0:00.00 clamd
30432 gustaf 20 0  616m  66m  28m S  1  1.7   2:37.08 pidgin
Firefox and epiphany are complete pigs when it comes to memory use. On amd64 machines not running those, things look absolutely fine. Firefox causes plenty of swapping on i386 as well.
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3934       3891         43          0        331       1186
-/+ buffers/cache:       2373       1561
Swap:         3859          5       3853
Real memory usage of my very recently booted machine (I haven't started even a small subset of the apps I usually run): 2373 MB! Firefox probably consumes over 50% of the RAM of all the applications you are likely to run. Memory is allocated in 4KB pages (since that is what the hardware supports, unless you think 2MB pages are a good idea, or 1GB pages). What memory address the pages are mapped to is completely irrelevant. Hence alignment should not affect
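The page-size point can be checked directly on any system; a minimal sketch (the value is 4096 bytes on typical x86 Linux, though some architectures use larger pages):

```shell
# Ask the system for its hardware page size; memory mappings are made
# in units of this, regardless of the section alignment an ELF file
# requests in its headers.
getconf PAGESIZE
```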
Re: Debian release versioning
On Sat, Jul 12, 2008 at 06:09:09PM +0200, martin f krafft wrote:
> So lenny will be Debian 5.0. Many people have questioned this choice,
> given how we inconsistently went ...-2.0-2.1-2.2-3.0-3.1-4.0 in the last
> decade, but it's the RM's choice and not to be debated.

Looks consistent to me:

1.0-1.1-1.2-1.3
2.0-2.1-2.2
3.0-3.1
4.0

It does give a problem for 5, since by this pattern there can never be
another release, so actually 5.0 doesn't fit. Oh dear. :)

> What is to be debated is how to move on from here. I propose that we get
> rid of our r-releases and simply let the first stable update to lenny be
> 5.1, followed by 5.2, and so on. lenny+0.5 would logically be 5.5, since
> it's unlikely that we will have five stable updates out within
> 1.5/2=0.75 years, and if we do, then lenny+0.5 is late. lenny+1 would be
> released as 6.0. This would add sense to our versioning scheme (and help
> avoid those discussions in the future).

Well, releases are further apart than they once were, so perhaps a new
major every release makes sense.

> Instead of long flamewars and floods of AOL posts, I suggest you update
> http://doodle.ch/8zauai3nqges2ur8 if you're in favour or you oppose. You
> can use http://doodle.ch/syndication/8zauai3nqges2ur8 to track
> submissions. If you do have something to say, then reply.

Well, hopefully I did say something.

-- Len Sorensen

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe".
Trouble? Contact [EMAIL PROTECTED]
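The pattern Len reads into the old version history can be generated mechanically; a throwaway sketch, purely to illustrate the joke (major N got 5 - N releases, so major 5 gets none):

```python
# Each major release N of the old scheme got 5 - N releases (N.0 .. N.(4-N)),
# which is why nothing can follow 5.0 under "the pattern".
for major in range(1, 5):
    print(" ".join(f"{major}.{minor}" for minor in range(5 - major)))
# 1.0 1.1 1.2 1.3
# 2.0 2.1 2.2
# 3.0 3.1
# 4.0
```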
Re: Debian release versioning
On Sat, Jul 12, 2008 at 11:21:30PM +0200, Lucas Nussbaum wrote:
> I'm not sure that we want to have a hole in our versioning scheme. Since
> lenny+1/2 is just another stable update, let's just number it like a
> stable update. So we don't end up with users thinking "You released 5.0,
> 5.1, 5.2, 5.3, 5.5. Where is 5.4?"

Do VMware users complain about 6.0, 6.0.1, 6.0.2, 6.0.3, 6.0.4, 6.5, etc.?
The .5 for them is a major update with some new features, but still a free
upgrade, while the next .0 costs money. The half update is bigger than
normal revisions, so a jump in version number seems perfectly reasonable
there. Of course I don't see anything wrong with the XrY style currently
used either, although maybe some people find it strange.

-- Len Sorensen
Re: Debian release versioning
On Sun, Jul 13, 2008 at 07:46:54AM +0200, Frans Pop wrote:
> Agreed. Also, I really dislike the use of .5 for -and-a-half releases in
> the original proposal. For one thing you cannot exclude the risk that 5
> point releases would be needed for one reason or another before an +1/2
> release. And it also makes it impossible to distinguish between the
> regular point update part of such a release and the +1/2 part.

But .5 is a half, isn't it? :)

-- Len Sorensen
Re: Considerations for lilo removal
On Mon, Jun 16, 2008 at 11:54:52AM +0300, Shachar Shemesh wrote:
> Lilo has one killer feature that is totally missing from GRUB - the -R
> option. It allows me to upgrade a kernel on remote servers, knowing that
> if the upgrade fails, I will get the original kernel after a few minutes
> without asking a local hand to push the reset button. Until Grub has
> something similar, removing Lilo entirely seems like a bad idea to me.

grub does have that. man grub-reboot. It boots a specific entry next time,
but only once. I think it is a Debian-specific patch and not a generic grub
feature.

-- Len Sorensen
Re: Non-related 'Recommends' dependencies - bug or not?
On Fri, Jun 13, 2008 at 12:15:59AM +0300, Eugene V. Lyubimkin wrote:
> Thanks, you hinted me to discover it:
>
> $ aptitude why banshee synaptic
> p   banshee       Recommends brasero
> p   brasero       Recommends gnome-mount
> p   gnome-mount   Depends    libeel2-2.20
> p   libeel2-2.20  Recommends synaptic

A library recommends synaptic? That just seems downright stupid. That is
not where I would have expected to see that dependency. I guess it provides
a widget that calls synaptic, so someone thought it would make sense to
have synaptic installed, even though many other widgets in the package
might be perfectly useful without it. I think at most synaptic deserves a
Suggests in that case.

-- Len Sorensen
Re: Non-related 'Recommends' dependencies - bug or not?
On Thu, Jun 12, 2008 at 09:57:13PM +0300, Eugene V. Lyubimkin wrote:
> Recently I've noticed that the 'Recommends' chain for the package
> 'banshee' leads to packages not related to media playing at all, for
> example 'synaptic'. So I filed the minor bug [1]. The bug was closed by
> the maintainer with a note that it is not a bug. Is he right?

Well I guess if the chain is: banshee - ... - gnome - synaptic, then I
would say that's proper. What is the chain to get there?

-- Len Sorensen
Re: How to build only linux-image-2.6.18-6-686
On Sat, Jun 07, 2008 at 11:51:04AM +0200, Marc Haber wrote:
> Shouldn't that be easier to do, and - most of all - documented?

Playing with source packages isn't normal. It used to be much worse (2.6.8
in sarge involved building multiple packages, one of which depended on the
other). The package is designed to make the life of the kernel image
developers easy, not to make the life of people doing weird things for
themselves easier. You should be installing the linux-source-x.x.x package
instead and using make-kpkg to build custom kernels. The source package
(rather than the linux-source-x.x.x binary package) is not meant for
building custom kernels.

-- Len Sorensen
Re: How to build only linux-image-2.6.18-6-686
On Sat, Jun 07, 2008 at 08:31:47PM +0000, Tzafrir Cohen wrote:
> My problem with make-kpkg has always been that I could never rely on its
> generated -headers packages to actually work.

Odd, the headers it generated always worked for me.

> So it was fine to build a kernel. But if I wanted to build some modules
> for that kernel, I still have a problem. My ugly workaround is to keep
> the source directory and hope for the best.

Nothing wrong with doing that if you are compiling the kernel anyhow.

-- Len Sorensen
Re: How to build only linux-image-2.6.18-6-686
On Sat, Jun 07, 2008 at 09:35:28PM +0000, Tzafrir Cohen wrote:
> Taken to another system? I don't remember if I did or not. The problems
> I remember:
>
> 1. the source and build links pointed to an incorrect place.

An invalid build link is a problem. Where do they point?

> 2. If I actually changed the source to make the base supplied
> linux-headers package not good enough, I found no way to generate a
> complete one (linux-headers-2.6.18-6 vs linux-headers-2.6.18-6-686).

So running make-kpkg binary-arch doesn't build a useful kernel and header
package set?

> Doesn't work well if you want to allow building stuff on other
> computers :-( This is to say that I got spoilt by how well 'm-a a-i'
> works.

Well I must admit I don't build custom kernels for my main computers. I
only do so for a router I work with at work, and that one I do by patching
and changing the source package, since I run it through a mini build
server. You should not need linux-headers-2.6.18-6 when using make-kpkg as
far as I can tell, although perhaps make-kpkg isn't always doing the right
thing when building from source. Not sure about that.

-- Len Sorensen
Re: How to build only linux-image-2.6.18-6-686
On Sun, Jun 08, 2008 at 12:41:01AM +0200, Marc Haber wrote:
> I find that attitude totally unacceptable.

Well it looks that way to me. In fact I would say that is true of every
source package. The goal is to make the maintainer's job easy, since they
are the ones that deal with the source package. Most people only ever deal
with the binary packages, and as long as the maintainer does a good job,
it works fine.

For building custom kernels I think the real issue is to find out what is
wrong with make-kpkg if it isn't doing what it is supposed to be doing.

I am not a kernel team person (or a Debian developer at all for that
matter, although I do now assist with the nvidia driver package), but to
me the current kernel package works great, and I do deal with the source
package. It is way better than what we used to have before Etch. It is one
of the most complex packages to deal with, since it has to work on every
architecture while at the same time dealing with completely different
configurations for each architecture, as well as multiple flavours per
architecture. I can't think of any other package that comes close.

-- Len Sorensen
Re: How to build only linux-image-2.6.18-6-686
On Fri, Jun 06, 2008 at 10:14:09PM +0200, Mauro Ziliani wrote:
> Hi all. I need to rebuild only the linux-image-2.6.18-6-686 deb package
> from source code. How can I do that without rebuilding all the packages
> in the linux-2.6.18 sources (xen, k7, vserver)?

Remove the other ones from debian/arch/i386/defines, and then run
debian/rules setup, which should regenerate the control file and exit.
Then you should be able to run the normal package build.

-- Len Sorensen
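For illustration, the flavour list lives in that defines file. A hypothetical excerpt after trimming the list down to just the 686 flavour might look like the following (the exact stanza contents vary between kernel versions, so treat this purely as a sketch of the format, not the real file):

```
[base]
flavours:
 686
```

With the other flavours (486, k7, xen, vserver, ...) removed from this list, regenerating the control file leaves only the 686 image package to build.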
Re: What should postrm purge actually do?
On Tue, Jun 03, 2008 at 08:58:54PM +0200, Jeffrey Ratcliffe wrote:
> Fine. Although it always annoyed me that my $HOME filled up with
> spurious dotfiles whose origin I'm not necessarily sure of, and that a
> good installer could know to remove them if the package were purged.

If you run a shared /home by NFS mount, how would your installer know if
the package is still in use on another system using the same /home?

> Or is there a better solution? (I'm not sure it is gconf)

Stuff in /home is not something to worry about. That's for the user and
only the user to worry about.

-- Len Sorensen
Re: 37.5% boot time reduction in Lenny is possible (recipe)
On Mon, Jun 02, 2008 at 08:51:49AM +0200, Goswin von Brederlow wrote:
> Single core? Slow disks? Unless you have idle times the multiple threads
> won't help. Works best with things like portmapper that does sleep 1.

Geode LX800 with compact flash, so yes and yes.

> Who says you can't change it? There is OpenBios. :)

Well that would require figuring out how to load that onto the system
without bricking it. I have been tempted to try it though.

-- Len Sorensen
Re: 37.5% boot time reduction in Lenny is possible (recipe)
On Sun, Jun 01, 2008 at 05:40:04PM +0200, Petter Reinholdtsen wrote:
> Right. Are you talking about CONCURRENCY=startpar or something else?
> Never seen that myself, so I am curious how you get it. Could it be
> wrong init.d script dependencies in some of the packages you have
> installed? Please provide 'ls /etc/rc*.d/' for the machine in question,
> to give me a chance to reproduce this. What kind of hardware was this?
> Can you provide the bootchart graphs for the parallel and non-parallel
> boot?

Yeah, I was using the CONCURRENCY= setting to do it.

As for hardware, a RuggedCom RX1000 v2. That is a Geode LX800, 256MB RAM,
256MB Silicon Systems compact flash on the IDE port, running UDMA, capable
of about 9MB/s read.

Some parts of startup have to be done in certain orders due to
dependencies, and they account for a big part of the startup time, so I am
not sure going parallel actually has much chance to help. There aren't
very many services to start. It seems the extra overhead needed to manage
the parallel startup, along with forking multiple shells to handle it,
actually added a couple of seconds to the startup, much to my surprise.
Not that 30 seconds is really that bad.

Turning on more services might make the situation different, but I decided
there wasn't enough potential parallelization to be worth the risk of
screwing up the boot in this case. A typical PC or server probably has a
better chance of getting a benefit from parallel init.

-- Len Sorensen
Re: 37.5% boot time reduction in Lenny is possible (recipe)
On Mon, Jun 02, 2008 at 08:01:50PM +0200, Petter Reinholdtsen wrote:
> Right. Did you see if readahead helped?

No, I never tried that. I could try that out and see.

> I suspect the makemode of startpar might work better. It is not enabled
> yet. I have to spend some time to test it, as it requires a rewrite of
> init.d/rc. It checks the system load and will start things based on the
> declared dependencies, not the sequence number. This is how insserv and
> startpar are used by the authors, so I suspect it is the optimal way to
> do it. :)

I guess I had better make sure all init scripts declare dependencies
correctly. That can sometimes be hard given how various network related
things can affect each other.

> I am still curious on your boot sequence, though.

Well I have this for rcS.d:

lrwxrwxrwx 1 root root 18 May 16 16:09 S01glibc.sh -> ../init.d/glibc.sh
lrwxrwxrwx 1 root root 21 May 16 16:09 S02hostname.sh -> ../init.d/hostname.sh
lrwxrwxrwx 1 root root 24 May 16 16:09 S02mountkernfs.sh -> ../init.d/mountkernfs.sh
lrwxrwxrwx 1 root root 26 May 16 16:09 S04mountdevsubfs.sh -> ../init.d/mountdevsubfs.sh
lrwxrwxrwx 1 root root 18 May 16 16:09 S05bootlogd -> ../init.d/bootlogd
lrwxrwxrwx 1 root root 22 May 16 16:09 S10checkroot.sh -> ../init.d/checkroot.sh
lrwxrwxrwx 1 root root 20 May 16 16:09 S11hwclock.sh -> ../init.d/hwclock.sh
lrwxrwxrwx 1 root root 17 May 16 16:09 S12mtab.sh -> ../init.d/mtab.sh
lrwxrwxrwx 1 root root 24 May 16 16:09 S18ifupdown-clean -> ../init.d/ifupdown-clean
lrwxrwxrwx 1 root root 21 May 23 11:45 S20loadmodules -> ../init.d/loadmodules
lrwxrwxrwx 1 root root 27 May 16 16:09 S20module-init-tools -> ../init.d/module-init-tools
lrwxrwxrwx 1 root root 18 May 16 16:09 S20modutils -> ../init.d/modutils
lrwxrwxrwx 1 root root 23 May 23 11:45 S21remove8139too -> ../init.d/remove8139too
lrwxrwxrwx 1 root root 26 May 16 16:09 S25libdevmapper1.02 -> ../init.d/libdevmapper1.02
lrwxrwxrwx 1 root root 20 May 16 16:09 S30checkfs.sh -> ../init.d/checkfs.sh
lrwxrwxrwx 1 root root 22 May 23 11:45 S30getinventory -> ../init.d/getinventory
lrwxrwxrwx 1 root root 19 May 16 16:09 S30procps.sh -> ../init.d/procps.sh
lrwxrwxrwx 1 root root 21 May 16 16:09 S31chassisdraw -> ../init.d/chassisdraw
lrwxrwxrwx 1 root root 24 May 23 11:45 S31earlybootclean -> ../init.d/earlybootclean
lrwxrwxrwx 1 root root 17 May 23 11:45 S31lpcfpga -> ../init.d/lpcfpga
lrwxrwxrwx 1 root root 21 May 16 16:09 S32alertdclean -> ../init.d/alertdclean
lrwxrwxrwx 1 root root 17 May 23 11:45 S32hwprobe -> ../init.d/hwprobe
lrwxrwxrwx 1 root root 16 May 16 16:09 S33alertd -> ../init.d/alertd
lrwxrwxrwx 1 root root 23 May 23 11:45 S33enablesystems -> ../init.d/enablesystems
lrwxrwxrwx 1 root root 16 May 23 11:45 S33hwprep -> ../init.d/hwprep
lrwxrwxrwx 1 root root 21 May 16 16:09 S35mountall.sh -> ../init.d/mountall.sh
lrwxrwxrwx 1 root root 21 May 16 16:09 S36configwatch -> ../init.d/configwatch
lrwxrwxrwx 1 root root 24 May 23 11:45 S36iprouteconvert -> ../init.d/iprouteconvert
lrwxrwxrwx 1 root root 31 May 16 16:09 S36mountall-bootclean.sh -> ../init.d/mountall-bootclean.sh
lrwxrwxrwx 1 root root 21 May 23 11:45 S36mountvarlog -> ../init.d/mountvarlog
lrwxrwxrwx 1 root root 16 May 16 16:09 S37setkey -> ../init.d/setkey
lrwxrwxrwx 1 root root 18 May 16 16:09 S37watchdog -> ../init.d/watchdog
lrwxrwxrwx 1 root root 18 May 16 16:09 S38pppd-dns -> ../init.d/pppd-dns
lrwxrwxrwx 1 root root 17 May 23 11:45 S38ptpfpga -> ../init.d/ptpfpga
lrwxrwxrwx 1 root root 19 May 23 11:45 S38purgebist -> ../init.d/purgebist
lrwxrwxrwx 1 root root 19 May 16 16:09 S39dns-clean -> ../init.d/dns-clean
lrwxrwxrwx 1 root root 18 May 16 16:09 S39ifupdown -> ../init.d/ifupdown
lrwxrwxrwx 1 root root 23 May 16 16:09 S39shorewallstop -> ../init.d/shorewallstop
lrwxrwxrwx 1 root root 20 May 16 16:09 S40networking -> ../init.d/networking
lrwxrwxrwx 1 root root 28 May 16 16:09 S40networking-wanpipe -> ../init.d/networking-wanpipe
lrwxrwxrwx 1 root root 19 May 27 10:47 S40shorewall -> ../init.d/shorewall
lrwxrwxrwx 1 root root 15 May 16 16:09 S41ipsec -> ../init.d/ipsec
lrwxrwxrwx 1 root root 18 May 16 16:09 S42end2endb -> ../init.d/end2endb
lrwxrwxrwx 1 root root 18 May 23 11:45 S42gre.init -> ../init.d/gre.init
lrwxrwxrwx 1 root root 18 May 23 11:45 S42ledboard -> ../init.d/ledboard
lrwxrwxrwx 1 root root 17 May 16 16:09 S43portmap -> ../init.d/portmap
lrwxrwxrwx 1 root root 21 May 16 16:09 S45mountnfs.sh -> ../init.d/mountnfs.sh
lrwxrwxrwx 1 root root 31 May 16 16:09 S46mountnfs-bootclean.sh -> ../init.d/mountnfs-bootclean.sh
lrwxrwxrwx 1 root root 20 May 16 16:09 S47lm-sensors -> ../init.d/lm-sensors
lrwxrwxrwx 1 root root 19 May 16 16:09 S50l2tunneld -> ../init.d/l2tunneld
lrwxrwxrwx 1 root root 19 May 16 16:09 S50serserver -> ../init.d/serserver
lrwxrwxrwx 1 root root 21 May 16 16:09 S55bootmisc.sh -> ../init.d/bootmisc.sh
lrwxrwxrwx 1 root root 15 May 23 11:45 S55irigb -> ../init.d/irigb
lrwxrwxrwx 1 root root 17 May 16 16:09 S55urandom -
Re: 37.5% boot time reduction in Lenny is possible (recipe)
On Mon, Jun 02, 2008 at 09:34:09PM +0200, Petter Reinholdtsen wrote:
> The provided boot sequence looks sane enough, but there are quite a lot
> of scripts I do not recognize. The sequence is not reordered based on
> dependencies and thus not fit for concurrent booting. Did you run
> parallel booting with this sequence, or did you enable insserv first?
> Parallel booting does not work without reordering based on dependencies.
> The sequence numbers used in many packages do not reflect their
> dependencies, as not all scripts with the same sequence number can start
> in parallel.

I used that order, yes, since it seemed it would run all scripts with the
same number in parallel. The unrecognized scripts are probably some of our
own for detecting the hardware modules present and validating that against
what was expected, to detect failed hardware or misconfigurations and
such.

> Running /usr/share/insserv/check-initd-order from the insserv package
> might warn for some of these, but it ignores those that happen to get
> the correct ordering with non-parallel booting because of ASCII sorting
> of the scripts.

I might try that out too. Unfortunately some things just have to wait for
other things to start, and some things take a few seconds to start all by
themselves, which is hard to change. Programming an FPGA through JTAG can
take 3 or 4 seconds, and if you need the FPGA up to access some hardware,
you have to wait for it.

I still wonder how hard it would be to replace the BIOS with OpenBIOS or
LinuxBIOS, and whether that could reduce the 20 to 30 seconds the BIOS
takes before hitting grub.

-- Len Sorensen
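The dependency-based ordering being described (start by declared dependencies, not sequence number) can be sketched with a toy topological sort. This is only an illustration of the idea, not how insserv or startpar are implemented, and the script names and Required-Start relations below are made up for the example:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical LSB "Required-Start" relations: script -> what must run first
deps = {
    "mountkernfs": set(),
    "hwclock": set(),
    "ifupdown": {"mountkernfs"},
    "networking": {"mountkernfs", "ifupdown"},
    "portmap": {"networking"},
    "mountnfs": {"portmap", "networking"},
}

ts = TopologicalSorter(deps)
ts.prepare()
while ts.is_active():
    # everything in one batch has all its dependencies met,
    # so it may start in parallel
    batch = sorted(ts.get_ready())
    print(batch)
    ts.done(*batch)
# ['hwclock', 'mountkernfs']
# ['ifupdown']
# ['networking']
# ['portmap']
# ['mountnfs']
```

Note how little parallelism this particular graph exposes: most batches contain a single script, which matches Len's observation that a long dependency chain leaves parallel init little room to help.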
Re: 37.5% boot time reduction in Lenny is possible (recipe)
On Sat, May 31, 2008 at 12:22:36PM +0200, Goswin von Brederlow wrote:
> Benefits may greatly vary. For this slowest 2+GHz 8 core server of the
> world, make that 5 minutes 48 seconds to 5 minutes 30 seconds here, or a
> 5% speed increase. And yes, this slowest 2+GHz 8 core server of the
> world does take that long to boot, mainly BIOS RAM test and SCSI
> probing. Never seen a server count its RAM so slowly. And you get that
> once or twice a year. Not sure if the risk of breaking things is worth
> that optimization.

Parallel startup actually made a system I work with take longer to start
up, so I certainly turned that off again. Ignoring the time spent by the
BIOS (can't change that after all), it went from 29 to 33 seconds with
parallel startup scripts.

-- Len Sorensen
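The quoted saving does check out as roughly 5%; trivial arithmetic on the figures given:

```python
# 5m48s down to 5m30s is an 18-second saving on a 348-second boot
before = 5 * 60 + 48   # 348 s
after = 5 * 60 + 30    # 330 s
print(f"{(before - after) / before:.1%}")  # 5.2%
```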
Re: Mouse configuration during installation needs improvement
On Thu, May 29, 2008 at 08:16:28AM +1000, Ben Finney wrote:
> Where is your data for this assertion?

The number of people that have no idea how to get to the console from X.
Personally I hate dealing with machines that don't have gpm installed, but
I don't want to bloat the base install either.

> This argument would also see the removal of 'login', since that's not
> needed by your putative majority of people who don't log in over
> text-only interfaces.

The default system does not have X. Not having login prevents you from
working. Not having gpm is at worst slightly inconvenient.

> This argument fails for the same reason: just because *few* people use
> it is not sufficient reason to drop it from the install. If you don't
> want 'gpm' installed, you need a different argument.

Don't install stuff that is non-essential. gpm certainly is not essential,
so don't install it. login is essential, so do install it.

-- Len Sorensen
Re: Re: Mouse configuration during installation needs improvement
On Thu, May 29, 2008 at 07:35:20AM -0700, Stephen Powell wrote:
> Thanks for the update on mouse sharing in newer kernels. I didn't
> realize that this support had been added. That does take away part of my
> supporting argument for configuring X to use gpm.

It was a very nice improvement.

> I realize that PS/2 mice were not intended to be hot swapped, but stuff
> happens. Sometimes the connector is loose and falls out, sometimes a
> mischievous co-worker unplugs it as a practical joke, sometimes the
> mouse fails, sometimes someone trips over the cord, sometimes the dog
> chews on it, sometimes an inquisitive toddler unplugs it, etc. Being
> able to recover from these things without requiring a reboot (or at
> least restarting the X server) is a nice feature, one that gpm provides.

For a PS/2 port, there is NOTHING software can do to recover. The hardware
on the majority of PCs requires a reset for the PS/2 port to come back to
life. gpm is of no help here. X does mouse handling just as well as gpm
does.

> Well, as Scotty of Star Trek fame says, "The more they overthink the
> plumbing, the easier it is to stop up the drain." (Star Trek III: The
> Search for Spock) But then again, you could make that argument for the
> new kernel support for mouse sharing too.

Yes, adding another layer of software also adds another thing that can go
wrong. The key is to make the benefits greater than the cost.

> I can only say that I have used gpm on several different machines under
> several different releases of Linux, and I have never had a bit of
> trouble with it. In some cases I seem to remember it allowing the mouse
> to work when X couldn't drive it directly (the fups2 protocol came to
> the rescue). And it has saved my hindquarters when the mouse got
> unplugged somehow.

/dev/input/mice actually has the kernel convert all known mouse formats to
one protocol as far as I know, so all those mouse protocol issues are gone
too.

> I'm not sure how one would know that most people don't use the console.
> I, for one, use it a lot. But even if it's true, I don't see why a
> device driver for a device that is present on the system shouldn't be
> installed. Should you not install serial port support because most
> people don't use the serial port? It won't HARM people who DON'T use the
> console, will it? We're talking about basic hardware support here,
> something that many applications can use -- not an application. Please
> reconsider.

gpm is NOT a driver. It is a tool that can use the mouse interface in the
kernel and do useful things with the terminal. Other programs could do the
same if they wanted to. It is not a driver, though.

-- Len Sorensen
Re: Mouse configuration during installation needs improvement
On Thu, May 29, 2008 at 12:22:15PM -0500, Ron Johnson wrote:
> Which I did many years ago. But it would still make it easier for us
> dual-use people, and not affect only-gooey users, if gpm were the
> default.

I would like ssh installed by default before gpm, but I don't think we
need to go back to doing that either. The admin installing the machine
should decide what to add to the base system. For that matter, gpm is only
useful on machines with a mouse; some machines just have serial consoles.
They have great use of login and getty and such, but gpm and X would just
waste space.

-- Len Sorensen
Re: Mouse configuration during installation needs improvement
On Wed, May 28, 2008 at 06:49:17AM +0200, Christian Perrier wrote:
> Not to mention the various remarks that have been made, I would like to
> add that people who use the Linux console on a regular basis (which is
> usually what motivates activating a mouse on it) are perfectly able to
> know that gpm is the package to install if one wants support for the
> mouse in the console. So, do we really want to install/configure gpm for
> every user of the system, knowing that a very large part of users will
> never use it, and those who might need it are perfectly able to figure
> out how to do "aptitude install gpm"? In short, I don't think that
> having gpm by default is worth it.

Given that most people never use the console, installing a service that is
only for console use by default is simply wrong. The fewer services that
need to be enabled by default, the better. And installing gpm later if
needed still allows it to get along with X, so there is no problem,
contrary to the original post. There was in the past, but the kernel has
improved since then.

-- Len Sorensen
Re: Mouse configuration during installation needs improvement
On Tue, May 27, 2008 at 12:57:11PM -0700, Stephen Powell wrote:
> Per the suggestion of Jérémy Bobbio when he closed Bug #481514 against
> installation-reports, I am posting this item to the debian-devel mailing
> list. The Debian installer needs some improvement when it comes to mouse
> configuration. Currently, if the user requests a standard system and a
> desktop environment in the Debian installer, the X Window System will be
> installed in such a way that it drives the mouse directly, rather than
> going through gpm; and gpm is not installed. I recommend that gpm be
> installed whenever a mouse is detected on the system; and if the X
> server is also installed, it should always be configured to get mouse
> events from the gpm daemon rather than drive the mouse directly. This
> will allow the use of the mouse both in a virtual console and in X. Not
> only that, but hot swapping the mouse will be far less disruptive for X
> users. When the X server drives a standard PS/2 mouse directly, if the
> user unplugs the mouse and plugs in another one while the system is
> running, he must stop and restart the X server, losing all of his X
> applications in the process, in order to regain the use of the mouse.
> But when using gpm, all he must do is stop and re-start the gpm daemon
> to make the mouse work again. The X server is unaffected and the X
> applications are unaffected. With this recommendation, you should also
> move gpm to CD-ROM number 1.

With current kernels, if you use /dev/input/mice, the port can be shared
by gpm and X at the same time, and all mice you connect (no matter what)
show up in that device.

Of course PS/2 mice can not be connected while the system is on, since the
hardware simply is not designed for that (I believe it can actually be
damaged by trying, although I have not seen it happen). On a few systems
it seems to work if you plug in a PS/2 mouse on the fly, but on the vast
majority it does not work until you reset the system. USB mice of course
are hot plug and hence much simpler.

I like gpm, and use it, but I no longer point X at it like I used to, now
that the kernel allows mouse sharing at all times (as long as you don't
try to use the obsolete /dev/psaux device to access the mouse).

gpm would also be on the first CD already, if lots of people used it.
Apparently they do not.

-- Len Sorensen
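As an illustration of the "one protocol" point: as far as I understand it, readers of /dev/input/mice see PS/2-style packets regardless of the physical mouse attached. A minimal decoder for the basic 3-byte PS/2 packet might look like this (a sketch of the wire format, not gpm or X code):

```python
def decode_ps2(packet: bytes):
    """Decode one basic 3-byte PS/2 mouse packet: flags byte, dx, dy."""
    flags, dx, dy = packet[0], packet[1], packet[2]
    # Bits 4 and 5 of the flags byte are the sign bits for dx and dy
    if flags & 0x10:
        dx -= 256
    if flags & 0x20:
        dy -= 256
    buttons = {
        "left":   bool(flags & 0x01),
        "right":  bool(flags & 0x02),
        "middle": bool(flags & 0x04),
    }
    return buttons, dx, dy

# Example: left button down, moved 5 right and 3 up
# (bit 3 of the flags byte is always set in a valid packet)
print(decode_ps2(bytes([0x09, 0x05, 0x03])))
# ({'left': True, 'right': False, 'middle': False}, 5, 3)
```

Because the kernel normalises every mouse to this one format, a consumer like gpm or X no longer needs per-mouse protocol handling, which is why the old protocol headaches went away.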
Re: conglomeration packages (Re: Will nvidia-graphics-drivers ever transition to testing?)
On Thu, May 15, 2008 at 06:28:36PM -0400, Filipus Klutiero wrote:
> I don't see your point.
>
> > I can have libfoo1 and libfoo2 installed and used at the same time, so
> > both applications compiled for libfoo1 and libfoo2 can be used at the
> > same time. I can recompile my applications for libfoo2 as I get around
> > to it. When everything is recompiled, libfoo1 can be removed. For
> > kernel modules, I have to recompile all the kernel modules in order to
> > move to a new kernel, since I can't use a mixture of kernel modules
> > for two different kernel versions; I can only be running one kernel at
> > a time.
>
> We were talking about nvidia-graphics-drivers. Prebuilt nvidia LKM
> packages are already built by dedicated source packages.

No, the nvidia package generates nvidia-glx, nvidia-glx-dev,
nvidia-kernel-source and such. It does NOT know anything about building
modules for specific kernel variants. That is done manually by someone so
far (and hence seems to be a bit infrequent). The linux-modules-*-2.6
package makes it possible to simply have the buildds take care of that
job.

-- Len Sorensen
Re: conglomeration packages (Re: Will nvidia-graphics-drivers ever transition to testing?)
On Wed, May 14, 2008 at 08:13:53PM -0400, Filipus Klutiero wrote:
> Your second parenthesis is wrong. Just like LKM-s when the stock
> kernels' ABINAME is bumped, applications need to be rebuilt when the ABI
> of one of the libraries they link to changes in a way which is not
> backwards-compatible. You can check
> http://wiki.debian.org/OngoingTransitions for examples of library
> transitions.

The old libraries don't go away right away, though. With the kernel you
can only be running one at a time, so you have to recompile the modules to
make them work. Libraries also change ABI versions a lot less often than
the kernel changes.

-- Len Sorensen
Re: conglomeration packages (Re: Will nvidia-graphics-drivers ever transition to testing?)
On Tue, May 13, 2008 at 10:32:07PM -0400, Filipus Klutiero wrote:
> I don't follow you. iceweasel, for example, is not independent from,
> say, libnspr.

If they come from one source package, then they all build together. If
they do not, then it's a dynamically linked library, and each can be built
and updated independently. Kernel modules have to be rebuilt if the
sources change (just like any application, of course) but also if the
kernel is changed (which an application does not, not even when libraries
change), which is hence a rebuild requirement external to the package
itself. That's what is different.

-- Len Sorensen
Re: Will nvidia-graphics-drivers ever transition to testing?
On Mon, May 12, 2008 at 09:42:31PM -0400, Filipus Klutiero wrote:
> No, a more frequent change is disabling/enabling modules [on some arch].
> Even if you were right, adding new module packages doesn't justify
> updating other modules. Reusing the ice* example, suppose that Debian
> would have such an icezoo source package and Mozilla would release a new
> IRC client. Adding, say, icebear, to the packages generated by icezoo
> wouldn't make me happy, because I'd have to update iceweasel even if I
> wouldn't use icebear. Otherwise, I wouldn't like iceweasel updates to be
> blocked just because icebear has a serious regression.

You can't compare something stupid like that with something useful like
building the kernel modules. The kernel modules are special in that they
depend both on the current kernel build and on the driver providing the
source. If either changes, a rebuild has to be done. Normal packages that
build all their binary packages in one go, with no external dependencies,
have no reason to be built by calls from an external package.

-- Len Sorensen
Re: conglomeration packages (Re: Will nvidia-graphics-drivers ever transition to testing?)
On Tue, May 13, 2008 at 09:27:31PM -0400, Filipus Klutiero wrote: Packaging icebear wouldn't necessarily be useless. I defined it as yet another IRC client for the sake of the example. You can imagine it as yet another media player if you think that's more useful. You can't compare packaging two unrelated and independent programs which can be built completely independently with the kernel modules, which cannot be built without the kernel source (which varies independently of the module sources). A change to either the kernel sources or module sources will require a rebuild. So any comparison of an application to kernel modules is completely irrelevant and pretty much pure nonsense. -- Len Sorensen
Re: Will nvidia-graphics-drivers ever transition to testing?
On Sun, May 11, 2008 at 03:39:15PM -0400, Filipus Klutiero wrote: Yes, the problems with conglomeration packages are the same as you'd get by merging 2 somewhat related source packages together, say iceweasel with icedove. Although the source packages would probably share a bit of code, if there's a libpng transition and only iceweasel is ready, you need to drop icedove, but your only choices are to drop icedove and iceweasel or re-upload with a disabled icedove. Transitions get longer and/or versions are bumped constantly. For example, linux-modules-extra-2.6 was uploaded 7 times to unstable in 2008, while iceweasel was only uploaded 5 times. linux-modules-extra-2.6 only did one Linux ABI transition during that time. If nvidia prebuilt modules are merged in linux-modules-nonfree-2.6, they'll be tied to kqemu prebuilt modules. This would hurt both nvidia LKM-s and kqemu LKM-s, which are already in bad enough shape. linux-modules-extra-2.6 was uploaded many times since more and more modules were getting added to its list to build. There is no source code for any of the modules in the package however, just a list of which ones should be built from their own *-source packages. It is only changed when new module packages should be supported by it, and when a new kernel comes out so that it can explicitly build modules for that new kernel. -- Len Sorensen
Re: Will nvidia-graphics-drivers ever transition to testing?
On Sat, May 10, 2008 at 12:38:29PM -0700, Mike Bird wrote: [Moved from debian-release to debian-devel] WHY ON EARTH should we intentionally require that packages install successfully with known unmet dependencies which will cause failure at runtime? Well, nvidia-kernel should soon be built from linux-modules-nonfree-2.6, so perhaps that will ensure that a complete set of modules is in unstable so that everything can move to testing together. That should solve any dependency issue preventing things from moving to testing, at least. -- Len Sorensen
Re: RFH: Multiarch capable toolchain as release goal
On Tue, Apr 15, 2008 at 09:03:54PM +0200, Andreas Barth wrote: * Goswin von Brederlow ([EMAIL PROTECTED]) [080415 20:34]: Description: The toolchain should be ready to handle libraries and include files in the multiarch locations. Bug-Url: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=369064 State: All done except for binutils. Patch exists. Binutils are frozen for Lenny, so please no additional changes. I suspect by the time a fully working multiarch is done, x86 won't need it anymore because everything will be fully 64bit. :) Now I suppose sparc and others might still like it if they have performance advantages of 32bit code over 64bit code, in which case keeping 64bit for only those programs where the extra address space is worth it would be great. -- Len Sorensen
Re: (English-speaking) Canadian users: default to US or Canadian multilingual keymap?
On Sun, Apr 13, 2008 at 08:28:12AM +0200, Christian Perrier wrote: I'm seeking advice for #475482. console-data recently got a new keymap, namely ca-multi, which features the Canadian multilingual keymap. This keymap is standardized by standard bodies in Canada and seems to be available from several hardware vendors. My understanding is that local official bodies are required to use this keymap (particularly in Quebec, but the standard seems to apply to the entire country). On the other hand, it is pretty likely that English-speaking users in Canada traditionally use the US keymap (see #475482). The point is whether the *default* keymap preselected in D-I should be us or ca-multi when English+Canada is chosen. That keymap choice also influences the X keymap choice. That could be a sensitive choice (linguistic topics in Canada always are), which is why I'm seeking more advice, particularly from Canadian users (or users living in Canada). Well, I would say that I haven't seen a French Canadian keyboard outside of Quebec, although perhaps the government does use them. Certainly for the typical user who actually gets to install Debian on their machine: those who pick English as the language and Canada as the region will also pick the US keyboard layout in 99.99% of cases, since that's what they have. If they were to pick French as their language and be in Canada, then the French Canadian layout would almost certainly be what they have, since typing French on a US keyboard is quite a pain. -- Len Sorensen
Re: A suggestion
On Thu, Apr 03, 2008 at 11:03:51AM +0100, Matthew Johnson wrote: _you_ may want more up to date packages, but a lot of people are entirely happy with etch on their desktop. For example, both me and my mother. I'd also go as far as to say that most corporate Linux desktops, to pick another example, would welcome the lack of change for 18 months. Given how much uproar there is about Microsoft's desire to retire Windows XP while many people would rather stick with it than go to Vista, perhaps the idea that everyone wants the latest and greatest is no longer true. Many people don't want breakage. Linux hobbyists may want the constant latest version, and they can cope with testing (or, often, run ubuntu or gentoo for that reason), but most computer users would rather things _didn't_ change regularly, and those people are also our target audience. I run unstable on my home machines, and stable on my work machines. At work I am trying to get things done, not play with my software. At home it's different. I don't need to spend 4 hours figuring out why X no longer works after an upgrade at work. At home I don't mind (although unstable rarely breaks stuff). -- Len Sorensen
Re: A suggestion
On Thu, Apr 03, 2008 at 04:13:19PM +0100, Dave Holland wrote: Strange; I run stable at home, because if I spent 4 hours figuring out why X no longer works, my wife would kill me (a) for breaking X, and (b) for wasting 4 hours. ;-) My wife would only be upset if I messed with her machine, or if I broke the mythtv box (so I only upgrade it every few weeks if everything looks good on mine). -- Len Sorensen
Re: A suggestion
On Fri, Mar 28, 2008 at 09:02:49PM +0530, Unni wrote: Thank you for all those replies. I am really excited to read them. The latest problem I am experiencing is that I have a recently purchased Dell Inspiron 1420 laptop and I tried installing Debian Etch on it. Most of the drivers were not detected, like display, audio, modem, etc. I tried posting my problem on LQ but didn't get a result to solve my problem. But I saw Dell recommending Ubuntu for the same machine. I tried it, with satisfactory results. But I think Ubuntu doesn't provide the necessary packages for development on its basic CD. I can't even use X Window programming in it. I have the full CD set of Etch. And I wanted it to work well. Well, Etch was released about a year ago, with a kernel probably 6 months older than that. A recent Dell (Dell being probably the company most eager to use new components in their machines) is just not likely to be supported. Now if you were to grab the weekly build of Lenny (Debian testing) you would likely have a much easier time. When the Etch+1/2 release happens (which is certainly a nifty new idea for Debian) in maybe 6 months or whatever the plan is, with a newer kernel, then all of a sudden there is a chance Etch would install on your new Dell. This is what inspired me to give that suggestion. So I cannot contribute to the development. I am trying to learn more. If I can do something new, I will surely contribute. -- Len Sorensen
Re: A suggestion
On Wed, Mar 26, 2008 at 12:32:33PM +0530, Unni wrote: I have been using Debian for the last 2 years (I was unaware of Linux before that). I installed Debian Etch on many machines with varying configurations. Most of the time I was only able to install the base system with no sound, poor resolution, no video, etc. This was especially on laptops. Recently I tried Ubuntu. Surprisingly to me, after installation the sound card, video card, etc. were detected automatically. And more: the appearance is much better compared to Etch, I think. Why can't Debian be more interesting? More graphics, more drivers, etc. I think this can be done without a big effort (correct me if I am wrong). I suggest making this change possible. If this is not a good suggestion, please discard it. I love using Debian and I want it to be more than Ubuntu. Most machines I have installed Etch on detected everything perfectly too. Even more so for recent attempts with Lenny. Ubuntu does seem to try harder to make auto-detection of x86 hardware work well, while Debian tries to make sure all architectures work well, although given the scope of all that hardware, it may not be quite as automatic. Of course Ubuntu also updates their releases way more often than Debian, and will work more easily on new hardware than Debian as a result, since you can't expect a 2-year-old distribution to work perfectly on a 1-year-old machine whose hardware hadn't even been designed when the software was released. Try installing Windows XP on a new machine without having some driver disks to help it with all that new hardware. -- Len Sorensen
Re: A suggestion
On Wed, Mar 26, 2008 at 08:35:51AM -0700, Mike Bird wrote: The OP makes an important point. Debian is losing users and relevance. Although Debian supports a wider range of architectures than Ubuntu, the reality is that Debian now targets a much narrower audience - the old hardware crowd. Popcon[0] records 97% of hits from just the i386 and amd64 architectures, yet packages are frequently delayed for weeks or even months while architectures enjoying 0.1% popcon scores struggle to catch up. Lenny is still only at Linux kernel 2.6.22, which means little support for hardware up to a year old! Sid is not suitable for most people, and most people lack the skills or inclination to install and maintain a mix of Lenny and Sid. Stable has Linux kernel 2.6.18, which means little support for hardware up to two years old, and six months still to go before the next version. Ubuntu has a much better handle on the issue of producing timely releases, but Ubuntu is also quirky and very much my way or the highway. I would hate to be unable to continue using Debian. The next DPL should have a solid plan for reversing Debian's decline. If this means that some architectures fall by the wayside for lack of interest then so be it. Better to lose several 0.1% architectures than for Debian as a whole to continue the slide towards irrelevance. Part of what makes Debian relevant is that it does support so many architectures so consistently. Debian aims to be complete and as good as possible. Ubuntu aims to do most things for most people, and who cares about the rest. This makes Ubuntu a subset of Debian, and hence much easier to maintain. If Ubuntu serves you better, go ahead and use that. Many Debian developers work on both Ubuntu and Debian as far as I can tell, and it seems to be improving lots of things for both systems. I would hate to see Debian drop stuff just because the majority of users find it inconvenient to be delayed at times by the minority. 
Often the delays caused by other architectures cause bugs to be found and fixed that would otherwise have gone unnoticed for much longer. It does seem that some architectures are being demoted to second class in future releases, so perhaps you will get your wish at some point. Remember that what you consider my way or the highway in Ubuntu is very much what would happen if the x86 crowd got to ditch the other architectures in Debian because they slow things down. -- Len Sorensen
Re: BitTorrent and ISP interference
On Thu, Mar 20, 2008 at 03:46:44PM +, brian m. carlson wrote: I think it's a good idea. I'm not a DD, but I don't appreciate getting dupe messages from the list. procmail is not set up to handle them, and I read the vast majority of the lists I post to. I only wish non-Debian lists had the same policy. The larger the volume on a list gets, the more useful duplicate messages get. That is why lists like LKML require reply-all unless explicitly requested not to (which seems to be just about never). It also is quite useful for lists that don't require subscribing to post. Ignoring that particular provision, it contains useful tips such as only post in English (on non-localized lists), use the proper list, and don't send HTML email. These are all things I think we should encourage on Debian lists, and on lists in general. I for one would rather get two copies of a message than none. -- Len Sorensen
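For anyone who, unlike the poster, does want procmail to drop duplicates, the classic recipe from the procmailex(5) man page keeps a small cache of recently seen Message-IDs via formail (a sketch; the cache size and lockfile name are arbitrary choices):

```
# procmailrc fragment: drop messages whose Message-ID was already seen
:0 Wh: msgid.lock
| formail -D 8192 .msgid.cache
```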
Re: Using standardized SI prefixes
On Mon, Jun 11, 2007 at 05:49:18PM -0600, Wesley J. Landaker wrote: Well, in SI units, KB never means kilobyte, and is not ambiguous at all; it's a kelvin-bel. Nope. Kelvin is a unit, not a prefix. K as a prefix means kilo, so KB is kilobel. You'd better have small values or you are dealing with something very, very loud. So yes, trying to apply SI units to bytes has already gone all wrong, and they shouldn't have even tried in the first place. -- Len Sorensen
Re: Using standardized SI prefixes
On Tue, Jun 12, 2007 at 04:20:42AM +0100, Alex Jones wrote: Then why bastardise an SI prefix? This surely serves only to confuse people. Why don't we invent a new word? Should we call it the thousandbyte? Because computer people have always bastardised everything. Booting, window, mouse, etc. It is simply a convenient accident that 2^10 ~= 10^3. As I'm sure you're well aware, this approximation starts to become way off as you approach tera-. In fact, that's about 10% error, which is simply unacceptable. It's time to move on and accept that the approximation fails with big numbers. And I suppose you think that differences such as that between the American and the English ton are acceptable and desirable. Let it be known that I strongly disagree with you here. :) The drive makers screwed up. They are the ones that should be fixed. -- Len Sorensen
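The growing gap between binary and decimal prefixes described above is easy to check (a quick sketch in Python):

```python
# Percentage by which each binary prefix exceeds its decimal namesake.
for prefix, exp2, exp10 in [("kilo", 10, 3), ("mega", 20, 6),
                            ("giga", 30, 9), ("tera", 40, 12)]:
    error = (2 ** exp2 / 10 ** exp10 - 1) * 100
    print(f"{prefix}: 2^{exp2} vs 10^{exp10} -> {error:.1f}% larger")
# tera comes out at 10.0% larger, matching the "about 10% error" above
```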
Re: Using standardized SI prefixes
On Tue, Jun 12, 2007 at 06:25:22PM +0200, Josselin Mouette wrote: Prefixes are case-sensitive. Kilo is k. (This is also why there is much less ambiguity with K used for kibibytes.) Hmm, I used to think both k and K were accepted for kilo, but I can't find anything that says K is accepted for use as kilo. -- Len Sorensen
Re: Using standardized SI prefixes
On Mon, Jun 11, 2007 at 07:05:23PM +0200, Magnus Holmgren wrote: Why - besides pronunciation? Aren't the names enough? :) Maybe they could have called them kilobin bytes and megabin bytes, or something somewhat less awful-sounding than what they came up with. Did they even talk to anyone who might actually use the units? -- Len Sorensen
Re: Is there a way to positively, uniquely identify which Debian release a program is running on?
On Tue, Jun 05, 2007 at 09:37:58AM -0400, Kris Deugau wrote: ... by making reasonable assumptions about what is on the system based on a standard install of $version of $distribution. Well, too many seem to assume that you are running some version of Red Hat, and that Red Hat equals Linux and there is no reason to consider anything else. Asking enterprise vendors to support your (customised, hacked-up, non-standard) OS install is, um, unlikely. Unless you're paying them enough for them to completely mirror your environment in the dev lab and certify their product on *your* particular combination of software. (Of course, most people running mixed-version Debian systems are unlikely to be buying enterprise software like Oracle. *g*) Sure. If I say I run Debian Sarge, and I install the .deb for Debian Sarge, it should work, unless I have mixed in stuff that isn't part of Sarge, in which case that would be my problem. So yes, as long as they provide a proper .deb targeted at Sarge, that would be fine. Of course Etch would be more interesting than Sarge by now. (This is drifting off from my original question: what simple test(s) for uniqueness can I use to determine which version of which distribution I'm on? FWIW, it seems that for my purposes, the contents of /etc/debian_version and the full version+release string from the base-files package are sufficiently unique.) Certainly /etc/debian_version is what I would rely on. -- Len Sorensen
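A minimal sketch of the check the post settles on, reading /etc/debian_version (the function name is invented; it returns None on non-Debian systems):

```python
from pathlib import Path
from typing import Optional

def debian_release() -> Optional[str]:
    """Contents of /etc/debian_version, or None if the file is absent."""
    try:
        return Path("/etc/debian_version").read_text().strip()
    except OSError:
        return None
```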
Re: Is there a way to positively, uniquely identify which Debian release a program is running on?
On Sun, Jun 03, 2007 at 11:16:08PM +0200, Javier Fernández-Sanguino Peña wrote: Think about Enterprise (non-free) software like Oracle, HP OpenView, Tivoli, Remedy... Do you expect vendors of this software to understand^Wimplement package management based dependencies for *all* Linux distributions? LSB tries to simplify the Linux environment for such software. lsb_release is defined as the answer to the question: which distribution am I running, and which release is it? For the kind of cash the enterprise vendors tend to charge, yes, actually, now that you ask, I think I can expect them to figure out dependencies and make proper packages. Opera seems to manage, and they are giving away their non-free software for free. Managing to package and test your code on most major distributions is actually a good way to ensure the programmers didn't do something stupid that is going to cause problems later. -- Len Sorensen
Re: Debian desktop -situation, proposals for discussion and change. Users point of view.
On Thu, May 17, 2007 at 07:56:57AM +0200, Mgr. Peter Tuharsky wrote: Yes, I have written it there too. The kernel is, IMO, the best thing to upgrade a few times during a release cycle, with quite little risk. Upgrading the kernel is quite high risk. Features come and go and change with each new kernel. Drivers break in some releases, although usually only for less common hardware that no one tested during the development of that release, or new features are added that require updated user-space tools, etc. For example, 2.6.16 and higher tag all netkey ipsec packets with a policy tag of 'ipsec'. Before 2.6.16 they didn't. So going to 2.6.16 or higher broke shorewall in Sarge, since it didn't know about the new policy, and it required a newer version of iptables, which also had to support this new behaviour. Do you think people with ipsec tunnels would be happy if their setup stopped working just because of a kernel upgrade, added to support all the people who just have to have support in Debian's stable release for a machine whose hardware didn't even exist when that release was made? Yes, Debian was the last distro using XFree86 that I know of. Of course the transition was complex! It sure seems much better with X.Org than XFree86, though. That should be changed anyway, since security upgrades occasionally break things too. Downgrades are in general impossible to do, unless you put in a lot of useless code that will never be used except when downgrading, which of course will be used so rarely that it will be full of bugs due to never being tested by anyone. Remember, upgrades sometimes have to convert files to a new format. A new package can do this because at the time it was made, the maintainer knew about the older versions already made. If you try to install an older package, there is no way that older package could have known, at the time it was made, how to convert from a newer file format back to the old one. 
So to solve this you would now have to add some kind of downgrade feature to the scripts of the new package that could be called before going to an older package. Sometimes data is no longer used and is dropped from a file format, or new stuff is added. If stuff was dropped, how are you going to restore it on the downgrade? If stuff was added, I guess you can just throw it away on a downgrade. But overall, supporting downgrades requires a time machine and lots of generally untested support code. I wouldn't want to try to support that. Of course, often there is no change to the data or config files, and you can simply install the old package again using whatever package tool you like by telling it what version to install. So unofficially, downgrading is possible most of the time, but when it isn't, supporting it isn't worth trying. -- Len Sorensen
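The one-way nature of format conversions described above can be illustrated with a hypothetical (entirely invented) config migration: the upgrade drops a field, so the downgrade can only guess at it:

```python
def upgrade(cfg_v1: dict) -> dict:
    """Migrate a format-1 config to format 2, dropping an obsolete field."""
    cfg = dict(cfg_v1)
    cfg.pop("legacy_option", None)  # format 2 no longer stores this
    cfg["format"] = 2
    return cfg

def downgrade(cfg_v2: dict) -> dict:
    """Attempt the reverse: the dropped field cannot be recovered."""
    cfg = dict(cfg_v2)
    cfg["format"] = 1
    cfg["legacy_option"] = "default"  # best guess; the real value is gone
    return cfg

# A round trip loses the user's setting:
original = {"format": 1, "legacy_option": "custom"}
assert downgrade(upgrade(original))["legacy_option"] == "default"
```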
Re: Debian desktop -situation, proposals for discussion and change. Users point of view.
On Thu, May 17, 2007 at 08:10:21AM +0200, Mgr. Peter Tuharsky wrote: Yes, and security upgrades never change the behaviour of software and never break things. That's the way it OUGHT to be. The reality has its own turbulences. I don't remember security upgrades ever breaking anything in testing. I am sure it must have happened at some point, but the security team appears to take their work very, very seriously. Well, I might have been out of luck. Maybe it hasn't been hundreds, just a full screen of them (didn't count them and wouldn't remember anyway). That changes nothing about the assertion that routinely using testing is not an official, nor advisable, way for ordinary users. See below. My original intention was not, and still is not, to discuss the capabilities of testing. I want to discuss possibilities: how could stable be more attractive for the ordinary user, how to make it usable on hardware newer than 3 years old, how could the user be blessed with fresh software rather than 2-year-old software, how to allow him to easily and effectively participate in bug reporting, and how to avoid the work of backporting security fixes to ancient software. The answer to all of those is 'testing'. That is all stuff stable is definitely not meant to do. If you and several people claim you haven't met such problems with testing, I can live with that. I also heard people whose experience was different, and my personal one is closer to theirs. That's all. All it takes is one package that has a dependency problem to prevent hundreds of other packages from upgrading or installing fully. It looks like everything is broken, when all it really is is just one missing or broken package. When you know how to read what the upgrade system tells you, you can usually deal with it or put the right things on hold for a few days while the missing package makes it into testing. 
In unstable there are occasionally bad packages uploaded that break things enough that you just have to wonder if the maintainer even tried to install it themselves. :) Usually there will already be an answer on the Debian IRC channel about how to go back or fix it. -- Len Sorensen
Re: Building packages twice in a row
On Wed, May 16, 2007 at 08:10:44AM +0200, Martin Zobel-Helas wrote: as a QA effort the whole archive was rebuilt yesterday to catch build failures, and to check whether a package can be built twice in a row (unpack, build, clean, build). We found about 400 packages not having a sane clean target. To cite http://www.debian.org/doc/debian-policy/ch-source.html#s-debianrules clean This must undo any effects that the build and binary targets may have had, except that it should leave alone any output files created in the parent directory by a run of a binary target. What about packages that automatically pull in updated autoconf files as part of the build, or regenerate .po files which were already there, but where a new version of the tools generates a different .po file from what was already there? The result is that doing the build caused the sources to change and differ from the source as extracted. Some packages also leave around .orig files due to patches applying with offsets or fuzz, which also don't get cleaned up and leave the sources changed. Neither of these cases causes build failures, but they are quite annoying when trying to diff for any changes one may be trying to make to a package. I know of at least a few packages that had these issues under Sarge and I believe also under Etch: quagga, dhcp3, openswan: generate changed .po files; ntp: changes autoconf files; grub: changes autoconf files and reruns automake, generating new .in files. It would certainly make life a little easier for me if these kinds of changes were simply not permitted. If a package can't be built and cleaned and end up exactly like it was when extracted, then there is something wrong with how it builds or how it cleans. -- Len Sorensen
Re: Building packages twice in a row
On Wed, May 16, 2007 at 07:57:33PM +0200, Armin Berres wrote: I may be wrong, but IIRC removing those generated files in the clean target is the solution if you want a clean .diff.gz. But dpkg-buildpackage will then spit out lots of warnings about being unable to store the deletion of a binary file in the diff. So it will look ugly, but work, I guess. diff will show the files as having disappeared, of course, versus leaving them there and having dpkg-buildpackage tell you it can't store the changes to binary files in the diff, while diff will tell you the binary files changed. So leaving the regenerated files there or deleting them both result in the same amount of noise from diff and dpkg-buildpackage for binary files. Saving the files, then restoring them as part of cleaning, would probably be the only way to keep diff and dpkg-buildpackage from making any noise about changes, since it is the only way there aren't any changes. -- Len Sorensen
UTF-8 encoding of changelog files
I am currently trying to find a valid UTF-8 encoded text file to check my terminal settings and such to make sure I have it working right. So to do this I figured I would just find some changelog file that claims to be in UTF-8 format and see if it views correctly. Well, so far no luck. Every single one I have looked at that claims to be in UTF-8 format in accordance with policy 3.6.0 and higher appears to contain no UTF-8 characters, but does contain characters that are illegal by UTF-8 rules, and looks exactly like one of the older western European encodings instead. So should changelog files for Debian packages be UTF-8, and if so, are there any that actually are, and does lintian have a simple check to ensure no illegal characters are in the file as per UTF-8 rules? It should be pretty simple to check, after all. Sure looks like a lot of files are wrong, unless my understanding of UTF-8 is broken (which I must admit is possible, although I have managed to view a UTF-8 file successfully now, while none of the weird characters in the changelog files I looked at so far are considered printable characters with the same setup). -- Len Sorensen
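A quick way to perform the kind of check the post asks about is a strict decode; this is a sketch, not lintian's actual implementation:

```python
def is_valid_utf8(data: bytes) -> bool:
    """True if data is well-formed UTF-8."""
    try:
        data.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

# A bare 0xE9 ('é' in ISO-8859-1 / Latin-1) is illegal in UTF-8:
assert not is_valid_utf8(b"caf\xe9")
# The same text properly encoded passes:
assert is_valid_utf8("café".encode("utf-8"))
```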
Re: UTF-8 encoding of changelog files
On Wed, May 16, 2007 at 02:53:09PM -0700, Russ Allbery wrote: Lennart Sorensen [EMAIL PROTECTED] writes: Yes. Hmm, well, I haven't found one yet and I think I checked 10 so far, all of which have non-ASCII characters but none of which appeared valid to me. lintian's is, at least. Many others are these days as well. All of my packages have UTF-8 changelog files, although not all actually have non-ASCII characters. Well, of course non-ASCII characters don't have to appear in most changelog files. Yup. Has for years. debian-changelog-file-uses-obsolete-national-encoding is the tag. I wonder how many packages are triggering that right now. So is this valid UTF-8? I don't believe so. If it is, I have to go reread the UTF-8 spec again. :) 4b c4 99 73 74 75 74 69 73 (Kęstutis) (I found this one in base-config from Sarge, since I happened to have that one open in a hex editor for checking). -- Len Sorensen
Re: Mysterious NMU (Bug #423455)
On Tue, May 15, 2007 at 12:20:22PM -0400, Roberto C. Sánchez wrote: Yes, but in reality, what is the likelihood that either a security update or NMU would introduce an incompatible change? I would say that such a possibility is extremely low. Why couldn't a security change require making incompatible changes to something? A binNMU would hopefully never change any such thing, though. Perhaps the policy should change so that security uploads are done with x.y.z-(w++)~lenny1? That is, the Debian version number gets incremented. But isn't the idea that the version in testing/unstable should _always_ be higher than the version in stable, including security versions? That makes incrementing 'w' not an option. -- Len Sorensen
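The ~lenny1 trick works because dpkg's version comparison sorts '~' before anything else, even the end of the string. This simplified comparator (a sketch only; real dpkg also compares digit runs numerically) shows the effect:

```python
def char_weight(c: str) -> int:
    """Sort weight for one character: '~' sorts before everything."""
    if c == "~":
        return -1
    return ord(c) if c.isalpha() else ord(c) + 256

def cmp_versions(a: str, b: str) -> int:
    """Character-wise comparison; end of string weighs 0.
    Simplified relative to dpkg's full algorithm."""
    for i in range(max(len(a), len(b))):
        wa = char_weight(a[i]) if i < len(a) else 0
        wb = char_weight(b[i]) if i < len(b) else 0
        if wa != wb:
            return wa - wb
    return 0

# An incremented security upload with a tilde suffix stays above the old
# stable version but below the plain version in testing/unstable:
assert cmp_versions("1.0-3~lenny1", "1.0-2") > 0
assert cmp_versions("1.0-3~lenny1", "1.0-3") < 0
```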
Re: sp: News
On Tue, May 15, 2007 at 05:16:54PM -0400, Confirmation from Deen Foxx wrote: The message that you sent to me (Deen Foxx) has not yet been delivered: From: debian-devel@lists.debian.org Subject: sp: News Date: Tue, 16 May 2007 01:15:44 -0300 (MSK) I am now using Vanquish to avoid spam. This automated message is an optional feature of that service, which I have enabled. Please accept this one-time request to confirm that the above message actually came from you. Your confirmation will release the message and allow all future messages from your address. Click here to confirm: http://confirm.vanquish.com/?U=lpKUi9n4VxfNkwO7bPrNxg Vanquish respects my privacy and yours. Your confirmation gets your mail delivered to me now and in the future. It does not serve any marketing purpose. Learn how privacy is assured: www.vanquish.com/privacy Hmm, it seems someone subscribed while using one of those totally braindead email confirmation systems that only serve to cause more spam and infinite loops in mail systems. This almost falls under the same category as 'no vacation mail', although not quite the same. Maybe a new item should be added to the mailing list 'Code of conduct'. :) -- Len Sorensen
Re: CDD: GastroLinux (RFC)
On Fri, May 11, 2007 at 02:41:41PM +0200, Ralf Gesellensetter wrote:
> Dear list,
>
> Custom Debian Distributions are getting en vogue: after Debian-Edu
> (Skolelinux) and Debian-Med, Debian-Office had been proposed. Now I
> bear this idea in mind: combine two major tasks that pubs and
> restaurants need in one distro:
>
> 1) Use the computer as a media player: there are already companies
>    that sell dedicated PCs to restaurants. Anybody should be able to
>    beat their rates by using free software like Amarok.
>
> 2) Use computers as a cash station: more and more (good and bad)
>    systems can be seen, most of them using touch screens. I've seen
>    quite a few ugly and unhandy GUIs; however, I think this task is
>    quite demanding:
>    - it must be stable and have backends to a database or even a cash line
>    - it might be desired to handle credit card readers; some cash
>      systems use cheap (embedded) hardware.
>
> I'd advise to focus on 1) but also offer 2). The default screen saver
> should make some P.R. for GNU/Linux/Debian. In a pub, this will get
> some publicity ;)

Screen savers on LCDs really make no sense, and if the machine is supposed to be a cash register, then the screen saver will be amazingly annoying to the user, since they will first have to do something to dismiss it before they can hit the buttons on the touch screen. So if you do 2), absolutely no screen saver seems like the only sane configuration.

-- Len Sorensen
Re: Use bz2 not gz for orig.tar ?
On Fri, Apr 13, 2007 at 06:30:53AM -0400, Roberto C. Sánchez wrote:
> Which is by far a minority situation. You are much more likely to end
> up with someone on a 384k or 512k DSL link (or an even slower ISDN
> link) with an Opteron, Xeon, Athlon 64 or the like. I'm not saying
> that your situation is not possible, simply that trading size for
> compression/decompression time would benefit far more people than it
> would hurt.

I run a 486DX2/66 on a 3 Mbit DSL link. I don't compile stuff on it anymore, though, and I would hate to have package installs get any slower. :)

Of course it isn't a big deal really, and certainly for some people bandwidth is a bigger issue than CPU power. Does the Packages file still come in both gz and bz2? Does it still come uncompressed, for that matter? Does it matter now that we have the diffs as well?

-- Len Sorensen
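The size-versus-CPU trade-off being discussed is easy to measure directly with Python's gzip and bz2 modules. The payload below is a synthetic, highly compressible stand-in for a package index, so real ratios and timings will differ:

```python
import bz2
import gzip
import time

# Synthetic stand-in for a Packages file: repetitive text compresses well.
data = b"Package: example\nVersion: 1.0-1\nArchitecture: i386\n" * 20000

for name, compress in (("gzip", gzip.compress), ("bz2", bz2.compress)):
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data)} -> {len(out)} bytes in {elapsed:.3f}s")
```

On typical inputs bz2 yields smaller output than gzip but costs noticeably more CPU time, which is exactly the trade-off between slow links and slow machines.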
Re: Debian Buzz and Rex binary packages
On Wed, Apr 11, 2007 at 11:01:10PM +0200, Cherubini Enrico wrote:
> I have the 1995 InfoMagic 5-CD set, and it contains Debian 1.0 with
> .deb packages and of course install floppy images :) Maybe I'll try
> it in some qemu/xen virtual system :)
>
> home:/cdrom/debian/debian-1.0# du -s binary/
> 88290   binary/

Wasn't that the CD that caused Debian to have to skip 1.0, because some idiots went and took the development code and called it a release? In other words, _not_ an official Debian release?

-- Len Sorensen
Re: Debian Buzz and Rex binary packages
On Thu, Apr 12, 2007 at 08:10:43AM +1000, Hamish Moffatt wrote:
> No, binaries were provided for all official releases (buzz/1.1
> onwards). There was an installer (known as boot-floppies); its
> appearance was fairly similar from buzz right through to woody, when
> it was retired. There was no apt and no source dependencies, and
> early on even a different source package format, but there's still a
> lot of similarity.

OK. I hadn't looked at it that carefully, since I didn't use Debian until 2.0.

-- Len Sorensen
Re: 64-bit transition deadline (Re: Etch in the hands of the Stable Release Managers)
On Tue, Apr 10, 2007 at 04:36:06PM +0100, Luis Matos wrote:
> Maybe software vendors will look at Linux for more power for less
> hardware, using 64-bit solutions. Talking about CAD and CAM, for
> example: they need a lot of power, even if current machines are
> enough. Having Linux make full use of 64 bits may open a door for
> software vendors to build their applications on Linux.
>
> Free CAD implementations are too simple for use in some industrial
> environments, when programs like CATIA, SolidWorks or Inventor come
> to mind. These programs are expensive and require power that can be
> put to better use on a 64-bit platform. CATIA has Unix versions ... I
> don't really know if they will ever have Linux versions.

Who knows. Given that they probably have to do extensive testing to certify a platform, with specific hardware and drivers and everything else at specific versions, it may simply not be feasible to do that for Linux, at least not at this time.

-- Len Sorensen