design focus [was Large initrd, was booting problem (udev related?)]

2007-08-03 Thread Douglas Allan Tutty
On Fri, Aug 03, 2007 at 05:54:57PM +0300, Andrei Popescu wrote:
 On Thu, Aug 02, 2007 at 08:34:00PM -0400, Douglas Allan Tutty wrote:
  
  However, don't all those modules in the initrd end up staying in the
  kernel anyway, or do they get unloaded during boot?  If they stay, and
  'most' modules get added, how is that different than having a huge
  monolithic kernel?  It may not matter on a box with huge memory, but I
  have mostly small-memory boxes.
 
 I may be wrong, but I think that only the needed modules are actually 
 loaded.
 
  As for xorg-video-foo, that's why I don't install the xorg metapackage.
  I choose from its dependencies what I need.  
 
 Same here

All these extra packages together take a lot of disk space and a lot of
download bandwidth to install and maintain.

 
  /rant
  
  There's a growing kitchen-sink approach in Debian (perhaps all of Linux,
  I don't know).  There's the kernel/initrd size, there's the variable
  device name problems, to name two.  It suggests to me that there's a
  missing piece of infrastructure.  Perhaps the installer system should
  create a hardware inventory file that initrdtools (or whatever the
  nom de jure) can access to generate a tailored initrd, that apt can
  consult for what drivers to download, etc.  The installer rescue mode
  could offer a tool to regenerate the inventory file for times when one
  changes hardware.
  
  /end rant
 
 True, but you have to consider the competition. 

I guess the problem is related to this notion of trying to compete with
MS.  If people 'buy' brand A because they like features x, y, and z, and
brand B has the goal of gaining market share, it will tend to morph into
a clone (feature-wise) of brand A.  However, it will tend to take on
some of the compromises of brand B that go with features x, y, and z.  

I stick with debian on my big box because of inertia, the debian policy,
the debian security support for all packages in debian/main, and the
absolute ease of applying bug fixes with aptitude.  Debian also supports
my trackball mouse's scroll wheel (IMPS/2) whereas OpenBSD does not.
However, my older computers are transitioning away from Debian to BSD
because of the newer debian (perhaps all linuxes) being so much slower
on them than either older debians or new BSDs.



 If you plug a new device 
 into a Windows machine the driver gets installed automatically or you 
 get prompted for the drivers if Windows doesn't have them. You have to 
 admit that this is pretty convenient functionality which has been there 
 at least since Windows 2000 (how this is cluttering the registry and the 
 fact that it isn't always working is a totally different topic).

That convenience comes at a huge price in system resources on boxes
that have few to spare.  Compare it to OpenBSD, for example, where
there is no such thing as eth0: network interfaces are named after
their driver (e.g. ne), so my 486's one NIC is ne1.  It's not
convenient to have to look up the supported configurations of
different hardware in a file, make sure your NIC matches one of them,
and then configure networking against ne1.  However, it's only done
once.
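
The once-and-done part looks roughly like this on OpenBSD (the address
is made up, and check hostname.if(5) for the exact syntax on your
release; the point is that the file name itself carries the
driver-based interface name):

```
# /etc/hostname.ne1 -- OpenBSD brings this interface up from this file
# at boot; driver name (ne) plus unit number (1) is the interface name.
inet 192.168.1.10 255.255.255.0
```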

 
 The big advantage on linux (and especially Debian) is that power users 
 still have the possibility to customize the setup (like using a 
 different mkinitrd, different options, purge unneeded packages, ...) 
 that a Windows user doesn't have. 
 

True, but rather than hotplugging, I would prefer a program that can be
run as needed each time a new piece of hardware is attached for the
first time, which would create the device node and load the appropriate
module and parameters.  Once done, it would get out of the way.  On
subsequent attachment of a device, everything would be pre-existing.
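
Something like this sketch is what I have in mind.  To be clear, this
is hypothetical, not an existing Debian tool, and the name and output
are illustrative only: walk sysfs once, read each device's modalias
string (which is what module loading keys on), report or load the
matching driver, and then exit, leaving nothing resident.

```shell
#!/bin/sh
# Hypothetical one-shot "coldplug" sketch -- not a real Debian tool.
# Given a sysfs-like tree, print each device's modalias; a real version
# would hand each string to modprobe, create any missing device nodes,
# and then exit, so nothing stays running to watch for hotplug events.
list_modaliases() {
    find "$1" -name modalias 2>/dev/null | while read -r alias_file; do
        printf 'would modprobe: %s\n' "$(cat "$alias_file")"
    done
}

list_modaliases /sys/devices
```

Run it once after adding hardware; on the next attach everything is
already in place.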

It all comes down to the notion of competition and market share.  If
Debian is going to focus on market share and competing with MS, it
will have to target MS's target market.  Since I'm not in that market,
Debian will be shifting its focus away from the market I'm in.  It
won't be that I'm drifting away from Debian but that Debian is
drifting away from me.

Doug.


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: design focus [was Large initrd, was booting problem (udev related?)]

2007-08-03 Thread David Brodbeck


On Aug 3, 2007, at 9:25 AM, Douglas Allan Tutty wrote:

I guess the problem is related to this notion of trying to compete with
MS.  If people 'buy' brand A because they like features x, y, and z, and
brand B has the goal of gaining market share, it will tend to morph into
a clone (feature-wise) of brand A.  However, it will tend to take on
some of the compromises of brand B that go with features x, y, and z.

I stick with debian on my big box because of inertia, the debian policy,
the debian security support for all packages in debian/main, and the
absolute ease of applying bug fixes with aptitude.  Debian also supports
my trackball mouse's scroll wheel (IMPS/2) whereas OpenBSD does not.
However, my older computers are transitioning away from Debian to BSD
because of the newer debian (perhaps all linuxes) being so much slower
on them than either older debians or new BSDs.


I don't think it's so much Microsoft's influence as it is a difference
in philosophy.  Linux distributions put a lot of effort into being
convenient desktop OSs.  BSD tends to be aimed more at servers, where
things like hotplugging aren't as important.  If you have to check
dmesg for the right device node and then run 'mount' to access a USB
flash drive on a server, it doesn't matter much because you aren't
going to be doing that often.  If you have to do that on your desktop
machine every time you plug in your digital camera, it gets old in a
hurry.  For that matter, ten years ago Linux distributions were
already doing fully automated installers while NetBSD and OpenBSD
still required you to get out a calculator to figure out the cylinder
boundaries for the slices on your hard disk.  The two OSs just occupy
different points on the ease-of-use vs. compactness scale.


You see this in hardware support, too.  Linux tries to support the
newest stuff, because that's what's in desktop machines (and sometimes
suffers instability because of it), while BSD tends to take a more
conservative approach.  Hardware that's seen in desktops but rarely in
servers often isn't supported or maintained well in BSD, because it's
just not a priority.  (The 3c509 ethernet driver, for example, was
buggy for *years* in FreeBSD; it never really got fixed, the cards
just became obsolete. ;)  Another example: the Marvell Yukon gigabit
ethernet chipset, common in desktops but rare in servers, is much
slower under FreeBSD than under Linux.)


It could be that, for your particular application, BSD is just the
right tool for the job.







Re: design focus [was Large initrd, was booting problem (udev related?)]

2007-08-03 Thread Andrew Sackville-West
On Fri, Aug 03, 2007 at 12:25:15PM -0400, Douglas Allan Tutty wrote:
 On Fri, Aug 03, 2007 at 05:54:57PM +0300, Andrei Popescu wrote:
  On Thu, Aug 02, 2007 at 08:34:00PM -0400, Douglas Allan Tutty wrote:
   
   However, don't all those modules in the initrd end up staying in the
   kernel anyway, or do they get unloaded during boot?  If they stay, and
   'most' modules get added, how is that different than having a huge
   monolithic kernel?  It may not matter on a box with huge memory, but I
   have mostly small-memory boxes.
  
  I may be wrong, but I think that only the needed modules are actually 
  loaded.

I think this is correct: only the needed modules are actually loaded
into the kernel.  The initrd makes them *available* for loading.  And
when / pivots, I think the initrd memory gets freed, so it's really
only an issue during the initial bootstrap.  A really large initrd on
a memory-bound machine could get in the way, and a really large initrd
on an I/O-bound machine can take a long time to load in.  But, IMO,
for general-purpose machines, it's not a big deal.
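
For what it's worth, initramfs-tools already lets you trim the image
yourself; this is a sketch of the Debian config (an excerpt only --
check the initramfs.conf(5) man page on your release for the exact
options and defaults):

```
# /etc/initramfs-tools/initramfs.conf (excerpt)
# MODULES=most -> include drivers for nearly any hardware (the default)
# MODULES=dep  -> include only the modules this machine's root device
#                 actually depends on, i.e. a tailored initrd
MODULES=dep
```

After changing it, rebuild the image with update-initramfs -u.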

  
   As for xorg-video-foo, that's why I don't install the xorg metapackage.
   I choose from its dependencies what I need.  
  
  Same here
 
 All these extra packages together take a lot of disk space and a lot of
 download bandwidth to install and maintain.

Yeah, the extra packages definitely are an issue.  I'm not so sure
that the extra kernel modules are all that big a deal in the long run,
but that's just a gut feeling.

 
  
   /rant
   
   There's a growing kitchen-sink approach in Debian (perhaps all of Linux,
   I don't know).  There's the kernel/initrd size, there's the variable
   device name problems, to name two.  It suggests to me that there's a
   missing piece of infrastructure.  Perhaps the installer system should
   create a hardware inventory file that initrdtools (or whatever the
   nom de jure) can access to generate a tailored initrd, that apt can
   consult for what drivers to download, etc.  The installer rescue mode
   could offer a tool to regenerate the inventory file for times when one
   changes hardware.
   
   /end rant
  
  True, but you have to consider the competition. 
 
 I guess the problem is related to this notion of trying to compete with
 MS.  If people 'buy' brand A because they like features x, y, and z, and
 brand B has the goal of gaining market share, it will tend to morph into
 a clone (feature-wise) of brand A.  However, it will tend to take on
 some of the compromises of brand B that go with features x, y, and z.  
 

I think that on the whole, debian strikes a decent balance. You get
the kitchen sink, but have the option to switch over to a bare pipe
sticking out of the wall for no charge other than your own labor. :)

A

