Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-26 Thread Andrew Sackville-West
On Thu, Jul 26, 2007 at 05:36:43AM +, s. keeling wrote:
> Bob Proulx <[EMAIL PROTECTED]>:
> >  David Brodbeck wrote:
> > > To me it always smacked a little of "me-too-ism", too ... the GNU  
> > > folks felt Linux wasn't GNU-ish enough, so they had to go write their  
> > > own kernel.
> > 
> >  The GNU Hurd has existed long before Linux existed.  Hurd has been in
> >  development for many years.  (Hurd is technology of the future.
> 
> aka. "Vapourware"?

Well, no, because it does exist. It is unfortunately developing very
slowly and has run into real problems with parts of its structure
(namely the Mach microkernel) being ultimately unsuitable for
implementing some of the stuff they want to do. IIUC.

A




Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-25 Thread s. keeling
Bob Proulx <[EMAIL PROTECTED]>:
>  David Brodbeck wrote:
> > To me it always smacked a little of "me-too-ism", too ... the GNU  
> > folks felt Linux wasn't GNU-ish enough, so they had to go write their  
> > own kernel.
> 
>  The GNU Hurd has existed long before Linux existed.  Hurd has been in
>  development for many years.  (Hurd is technology of the future.

aka. "Vapourware"?


-- 
Any technology distinguishable from magic is insufficiently advanced.
(*) Linux Counter #80292
- http://www.faqs.org/rfcs/rfc1855.html   Please, don't Cc: me.





Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Steve Lamb
Mike McCarty wrote:
> Just my $0.02. YMMV

> [*]
> $ uname -a
> Linux Presario-1 2.6.10-1.771_FC2 #1 Mon Mar 28 00:50:14 EST 2005 i686
> i686 i386 GNU/Linux
> 
> It took my machine 3 seconds to do a "copy" after selecting
> that text on my screen, because the disc ran that long after
> I clicked on the "Edit" button on the window frame.

Then there is something amiss with your machine.  I know, I know, that's
what you hear elsewhere but seriously, there is a problem in the machine.

[EMAIL PROTECTED]:~} free -m
             total       used       free     shared    buffers     cached
Mem:           250        242          7          0         26         80
-/+ buffers/cache:        135        114
Swap:          511        105        406
[EMAIL PROTECTED]:~} uname -a
Linux mania 2.6.20-10-generic #2 SMP Mon Mar 12 00:02:49 UTC 2007 i686 GNU/Linux

To copy and paste that took no time at all.  It was perfectly interactive.
 And that's on a 7-8 year old Dell Latitude CPx with 256MB of RAM and a 667MHz
CPU running things people on this list think it ought not run.  Like, KDE.
Running at the time: Thunderbird, Firefox, Konsole with 2 tabs.

The only differences, apart from my far inferior hardware, are the later
kernel I am running (2.6.20 vs. 2.6.10) and the fact that my laptop runs
Kubuntu, not Debian or, as in your case, Fedora.

-- 
 Steve C. Lamb | But who decides what they dream?
   PGP Key: 8B6E99C5   |   And dream I do...
---+-





Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Mike McCarty

David Brodbeck wrote:


On Jul 24, 2007, at 12:36 PM, Mike McCarty wrote:


I have tried running some long-term computations in the background
using my machine, and found that nice was unable to deal with it.
Exactly the points he brings up...

momentary freezes of the display (5-10 seconds)
lots of ghosting of moving mouse pointers and windows
momentary freezing of the keyboard (up to 30 seconds)
difficulty switching "workspaces" using GNOME (minutes delay)
very extended load times for apps (minutes to load acrobat, e.g.)

This is on a 2.7 GHz machine with 250Meg of memory. Some of this
is explainable as memory thrashing, as evidenced by disc activity,
and memory pressure reported by top and similar tools.

But, why is my disc running when I try to move my mouse?



This is actually a VM manager issue, not a scheduler issue, I think.


It's a little of both, actually. The scheduler sets policy for
allocation of CPU resources. The VM sets policy for allocation of
memory resources. An app which has high visibility to the scheduler
*should* have high visibility to the VM manager. If a process
has high priority, then that should be because it has latency
requirements. If it has latency requirements, then it should have
some priority in memory as well. Something which has explicitly
been marked as having low latency requirements should be more
"swappable". I've found Linux using up to about 60% of my memory
for "disc cache". This, I trow, is part of the problem.


I suspect what's happening is the following:

- You stop using the machine.  The CPU goes idle, and nice decides to  
start running your background task.  Since you only have 250  megabytes 


The scheduler makes this decision, not nice. Nice simply informs the
scheduler what is considered to be fg vs bg.


of RAM, nice has to swap the task back in to run it.


Yes, true.

[snip]

I'm not sure how a different scheduler implementation could fix  this.  
A different VM implementation might.  Linux tends to be a  little too 
aggressive about what it swaps out, in my experience,  often favoring 
disk cache over applications.  The VMM was pretty  disastrously bad for 
much of the 2.4 kernel series and parts of 2.6.   You'd probably also be 
well served to use something lighter than  GNOME on such a limited 
machine -- icewm, perhaps.


I agree with this assessment. However, if you read the article, you'll
find that the topics discussed include "swap prefetch" bundled in with
pure scheduling topics.

I'm actually surprised you're able to get background tasks to run  well 
on XP with such a small amount of RAM.  My experience with XP is  that 
with anything less than 512 megabytes of RAM, even the screen  saver 
turning on can cause long periods of disk thrashing.  I've  drummed my 
fingers for minutes at a time on such machines waiting for  XP to stop 
running the screen saver and give me my desktop back.


I haven't experienced minutes of wait, but certainly seconds with XP.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread David Brodbeck


On Jul 24, 2007, at 2:41 PM, Mike McCarty wrote:

I've found Linux using up to about 60% of my memory
for "disc cache". This, I trow, is part of the problem.


There's been much debate about this among kernel developers, I  
understand.  On one side there are people who point out (quite  
correctly) that programs often allocate memory that goes unused most  
of the time, and this memory can be better used as disk cache.  On  
the other side there are people who point out, also quite correctly,  
that swapping out a program can cause interactivity problems.


The ultimate result of this was the addition of a "swappiness" knob  
to 2.6 kernels to allow people to tune this to their taste.  More here:

http://kerneltrap.org/node/3000
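
For anyone who wants to experiment, the knob is exposed through /proc
and sysctl on 2.6 kernels.  A minimal sketch (the value 20 below is just
an arbitrary example, and the last two commands need root):

$ cat /proc/sys/vm/swappiness                 # current value, 0-100 (default 60)
$ sudo sysctl -w vm.swappiness=20             # change it for this boot only
$ echo "vm.swappiness = 20" | sudo tee -a /etc/sysctl.conf   # make it persistent

Lower values make the kernel prefer dropping cache over swapping
application pages out; higher values do the opposite.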

Ultimately I don't think there's any one answer.  On my NFS server, I  
don't want processes that are asleep most of the time hogging memory  
that could be used for disk cache.  But on my desktop machine, I  
don't want a temporarily idle Firefox session swapped to disk only to  
be swapped back in a few minutes later.


Generally I think recent 2.6 kernels strike a pretty good balance.  I  
haven't felt the need to tweak swappiness on my desktop machine from  
the default of 60, but it's also got 1 gigabyte of RAM.  Given cheap  
RAM prices these days, sometimes it's easiest just to put in RAM  
until the system stops swapping. ;)  A quick check shows that that  
system, which has been up a full day, is using a grand total of 152  
Kbytes of swap.



The scheduler makes this decision, not nice. Nice simply informs the
scheduler what is considered to be fg vs bg.


Yeah, right after I sent the message I realized that was poorly  
worded.  But I decided you'd get what I meant, and sending a  
correction after it would be unnecessary spam. :)


I'm actually surprised you're able to get background tasks to run   
well on XP with such a small amount of RAM.  My experience with XP  
is  that with anything less than 512 megabytes of RAM, even the  
screen  saver turning on can cause long periods of disk  
thrashing.  I've  drummed my fingers for minutes at a time on such  
machines waiting for  XP to stop running the screen saver and give  
me my desktop back.


I haven't experienced minutes of wait, but certainly seconds with XP.


It depends on what else is running.  If there's another process  
consuming a lot of CPU or using the disk heavily, XP's UI slows to a  
crawl.  I think my record was something like 20 minutes to return  
from the screensaver on a system that had some kind of popup adware  
program running.  I could literally watch it draw each UI element  
individually.









Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Andrew Sackville-West
On Tue, Jul 24, 2007 at 04:02:39PM -0500, Mike McCarty wrote:
> Andrew Sackville-West wrote:
>
> [snip]
>
>> The modularity has some positives: a failure in one module will
>> not bring down the whole system. of course this is pretty rare in
>> linux these days too, but is certainly possible. It also provides some
>> serious security bonuses because a security failure in one
>> user-inserted module does not mean that the rest of the system is
>> compromised the way it would be in the monolithic kernel model. I guess
>> some of these ideas are working their way into linux with the
>> inclusion of user-space drivers. 
>
> What you list as advantages of the microkernel approach are not
> all obvious to me. All kernel services should be necessary
[...]
> module be part of the kernel. Maybe it should be a demon or
> driver.
my bad language. what I was referring to here was drivers, I
guess. Though I think the terms are a little different with a
micro-kernel. So, Wikipedia says that microkernels essentially only
provide address management, thread management and inter-process
communication. Everything else gets moved to userspace using
servers. So that means, the article further states, that any crash can
be corrected by restarting the appropriate server to bring up whatever
service it was providing (network, display, device access) without
rebooting the machine.

>
> OTOH, one serious lack (IMO) with Linux is that drivers cannot
> be started, stopped, uninstalled, and installed on a running
> system. LynxOS supported that, ...

sounds very much like, at the user level, the same concept as the
microkernel approach. in fact LynxOS is mentioned (along with Minix)
as an example of a microkernel (with some added device drivers to aid
in booting)

>
> I am not an expert in microkernel architecture.

AOL big time!

>
>> There are also negatives: there is overhead in the communication
>> between the modules that might not be there in the monolithic
>> model. And, I suppose, having the system remain up when all the
>
> I have certainly read literature to this effect.
>
>> modules for the input methods go down is only of minor convenience,
>> but I really don't know what I'm talking about here. 
>
> :-)
>
> [snip]
>
>> A parallel conversation on /. (I know i know, its an addiction) was
>> discussing implementation of different lines for MS again, splitting
>> between a desktop-user oriented release and a more stable business
>> release. Who knows what that all means, but its an intriguing parallel
>> to the ck situation. He wanted a better desktop while linux is
>> pushing for more server oriented priorities. 

[..]

> Umm, you seem to have the impression that there are scheduling
> algorithms which are "good for desktop apps and bad for other
> types of apps" and scheduling algorithms which are "good for
> server apps, but bad for desktop or other apps". The truth is that there
> are predictable algorithms which give guaranteed results, but
> require tuning to get the results you want, including assigning
> what priorities are required by each app, and there are ad-hoc
> scheduling algorithms which try to give time to apps based on
> what kind of app they are. The former results in somewhat difficult
> to tune systems which give predictable performance. The latter
> gives systems which seem easier to tune because they mostly
> work acceptably, but which are actually impossible to tune to get any
> kind of guaranteed behavior.

I'm not sure actually which of my emails you're responding to, but I
went into more details on how *I* think a system should behave in a
rant about desktop OSes... 

I understand what you are saying about the different types of
schedulers. Thanks for the insight.

>
> IOW, the former requires the user to characterize his needs, but
> then guarantees that those needs get met (or fails to, and lets
> you know if it doesn't have enough resources), the latter tries to
> guess what the user needs, and when it fails gives the user no recourse.
>
> The former also requires that apps be written to realize that,
> just because they are necessary, they shouldn't just run forever
> without either blocking or yielding. If they have latency requirements,
> then they need high priority, but shouldn't hog the CPU.

ISTM that some sort of merging between the two is where it's
at. For certain things you want definable behavior; for other things, you
just don't care. For example, since this was all predicated on a
discussion of ck's interface responsiveness ideas, in a user
interface it's probably easy to quantify what the response time should
be. A human being can only type so fast or react to input so
quickly... a mouse only needs to show up so many times a second for a
human being to see it all the time... If you want a perfectly
responsive human interface, then you merely have to be looking for and
responding to input at that frequency. This should be easily doable
and tunable to get a response the user likes as that is 

Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Ron Johnson

On 07/24/07 14:36, Mike McCarty wrote:
[snip]
> 
> [*]
> $ uname -a
> Linux Presario-1 2.6.10-1.771_FC2 #1 Mon Mar 28 00:50:14 EST 2005 i686
> i686 i386 GNU/Linux
> 
> It took my machine 3 seconds to do a "copy" after selecting
> that text on my screen, because the disc ran that long after
> I clicked on the "Edit" button on the window frame. It had to hit
> disc to load the image and/or software needed to copy the selected
> text. I am running Thunderbird, one instance of Netscape, one instance
> of acroread 7.0, and four xterm windows.

2.6.10 is pretty old.  There were significant improvements in the
VMM system back around .11 or .12.

Also, Tbird, Netscape & Acroread are RAM hogs.  (People say that
Tbird and FF are lean, but that's a crock.)
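
A quick way to see where the memory is actually going (a rough sketch,
assuming GNU ps from procps):

$ ps -eo pid,rss,comm --sort=-rss | head      # resident set size per process, biggest first

or run top and press M to sort by memory.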

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Mike McCarty

Bob Proulx wrote:

David Brodbeck wrote:

To me it always smacked a little of "me-too-ism", too ... the GNU  
folks felt Linux wasn't GNU-ish enough, so they had to go write their  
own kernel.



The GNU Hurd has existed long before Linux existed.  Hurd has been in
development for many years.  (Hurd is technology of the future.
Always has been and some say always will be. :-)


Thanks for the tidbit of history. Also, the humor.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Mike McCarty

Andrew Sackville-West wrote:

[snip]


The modularity has some positives: a failure in one module will
not bring down the whole system. of course this is pretty rare in
linux these days too, but is certainly possible. It also provides some
serious security bonuses because a security failure in one
user-inserted module does not mean that the rest of the system is
compromised the way it would be in the monolithic kernel model. I guess
some of these ideas are working their way into linux with the
inclusion of user-space drivers. 


What you list as advantages of the microkernel approach are not
all obvious to me. All kernel services should be necessary
services. One doesn't want to move apps into kernel space.[*]
So, even with microkernel architecture, a failure in a kernel
level module which leaves the microkernel "up" still results
in a serious system failure. Recovery in such a circumstance
seems unlikely to me. I suppose that there might be some
kernel services which could be restarted, something like a
demon is today, but I have not seen any literature on this,
and it seems, as I said, unlikely. If it were so, then I'd
cast a very jaundiced eye on the decision to make a certain
module be part of the kernel. Maybe it should be a demon or
driver.

OTOH, one serious lack (IMO) with Linux is that drivers cannot
be started, stopped, uninstalled, and installed on a running
system. LynxOS supported that, and I found it to be wonderful
to be able to make changes to a driver, stop the existing one,
uninstall it, and install the changed one without having to
reboot all the time. The biggest advantage, though, was being
able to have the drivers come up, look for hardware, and if
it wasn't present, then just leave, but later be able to install
if necessary without rebooting.

I am not an expert in microkernel architecture.


There are also negatives: there is overhead in the communication
between the modules that might not be there in the monolithic
model. And, I suppose, having the system remain up when all the


I have certainly read literature to this effect.


modules for the input methods go down is only of minor convenience,
but I really don't know what I'm talking about here. 


:-)

[snip]


A parallel conversation on /. (I know i know, its an addiction) was
discussing implementation of different lines for MS again, splitting
between a desktop-user oriented release and a more stable business
release. Who knows what that all means, but its an intriguing parallel
to the ck situation. He wanted a better desktop while linux is
pushing for more server oriented priorities. Maybe Hurd can actually
work out for both parties by simple implementation of different
low-level modules: one set of scheduler/IO/"interactivity" modules for
desktop versus another set for various server functions, or heavy
computing uses, whatever. Even better would be a kernel that could
switch modes on the fly based on what sorts of tasks were running at
the time...


Umm, you seem to have the impression that there are scheduling
algorithms which are "good for desktop apps and bad for other
types of apps" and scheduling algorithms which are "good for
server apps, but bad for desktop or other apps". The truth is that there
are predictable algorithms which give guaranteed results, but
require tuning to get the results you want, including assigning
what priorities are required by each app, and there are ad-hoc
scheduling algorithms which try to give time to apps based on
what kind of app they are. The former results in somewhat difficult
to tune systems which give predictable performance. The latter
gives systems which seem easier to tune because they mostly
work acceptably, but which are actually impossible to tune to get any
kind of guaranteed behavior.

IOW, the former requires the user to characterize his needs, but
then guarantees that those needs get met (or fails to, and lets
you know if it doesn't have enough resources), the latter tries to
guess what the user needs, and when it fails gives the user no recourse.

The former also requires that apps be written to realize that,
just because they are necessary, they shouldn't just run forever
without either blocking or yielding. If they have latency requirements,
then they need high priority, but shouldn't hog the CPU.

BTW, not all real time scheduling algorithms are priority
preemptive, and there is active research going on in RTOS scheduling,
but nearly all RTOSes use priority-preemptive scheduling, with Rate
Monotonic Scheduling (RMS) analysis used to set the priorities. Other
means than RMS for determining the priorities may be used, of course.
Also, there are various policies regarding how events outside the
control of the scheduler get handled (like interrupts).
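
(Purely as a concrete illustration, not a claim about how any particular
RTOS does it: Linux itself exposes a fixed-priority preemptive class
through the standard chrt tool from util-linux, given root.  The program
name and PID below are made up.)

$ sudo chrt -f 50 ./latency_sensitive_app     # run under SCHED_FIFO at priority 50
$ chrt -p 1234                                # show the policy and priority of PID 1234

A SCHED_FIFO task runs until it blocks or yields, which is exactly why
such apps have to be written not to hog the CPU, as noted above.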

When multithreaded apps enter in, then a whole host of other
considerations come into effect, not least of which is whether
a thread is considered a kernel managed object, or the kernel
manages only processes, and they do int

Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread David Brodbeck


On Jul 24, 2007, at 1:20 PM, Mike McCarty wrote:


David Brodbeck wrote:
Linux *is* under the GPL.  But it's under GPL v2.  The FSF is  
pushing  hard for Linus to relicense it under GPL v3.  The two  
licenses are  not considered compatible.


Hmm. That's interesting. Care to elaborate? I thought that Linus
retained rights.


Do a Google search on "linux gpl v3" and you'll come across a lot of  
discussion.  The basic gist is that Linus can't relicense the kernel  
as a whole because not all of the code is his; every individual  
author/maintainer would have to agree to relicense.  Also, he doesn't  
like some of the anti-DRM provisions in GPL v3  -- he feels they  
attack the problem from the wrong angle, and that they'd make things  
like secure update servers untenable by requiring the distribution of  
private signing keys along with the source code:

http://trends.newsforge.com/article.pl?sid=06/02/02/1636216
http://lkml.org/lkml/2006/1/25/273








Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Mike McCarty

Kent West wrote:


It's my understanding that the Hurd pre-dates Linux; it's just that once 
Linux came along, the development on it moved at a much faster pace than 
on the Hurd, and Debian was ported to run on it while the Hurd project 
languished.


For those not up on the project, as I understand things...

Debian is an entire OS that can (at least theoretically) run on top of a 


Erm...

Debian is almost entirely GNU. Those parts which are recognizably
different from any other distro are the only parts which are
specifically Debian. That is, some of the install and management
software. Everything else is either GNU or Linux (the kernel, IOW).

This is true of every distro.

number of different kernels. It originally was to run on the GNU Mach 
kernel as part of the Hurd project, but then Linux came along and 
outpaced Hurd development, so Linux became the new underlying kernel for 
mainstream Debian.


Interesting bit of history.

The big difference between Linux and the GNU Mach kernel is that with 
Linux, many things (hardware drivers, file system drivers, etc) are 
integrated into the kernel, whereas with a micro-kernel architecture 
like GNU Mach, the kernel is just a very small core piece of code, and 
then the drivers, etc are attached as "servers" (sort of like inserting 
a module into the Linux kernel, but different). These servers are more 
modular than Linux kernel modules, and can be attached by normal users 
rather than requiring admin access, because the modularity prevents them 
from tromping on each other.


Ah, so HURD is based on the microkernel model. That's a superior
approach to the integrated kernel in some ways. In others, of course,
it has some drawbacks. Some of the advantages of a microkernel are
not so useful on an upper-end system like most PCs. (I mean as
opposed to lower-end systems, like microwave ovens, and automotive
dashboard and engine control.)


Of course, I probably don't really understand things 


No comment :-)

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread David Brodbeck


On Jul 24, 2007, at 12:36 PM, Mike McCarty wrote:

I have tried running some long-term computations in the background
using my machine, and found that nice was unable to deal with it.
Exactly the points he brings up...

momentary freezes of the display (5-10 seconds)
lots of ghosting of moving mouse pointers and windows
momentary freezing of the keyboard (up to 30 seconds)
difficulty switching "workspaces" using GNOME (minutes delay)
very extended load times for apps (minutes to load acrobat, e.g.)

This is on a 2.7 GHz machine with 250Meg of memory. Some of this
is explainable as memory thrashing, as evidenced by disc activity,
and memory pressure reported by top and similar tools.

But, why is my disc running when I try to move my mouse?


This is actually a VM manager issue, not a scheduler issue, I think.

I suspect what's happening is the following:

- You stop using the machine.  The CPU goes idle, and nice decides to  
start running your background task.  Since you only have 250  
megabytes of RAM, nice has to swap the task back in to run it.


- You start moving the mouse.  Now your foreground process needs the  
CPU again, and nice lets it have it -- but it's been swapped out, so  
nothing can happen until the virtual memory manager swaps it all back  
in from disk.
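
(You can actually watch this happen.  A rough sketch -- the job name is
made up, and vmstat comes from the procps package:)

$ nice -n 19 ./long_computation &        # start the background job at the lowest niceness
$ vmstat 1                               # the si/so columns show pages swapped in/out per second

If si spikes every time you touch the mouse or keyboard again, it is the
swap-in you are waiting on, not the scheduler.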


I'm not sure how a different scheduler implementation could fix  
this.  A different VM implementation might.  Linux tends to be a  
little too aggressive about what it swaps out, in my experience,  
often favoring disk cache over applications.  The VMM was pretty  
disastrously bad for much of the 2.4 kernel series and parts of 2.6.   
You'd probably also be well served to use something lighter than  
GNOME on such a limited machine -- icewm, perhaps.


I'm actually surprised you're able to get background tasks to run  
well on XP with such a small amount of RAM.  My experience with XP is  
that with anything less than 512 megabytes of RAM, even the screen  
saver turning on can cause long periods of disk thrashing.  I've  
drummed my fingers for minutes at a time on such machines waiting for  
XP to stop running the screen saver and give me my desktop back.







Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Mike McCarty

David Brodbeck wrote:


On Jul 24, 2007, at 11:12 AM, Mike McCarty wrote:

I wonder what those who support the GPL so strongly on Linux support


mail lists will do in response to that argument? I personally don't
like or use the GPL, so I really don't care. But ISTM that those
who have argued so fervently in favor of the GPL here should
either call for Linus to put Linux under GPL, abandon Linux
and go with the FSF kernel, or stop ranting against others who
don't like the GPL.



Linux *is* under the GPL.  But it's under GPL v2.  The FSF is pushing  
hard for Linus to relicense it under GPL v3.  The two licenses are  not 
considered compatible.


Hmm. That's interesting. Care to elaborate? I thought that Linus
retained rights.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Mike McCarty

Hugo Vanwoerkom wrote:

Hugo Vanwoerkom wrote:


Hi,

I don't really think this is OT, albeit not directly Debian related.
Con Kolivas, the kernel hacker who authored a better scheduler, 
recently decided to quit.


Loss for Linux (and Linus)

Here's his reasoning.



http://apcmag.com/6735/interview_con_kolivas


I'd like to preface this by saying that I am not a Linux hater.
Nor am I a Microsoft hater. I won't go so far as to say that
there is any OS for my machine I actually *like*. Each OS I can
run has advantages and disadvantages. So far, I find that Linux
provides me what I want more so than the others FOR THIS MACHINE.
On other machines, with other uses, I use other OSs. I have no
particular axe to grind with any of them.

I've written a couple of kernels, and supported perhaps four more,
for embedded real time systems. I found this particular quote about
the scheduling from that article very revealing:

[QUOTE MODE ON]

The option is to throttle the guessing, or not guess at all. The former 
option means you have a CPU scheduler which is difficult to model, and 
the behaviour is right 95% of the time and ebbs and flows in its 
metering out of CPU and latency. The latter option means there is no 
guessing and the behaviour is correct 100% of the time... it only gives 
what you tell it to give. It seemed so absurdly clear to me, given that 
interactivity mostly was better anyway with the fair approach, yet the 
maintainers demanded I address this as a problem with the new design. I 
refused. I insisted that we had to compromise a small amount to gain a 
heck of a great deal more. A scheduler that was deterministic and 
predictable and still interactive is a much better option long term than 
the hack after hack approach we were maintaining.


[QUOTE MODE OFF]

Any RTOS developer and maintainer will tell you that the only way
to be able to tweak and predict the results is to have reliable,
dependable, predictable scheduling algorithms. In fact, in any
system where one is trying to tweak CPU performance, not just RTOS
where one must meet absolute deadlines, having a non-real-time
scheduler just makes the job impossible.

His other commentary aside, some of which I agree with and some
I do not, this one point makes a lot of sense to me, as a non-
Linux kernel specialist (in both the sense of not being a specialist
in the Linux kernel, and being a specialist in some non-Linux kernels).

I have myself run some CPU benchmarks on the various OSes my machine can
run, and Linux (Fedora Core 2 [*]) is by far the slowest of the ones I have
tried. That's OK, as it also has more services than most of the
ones I tried. (I found that XP is faster on my machine than Linux,
and I believe it supplies as many of the services I want as Linux does.)

I have tried running some long-term computations in the background
using my machine, and found that nice was unable to deal with it.
Exactly the points he brings up...

momentary freezes of the display (5-10 seconds)
lots of ghosting of moving mouse pointers and windows
momentary freezing of the keyboard (up to 30 seconds)
difficulty switching "workspaces" using GNOME (minutes delay)
very extended load times for apps (minutes to load acrobat, e.g.)

This is on a 2.7 GHz machine with 250Meg of memory. Some of this
is explainable as memory thrashing, as evidenced by disc activity,
and memory pressure reported by top and similar tools.

But, why is my disc running when I try to move my mouse?

Also, even when my machine is otherwise unloaded, I find that
prelink gets charged up, or updatedb, and I lose control of
my machine for up to 30 minutes at a time. I hear that prelink,
at least, has been modified somewhat to alleviate this. But
on my machine it sometimes takes 2-3 seconds for a character
to be echoed in a terminal window. On a 2.7GHz machine? WTH
can it be doing all that time?

With a proper scheduler, prelink would not be able to
eat my CPU like that. (One could also argue that prelink
shouldn't even exist, but that's a different discussion.)

Another well-known hog is yum, which I have myself seen eat my machine.
Again, with a proper scheduler this would not happen. (Yes,
I know that apt-get is the Debian way, but we're discussing the
kernel and scheduling. If I installed and ran yum on my girlfriend's
Debian machine, it would do the same thing, I trow. An application
should not be able to eat my machine and wrest control away from
the display manager and keyboard interface.)
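
(For what it's worth -- a stopgap, not a defense of the scheduler -- on
kernels from roughly 2.6.13 on, with the CFQ I/O scheduler and the
ionice tool from util-linux, these batch jobs can at least be demoted by
hand.  A sketch, needing root:)

$ sudo ionice -c3 nice -n 19 updatedb     # idle I/O class plus lowest CPU priority
$ sudo ionice -c3 -p $(pidof prelink)     # or demote one that is already running

That does not answer the question of why it should be necessary, of
course.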

There seem to me to be two areas where Linux could use some
serious improvement:

(1) better CPU resource management
(2) better memory resource management (machine thrashes when it
shouldn't)

When, on occasion, I have posted to various Linux mail lists
(I subscribe to three) about these performance issues, there
have been five cookie-cutter responses (one might almost say
knee-jerk):

(1) you are stupid, Windows eats lots of CPU as everyone knows, you are 
wrong, Linux is faster, your benchmark is wrong, the way 

Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Andrew Sackville-West
On Tue, Jul 24, 2007 at 01:46:29PM -0500, Kent West wrote:
> David Brodbeck wrote:

[...]

>>
>> To me it always smacked a little of "me-too-ism", too ... the GNU folks 
>> felt Linux wasn't GNU-ish enough, so they had to go write their own 
>> kernel.
>
>
> It's my understanding that the Hurd pre-dates Linux; it's just that once 
> Linux came along, the development on it moved at a much faster pace than on 
> the Hurd, and Debian was ported to run on it while the Hurd project 
> languished.
>
> For those not up on the project, as I understand things...
>
> Debian is an entire OS that can (at least theoretically) run on top of a 
> number of different kernels. It originally was to run on the GNU Mach 
> kernel as part of the Hurd project, but then Linux came along and outpaced 
> Hurd development, so Linux became the new underlying kernel for mainstream 
> Debian.
>
> The big difference between Linux and the GNU Mach kernel is that with 
> Linux, many things (hardware drivers, file system drivers, etc) are 
> integrated into the kernel, whereas with a micro-kernel architecture like 
> GNU Mach, the kernel is just a very small core piece of code, and then the 
> drivers, etc are attached as "servers" (sort of like inserting a module 
> into the Linux kernel, but different). These servers are more modular than 
> Linux kernel modules, and can be attached by normal users rather than 
> requiring admin access, because the modularity prevents them from tromping 
> on each other.
>
> Of course, I probably don't really understand things 

I think your summary is a pretty accurate, general way of describing
things.

The modularity has some positives: a failure in one module will
not bring down the whole system. Of course this is pretty rare in
Linux these days too, but is certainly possible. It also provides some
serious security bonuses, because a security failure in one
user-inserted module does not mean that the rest of the system is
compromised the way it would be in the monolithic kernel model. I guess
some of these ideas are working their way into Linux with the
inclusion of user-space drivers. 

There are also negatives: there is overhead in the communication
between the modules that might not be there in the monolithic
model. And, I suppose, having the system remain up when all the
modules for the input methods go down is only of minor convenience,
but I really don't know what I'm talking about here. 

What would be interesting, from a ck perspective, is what the state
of the scheduler is in Hurd. Is the scheduler a separate module (there
must be some other name for them...) like everything else? If so, can
you then plug in different schedulers for different purposes? ck wants
a responsive, snappy desktop and is obviously willing to sacrifice
other things to achieve that. So he could develop a scheduler for that
purpose. Meanwhile others may want a scheduler with other ideas of
what is a priority. Maybe they need a system that will prioritise
already-underway tasks over new ones (don't know what this is called)
and could fire up the appropriate scheduler. 

A parallel conversation on /. (I know, I know, it's an addiction) was
discussing implementation of different lines for MS again, splitting
between a desktop-user oriented release and a more stable business
release. Who knows what that all means, but it's an intriguing parallel
to the ck situation. He wanted a better desktop while Linux is
pushing for more server-oriented priorities. Maybe Hurd can actually
work out for both parties by simple implementation of different
low-level modules: one set of scheduler/IO/"interactivity" modules for
desktop versus another set for various server functions, or heavy
computing uses, whatever. Even better would be a kernel that could
switch modes on the fly based on what sorts of tasks were running at
the time...

just rambling aloud.

A




Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Kent West

David Brodbeck wrote:


On Jul 24, 2007, at 8:34 AM, Kent West wrote:
It'd be nice if a coder of Con's caliber were to get interested in 
the HURD. I think that project has a lot of potential, but I'm 
afeared it has little future without some motivated developers.


HURD kind of suffers from being late to the party.  It would have to 
offer something really new and exciting to pull people away from Linux 
and BSD, I think.


To me it always smacked a little of "me-too-ism", too ... the GNU 
folks felt Linux wasn't GNU-ish enough, so they had to go write their 
own kernel.



It's my understanding that the Hurd pre-dates Linux; it's just that once 
Linux came along, the development on it moved at a much faster pace than 
on the Hurd, and Debian was ported to run on it while the Hurd project 
languished.


For those not up on the project, as I understand things...

Debian is an entire OS that can (at least theoretically) run on top of a 
number of different kernels. It originally was to run on the GNU Mach 
kernel as part of the Hurd project, but then Linux came along and 
outpaced Hurd development, so Linux became the new underlying kernel for 
mainstream Debian.


The big difference between Linux and the GNU Mach kernel is that with 
Linux, many things (hardware drivers, file system drivers, etc) are 
integrated into the kernel, whereas with a micro-kernel architecture 
like GNU Mach, the kernel is just a very small core piece of code, and 
then the drivers, etc are attached as "servers" (sort of like inserting 
a module into the Linux kernel, but different). These servers are more 
modular than Linux kernel modules, and can be attached by normal users 
rather than requiring admin access, because the modularity prevents them 
from tromping on each other.
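
(For the curious, a minimal sketch of what attaching such a server -- a
"translator" in Hurd-speak -- looks like; the mount point and device
name here are only examples:)

$ settrans -a ~/mnt /hurd/ext2fs /dev/hd0s6   # attach the ext2 filesystem server to a node you own
$ settrans -g ~/mnt                           # ask the translator to go away again

Compare modprobe/rmmod on Linux, which require root.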


Of course, I probably don't really understand things 

--
Kent






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread David Brodbeck


On Jul 24, 2007, at 11:12 AM, Mike McCarty wrote:

I wonder what those who support the GPL so strongly on Linux support

mail lists will do in response to that argument? I personally don't
like or use the GPL, so I really don't care. But ISTM that those
who have argued so fervently in favor of the GPL here should
either call for Linus to put Linux under GPL, abandon Linux
and go with the FSF kernel, or stop ranting against others who
don't like the GPL.


Linux *is* under the GPL.  But it's under GPL v2.  The FSF is pushing  
hard for Linus to relicense it under GPL v3.  The two licenses are  
not considered compatible.










Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Bob Proulx
David Brodbeck wrote:
> To me it always smacked a little of "me-too-ism", too ... the GNU  
> folks felt Linux wasn't GNU-ish enough, so they had to go write their  
> own kernel.

The GNU Hurd has existed long before Linux existed.  Hurd has been in
development for many years.  (Hurd is technology of the future.
Always has been and some say always will be. :-)

So it is unreasonable to say that the GNU folks decided to write Hurd
in response to Linux.  It is more accurate to say that the GNU folks
decided to continue to write Hurd regardless of the new presence and
entrenchment of Linux.

Bob





Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Mike McCarty

David Brodbeck wrote:


HURD kind of suffers from being late to the party.  It would have to  
offer something really new and exciting to pull people away from  Linux 
and BSD, I think.


To me it always smacked a little of "me-too-ism", too ... the GNU  folks 
felt Linux wasn't GNU-ish enough, so they had to go write their  own 
kernel.  I'm sure the wars with Linus over the new GPL version  will 
only intensify that desire, because now they can point to Linux  and say 
it's not "pure" license-wise.


This is the first I've heard what HURD is[*], other than just having
seen the name.

I wonder what those who support the GPL so strongly on Linux support
mail lists will do in response to that argument? I personally don't
like or use the GPL, so I really don't care. But ISTM that those
who have argued so fervently in favor of the GPL here should
either call for Linus to put Linux under GPL, abandon Linux
and go with the FSF kernel, or stop ranting against others who
don't like the GPL.

[*] No pun intended.

Mike
--
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I can explain it for you, but I can't understand it for you.
I speak only for myself, and I am unanimous in that!






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread David Brodbeck


On Jul 24, 2007, at 8:34 AM, Kent West wrote:
It'd be nice if a coder of Con's caliber were to get interested in  
the HURD. I think that project has a lot of potential, but I'm  
afeared it has little future without some motivated developers.


HURD kind of suffers from being late to the party.  It would have to  
offer something really new and exciting to pull people away from  
Linux and BSD, I think.


To me it always smacked a little of "me-too-ism", too ... the GNU  
folks felt Linux wasn't GNU-ish enough, so they had to go write their  
own kernel.  I'm sure the wars with Linus over the new GPL version  
will only intensify that desire, because now they can point to Linux  
and say it's not "pure" license-wise.









Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Andrew Sackville-West
On Tue, Jul 24, 2007 at 09:14:33AM -0700, Andrew Sackville-West wrote:
> On Tue, Jul 24, 2007 at 10:34:51AM -0500, Kent West wrote:
> > Hugo Vanwoerkom wrote:
> >> Hugo Vanwoerkom wrote:
> >>> Hi,
> >>>
> >>> I don't really think this is OT, albeit not directly Debian related.
> >>> Con Kolivas, the kernel hacker who authored a better scheduler, recently 
> >>> decided to quit.
> >>>
> >>> Loss for Linux (and Linus)
> >>>
> >>> Here's his reasoning.
> >>>
> >>
> >> http://apcmag.com/6735/interview_con_kolivas
> 
> crap... slashdotted... but I have gotten through the first page... I
> can sympathise with him... read my pre-first-page-reading comments
> below.

I've finally finished the article and it's pretty interesting, and a
little disheartening too. I've gone off now and read a bit of his
comments in the latest patch about how the scheduler works, and it
looks pretty interesting to me. I'll have to compile one and see what
I think... 

sorry to see him go.

A




Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Luigi Bianca
Kent West wrote:

> Hugo Vanwoerkom wrote:
>
>> Hugo Vanwoerkom wrote:
>>
>>> Hi,
>>>
>>> I don't really think this is OT, albeit not directly Debian related.
>>> Con Kolivas, the kernel hacker who authored a better scheduler,
>>> recently decided to quit.
>>>
>>> Loss for Linux (and Linus)
>>>
>>> Here's his reasoning.
>>>
>>
>> http://apcmag.com/6735/interview_con_kolivas
>
>
> It'd be nice if a coder of Con's caliber were to get interested in the
> HURD. I think that project has a lot of potential, but I'm afeared it
> has little future without some motivated developers.
>
It's what I thought when reading his reasoning :-)
Luigi





Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Kent West

Hugo Vanwoerkom wrote:

Kent West wrote:

Hugo Vanwoerkom wrote:

Hugo Vanwoerkom wrote:

Hi,

I don't really think this is OT, albeit not directly Debian related.
Con Kolivas, the kernel hacker who authored a better scheduler, 
recently decided to quit.


Loss for Linux (and Linus)

Here's his reasoning.



http://apcmag.com/6735/interview_con_kolivas


It'd be nice if a coder of Con's caliber were to get interested in 
the HURD. I think that project has a lot of potential, but I'm 
afeared it has little future without some motivated developers.




Con is a terrific kernel hacker (judged from the results). What got 
him was the kernel decision making process, if you call it that.


His quitting isn't going to fix that.

If anything fixes it at all it is sticking with it: people with his 
kind of ability are sorely needed, as you note...




Yes, I'd rather he stick with the Linux kernel and make improvements 
there, since it's what all of us are currently using, but if he's not 
going to stay with Linux kernel-hacking, he could get some of the same 
feel for kernel-hacking[1], without so much of the kernel politics, by 
working on the Hurd, and maybe eventually (not anytime foreseeable I'm 
afraid) we could move from Linux to the Hurd for our Debian fix. I'm a 
great believer in modularity (while realizing that integration has its 
place, too), and think the Hurd could lead to some great things in 
Debian's future.


Footnotes:
1. At least, it's my understanding that hacking on Hurd servers is 
similar to hacking on the Linux kernel, and because of the Hurd's 
modular nature such hacking is relatively free of politics. But that's 
just my impression as a non-coder looking briefly at the Hurd through a 
window as I walk by.


===

At any rate, public thanks to Con for the work he has provided, and to 
all the developers who make Debian possible (even if there's still room 
for improvement).


--
Kent






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Andrew Sackville-West
On Tue, Jul 24, 2007 at 10:34:51AM -0500, Kent West wrote:
> Hugo Vanwoerkom wrote:
>> Hugo Vanwoerkom wrote:
>>> Hi,
>>>
>>> I don't really think this is OT, albeit not directly Debian related.
>>> Con Kolivas, the kernel hacker who authored a better scheduler, recently 
>>> decided to quit.
>>>
>>> Loss for Linux (and Linus)
>>>
>>> Here's his reasoning.
>>>
>>
>> http://apcmag.com/6735/interview_con_kolivas

crap... slashdotted... but I have gotten through the first page... I
can sympathise with him... read my pre-first-page-reading comments
below.


>
> It'd be nice if a coder of Con's caliber were to get interested in the 
> HURD. I think that project has a lot of potential, but I'm afeared it has 
> little future without some motivated developers.

I kept trying to dabble in the HURD, but keep not having the time to
get past a basic install... Somewhere I've got a basic qemu image for
it; I'll have to fire it up again. I was thinking about this just
yesterday after reading the commentary on the death of LUGs on /. (I
know, I know...)
http://linux.slashdot.org/article.pl?sid=07/07/23/1357259

I've used Linux for about 3 years now and I love it, but it's already
becoming routine... I guess for my day-to-day work that's good, as I
need a system that just works. But I want to keep those "I love
computing" fires stoked with something interesting and
challenging... In some ways, for me, Linux is too successful: it's
too good, too easy to use, too reliable. I've said it before, I often
find myself looking for things to do on my machines and not really
finding anything... 

Back to post-reading-first-page: So Con's got a big point in the first
page... the desktop PC has been co-opted and then ignored by the
enterprise folks. Don't get me wrong, I think it's cool that I can set
up and run tiny versions of enterprise solutions on my little home
LAN, but it may explain my personal frustration with it in the long
run. I want to regain the excitement I had when I figured out how
to swap the C-64 ROM into RAM (there was a parallel bank of RAM and
you could flip some bit somewhere and you'd be suddenly running in RAM
instead of the ROM) and start tweaking the OS/BASIC interpreter. It
started with fun stuff like just changing the "READY" prompt to some
other five-character word. But from
there, you could do anything... write your own OS one piece at a time
if you wanted. Or make it count BASIC lines by letters instead of
numbers, or ... meh... who knows. But it was there and fun and now I'm
rambling.

later

A




Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Hugo Vanwoerkom

Kent West wrote:

Hugo Vanwoerkom wrote:

Hugo Vanwoerkom wrote:

Hi,

I don't really think this is OT, albeit not directly Debian related.
Con Kolivas, the kernel hacker who authored a better scheduler, 
recently decided to quit.


Loss for Linux (and Linus)

Here's his reasoning.



http://apcmag.com/6735/interview_con_kolivas


It'd be nice if a coder of Con's caliber were to get interested in the 
HURD. I think that project has a lot of potential, but I'm afeared it 
has little future without some motivated developers.




Con is a terrific kernel hacker (judged from the results). What got him 
was the kernel decision making process, if you call it that.


His quitting isn't going to fix that.

If anything fixes it at all it is sticking with it: people with his kind 
of ability are sorely needed, as you note...


Hugo






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Hugo Vanwoerkom

Cybe R. Wizard wrote:

Hugo Vanwoerkom <[EMAIL PROTECTED]>  said:

Hi,

I don't really think this is OT, albeit not directly Debian related.
Con Kolivas, the kernel hacker who authored a better scheduler,
recently decided to quit.

Loss for Linux (and Linus)

Here's his reasoning.

Hugo


Did you forget the link?



I did and posted it later but here it is again:
http://apcmag.com/6735/interview_con_kolivas

sorry






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Cybe R. Wizard
Hugo Vanwoerkom <[EMAIL PROTECTED]>  said:
> Hi,
> 
> I don't really think this is OT, albeit not directly Debian related.
> Con Kolivas, the kernel hacker who authored a better scheduler,
> recently decided to quit.
> 
> Loss for Linux (and Linus)
> 
> Here's his reasoning.
> 
> Hugo
> 
Did you forget the link?

Cybe R. Wizard
-- 
When Windows are opened the bugs come in.
Winduhs





Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Kent West

Hugo Vanwoerkom wrote:

Hugo Vanwoerkom wrote:

Hi,

I don't really think this is OT, albeit not directly Debian related.
Con Kolivas, the kernel hacker who authored a better scheduler, 
recently decided to quit.


Loss for Linux (and Linus)

Here's his reasoning.



http://apcmag.com/6735/interview_con_kolivas


It'd be nice if a coder of Con's caliber were to get interested in the 
HURD. I think that project has a lot of potential, but I'm afeared it 
has little future without some motivated developers.


--
Kent






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Hugo Vanwoerkom

Ron Johnson wrote:


On 07/24/07 09:31, Hugo Vanwoerkom wrote:

Hi,

I don't really think this is OT, albeit not directly Debian related.
Con Kolivas, the kernel hacker who authored a better scheduler, recently
decided to quit.

Loss for Linux (and Linus)

Here's his reasoning.


Forgot the link???



Beat me to it ;-)






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Ron Johnson

On 07/24/07 09:31, Hugo Vanwoerkom wrote:
> Hi,
> 
> I don't really think this is OT, albeit not directly Debian related.
> Con Kolivas, the kernel hacker who authored a better scheduler, recently
> decided to quit.
> 
> Loss for Linux (and Linus)
> 
> Here's his reasoning.

Forgot the link???

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!






Re: [OT] Interview with Con Kolivas on Linux failures

2007-07-24 Thread Hugo Vanwoerkom

Hugo Vanwoerkom wrote:

Hi,

I don't really think this is OT, albeit not directly Debian related.
Con Kolivas, the kernel hacker who authored a better scheduler, recently 
decided to quit.


Loss for Linux (and Linus)

Here's his reasoning.



http://apcmag.com/6735/interview_con_kolivas


