Linux-Development-System Digest #175, Volume #7      Thu, 9 Sep 99 05:14:05 EDT

Contents:
  Re: Linux standards compliance (Peter Samuelson)
  Re: threads (Peter Samuelson)
  Re: Counting hardware Interrupt ("Dmitry A. Fedorov")
  Re: Linux standards compliance (Peter Samuelson)
  Re: Help needed on linking module.. (Karlo Szabo)
  Re: No process cleanup after a core in 2.2.9 (Tim Roberts)
  Re: Scheduling in Linux ([EMAIL PROTECTED])
  Linux System Engineer Wanted!!! (Xiaopong Tran)
  Re: survey linux project. (Karlo Szabo)
  Re: Problem porting to LINUX (Peter Samuelson)
  Re: acurate timing (Peter Samuelson)
  Re: increasing process limits (David Schwartz)
  Re: LispOS? (Harald Arnesen)
  Re: threads (David Schwartz)
  Re: threads (David Schwartz)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Peter Samuelson)
Subject: Re: Linux standards compliance
Date: 8 Sep 1999 23:34:16 -0500
Reply-To: Peter Samuelson <[EMAIL PROTECTED]>

[Warren Young <[EMAIL PROTECTED]>]
> It occurs to me that it'd be in Linux's best interests to accept
> these patches into the kernel if they're of the sort that don't
> compromise the rest of the kernel's stability or speed.  (Probably
> also add a config option to disable UDI.)  Then, _don't_ port Linux
> drivers to UDI, but just _allow_ UDI drivers to be loaded.  Then
> Linux would be harvesting UnixWare drivers for free!

There was a day, and it wasn't so long ago, that Linux needed all the
device drivers it could get.  Coaxing out hardware specs from
manufacturers under reasonable free-software-use terms was in some
cases all but impossible.

That day has passed.  For better or for worse, Linux now has a solid
reputation as the choice web-server-grade platform for a lot of
customers, and any hardware vendor interested in that market is very
aware of this.  They are all scrambling to assure their customers that
they either support Linux or plan to, Real Soon Now.

In short, there is enough vendor support for Linux these days that if
some manufacturer is dumb enough to ignore the market, the market can
well afford to ignore that manufacturer.  And we're not talking about
the desktop market here; the desktop market is quite distinct from that
served by UDI, platforms like Unixware and Solaris/x86.

So I don't think we need to fear the consequences of not supporting
UDI.  Vendors can no longer say "We don't support Linux, go use NT or
Solaris".  Market conditions make this response self-defeating and
those who don't yet realize this soon will.

If UDI were officially blessed by being in kernel source (be it Linus's
or Red Hat's or whatever) there is the temptation for a hardware vendor
to cut corners and release only a UDI driver and not a Linux driver for
their board.  This would make it easy to deceive the public into
thinking their hardware supports Linux to a greater degree than it
does.  And if the trend were further encouraged to the point that
*everyone* started writing UDI drivers, the relative inefficiency of
the API would slow down people's Linux boxes and reflect poorly on the
reputation Linux as a whole enjoys for its performance.

So count me against UDI or any endorsement thereof by anyone of
consequence in the Linux community.  It can't lead to any good things
that I can see, at least not for Linux.

-- 
Peter Samuelson
<sampo.creighton.edu!psamuels>

------------------------------

From: [EMAIL PROTECTED] (Peter Samuelson)
Subject: Re: threads
Date: 8 Sep 1999 23:59:31 -0500
Reply-To: Peter Samuelson <[EMAIL PROTECTED]>

[David Schwartz <[EMAIL PROTECTED]>]
> One possible 'in-between' architecture would use a single
> multithreaded master process that managed I/O (with another process 
> to restart it in case it died).  This process would handle send
> queues and receive queues and farm 'complex' requests out to a pool
> of processes that handle the more complex requests, probably using
> shared memory to communicate. One advantage of this architecture is
> that you can dramatically reduce process context switches if most of
> the requests are 'simple'.

What makes you think a process context switch is more expensive than a
thread context switch?  If I understand correctly, in Linux they're
almost the same (and quite low compared to, say, NT).  In any case they 
aren't "dramatically" different.

> Now, in the case of Apache, it may well not be worth the effort.

I guess that's what I'm saying.

> What is it about threads that prevents you from having one process
> per security context?

You're right, but in Samba's case I don't think having multiple threads
with multiple connections in one security context would help very much.
Typically each logged-in user has one security context.  Also
typically, he is only using Samba services from one computer at a time.
That one computer, running some variant of Windoze, only needs to have
one connection open to the Samba server, so "one process per security
context" is already, I submit, the common case.

> You seem to be thinking that there exist only two architectural
> models -- one where you have a single process with lots of threads
> and one where you have one process per connection.  Believe me, it's
> possible to be a lot cleverer than this.

True enough.  But what are the advantages of this "mixed" approach (or
should I commit jargon abuse by saying "MxN forking")?  A major
advantage of threads is ease of working with shared state.  A major
advantage of processes is increased isolation with its fault tolerance
and ease-of-programming implications.  A hybrid approach seems to
defeat both of these.  (Of course, if you use an OS whose processes are
too heavy, you can regain some efficiency by using threads where
possible.  That's not too relevant to Linux, though.)

-- 
Peter Samuelson
<sampo.creighton.edu!psamuels>

------------------------------

From: "Dmitry A. Fedorov" <[EMAIL PROTECTED]>
Subject: Re: Counting hardware Interrupt
Date: Thu, 09 Sep 1999 04:27:03 +0000

Simon Kwan wrote:
> 
>   I want to count the number of electrical pulses coming into the CPU IRQ
> line. The count needs to increment throughout the life of the system. Hence,
> maybe I can write the 'number of pulses' to a disk file as soon as it comes
> in. So, when the machine is powered down and rebooted the next day, it
> will read the last count from disk and continue.
> 
>   Do I need to go to the length of learning how to write a Linux device
> driver in order to perform the above task?  I understand that a device
> driver is allowed to interface with hardware interrupt line handling. Can it
> write to a file?


Take the general purpose interrupt driver at

http://metalab.unc.edu/pub/Linux/kernel/irq-1.39-1.tar.gz

and all you have left to do is write a user-space daemon that saves the
interrupt counts to disk.
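
The userspace half can be as simple as periodically parsing
/proc/interrupts and writing the count somewhere persistent.  Here is a
minimal parsing sketch (the function name and the " 14:  123456  ide0"
line layout assumed here are illustrative, not from the driver above):

```c
#include <stdio.h>

/* Extract the count for a given IRQ from one line of /proc/interrupts,
 * e.g. " 14:    123456   ide0".  Returns 0 on success and stores the
 * count, or -1 if the line is malformed or belongs to another IRQ. */
int parse_irq_line(const char *line, int irq, unsigned long *count)
{
    int got_irq;

    if (sscanf(line, " %d: %lu", &got_irq, count) != 2)
        return -1;
    return (got_irq == irq) ? 0 : -1;
}
```

A daemon would reread the file every few seconds, write the count to its
own file, and reload that file at boot to continue where it left off.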

------------------------------

From: [EMAIL PROTECTED] (Peter Samuelson)
Subject: Re: Linux standards compliance
Date: 8 Sep 1999 23:42:34 -0500
Reply-To: Peter Samuelson <[EMAIL PROTECTED]>


  [[EMAIL PROTECTED]]
> > Mr. Becker has writen most Linux networking drivers as a volunteer,
> > if he had writen the SiS driver he clearly would not have made the
> > mistakes he pointed out.
[Phil Howard <[EMAIL PROTECTED]>]
> Or at least if he had, he would have noticed and fixed most of them
> before they are fully released, and for those bugs he misses (we all
> do miss some) he's there, still being a volunteer, ready to fix it.

Actually in this case he would never have put in those bugs at all.
The bugs he found with the SiS driver are really driver design
problems, the result of inexperience and misunderstanding about how to
write a Linux network driver.  He wouldn't have made those mistakes
because he has so much experience in the field.

-- 
Peter Samuelson
<sampo.creighton.edu!psamuels>

------------------------------

From: Karlo Szabo <[EMAIL PROTECTED]>
Subject: Re: Help needed on linking module..
Date: Thu, 09 Sep 1999 14:57:20 +1000

Thanks,

you have been a big help.

I have now one problem left:

unresolved symbol 

mod_use_count_

karlo

------------------------------

From: [EMAIL PROTECTED] (Tim Roberts)
Subject: Re: No process cleanup after a core in 2.2.9
Reply-To: [EMAIL PROTECTED]
Date: Thu, 09 Sep 1999 04:51:24 GMT

In article <[EMAIL PROTECTED]>, Markus M. Mueller wrote:
>I'm running Mandrake 6.0 with a 2.2.9 kernel.
>ulimit -c unlimited
>
>When a program core dumps no core file  is written ( file is there size
>is zero ), and that process HANGS!!! Also kill -9 has no effect only a
>reboot removes it from the process tree.
>
>Anybody the same behaviour?

Yes.  There have been 6 cases of this reported to the Linux kernel development
team.  I'm one of them.

The problem is caused by a bug in the Mandrake modifications to the 2.2.9 
kernel.  The solution is to download a virgin kernel source and rebuild it.

Why Mandrake would want to modify the kernel at this level is beyond me.

-- 
- Tim Roberts, [EMAIL PROTECTED]
  Providenza & Boekelheide, Inc.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Scheduling in Linux
Date: Thu, 09 Sep 1999 04:54:20 GMT

In article <7r6u1p$ub1$[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (ellis) wrote:
> In article
<[EMAIL PROTECTED]>,
> Sandeep Jain  <[EMAIL PROTECTED]> wrote:
>
> >I wanted to know about the scheduling policy being used in Linux.
> >Can anyone help?
>
> Have you considered reading the source?
>
> --
> http://www.fnet.net/~ellis/photo/linux.html
>
 do man sched_setscheduler


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

------------------------------

Date: Wed, 08 Sep 1999 22:15:04 -0700
From: Xiaopong Tran <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Linux System Engineer Wanted!!!


Linux Gurus, please reply!

We are looking for Senior Linux Engineer with experiences
in the following fields:

- Linux server cluster management for 24/7 uptime
- SMP kernels expertise
- Web-server management
- High-volume client/server architecture
- Network performance benchmarking
- Up-to-date with current Linux development
- C/C++ development in multithreaded environment

Very competitive compensation with excellent benefits,
fun working environment with cutting-edge technologies.

Please send resume in as attachment to

[EMAIL PROTECTED]
Attn: Helen Li
650-424-0805


----
Disclaimer: this message was posted for a friend. Please
do not reply directly to me!

------------------------------

From: Karlo Szabo <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.linux.development.apps,linux.dev.gcc,linux.dev.kernel,linux.dev.x11
Subject: Re: survey linux project.
Date: Thu, 09 Sep 1999 16:04:49 +1000

Why not start a Linux DVD project?

"Kim,Taesung" wrote:
> 
> Hello!
> We (my friends and I) plan to make some applications on Linux.
> First of all, we want to survey ongoing Linux projects.
> We want to know about any kind of Linux project.
> Where can we find them?
> Thanks in advance.

------------------------------

From: [EMAIL PROTECTED] (Peter Samuelson)
Subject: Re: Problem porting to LINUX
Date: 9 Sep 1999 00:08:21 -0500
Reply-To: Peter Samuelson <[EMAIL PROTECTED]>

  [Georg S. Lorrig]
> > "cc -c C_TEST.C" I get some _hundred_ error messages. It seems that
> > cc doesn't like almost any function declaration and the like.
[Warren Young <[EMAIL PROTECTED]>]
> gcc probably hates your K&R function declarations.  Welcome to 1999 --
> it's time to use ANSI C and prototypes.

As has already been observed, by using *.C (capital C) you are invoking
the C++ compiler.  K&R syntax is illegal in C++.  This could explain a
lot.
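
For the record, here is the kind of declaration at issue.  A C compiler
accepts the K&R form, while any C++ compiler (which `cc` becomes for a
`.C` file) rejects it outright; the function name below is invented for
illustration:

```c
/* The troublesome K&R form looks like this (shown in a comment, since
 * C++ rejects it as a syntax error):
 *
 *     int add(a, b)
 *     int a;
 *     int b;
 *     { return a + b; }
 *
 * The ANSI/ISO prototype form, accepted by both C and C++: */
int add(int a, int b)
{
    return a + b;
}
```

Renaming the file to `c_test.c` (lowercase) makes `cc` treat it as C
again, but converting to prototypes is the better long-term fix.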

-- 
Peter Samuelson
<sampo.creighton.edu!psamuels>

------------------------------

From: [EMAIL PROTECTED] (Peter Samuelson)
Crossposted-To: comp.os.linux,comp.os.linux.misc
Subject: Re: acurate timing
Date: 9 Sep 1999 00:38:57 -0500
Reply-To: Peter Samuelson <[EMAIL PROTECTED]>

[Steve D. Perkins <[EMAIL PROTECTED]>]
> I hate to sound ignorant, but is that REALLY a technical term?!?  I
> had always thought that a "jiffy" was just slang for a very quick
> period of time....

I don't know what you consider a "technical term", but in the Linux
kernel there is a global variable, something like

  volatile unsigned long jiffies;

This variable holds the number of timer ticks since boot, where the
system timer has been set to 100 Hz (except on an Alpha, where for
various reasons they use 1024 Hz).  Thus when discussing the Linux
kernel and its interfaces, the term "jiffy" has been adopted to refer
to a length of time equal to one timer tick, or (on most platforms) 10
milliseconds.
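
In userspace terms the arithmetic is just this (a sketch; the kernel's
real HZ constant lives in its own headers, and the function name here is
invented):

```c
/* Convert a jiffies-style tick count to milliseconds, assuming the
 * usual HZ = 100 ticks per second (Alpha uses 1024 instead). */
#define HZ 100

unsigned long ticks_to_ms(unsigned long ticks)
{
    return ticks * (1000 / HZ);   /* 10 ms per tick at HZ = 100 */
}
```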

-- 
Peter Samuelson
<sampo.creighton.edu!psamuels>

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: increasing process limits
Date: Wed, 08 Sep 1999 23:22:29 -0700


        Yes, that's the only one I've ever been able to come up with. It still
might be better to implement asynchronous reads with your own timeout.
The question is which is worse, the cost of redoing all those
asynchronous reads, the cost of waiting too long, or the cost of the
context switches and memory all those threads would eat.

        At that point, though, it might just be better to use your own NFS
client code. Heck, I've already given up on using the system resolver --
I've never gotten more than about 30 resolves a second out of it
regardless of how many I pend.

        DS

Kaz Kylheku wrote:
> 
> On Wed, 08 Sep 1999 19:46:38 -0700, David Schwartz <[EMAIL PROTECTED]> wrote:
> >
> >       You asked a question, and I gave you the correct answer. I'm sorry you
> >don't like it. Free advice is seldom worth much more than you pay for
> >it.
> >
> >       My answer was not sarcastic, by the way. If you have a legitimate case
> >where more than 100 threads are necessary, I would love to hear about
> >it.
> 
> How about when you have to read 100 different files from 100 different NFS
> servers over slow WAN links without blocking the process? ;)

------------------------------

From: Harald Arnesen <[EMAIL PROTECTED]>
Subject: Re: LispOS?
Date: 08 Sep 1999 10:59:50 +0200
Reply-To: Harald Arnesen <[EMAIL PROTECTED]>

[EMAIL PROTECTED] (Mike McDonald) writes:

> In article <[EMAIL PROTECTED]>,
>       Harald Arnesen <[EMAIL PROTECTED]> writes:
> > [EMAIL PROTECTED] (Peter Samuelson) writes:
> > 
> >> Don't forget to disable "\C-x\C-c"....
> > 
> > No, that should reboot the system.
> 
>   Reboot the system? Why would you want to do that?

OK, it should shut it down, then. When I would want to move the
machine to another location, to upgrade the motherboard,...
-- 
Harald Arnesen, Apalløkkveien 23 A, N-0956 Oslo, Norway

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: threads
Date: Thu, 09 Sep 1999 01:30:48 -0700


        Forgive me for replying to myself, but it occurred to me that this
really is an important point and I don't think I covered it in enough
detail.

        In my previous post I pointed out that a web server that uses one
process per connection will have to have 10 context switches to do a
little bit of work on 10 connections. I asserted that a threads
architecture could reduce this but didn't really go into detail.

        Imagine for a moment that you have a web server that does everything in
one big select loop. Obviously, this can handle any number of
connections with no context switches. From a performance standpoint,
this would seem optimal. But it has several problems:

        1) We have to gather together everything we might ever wait for in one
place. And if we ever have to wait for something we can't 'select' on,
such as disk I/O, we're in trouble.

        2) We could get ambushed. One disk I/O from a slow NFS server or one
page fault that takes a little too long to fix, and our whole server is
toast.

        3) We have to keep 'saving our place' to go back to it. We can't easily
use the stack to keep track of what we're doing (as most programs do).

        4) Adding on new code could be difficult. Any accidental blocks will
freeze the whole server.
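
At its core, the one-big-loop model reduces to a readiness check like
the following (a userspace sketch; `ready_within` is an invented name):

```c
#include <sys/select.h>
#include <sys/time.h>
#include <unistd.h>     /* pipe()/write(), for trying it out */

/* Return 1 if fd becomes readable within timeout_ms, 0 on timeout,
 * -1 on error.  A select()-loop server wraps essentially this call
 * around all of its connections at once and then services whichever
 * descriptors are ready. */
int ready_within(int fd, int timeout_ms)
{
    fd_set rfds;
    struct timeval tv;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;
    return select(fd + 1, &rfds, NULL, NULL, &tv);
}
```

Note that select() says nothing about disk reads or page faults, which
is exactly problems 1 and 2 above.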

        Having a few more threads around allows you to solve both problems 1
and 2. While it doesn't solve 3 or 4, it does ease them in many ways.

        You can still keep one thread running, doing work for different
connections, for its entire time quantum. You will only need to switch
threads if you block somewhere, (and that'll mean a context switch in
_any_ architecture).

        One important thing to realize is that in a good thread architecture,
there aren't 'X threads' and 'Y threads'. Such a design would require a
thread switch to go from doing X to doing Y. Largely, you make it so
that any thread that happens to be running can do any work that happens
to need to be done, and can keep running until some operation blocks it
or there is nothing at all left to do.
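
That "any running thread does whatever work is pending" structure is
essentially a shared work queue.  A minimal pthreads sketch (all names
here are invented for illustration):

```c
#include <pthread.h>

#define QUEUE_MAX 128

static int queue[QUEUE_MAX];
static int q_head, q_tail;
static int q_done;      /* set once no more work will arrive */
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;

/* Add one work item; any idle worker may pick it up. */
void enqueue(int item)
{
    pthread_mutex_lock(&q_lock);
    queue[q_tail % QUEUE_MAX] = item;
    q_tail++;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* Tell the workers no more work is coming. */
void finish(void)
{
    pthread_mutex_lock(&q_lock);
    q_done = 1;
    pthread_cond_broadcast(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* Each worker pulls items -- of any kind -- until the queue is drained
 * and closed.  A thread only gives up the CPU when it genuinely has to
 * wait.  Returns the number of items it handled. */
void *worker(void *arg)
{
    long handled = 0;

    (void)arg;
    pthread_mutex_lock(&q_lock);
    for (;;) {
        while (q_head == q_tail && !q_done)
            pthread_cond_wait(&q_cond, &q_lock);
        if (q_head == q_tail && q_done)
            break;
        int item = queue[q_head % QUEUE_MAX];
        q_head++;
        pthread_mutex_unlock(&q_lock);
        (void)item;     /* the actual work would go here */
        handled++;
        pthread_mutex_lock(&q_lock);
    }
    pthread_mutex_unlock(&q_lock);
    return (void *)handled;
}
```

No thread is dedicated to any one kind of work; a worker that stays
runnable keeps draining the queue without any thread switch at all.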

        I hope this makes the point more clear. Rather than comparing threads
to a 'one process per connection' model, compare it to a 'one process
for everything' model. Then look at how it solves the problems with that
architecture without imposing too many penalties of its own.

        DS

------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: threads
Date: Thu, 09 Sep 1999 01:19:42 -0700


Peter Samuelson wrote:
> 
> [David Schwartz <[EMAIL PROTECTED]>]
> > One possible 'in-between' architecture would use a single
> > multithreaded master process that managed I/O (with another process
> > to restart it in case it died).  This process would handle send
> > queues and receive queues and farm 'complex' requests out to a pool
> > of processes that handle the more complex requests, probably using
> > shared memory to communicate. One advantage of this architecture is
> > that you can dramatically reduce process context switches if most of
> > the requests are 'simple'.
> 
> What makes you think a process context switch is more expensive than a
> thread context switch?  If I understand correctly, in Linux they're
> almost the same (and quite low compared to, say, NT).  In any case they
> aren't "dramatically" different.

        That's a non sequitur. I was talking about using threads to reduce
context switches of _all_ kinds. If you have ten processes, one handling
each connection, you will have to have context switches when you change
connections, no ifs, ands, or buts about it.

        If you have one multi-threaded program handling ten connections, it's
entirely possible that it can handle data on all ten connections without
a single thread context switch. You weren't imagining that I was talking
about one thread per connection were you?

> > What is it about threads that prevents you from having one process
> > per security context?
> 
> You're right, but in Samba's case I don't think having multiple threads
> with multiple connections in one security context would help very much.
> Typically each logged-in user has one security context.  Also
> typically, he is only using Samba services from one computer at a time.
> That one computer, running some variant of Windoze, only needs to have
> one connection open to the Samba server, so "one process per security
> context" is already, I submit, the common case.

        Perhaps. I suppose it depends upon exactly what's happening. I don't
know enough about Samba's internals to say. However, it would not
surprise me if someone with more knowledge came up with a way to do one
of the following using threads:

        1) Reduce context switches in cases where you currently need them

        2) Exploit parallelism between disk I/O, network I/O, read ahead, and
so on

        3) Improve performance on SMP machines

        But again, I'm not enough of an expert in Samba to say. Threads don't
make everything better, no matter how well they are used. They're a tool
that is sometimes appropriate but not always.

> > You seem to be thinking that there exist only two architectural
> > models -- one where you have a single process with lots of threads
> > and one where you have one process per connection.  Believe me, it's
> > possible to be a lot cleverer than this.
> 
> True enough.  But what are the advantages of this "mixed" approach (or
> should I commit jargon abuse by saying "MxN forking")?  A major
> advantage of threads is ease of working with shared state.  A major
> advantage of processes is increased isolation with its fault tolerance
> and ease-of-programming implications.  A hybrid approach seems to
> defeat both of these.  (Of course, if you use an OS whose processes are
> too heavy, you can regain some efficiency by using threads where
> possible.  That's not too relevant to Linux, though.)

        That's not quite what I'm talking about. There's a difference between
how the operating system or library implements threads and how threads
are used by the application.

        Threads implementations can be one-to-one, many-to-one, or many-to-few.
But this doesn't say anything about what the application does with its
threads.

        I'm talking about trying to minimize _all_ context switches by doing as
much work as possible in a single thread in a single process. That's all
a matter of how you _use_ threads in an application, not so much how you
implement the threading itself.

        DS

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
