Linux-Development-Sys Digest #994, Volume #6     Mon, 26 Jul 99 15:14:27 EDT

Contents:
  Re: IO Completion Ports (Mr Williams)
  Re: when will Linux support > 2GB file size??? (Philip Brown)
  Re: Script Q: determining Linux version (newbie) (Torbjorn Tallroth)
  Re: Control speaker without /dev/mixer (Stuart R. Fuller)
  Re: Unresolved symbol (John Hayward-Warburton)
  Re: Script Q: determining Linux version (newbie) (Torbjorn Tallroth)
  Re: Script Q: determining Linux version (newbie) (Torbjorn Tallroth)
  kernel compile (root)
  Re: Multiple kernels with different modules (bill davidsen)
  elf_prstatus, core files, Alpha RH 6.0 (James Cownie)
  Re: Script Q: determining Linux version (newbie) (bryant h marc)
  Re: when will Linux support > 2GB file size??? (Philip Brown)
  Re: High load average, low cpu usage when /home NFS mounted (Paul Kimoto)
  Re: Multiple kernels with different modules (Ken)
  Problems with NFSROOT and network modules ,,, (Thomas Binder)
  Re: Why ignore a theoretical possibility? (Alexander Viro)
  Re: Script Q: determining Linux version (newbie) (bryant h marc)
  Re: UP and SMP (difference in atomic operation and spinlock function?) (Kaz Kylheku)
  Re: when will Linux support > 2GB file size??? (Robert Krawitz)

----------------------------------------------------------------------------

From: Mr Williams <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: IO Completion Ports
Date: Mon, 26 Jul 1999 10:54:46 +0600

David Schwartz wrote:
> I don't think you would want this, in general. The reason people use
> I/O Completion ports on NT is because it is the fastest way to do
> network I/O.
David, this is a piece of bunk. I/O completion ports are used to keep
the expense of thread creation and the number of threads down, on the one
hand, and to avoid unnecessary context switching, on the other. They have
very little to do with network I/O per se and can be used for any kind of
I/O. Functionally, there's nothing about Linux that makes completion
ports unnecessary; it would in fact be good to have such a mechanism
available. But they're simply not available there, that's all (well,
'select' kind of does a similar thing.)
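To make the contrast concrete, the nearest Unix idiom is a readiness
loop: one thread multiplexes many descriptors with poll() and does the
reads itself, instead of being handed completed I/O. A minimal sketch
(error handling and the hand-off to workers omitted; the caller is
assumed to have set each .events to POLLIN):

    #include <poll.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* one thread watches all fds; this keeps the thread count down,
       as completion ports do, but delivers "ready to read" rather
       than "read completed" */
    void event_loop(struct pollfd *fds, int nfds)
    {
        char buf[4096];
        int i;
        ssize_t n;

        for (;;) {
            if (poll(fds, nfds, -1) < 0)
                break;
            for (i = 0; i < nfds; i++) {
                if (fds[i].revents & POLLIN) {
                    n = read(fds[i].fd, buf, sizeof buf);
                    if (n <= 0)
                        fds[i].fd = -1;  /* poll() ignores negative fds */
                    /* else: hand buf off to a worker thread here */
                }
            }
        }
    }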

> If a Linux implementation wasn't the fastest way to do
> network I/O, it wouldn't be particularly interesting. Under current
> Linux implementations, poll is the fastest way to do network I/O, so you
> should use it.
Hopefully the guy knows better than to take this thing at face value.

-- 
len
if you must email, reply to:
len bel at world net dot att dot net (no spaces, ats2@, dots2.)

------------------------------

From: [EMAIL PROTECTED] (Philip Brown)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Reply-To: [EMAIL PROTECTED]
Date: 26 Jul 1999 15:06:30 GMT

On 24 Jul 1999 21:00:50 -0700, [EMAIL PROTECTED] wrote:
>In article <[EMAIL PROTECTED]>,
>Philip Brown <[EMAIL PROTECTED]> wrote:
>[snip]
>>well, in theory, it is perfectly feasible to implement a filesystem
>>WITHOUT using memory mapping. It's just not as fast.
>>
>>So, the possibilities I see are:
>>
>>a) maybe the NTFS support doesn't use memory mapping ?
>
>Nononononono, the FILESYSTEM doesn't do memory mapping.  The PROGRAMS
>you run do. 

But doesn't the filesystem have to support passing it through?!!

>...
>Another thing mmap() is used for is shared libraries.  You just get
>a pointer to the location in memory where a shared library's routine
>is instead of using lseek()/read() to grub around in the file.
>All of a sudden, you can't do this anymore.

The people complaining about the lack of 64-bit filesystems are NOT going to be
using this on /usr :-) They want a very large data filesystem, and they couldn't
care less about libraries on that filesystem. Which is why I suggested hacking
a NEW filesystem that doesn't support memory mapping, but DOES support
large files.

>>b) make an inhouse tweaked ext2 driver that uses simple buffer-to-buffer
>>   copying instead of memory mapping.
>
>Ext2 can handle 64-bit files.  ext2fs does not have to be changed. 


But others posted that it only works if you rewrite the whole 32-bit kernel
VM.

Given a choice between fiddling with a filesystem and screwing with
the memory management for my entire system... I'd choose fiddling with a
filesystem :-)



-- 
[Trim the no-bots from my address to reply to me by email!]
[ Do NOT email-CC me on posts. Pick one or the other.]
 --------------------------------------------------
The word of the day is mispergitude


------------------------------

Date: Mon, 26 Jul 1999 17:55:25 +0200
From: Torbjorn Tallroth <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Script Q: determining Linux version (newbie)


On 26 Jul 1999, bryant h marc wrote:
> Is there a reliable way to determine the Kernel version and
> the distribution type (RH/slackware/debian...) and version 
> from within a Linux script?

"uname -r" will display the kernel version. 
I don't think you want to know which distribution they are
using, because many people change things in their installation
themselves. Rather test the existance and version of the individual
commands and libraries, you're interested in.
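
If the check happens from a compiled helper rather than a shell script,
the same string is available through the uname(2) system call -- a
minimal sketch:

    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;

        if (uname(&u) < 0)
            return 1;
        /* u.release is what "uname -r" prints, e.g. "2.2.10" */
        printf("kernel release: %s\n", u.release);
        return 0;
    }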

/Torbjorn Tallroth




------------------------------

From: [EMAIL PROTECTED] (Stuart R. Fuller)
Subject: Re: Control speaker without /dev/mixer
Reply-To: [EMAIL PROTECTED]
Date: Mon, 26 Jul 1999 16:00:07 GMT

Frank v Waveren ([EMAIL PROTECTED]) wrote:
: /dev/mixer does not work for the pc speaker anyway.
: 
: Under X you can set speaker volume with Xset I believe. Many window
: managers allow you to control this in their settings/control-centers/etc.
: 
: I dunno for console.
: 
: In article <[EMAIL PROTECTED]>,
:       [EMAIL PROTECTED] (Arun Sharma) writes:
: > Hi,
: > 
: > I have a machine which has no sound card and hence no /dev/mixer. But
: > I want to be able to control the volume on my PC speaker. Is there
: > any Linux utility which lets me do this ?
: > 

While there is a utility called "xset", and while it purports to be able to
set the bell volume, it also says that it can only do so on hardware that
allows the bell volume to be changed.

Standard PC hardware does not allow the bell volume to be changed.

If you want to vary the bell volume, you'll likely need to insert a variable
resistor inline with the speaker wires.
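
What software *can* change is the pitch and duration of the beep, via the
console KDMKTONE ioctl. A minimal sketch (assuming /dev/console is
accessible; the volume itself stays fixed, as noted above):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/kd.h>

    /* sound the speaker at freq Hz for ms milliseconds */
    int beep(int freq, int ms)
    {
        int fd = open("/dev/console", O_WRONLY);
        if (fd < 0)
            return -1;
        /* low 16 bits: PIT divisor (1193180/freq); high bits: duration */
        ioctl(fd, KDMKTONE, ((unsigned long) ms << 16) | (1193180 / freq));
        close(fd);
        return 0;
    }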

        Stu

------------------------------

From: John Hayward-Warburton <[EMAIL PROTECTED]>
Subject: Re: Unresolved symbol
Date: Mon, 26 Jul 1999 15:11:51 +0000

[EMAIL PROTECTED] wrote:

> Hi all.
>
> I installed the Kernel v2.3.9.

> I try to load the fat.o module, but the system doesn't resolve the update_vm_cache symbol.

vfat is currently under development and, therefore, broken. You'll have to
either go back a couple of kernel revs (2.3.7?) or use 2.2.x.
JHW



------------------------------

Date: Mon, 26 Jul 1999 18:18:29 +0200
From: Torbjorn Tallroth <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Script Q: determining Linux version (newbie)


On Mon, 26 Jul 1999, bryant h marc wrote:
> Thanks!  
> The reason I wanted to obtain the distributaion info is
> because I need to modify the serial port configuration, and this
> configuration seems to be held in a different place depending
> on the distribution.  Is this not the best way to go about it?  
> Is there a reliable way to find out where the serial configuration
> file is located?  

You could always check /etc/inittab for the names of the initialization
scripts for the current runlevel. Beware that you have to track calls
to other scripts from within those scripts.

Wouldn't it be easier to instruct the end user how to make the changes
manually? I, for instance, would not like a script to change what
I have configured manually myself.

> In article <[EMAIL PROTECTED]> you wrote:
> : 
> : "uname -r" will display the kernel version. 
> : I don't think you want to know which distribution they are
> : using, because many people change things in their installation
> : themselves. Rather test the existance and version of the individual
> : commands and libraries, you're interested in.
> : 
> : /Torbjorn Tallroth
> : 
> : 
> : 
> 


------------------------------

Date: Mon, 26 Jul 1999 18:41:17 +0200
From: Torbjorn Tallroth <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Script Q: determining Linux version (newbie)


On 26 Jul 1999, bryant h marc wrote:
> I wanted to obtain the distribution info because I need to know where
> the serial port configuration file is, and this seems to depend on 
> the distribution.  Is there a better way to do this?  Is there a reliable
> way to find the location of the serial configuration file?

Answer to the last question: No. You can't even be sure that there exists
one at all. The initialization may even have been done in a compiled
executable.
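
A more distribution-independent route is to query and change the live
settings through the serial driver itself, with the TIOCGSERIAL and
TIOCSSERIAL ioctls (this is what setserial uses underneath). A rough
sketch; /dev/ttyS0 is just an example device:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/serial.h>

    int main(void)
    {
        struct serial_struct ss;
        int fd = open("/dev/ttyS0", O_RDWR | O_NONBLOCK);

        if (fd < 0)
            return 1;
        if (ioctl(fd, TIOCGSERIAL, &ss) < 0)   /* read current setup */
            return 1;
        printf("port 0x%x irq %d\n", ss.port, ss.irq);
        /* adjust ss here, then ioctl(fd, TIOCSSERIAL, &ss) to apply */
        close(fd);
        return 0;
    }

Changes made this way don't depend on where the distribution keeps its
boot-time configuration, though they also don't persist across reboots.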






------------------------------

From: root <[EMAIL PROTECTED]>
Subject: kernel compile
Date: Mon, 26 Jul 1999 13:11:07 -0400

I want to upgrade from kernel version 2.2.5-15 to version 2.2.10.  Is
this all I have to do:

    make xconfig (or menuconfig, or config)
    make dep
    make clean
    make bzImage (or zImage)
    make modules
    make modules_install
    copy the new kernel to the boot directory
    run lilo
    reboot

Is this correct?
Are these all the steps needed to compile a kernel without errors and
to boot the new kernel without errors?

I tried it once like this, and I got errors when booting the new
kernel saying "incorrect version or system map".


Oai Luong


------------------------------

From: [EMAIL PROTECTED] (bill davidsen)
Crossposted-To: ucsc.comp.os.linux
Subject: Re: Multiple kernels with different modules
Date: 26 Jul 1999 16:42:15 GMT

In article <[EMAIL PROTECTED]>,
Tom M. Kroeger <[EMAIL PROTECTED]> wrote:
| 
| I'm modifying the 2.2.9 kernel to conduct some tests and
| want to have several versions of the same kernel on 
| one system.  My problem is that the modules are different 
| as well, and that I'd like to be able to at boot (maybe
| thought a lilo.conf variable) set where the modules 
| for the current kernel should be found
| eg..:

In the Makefile there is a value named EXTRAVERSION which can be changed for
each kernel permutation. If you run both uni and SMP kernels it's
important to have two sets of modules, since some modules use structs
which change between the two.
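
For example, with the head of the 2.2 Makefile set like this:

    VERSION = 2
    PATCHLEVEL = 2
    SUBLEVEL = 9
    EXTRAVERSION = -test1

the kernel identifies itself as 2.2.9-test1 and "make modules_install"
puts its modules under /lib/modules/2.2.9-test1, apart from the other
builds. (The "-test1" string is whatever you choose.)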

In the old days before that was available, I used to add a leading zero
to the subversion, like 2.2.009, for SMP. That field needed to be a
number; there were modules furiously checking versions as the code changed.

-- 
bill davidsen <[EMAIL PROTECTED]>  CTO, TMR Associates, Inc
  The Internet is not the fountain of youth, but some days it feels like
the fountain of immaturity.


------------------------------

Date: Mon, 26 Jul 1999 16:36:36 +0100
From: James Cownie <[EMAIL PROTECTED]>
Subject: elf_prstatus, core files, Alpha RH 6.0

On Red Hat 6.0 on the Alpha processor (kernel 2.2.3), there seems to be an
inconsistency between the file elfcore.h (which defines elf_prstatus) and
the kernel (which dumps an elf_prstatus into a note in the core file).

When one compiles a trivial program which prints sizeof (struct elf_prstatus),
one gets the result 352; however, the size of the NOTE in a core file (which is
set by the kernel to sizeof (elf_prstatus)) is 384.
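
The trivial program is just this (assuming the elf_prstatus definition is
picked up from <linux/elfcore.h> on that system):

    #include <stdio.h>
    #include <linux/elfcore.h>

    int main(void)
    {
        /* prints 352 here, while the kernel writes a 384-byte note */
        printf("%d\n", (int) sizeof (struct elf_prstatus));
        return 0;
    }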

The inconsistency appears to be caused by an extra 32 bytes of something
somewhere before the pr_reg structure, since one can easily write a program
to put known values in the registers and then core dump, allowing one to see
where the registers appear...

Anyone know anything about this or have any suggestions ?

-- Jim 

James Cownie    <[EMAIL PROTECTED]>
Etnus, Inc.     +44 117 9071438
http://www.etnus.com

------------------------------

From: [EMAIL PROTECTED] (bryant h marc)
Subject: Re: Script Q: determining Linux version (newbie)
Date: 26 Jul 1999 16:49:56 GMT

Thanks.  Even though that is not the answer I was hoping for, it
definitely answers my question.  I will probably just instruct the 
user on how to make the changes, instead of trying to make them
myself.  Thanks for the help

marc

Torbjorn Tallroth ([EMAIL PROTECTED]) wrote:
: 
: Answer to the last question: No. You can't even be sure that there exists
: one at all. The initialization may even have been done in a compiled
: executable. 
: 
: 
: 
: 
: 

------------------------------

From: [EMAIL PROTECTED] (Philip Brown)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Reply-To: [EMAIL PROTECTED]
Date: 26 Jul 1999 15:02:56 GMT

On Mon, 26 Jul 1999 03:16:27 GMT, [EMAIL PROTECTED] wrote:
>In comp.os.linux.advocacy Donovan Rebbechi <[EMAIL PROTECTED]> wrote:
>
>: What about the Merced ? There is definitely "push", but not momentum yet.
>: Given that it's intel who are "pushing", I'd expect to see momentum in 
>: due course.
>
>Trouble is, Intel hasn't started mass-producing 64-bit CPUs yet. Once they
>do, then there'll be lots of affordable 64-bit stuff.


Umm.... a low-end alpha was affordable 2 years ago.



-- 
[Trim the no-bots from my address to reply to me by email!]
[ Do NOT email-CC me on posts. Pick one or the other.]
 --------------------------------------------------
The word of the day is mispergitude


------------------------------

From: [EMAIL PROTECTED] (Paul Kimoto)
Crossposted-To: comp.os.linux.misc,comp.os.linux.networking
Subject: Re: High load average, low cpu usage when /home NFS mounted
Date: 26 Jul 1999 11:46:51 -0500
Reply-To: [EMAIL PROTECTED]

In article <[EMAIL PROTECTED]>, 
Peter Steiner wrote:
> In article <[EMAIL PROTECTED]>, Kelly Burkhart wrote:
>>>     [from the proc(5) man page:]
>>>     loadavg
>>>     The load average numbers give the number of jobs in
>>>         the run queue averaged over 1, 5 and 15 minutes.

>> Really?  I thought processes waiting on IO were not in the run queue;
>> only processes that were "ready to run".

> The manpage is wrong.

> The loadavg shows the number of "active" tasks. Active does not only
> mean "running", but also "doing critical I/O".

[code from kernel/sched.c snipped]
> All tasks are counted that are either TASK_RUNNING,
> TASK_UNINTERRUPTIBLE or TASK_SWAPPING.
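
For reference, the snipped loop reads roughly like this in the 2.2 sources
(an approximate reconstruction, not a verbatim quote):

    static unsigned long count_active_tasks(void)
    {
            struct task_struct *p;
            unsigned long nr = 0;

            read_lock(&tasklist_lock);
            for_each_task(p) {
                    if (p->state == TASK_RUNNING ||
                        p->state == TASK_UNINTERRUPTIBLE ||
                        p->state == TASK_SWAPPING)
                            nr += FIXED_1;
            }
            read_unlock(&tasklist_lock);
            return nr;
    }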

Okay, but try the following experiment on an NFS client:

#!/bin/sh
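# read some long NFS-mounted files over and over; the placeholder
# below stands for a list of file names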
while /bin/true; do
   cat > /dev/null LIST_OF_LONG_NFS_MOUNTED_FILES
done

In my list of long files I have stuff like the emacs-20.4 source tar 
file (i.e., several MB long).  Each "cat" is taking ~1 minute, and
"top" reports its %CPU at ~10%, but my load average is slightly above
1.

-- 
Paul Kimoto             <[EMAIL PROTECTED]>

------------------------------

From: Ken <[EMAIL PROTECTED]>
Crossposted-To: ucsc.comp.os.linux
Subject: Re: Multiple kernels with different modules
Date: Mon, 26 Jul 1999 11:22:48 -0700

I don't know if this is generally true of all distributions, but in the
kernel sources I got with RH5.2, one can create the file
/usr/src/linux/.name with a string to be appended to the version number.

bill davidsen wrote:
> 
> In article <[EMAIL PROTECTED]>,
> Tom M. Kroeger <[EMAIL PROTECTED]> wrote:
> |
> | I'm modifying the 2.2.9 kernel to conduct some tests and
> | want to have several versions of the same kernel on
> | one system.  My problem is that the modules are different
> | as well, and that I'd like to be able to at boot (maybe
> | thought a lilo.conf variable) set where the modules
> | for the current kernel should be found
> | eg..:
> 
> In the Makefile there is a value named EXTRAVERSION which can be changed for
> each kernel permutation. If you run both uni and SMP kernels it's
> important to have two sets of modules, since some modules use structs
> which change between the two.
> 
> In the old days before that was available, I used to add a leading zero
> to the subversion, like 2.2.009, for SMP. That field needed to be a
> number; there were modules furiously checking versions as the code changed.
> 
> --
> bill davidsen <[EMAIL PROTECTED]>  CTO, TMR Associates, Inc
>   The Internet is not the fountain of youth, but some days it feels like
> the fountain of immaturity.

-- 
Ken
mailto:[EMAIL PROTECTED]
http://www.sewingwitch.com/ken/
http://www.215Now.com/

------------------------------

From: Thomas Binder <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.networking,linux.redhat.misc
Subject: Problems with NFSROOT and network modules ...
Date: Wed, 21 Jul 1999 08:52:42 +0200


Dear Linux Gurus!

We have a system of several Linux clients and one server. The clients mount
their root and usr partitions from the server at boot.

The client kernel has no network drivers compiled in; all the necessary network
modules are attached as an initrd. To load the kernel we use either a boot ROM
or a floppy disk.

From RedHat 4.0 to RedHat 5.2 this worked out of the box.

As of RedHat 6.0 we're having some trouble. At first I thought it might
have something to do with DHCP (RedHat decided to remove BOOTP from the
distribution) and that we had misconfigured something. But as I got deeper
into the subject, I began to doubt that it's directly related to DHCP.

I discovered that the BOOTP/RARP requests are no longer in the file nfsroot.c
but were moved to ipconfig.c. On boot I now get a complaint from IP-Config that
no network cards were found. When the boot continues, initrd gets mounted and
the network modules are properly inserted. However, the kernel then complains
that no NFS servers are available. Obviously the modules were inserted too
late...

I started to play around a bit and managed to put a call to `ip_config' right
after the mount of initrd and just before the mount of the actual (NFS) root
partition. This now works fine for me, but I'm not sure about side effects...

Since we have several machines with different hardware it would be quite 
convenient to have *everything* compiled as a module.

        
Is there a `straightforward' solution to this?
Did I miss some important documentation?

Any response would be appreciated.
Tom
-- 
"Computers are like air conditioners - they stop working properly when you open
 Windows"
                            \\\|///
                          \\  ~ ~  //
                           (  @ @  )
/------------------------oOOo-(_)-oOOo-----------------------------------------\
| Thomas Binder                        |                                       |
| Institute for  Microelectronics      | phone: ++43/1/58801-36036             |
| Technical University Vienna          |                                       |
| Gusshausstrasse  27-29 / E360        | fax  : ++43/1/58801-36099             |
| A-1040  Vienna                       |                                       |
| A U S T R I A                        | email: [EMAIL PROTECTED]        |
\---------------------------------Oooo.----------------------------------------/
                        .oooO     (   )
                        (   )      ) /
                         \ (      (_/
                          \_)

------------------------------

From: [EMAIL PROTECTED] (Alexander Viro)
Subject: Re: Why ignore a theoretical possibility?
Date: 26 Jul 1999 14:10:13 -0400

In article <[EMAIL PROTECTED]>,
Benedetto Proietti  <[EMAIL PROTECTED]> wrote:
>Hi,
>my name is Benedetto and I am an Italian mathematics student.
>Some time ago my information science professors and I started
>thinking about the theoretical possibility of building a compiler like
>this:
>- a source file is compiled for the first time and some information is
>saved about it.
>- subsequent modifications of that source could be handled so that only a
>very small portion of the source is recompiled, and the generated code is
>properly inserted into the (previously) compiled file. By small portion
>I mean single functions, statements or less!

        First of all, minimal modifications of the source may seriously change
the parse tree. Moreover, with even less effort you can change the common
subexpressions and lifetimes, and *that* will tear down a lot of the work done
by any decent optimizer. So you'll have to redo it anyway. Yes, you can do it
on a per-function basis. If the function is not inlined, that is.
        The main question being: what for? Incremental compile? That has been
supported by every C compiler since the original ones (IIRC there were several
toy compilers on bitty-boxen that required you to keep everything in a single
file, but who cares?). Just don't make your modules excessively large and
that's it. And use make. It's C, not PASCAL.
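Which is the point: make already gives you module-granularity incremental
builds. A minimal illustration (file names invented):

    CC = gcc
    OBJS = main.o parse.o eval.o

    # recipe lines begin with a tab; touching eval.c rebuilds only
    # eval.o, then relinks
    prog: $(OBJS)
            $(CC) -o prog $(OBJS)

    $(OBJS): defs.h

Edit one source file and make recompiles just that module.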

>It is obviously a non-trivial task (in design and implementation);
>nevertheless it could be done. Subsequent recompilations are estimated to
>take about a tenth of a second or less.

Estimated how? You'll have to redo all optimizations.

>My first questions to you, who surely know compilers and
>development systems very well, are:
>Question 1- Could this kind of compiler be useful to you and your job?
        No.

>Question 2- Would you use this kind of "incremental" or "real time"
>compiler?
        To look at what it is doing - maybe; for real use - extremely unlikely.

>Question 3- What other functionality should a compiler like that have?
>
>Some time ago I was lucky enough to meet the "Research & Development"
>Director of a very famous compiler company (Borland International).
>I told him about the possibility of building (with "some" effort) this kind
>of compiler, and he answered, "Why should we build this 'incremental'
>compiler when computers get faster and more powerful every day?"
>I understand that companies like Microsoft, Borland, ..., and especially
>Intel, dislike very powerful software (like compilers) that does not need
>a fast machine to run on.
        Non-sequitur. The system you are going to write will eat more
resources.

>Question 4: Why should we buy a new processor when our compiler takes a
>tenth of a second to compile normal programs and half a second for the big
>ones? Also, couldn't we use the time we save in compilation for other
>things, like optimizing code and using more abstract and powerful
>languages?
        WHAT optimizations? Minimal changes at the lexical level grow into huge
ones after the first passes of the optimizer. You'll have to redo almost
everything on each compile anyway. Abstract and powerful languages? *That*
assumes a different compiler from the very beginning, doesn't it?

>Question 5: Can you imagine what kind of optimizations the compiler could
>do in those tens of seconds saved? I really cannot!

        See above. BTW, if compile time dominates your work... Well,
all that I can say is that you are compiling way too much. It will make
a difference only if you work that way: build, touch a line or
two in a lot of modules, build again. It means that you have much
worse problems than compile time - an exceptionally badly structured program,
for one.

>Are we so addicted to an obsolescence mentality that we fail to recognize
>nice new technologies? That we avoid new technologies and new ideas at all?

        Puhlease. Incremental compile has not been a new technology since at
least the 70s. Incremental compile on a per-function basis is an odd hack, but
it can be done with trivial modifications to a compiler (the simplest variant:
tear the module into separate files (one for each function) and change the
behaviour of 'static' + modify the linker). Its usefulness is more than dubious.
That's what modules are for. Doing that on a per-operator basis means that
you have to redo the optimizing phase on each compile anyway.

-- 
"You're one of those condescending Unix computer users!"
"Here's a nickel, kid.  Get yourself a better computer" - Dilbert.

------------------------------

From: [EMAIL PROTECTED] (bryant h marc)
Subject: Re: Script Q: determining Linux version (newbie)
Date: 26 Jul 1999 16:11:06 GMT

I wanted to obtain the distribution info because I need to know where
the serial port configuration file is, and this seems to depend on 
the distribution.  Is there a better way to do this?  Is there a reliable
way to find the location of the serial configuration file?

thanks,
marc


Torbjorn Tallroth ([EMAIL PROTECTED]) wrote:
: 
: "uname -r" will display the kernel version. 
: I don't think you want to know which distribution they are
: using, because many people change things in their installation
: themselves. Rather test the existance and version of the individual
: commands and libraries, you're interested in.
: 
: /Torbjorn Tallroth
: 
: 
: 

------------------------------

From: [EMAIL PROTECTED] (Kaz Kylheku)
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.hardware
Subject: Re: UP and SMP (difference in atomic operation and spinlock function?)
Date: Mon, 26 Jul 1999 18:08:09 GMT

On 26 Jul 1999 02:42:53 GMT, robert_c <[EMAIL PROTECTED]> wrote:
> In my opinion, to avoid race conditions, we can use four methods in Linux:
> 
> 1. bit operation: (just for binary mutex) we can use test_set, test_and_set,
> ...
> 
> 2. atomic integer operation: like atomic_inc, atomic_dec_and_test...
> 
> 3. spin_lock: like spin_lock, spin_unlock, spin_lock_irqsave...
> 
> 4. Kernel semaphore
> 
> But in what applications (or situations) are the above four methods
> used?
> 
> My thinking is as follows.
> 
> 1. [bit operation]: used in UP for binary locks
> 2. [atomic integer operation]: used in UP for multiple integer locks.
> 3. [spin_lock]: bit and atomic operations belong to a subset of spin_lock, so
> spin_lock can do anything [bit and atomic operations] can do. Besides,
> spin_locks are mainly used in SMP. So I can use spin_lock in UP and SMP.
> (but, if used in UP, the performance will be a little bad)

No, if you compile for a single-processor system without -D__SMP__, the
spin_lock_irqsave and spin_unlock_irqrestore primitives turn into
save_flags(); cli() and restore_flags().  In other words, they become the usual
mechanisms for critical regions in a UP kernel.
And spin_lock() (the non-IRQ version) simply becomes

        do { } while (0)

in other words, a no-operation. That's because there is nothing to do: there
is only one processor, so there are no other processors to keep out
of the critical region, and the semantics of spin_lock() allow interrupts,
so cli() is not needed. Have a look at the asm/spinlock.h header to
see how it handles conditional compilation on __SMP__.
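
Roughly, the UP side of that header boils down to definitions like these
(a sketch, not the literal header text):

    #ifndef __SMP__
    #define spin_lock(lock)                 do { } while (0)
    #define spin_unlock(lock)               do { } while (0)
    #define spin_lock_irqsave(lock, flags) \
            do { save_flags(flags); cli(); } while (0)
    #define spin_unlock_irqrestore(lock, flags) \
            restore_flags(flags)
    #endif

On UP the compiler sees no lock at all -- just the interrupt fiddling where
the _irqsave variants ask for it.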

So I wouldn't worry about losing performance from spin_lock() or
spin_lock_irqsave() on a single processor system. :)

By the way, I always use spin_lock_irqsave(). I never use the variants that do
not disable interrupts. This way, a given processor's time in the critical
region is minimized, and the atomicity of the code is guaranteed against all
possibilities: other processors as well as interrupts, bottom-half callbacks
etc.

> 4. [kernel semaphore]: ?

A semaphore can only be waited on by a process. Semaphores are useful for
synchronization between tasks, or between tasks and interrupt handlers (in
which case the interrupt handler only signals, and only the task waits).  A
semaphore is useful if a process must be delayed for some potentially lengthy
period of time until something happens, during which time it's desirable to
run other processes. It's not suitable for brief critical regions.
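
A rough 2.2-era sketch of that task/interrupt pattern (the names here are
illustrative, not from any real driver):

    /* the semaphore starts locked; a task sleeps until the ISR signals */
    static struct semaphore data_ready = MUTEX_LOCKED;

    void my_isr(int irq, void *dev_id, struct pt_regs *regs)
    {
            up(&data_ready);            /* signal only; never blocks */
    }

    int wait_for_data(void)
    {
            /* may sleep for a long time; other processes run meanwhile */
            if (down_interruptible(&data_ready))
                    return -EINTR;      /* woken by a signal instead */
            return 0;
    }

Note the asymmetry: only the process side ever calls down().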

------------------------------

From: Robert Krawitz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 26 Jul 1999 12:49:18 -0400

[EMAIL PROTECTED] (bill davidsen) writes:

> In article <[EMAIL PROTECTED]>,
> Robert Krawitz  <[EMAIL PROTECTED]> wrote:
> 
> | That's exactly what they've done, except without breaking binary
> | compatibility; mmap64() is a new system call.
> 
> Which breaks source compatibility, which is probably worse.  Better I
> think to have a hacked gcc which uses 64 bit int and offset_t, and links
> to another library. That way source code should not need to be changed.

Why?  It doesn't break backward compatibility, and it only affects
applications that care about large files; the majority of apps
don't and won't care.  The apps that are written properly (with off_t and
such) simply need to #define _FILE_OFFSET_BITS 64 (or the equivalent
in their makefiles) and then the standard open() etc. calls work
(they're converted to open64() and such by means of #define's -- yuck
-- or pragmas -- better).  Apps that merely do open()/read()/write()
and never lseek or stat shouldn't have any problems simply being
recompiled.
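
Concretely, a minimal sketch ("bigfile" is just a stand-in name):

    #define _FILE_OFFSET_BITS 64             /* before any #include */
    #include <stdio.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("bigfile", O_RDONLY);  /* mapped to open64() */
        off_t end;                           /* off_t is now 64 bits */

        if (fd < 0)
            return 1;
        end = lseek(fd, 0, SEEK_END);        /* mapped to lseek64() */
        printf("%lld bytes\n", (long long) end);
        close(fd);
        return 0;
    }

Drop the define and the same source compiles with 32-bit offsets.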

> The performance implications are not obvious; I've used long long
> without any notable slowdown, but this was NOT a CPU bound application.

File offset handling isn't going to be CPU intensive (not in the offset
arithmetic, at any rate).
-- 
Robert Krawitz <[EMAIL PROTECTED]>      http://www.tiac.net/users/rlk/

Tall Clubs International  --  http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail [EMAIL PROTECTED]

"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
