Linux-Development-Sys Digest #995, Volume #6 Mon, 26 Jul 99 23:13:49 EDT
Contents:
Re: IO Completion Ports (Andi Kleen)
Re: when will Linux support > 2GB file size??? (Graffiti)
Re: Strange compilation problems under Redhat 6.0 (David B Anderson)
Re: Fields of the task_struct struct in sched.h (*puntero_loco)
Re[2]: HP CD-RW Supported by RH 6.0? (shihang)
Re: Help! Cannot set thread's priority under Linux, even not as root! (peter hatch)
Re: kernel compile ("Phil")
Re: Linux on PS/2 MCA ESDI???? (Chris Mahmood)
hang at "finding module dependencies" (Yung-Hsiang Lu)
Re: when will Linux support > 2GB file size??? (bill davidsen)
Re: Why not C++ (Johan Kullstam)
Re: High load average, low cpu usage when /home NFS mounted (Peter Steiner)
Formatting from C code (Neal Richter)
Re: when will Linux support > 2GB file size??? (Robert Krawitz)
Re: hang at "finding module dependencies" (David T. Blake)
Re: Intercepting network calls (peter hatch)
Intercepting network calls (Anand Paka)
Re: HELP: how to measure hard disk access performance on Linux? (Errin Watusikac)
----------------------------------------------------------------------------
From: Andi Kleen <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: IO Completion Ports
Date: 26 Jul 1999 15:46:15 +0200
David Schwartz <[EMAIL PROTECTED]> writes:
> I don't think you would want this, in general. The reason people use
> I/O Completion ports on NT is because it is the fastest way to do
> network I/O. If a Linux implementation wasn't the fastest way to do
> network I/O, it wouldn't be particularly interesting. Under current
> Linux implementations, poll is the fastest way to do network I/O, so you
> should use it.
Actually, queued SIGIO is faster in 2.2 (at least when a huge number of
fds is involved).
-Andi
--
This is like TV. I don't like TV.
------------------------------
From: Graffiti <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 26 Jul 1999 12:30:29 -0700
In article <[EMAIL PROTECTED]>,
Philip Brown <[EMAIL PROTECTED]> wrote:
[snip]
>But doesn't the filesystem have to support passing it through?!!
All the filesystem does is read from the disk and send it to the
application. The *KERNEL VM SYSTEM* mmaps() the data, NOT the
filesystem driver. As far as the filesystem is concerned, think
of it as a really, really, really high-level interface to
read()/write()/lseek() on a raw disk. (I know, this isn't
accurate, but I'm trying to get a point across. :-)
>>Another thing mmap() is used for is shared libraries. You just get
>>a pointer to the location in memory where a shared library's routine
>>is instead of using lseek()/read() to grub around in the file.
>>All of a sudden, you can't do this anymore.
>
>The people complaining about lack of 64-bit filesystems are NOT going to be
>using this on /usr :-) They want a very large data filesystem, and they could
>care less about libraries on that filesystem. Which is why I suggested hacking
>a NEW filesystem, that doesn't support memory mapping, but DOES support
>having large files.
Common database operation:
fd = open("/opt/bloated_commercial_database/data/big_fat_table.dat", O_RDWR);
foo = mmap(start, 100000000000, PROT_READ|PROT_WRITE, MAP_SHARED, fd, offset);
Surprise. If you can't use lseek()/read() on 64-bit files, this will break.
And again, I tell you, THE FILESYSTEM DOES NOT SUPPORT MEMORY MAPPING. (Okay,
in some circumstances, the filesystem can *prevent* mmap() from working. But
the fs does not do any mmap()ing for the application. Period.)
ext2fs supports large files. Writing a new fs won't help. You'll still be
kept back by the limit of the kernel vm system.
>>Ext2 can handle 64-bit files. ext2fs does not have to be changed.
>
>
>But others posted that it only works if you rewrite the whole 32-bit kernel
>vm.
Yup.
>Given a choice between fiddling with a filesystem, and screwing with
>memory management for my entire system... I'd choose fiddling with a
>filesystem :-)
So would I. IF IT WORKS. It won't.
-- DN
------------------------------
From: [EMAIL PROTECTED] (David B Anderson)
Subject: Re: Strange compilation problems under Redhat 6.0
Date: 26 Jul 1999 17:36:20 GMT
In article <7ngqqp$5ic$[EMAIL PROTECTED]>,
Mumit Khan <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>Zewei Chen <[EMAIL PROTECTED]> wrote:
>>Here's a simple program that does not compile under the
>>stock installation of Redhat 6.0.
>>
>>#include <stdio.h>
>>
>>FILE *f = stdout;
>>
>>The error message I get is:
>>
>>foo.c:3: initializer element is not constant
>
>That's because your code is ill-formed. stdin/out/err are not guaranteed
>to be constant expressions.
The explanation is exactly right; here is
the text from the glibc 2.1 FAQ (in the glibc source):
3.9. I get compiler messages "Initializer element not constant" with
stdin/stdout/stderr. Why?
{RM,AJ} Constructs like:
static FILE *InPtr = stdin;
lead to this message. This is correct behaviour with glibc since stdin is
not a constant expression. Please note that a strict reading of ISO C does
not allow the above constructs.
One of the advantages of this is that you can assign to stdin, stdout, and
stderr just like any other global variable (e.g. `stdout = my_stream;'),
which can be very useful with custom streams that you can write with libio
(but beware this is not necessarily portable). The reason to implement it
this way was versioning problems with the size of the FILE structure.
To fix those programs you've got to initialize the variable at run time.
This can be done, e.g. in main, like:
static FILE *InPtr;

int main(void)
{
    InPtr = stdin;
}
or by constructors (beware this is gcc specific):
static FILE *InPtr;
static void inPtr_construct (void) __attribute__((constructor));
static void inPtr_construct (void) { InPtr = stdin; }
========================================
There are other interesting things in the glibc FAQ; I'd suggest you
install the source and read it (if
you have the disk space available...).
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (*puntero_loco)
Subject: Re: Fields of the task_struct struct in sched.h
Date: Mon, 26 Jul 1999 21:27:47 +0200
On Fri, 23 Jul 1999 16:04:49 +0200, Massimiliano Gugnali <[EMAIL PROTECTED]>
wrote:
|Hello.
|I'd like to know then meaning of the two fields
| "signal" and "blocked"
|of the struct
| task_struct
|in the file
| sched.h
|
|Thank you very much,
|bye
|
|
unsigned long signal: signals pending delivery to the task (each bit of the
long integer is one signal). The process can inspect these with the system
call sigpending().
unsigned long blocked: signals currently blocked (masked) by the process,
e.g. via sigprocmask(). Delivery is deferred until they are unblocked, not
ignored.
--
------------------------------
From: shihang <[EMAIL PROTECTED]>
Subject: Re[2]: HP CD-RW Supported by RH 6.0?
Date: 26 Jul 1999 10:28:41 +0800
Reply-To: [EMAIL PROTECTED]
unsubscribe
--
**** Bentium Mailing List Server -- host your own mailing list ****
For details, see http://www.bentium.net/
------------------------------
From: peter hatch <[EMAIL PROTECTED]>
Subject: Re: Help! Cannot set thread's priority under Linux, even not as root!
Date: Mon, 26 Jul 1999 15:02:06 -0500
Udo Giacomozzi wrote:
>
> Hi.
>
> I've made a simple LinuxThreads test program (using FreePascal). I can
> successfully create new threads but I cannot set the thread's priority. I
> tried several methods (setting attribute, changing when thread already
> exists, thread changes itself,..), even set explicit parameters. But it
> still remains SCHED_OTHER at priority 0. I know I must run the program as
> 'root' but it doesn't help at all. Well, that means, it helps a bit. As
> normal user I get some errors when trying to change priority (as expected)
> but as 'root' all functions return OK but the priority isn't affected at all
> (when checking priority status).
> You may say, it's a bug in the program, but someone else tried the same
> program on another machine and it worked fine!?
>
> Are there some settings that deny all priority manipulations, even in root
> mode?
>
> I'm using SuSE 5.3, Kernel 2.0.35.
> On request I can send you the program.
>
In order to set the priorities, you have to be executing in real-time
mode. That's about the extent of my knowledge of the problem though....
> Thank you in advance for any help!
> Udo Giacomozzi
>
> --
> * http://come.to/jampy
> * [EMAIL PROTECTED]
> * UIN: 17745247 (@pager.mirabilis.com)
------------------------------
From: "Phil" <[EMAIL PROTECTED]>
Subject: Re: kernel compile
Date: Mon, 26 Jul 1999 12:37:14 -0700
You also need to copy the System.map from /usr/src/linux to /boot
(Slackware 4.0). When I'm bringing up new kernels, I keep the working
kernel in lilo.conf and add an entry for the new kernel until I'm satisfied
with the new kernel. Makes it easier to get the system back when I have a
problem. :)
Regards,
Phil
root <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I want to upgrade from kernel version 2.2.5-15 to version 2.2.10. Is
> this all I have to do:
>
> run either make xconfig or menuconfig or config
> then make dep
> make clean
> make bzImage or zImage
> make modules
> make modules_install
> copy the new kernel to the boot
> directory
> run lilo
> reboot ??
>
>
> Is this correct, anyone??
> Are these the steps needed to compile a kernel without
> errors and reboot into the new kernel without errors??
>
> I tried it once like this and I think I got errors when booting the new
> kernel saying "incorrect version or system map"
>
>
> Oai Luong
>
------------------------------
From: Chris Mahmood <[EMAIL PROTECTED]>
Subject: Re: Linux on PS/2 MCA ESDI????
Date: 20 Jul 1999 18:04:30 -0700
There's support in 2.2 for it--see {Linux src}/Documentation/mca.txt
-ckm
------------------------------
From: [EMAIL PROTECTED] (Yung-Hsiang Lu)
Subject: hang at "finding module dependencies"
Date: 26 Jul 1999 21:09:48 GMT
Hi, Everyone,
Does anyone know what can make Linux hang at "finding module
dependencies"? I believe I did not change anything related to
modules. I am using Linux 2.2.5 (redhat 6.0).
Thank you very much.
--
Sincerely,
Yung-Hsiang Lu
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (bill davidsen)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 26 Jul 1999 21:56:29 GMT
In article <[EMAIL PROTECTED]>,
Robert Krawitz <[EMAIL PROTECTED]> wrote:
| [EMAIL PROTECTED] (bill davidsen) writes:
|
| > In article <[EMAIL PROTECTED]>,
| > Robert Krawitz <[EMAIL PROTECTED]> wrote:
| >
| > | That's exactly what they've done, except without breaking binary
| > | compatibility; mmap64() is a new system call.
| >
| > Which breaks source compatibility, which is probably worse. Better I
| > think to have a hacked gcc which uses 64 bit int and offset_t, and links
| > to another library. That way source code should not need to be changed.
|
| Why? It doesn't break back compatibility, and it only affects
| applications that care about large files. The majority of apps that
| don't won't care. The apps that are written properly (with off_t and
| such) simply need to #define _FILE_OFFSET_BITS 64 (or the equivalent
| in their makefiles) and then the standard open() etc. calls work
| (they're converted to open64() and such by means of #define's -- yuck
| -- or pragmas -- better). Apps that merely do open()/read()/write()
| and never lseek or stat shouldn't have any problems simply being
| recompiled.
If it is done in the Makefile it would be fine for properly written
programs in terms of file I/O. Obviously mmap() is not going to use off_t
subscripts, but since only 32 bits are promised by X3J11, I can't see
that as a real portability issue.
| > The performance implications are not obvious, I've used long long
| > without any notable slowdown, but this was NOT a CPU-bound application.
|
| File offset stuff isn't going to be CPU intensive (in the file offset
| stuff, at any rate).
Yeah, it's the other uses of int which might be a problem for
performance on some applications. But the compiler using 64 bits for int
and off_t would certainly allow more programs to work which aren't quite
as well written.
There are no clean solutions, so we're discussing which is least
repulsive;-)
--
bill davidsen <[EMAIL PROTECTED]> CTO, TMR Associates, Inc
The Internet is not the fountain of youth, but some days it feels like
the fountain of immaturity.
------------------------------
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.networking
Subject: Re: Why not C++
From: Johan Kullstam <[EMAIL PROTECTED]>
Date: 26 Jul 1999 18:52:41 -0400
David Schwartz <[EMAIL PROTECTED]> writes:
> > > It is not the language's place to decide when you need to break the
> > >rules. That's your job.
> >
> > Except when, like millions of C and C++ programmers, you don't know the better
> > portion of the rules, so that you don't know that you are breaking them.
> > Constructs along the schema of
>
> I do not believe it is possible to create a useful tool that cannot be
> misused.
true enough. however, misuse or dangerous usage should be slightly
more awkward than safe usage. this is not so with C/C++.
it often takes more work to look at return values, errno and catch
exceptions than it does to simply ignore them. in C the default
handler is no handler at all. C++ helps somewhat but i still find it
a royal pain in the ass to check and handle everything. i seem
especially prone to extra calls to dtors for my classes. many people
(myself included) assume they have disk space and do not always check
fwrite or putc.
--
J o h a n K u l l s t a m
[[EMAIL PROTECTED]]
Don't Fear the Penguin!
------------------------------
From: [EMAIL PROTECTED] (Peter Steiner)
Crossposted-To: comp.os.linux.misc,comp.os.linux.networking
Subject: Re: High load average, low cpu usage when /home NFS mounted
Date: Tue, 27 Jul 1999 00:29:57 +0200
In article <[EMAIL PROTECTED]>, Paul Kimoto wrote:
>> All tasks are counted that are either TASK_RUNNING,
>> TASK_UNINTERRUPTIBLE or TASK_SWAPPING.
>
>Okay, but try the following experiment on an NFS client:
>
>#!/bin/sh
>while /bin/true; do
> cat > /dev/null LIST_OF_LONG_NFS_MOUNTED_FILES
>done
Your tasks are doing I/O. They are most likely in the state
TASK_UNINTERRUPTIBLE and thus increasing the load. That's expected
behaviour. Load does not mean "CPU load" but more generally "system
load".
Peter
--
_ x ___
/ \_/_\_ /,--' [EMAIL PROTECTED] (Peter Steiner)
\/>'~~~~//
\_____/ signature V0.2 alpha
------------------------------
From: Neal Richter <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Formatting from C code
Date: Mon, 26 Jul 1999 17:20:54 -0600
Hello,
Could someone please direct me to some sample code and/or documentation
on formatting drives from C/C++ code... Partitioning would be good too.
I'm hoping that the basic command-line utilities are available in a
library somewhere..
Thanks!
Neal
------------------------------
From: Robert Krawitz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 25 Jul 1999 09:23:55 -0400
Graffiti <[EMAIL PROTECTED]> writes:
> While people can argue that adding a 64-bit API that has to be
> called directly will alleviate this problem, unless you're willing
> to re-write *ALL* programs, libraries, etc. to explicitly use this
> new API (and incur a rather nasty performance hit for the "common"
> case of files that will fit in a 32-bit representation of size),
> you can use the new API only in extremely limited circumstances. You
> have to religiously find ways to keep the 32-bit and 64-bit areas
> from interacting, and find a way to "synchronize" them, whatever
> that may entail (i.e. keep your buffer cache coherent, etc.).
For better or for worse, the approach of other 32-bit Unices has been
to add 64-bit API's (everything that uses or returns file offsets or
sizes, plus open -- open64, read64, write64, mmap64, [lf]stat64, lseek64,
munmap64, ftruncate64; I don't remember if there are others)
for applications that want to be able to deal with large files.
There's also a #define that can be set to alias all of the standard
file operations to their 64-bit equivalents, for code that is
otherwise 64-bit clean.
What this means is that programs that are not capable of handling
large files get a new errno value whenever they try to operate on a
file >2GB, while programs that are 64-bit aware can now handle any
files.
As for this supposed rather nasty performance hit: demonstrate it.
Most programs simply don't do that much arithmetic on file offsets.
--
Robert Krawitz <[EMAIL PROTECTED]> http://www.tiac.net/users/rlk/
Tall Clubs International -- http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail [EMAIL PROTECTED]
"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton
------------------------------
From: [EMAIL PROTECTED] (David T. Blake)
Subject: Re: hang at "finding module dependencies"
Date: 27 Jul 1999 00:20:08 GMT
Reply-To: [EMAIL PROTECTED]
Yung-Hsiang Lu <[EMAIL PROTECTED]> wrote:
> Hi, Everyone,
>
> Does anyone know what can make Linux hang at "finding module
> dependencies"? I believe I did not change anything related to
> modules. I am using Linux 2.2.5 (redhat 6.0).
Calling depmod -a while the root filesystem is mounted read
only tends to do that. See /etc/rc.d/rc.sysinit
--
Dave Blake
[EMAIL PROTECTED]
------------------------------
From: peter hatch <[EMAIL PROTECTED]>
Subject: Re: Intercepting network calls
Date: Mon, 26 Jul 1999 20:39:49 -0500
I would suggest looking at tcpdump(8) and socklist(8).
You've got to be root to use tcpdump. It dumps packet data to stdout.
Socklist displays all open sockets and the processes that own them. You
need to be root to see all of the process names.
Anand Paka wrote:
>
> Hi,
>
> I'm new to Linux and am looking around for a mechanism to intercept all
> the sockets calls made. My intention is to monitor the socket traffic,
> their types, usage etc. for all application using the network resources.
> (This should be transparent to the application). I want to be able to say
> which application is using which sockets at a given time, together with
> information like how much data has passed in/out on that socket, the
> state of the socket (listening, connected..) etc.
>
> If any of you have done this or can point me to the resources or have
> any suggestions, please let me know.
>
> Thanks,
> -Anand
------------------------------
From: Anand Paka <[EMAIL PROTECTED]>
Subject: Intercepting network calls
Date: Mon, 26 Jul 1999 20:50:34 -0400
Hi,
I'm new to Linux and am looking around for a mechanism to intercept all
the sockets calls made. My intention is to monitor the socket traffic,
their types, usage etc. for all application using the network resources.
(This should be transparent to the application). I want to be able to say
which application is using which sockets at a given time, together with
information like how much data has passed in/out on that socket, the
state of the socket (listening, connected..) etc.
If any of you have done this or can point me to the resources or have
any suggestions, please let me know.
Thanks,
-Anand
------------------------------
Subject: Re: HELP: how to measure hard disk access performance on Linux?
From: Errin Watusikac <[EMAIL PROTECTED]>
Date: 26 Jul 1999 19:11:02 -0700
[EMAIL PROTECTED] (John McKown) writes:
> hdparm -t -T /dev/hda
WARNING: I found I got a very different result doing this
from an X term and from a VT with the X server not running.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************