Linux-Development-Sys Digest #939, Volume #7 Wed, 7 Jun 00 22:13:09 EDT
Contents:
Re: Yamaha XG724 soundcard driver? (Patrice Belleville)
dump core or not dump core ("Ruppert R. Koch")
Re: Process Control (Kaz Kylheku)
Re: [ext2fs] what are fragments ? ("Norman Black")
Re: finding filename from file descriptor (13mb50000-JonesB(10053580212g77))
Sound Card (Bo - Sun)
Re: finding filename from file descriptor (Ulrich Weigand)
Size of mem_map array? (Timur Tabi)
Re: UNIX upgrading (Johan Kullstam)
Re: Size of mem_map array? (Kaz Kylheku)
kernel rebuild - which one? ("Kent A. Signorini")
Re: finding filename from file descriptor (Alexander Viro)
Re: dump core or not dump core (Juergen Heinzl)
Re: Size of mem_map array? (Timur Tabi)
Re: books on linux programming (Sebastien Dessimoz)
Re: kernel rebuild - which one? ("David ..")
exit()/pthreads bug on Linux PPC (Peter Burka)
Re: RedHat 6.2 Autofs Broken (David Highley)
Re: exit()/pthreads bug on Linux PPC (Kaz Kylheku)
Re: Linux TCP/IP stack internal (Stefaan A Eeckels)
----------------------------------------------------------------------------
From: Patrice Belleville <[EMAIL PROTECTED]>
Subject: Re: Yamaha XG724 soundcard driver?
Date: 07 Jun 2000 13:10:51 -0700
"Erich Mercado" <[EMAIL PROTECTED]> writes:
> Does anyone know where I can get the drivers for this particular
> soundcard?
The Advanced Linux Sound Architecture (ALSA) drivers now support it, as of
version 0.5.8a. I installed the driver yesterday on my Sony VAIO PCFG-430
and everything seems to work fine (a previous attempt from the cvs sources
had failed). Their URL is:
http://www.alsa-project.org
Patrice
--
**------------------------------------------------------------------------
** Patrice Belleville ([EMAIL PROTECTED]) (604) 822-9870
** Instructor and Departmental advisor, Department of Computer Science
**------------------------------------------------------------------------
------------------------------
From: "Ruppert R. Koch" <[EMAIL PROTECTED]>
Subject: dump core or not dump core
Date: Wed, 07 Jun 2000 10:26:01 -0700
Howdy,
Some of my programs dump core, others don't. I tried
it on RedHat 6.1 and Mandrake 7.0 with kernels 2.2.14
and 2.2.15. I tried tcsh and bash with coredumpsize
set to 1GB. It seems to be application specific.
Are there any other criteria for suppressing a core dump apart from
the type of signal and the size limit?
The application that doesn't dump core contains signal
handlers for some signals, but not for SEGV or ABRT.
Ruppert
-----------------------------------------------------------------
Ruppert Koch [EMAIL PROTECTED]
Dept of Elec. & Comp. Engr. http://alpha.ece.ucsb.edu/~ruppert
University of California Phone: (805) 893-7788
Santa Barbara, CA 93106 Fax: (805) 893-3262
-----------------------------------------------------------------
------------------------------
From: [EMAIL PROTECTED] (Kaz Kylheku)
Subject: Re: Process Control
Reply-To: [EMAIL PROTECTED]
Date: Wed, 07 Jun 2000 20:12:51 GMT
On Tue, 6 Jun 2000 13:08:00 -0700, M Dipperstein <[EMAIL PROTECTED]> wrote:
>I have a program which creates several processes using the clone() call.
>However, it is not necessary that every process run all of the time. I
>would like to allow some processes to be idle until one of the other
>processes recognizes an event which requires one of the idle processes to
>do something.
>
>Does Linux provide a means for one process to suspend and resume another
>process?
Yes; you can use the kill() function to send the SIGSTOP and SIGCONT signals to
a process, if you know its process ID. The SIGSTOP signal cannot be caught by
the process; the process will just stop dead in its tracks no matter what it
was doing.
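Roughly, it boils down to something like this (a minimal sketch; the pid, and
whatever work you do while the target is stopped, are up to your application):

#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Suspend the target process, do some work, then let it continue. */
void pause_and_resume(pid_t target)
{
    kill(target, SIGSTOP);   /* target stops dead in its tracks */
    sleep(5);                /* ... do whatever needs doing ... */
    kill(target, SIGCONT);   /* target picks up where it left off */
}

SIGCONT resumes the process whether or not it has a handler for it; if there
is a handler, it runs after the process has been continued.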
--
#exclude <windows.h>
------------------------------
From: "Norman Black" <[EMAIL PROTECTED]>
Subject: Re: [ext2fs] what are fragments ?
Date: Wed, 7 Jun 2000 13:24:22 -0700
Reply-To: "Norman Black" <[EMAIL PROTECTED]>
> that because of either a bad allocation policy (MS-DOS FAT), or
> because the disk is nearly full (ext2), files will be scattered in
> small parts, thus the access will be less efficient.
I have 8GB free on a 9.5GB partition and I have fragmented files. Files can
get fragmented/non-contiguous no matter how much free space there is. The
likelihood of multiple fragments in a file is low with lots of contiguous
free space, and high with little contiguous free space.
--
Norman Black
Stony Brook Software
the reply, fubar => ix.netcom
"Marc SCHAEFER" <[EMAIL PROTECTED]> wrote in message
news:8hkuom$n5d$[EMAIL PROTECTED]...
> Juergen Heinzl <[EMAIL PROTECTED]> wrote:
> : Correct me - not yet used by ext2fs and comes from BSD. Instead
> : of using one block per file the FS can use fragments to decrease
> : the waste of space.
>
> This is not supported on ext2. On ext2, the typical message of
> fsck about fragmentation (which was changed recently to `non
> contiguous') is really the MS-DOS meaning of fragmentation: the fact
> that because of either a bad allocation policy (MS-DOS FAT), or
> because the disk is nearly full (ext2), files will be scattered in
> small parts, thus the access will be less efficient.
>
------------------------------
From: [EMAIL PROTECTED] (13mb50000-JonesB(10053580)212g77)
Subject: Re: finding filename from file descriptor
Date: 7 Jun 2000 20:23:17 GMT
Reply-To: [EMAIL PROTECTED]
From article <8hm6jv$du8$[EMAIL PROTECTED]>, by [EMAIL PROTECTED]:
> My intent is very simple.
> I have an API that takes an fd (file descriptor) as input.
>
> Once I have received an fd I want to open the fd (I mean the same file)
> in another process.
> So I should be either able to share the fd or be able to open the file
> in the second process.
> the two processes have no parent child relationship.
>
If you have a file descriptor, such as what is returned from open(2), then
the file is already opened, and all you have to do is use the descriptor.
If you want to associate a new stream with the FD, see fdopen(3).
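For instance, something along these lines (sketch only; the mode has to match
how the descriptor was opened):

#include <stdio.h>

/* Wrap an already-open descriptor in a stdio stream. */
FILE *stream_from_fd(int fd)
{
    FILE *fp = fdopen(fd, "r");
    if (fp == NULL)
        perror("fdopen");
    return fp;
}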
It still isn't clear exactly what you are trying to accomplish.
If the problem is getting the FD to the other process, then you need to
worry about IPC (inter-process communication), which in this case could
be handled with either a shared memory region or by passing a message from
one process to the other. Since there is no parent/child relationship,
you have to use the standard IPC mechanisms (which also include
semaphores, though those are primarily useful for synchronization).
If you have two independent processes using the file descriptor, and if
both processes are writing, the results could be really interesting, and
quite possibly non-deterministic.
--
Brian Jones aka [EMAIL PROTECTED]
------------------------------
From: Bo - Sun <[EMAIL PROTECTED]>
Subject: Sound Card
Date: Wed, 7 Jun 2000 15:08:35 -0500
hi:
I am using Turbo Linux on my machine, and now I want to play .mp3 files.
I first tried xmms, but I get error messages saying that the sound card driver
is not compiled into the kernel, along with some other errors.
I installed Linux from CD-ROM, so I don't know how to configure the sound card.
Any ideas for me? Thanks in advance,
Sun Bo
------------------------------
From: [EMAIL PROTECTED] (Ulrich Weigand)
Subject: Re: finding filename from file descriptor
Date: 7 Jun 2000 23:28:04 +0200
[EMAIL PROTECTED] writes:
>My intent is very simple.
> I have an API that takes an fd (file descriptor) as input.
>Once I have received an fd I want to open the fd (I mean the same file)
>in another process.
>So I should be either able to share the fd or be able to open the file
>in the second process.
> the two processes have no parent child relationship.
How do the two processes communicate? If you use an AF_UNIX domain
socket for communication, you can pass an open file descriptor via
the socket as 'ancillary data' of type SCM_RIGHTS.
For details, check out the sendmsg/recvmsg man pages.
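The sending side looks roughly like this (an untested sketch; it assumes
'sock' is the already-connected AF_UNIX socket and 'fd' is the descriptor to
hand over):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Pass one open file descriptor over a connected AF_UNIX socket. */
int send_fd(int sock, int fd)
{
    struct msghdr msg;
    struct iovec iov;
    struct cmsghdr *cmsg;
    char cbuf[CMSG_SPACE(sizeof(int))];
    char dummy = '*';                    /* at least one byte of real data */

    memset(&msg, 0, sizeof(msg));
    memset(cbuf, 0, sizeof(cbuf));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;        /* ancillary data carries descriptors */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0);
}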
--
Dr. Ulrich Weigand
[EMAIL PROTECTED]
------------------------------
From: Timur Tabi <[EMAIL PROTECTED]>
Subject: Size of mem_map array?
Date: Wed, 07 Jun 2000 21:21:29 GMT
I want to write a driver that traverses the mem_map array so that I can
better understand the VM manager. However, I can't seem to figure out
how big mem_map is. In fact, based on what I've seen, I'm beginning to
suspect that mem_map is not one, contiguous array. I've scoured the
2.3 kernel source code, but I can't find anything. Please help!
--
Timur Tabi
Interactive Silicon - http://www.interactivesi.com
Remove "nospam_" from my email address when replying
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
Subject: Re: UNIX upgrading
From: Johan Kullstam <[EMAIL PROTECTED]>
Date: Wed, 07 Jun 2000 21:41:00 GMT
<[EMAIL PROTECTED]> writes:
> Hey all,
> I'm new to the Linux world, so this might be a very naive question.
> But here we go....
> After suffering through the wasteland of DOS/Windows for a long time,
> how often will it be necessary to upgrade my Redhat Linux?
as often as you like.
> In other
> words, if I still had Windows 3.1 on my system, I would be out of luck as
> far as new software is concerned. Yet if I keep RH 6.1 on my system for a
> while, will it always be able to run newer software with its regular
> kernel?
most of it. libc versions are a bigger hurdle than the kernel.
> As far as older kernels, are they still viable for running new
> software? Thanks for your help!
if you have decent ftp access, you can easily download updates to the
software. redhat (and all distributions) are forever issuing new
versions. it's easy to upgrade. if you have a good connection, you
needn't buy a distribution again. if not, get a redhat cd from
cheapbytes for about $8 (*including* shipping and handling).
--
J o h a n K u l l s t a m
[[EMAIL PROTECTED]]
Don't Fear the Penguin!
------------------------------
From: [EMAIL PROTECTED] (Kaz Kylheku)
Subject: Re: Size of mem_map array?
Reply-To: [EMAIL PROTECTED]
Date: Wed, 07 Jun 2000 21:47:44 GMT
On Wed, 07 Jun 2000 21:21:29 GMT, Timur Tabi <[EMAIL PROTECTED]>
wrote:
>I want to write a driver that traverses the mem_map array so that I can
>better understand the VM manager. However, I can't seem to figure out
>how big mem_map is. In fact, based on what I've seen, I'm beginning to
>suspect that mem_map is not one, contiguous array. I've scoured the
>2.3 kernel source code, but I can't find anything. Please help!
Have you looked at the elf_core_dump() function in fs/binfmt_elf.c?
This thing iterates over the memory of a process in order to dump the
contents to a core file. In fact it makes several passes over it,
the first of which determines what to dump and how much:
    /* Count what's needed to dump, up to the limit of coredump size */
    segs = 0;
    size = 0;
    for(vma = current->mm->mmap; vma != NULL; vma = vma->vm_next) {
        if (maydump(vma))
        {
            unsigned long sz = vma->vm_end-vma->vm_start;

            if (size+sz >= limit)
                break;
            else
                size += sz;
        }
        segs++;
    }

#ifdef DEBUG
    printk("elf_core_dump: %d segs taking %d bytes\n", segs, size);
#endif
--
#exclude <windows.h>
------------------------------
From: "Kent A. Signorini" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux,comp.os.linux.questions,comp.os.linux.redhat
Subject: kernel rebuild - which one?
Date: Wed, 07 Jun 2000 22:05:43 GMT
I need to rebuild the Kernel for my RH 6.2 distro because I need to add
UDMA/66 (Promise PCI card) support. I've had little luck patching any of
the 2.2.X kernels and want to go to 2.3.X.
Which is the most stable? I think I tried a 2.3.19 and it locked up
continuously.
Please e-mail me your responses:
Kent A. Signorini
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (Alexander Viro)
Subject: Re: finding filename from file descriptor
Date: 7 Jun 2000 18:09:04 -0400
In article <8hm6jv$du8$[EMAIL PROTECTED]>, <[EMAIL PROTECTED]> wrote:
>My intent is very simple.
> I have an API that takes an fd (file descriptor) as input.
>
>Once I have received an fd I want to open the fd (I mean the same file)
>in another process.
>So I should be either able to share the fd or be able to open the file
>in the second process.
> the two processes have no parent child relationship.
Let one of them create an AF_UNIX socket and bind it to some name.
Another should connect() to that name and then they can pass opened
files to each other - sendmsg() with SCM_RIGHTS cookie will do nicely.
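The receiving end looks roughly like this (a sketch, not tested; the function
name is just for illustration):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Receive one file descriptor sent as SCM_RIGHTS ancillary data.
 * Returns the new descriptor, or -1 on error. */
int recv_fd(int sock)
{
    struct msghdr msg;
    struct iovec iov;
    struct cmsghdr *cmsg;
    char cbuf[CMSG_SPACE(sizeof(int))];
    char dummy;
    int fd = -1;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    if (recvmsg(sock, &msg, 0) < 0)
        return -1;

    for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
        if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS) {
            memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
            break;
        }
    }
    return fd;   /* refers to the same open file as the sender's descriptor */
}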
--
"You're one of those condescending Unix computer users!"
"Here's a nickel, kid. Get yourself a better computer" - Dilbert.
------------------------------
From: [EMAIL PROTECTED] (Juergen Heinzl)
Subject: Re: dump core or not dump core
Date: Wed, 07 Jun 2000 22:30:17 GMT
In article <[EMAIL PROTECTED]>, Ruppert R. Koch wrote:
>Howdy,
>
>Some of my programs dump core, others don't. I tried
>it on RedHat 6.1 and Mandrake 7.0 with kernels 2.2.14
>and 2.2.15. I tried tcsh and bash with coredumpsize
>set to 1GB. It seems to be application specific.
>
>Is there any other criteria when suppressing apart from
>the type of signal and the size limitation?
>The application that doesn't dump core contains signal
>handlers for some signals, but not for SEGV or ABRT.
[...]
Some options:
- An application can inhibit the generation of core dumps
using setrlimit(2) (see the sketch below). Some do so for security
reasons, as otherwise passwords or other sensitive data might end up
in a core file. FTP servers often do that, for instance.
- An application can change its current working directory. If
the new working directory does not allow the effective user id /
group id the application is running with to write files,
then no core dump will be written either. Daemons often
change their working directory to / before changing their euid
to something other than 0.
- An application cannot dump core if there is already a file
named core whose permissions are such that it cannot be
overwritten.
There are more, but these are the ones that came to mind right now 8-)
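For the setrlimit(2) case the call is as simple as this (rough sketch; a real
daemon would check the return value):

#include <sys/resource.h>
#include <sys/time.h>

/* Forbid core dumps for this process, e.g. because its address space
 * may contain passwords or other sensitive data. */
void disable_core_dumps(void)
{
    struct rlimit rl;

    rl.rlim_cur = 0;
    rl.rlim_max = 0;
    setrlimit(RLIMIT_CORE, &rl);
}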
Cheers,
Juergen
--
\ Real name : Jürgen Heinzl \ no flames /
\ EMail Private : [EMAIL PROTECTED] \ send money instead /
------------------------------
From: Timur Tabi <[EMAIL PROTECTED]>
Subject: Re: Size of mem_map array?
Date: Wed, 07 Jun 2000 22:26:37 GMT
In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] wrote:
> Have you looked at the elf_core_dump() function in fs/binfmt_elf.c?
Yes, and I don't see how this is related to the mem_map array. I don't
want to iterate over the memory of a process, I want to iterate over
all of physical memory. From what I understand, each mem_map_t in the
mem_map array represents one page of physical memory.
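What I have in mind is roughly this (sketch only; I'm assuming mem_map is a
single array of max_mapnr entries, which is exactly the part I'm not sure
about, so the bound may well be wrong):

#include <linux/mm.h>
#include <linux/kernel.h>

/* Walk every mem_map_t (struct page) describing a page of physical RAM. */
static void walk_mem_map(void)
{
    unsigned long i, reserved = 0;

    for (i = 0; i < max_mapnr; i++) {
        mem_map_t *page = mem_map + i;

        if (PageReserved(page))
            reserved++;
    }
    printk("mem_map: %lu pages, %lu reserved\n", max_mapnr, reserved);
}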
--
Timur Tabi
Interactive Silicon - http://www.interactivesi.com
Remove "nospam_" from my email address when replying
Sent via Deja.com http://www.deja.com/
Before you buy.
------------------------------
Date: Wed, 07 Jun 2000 16:39:06 -0700
From: Sebastien Dessimoz <[EMAIL PROTECTED]>
Subject: Re: books on linux programming
- Linux Kernel Internals, M. Beck, H. Bohme, M. Dziadzka, U. Kunitz,
R. Magnus, D. Verworner, Addison-Wesley, 1998
- Linux Device Drivers, Alessandro Rubini, O'Reilly
Thomas Berkefeld wrote:
> Hi,
>
> I am looking for good books on linux - programming (especially I/O and
> system-related stuff, system calls etc.)
>
> Any suggestions?
------------------------------
From: "David .." <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux,comp.os.linux.questions,comp.os.linux.redhat
Subject: Re: kernel rebuild - which one?
Date: Wed, 07 Jun 2000 19:15:33 -0500
"Kent A. Signorini" wrote:
>
> I need to rebuild the Kernel for my RH 6.2 distro because I need to add
> UDMA/66 (Promise PCI card) support. I've had little luck patching any of
> the 2.2.X kernels and want to go to 2.3.X.
>
> Which is the most stable? I think I tried a 2.3.19 and it locked up
> continuously.
>
> Please e-mail me your responses:
>
> Kent A. Signorini
> [EMAIL PROTECTED]
2.3.X is a development kernel.
2.2.15 is the newest stable kernel at present.
--
Registered with the Linux Counter. http://counter.li.org
ID # 123538
------------------------------
From: Peter Burka <[EMAIL PROTECTED]>
Subject: exit()/pthreads bug on Linux PPC
Date: Thu, 08 Jun 2000 00:18:06 GMT
I'm porting a program to Linux PPC (Yellow Dog 2.1) and
I've encountered a strange bug regarding exit(). The
same program works fine under Linux 386 and Linux ARM,
and on other PPC platforms (Neutrino and AIX).
The program does the following:
- create several pthreads
- one of the pthreads blocks on a mutex
- one of the pthreads waits on a condvar
- the main thread calls exit()
I put a printf right before the call to exit(), and it
is being reached. However the program doesn't always
exit. It will usually (about 90% of the time) hang at
this point.
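Boiled down, the structure is roughly this (a stripped-down sketch, not the
actual code):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cond_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

static void *block_on_mutex(void *arg)
{
    pthread_mutex_lock(&lock);              /* main holds this, so we block */
    return NULL;
}

static void *wait_on_cond(void *arg)
{
    pthread_mutex_lock(&cond_lock);
    pthread_cond_wait(&cond, &cond_lock);   /* never signalled */
    pthread_mutex_unlock(&cond_lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_mutex_lock(&lock);              /* make the first thread block */
    pthread_create(&t1, NULL, block_on_mutex, NULL);
    pthread_create(&t2, NULL, wait_on_cond, NULL);
    sleep(1);                               /* let both threads reach their blocking points */

    printf("calling exit()\n");
    exit(0);                                /* hangs here most of the time on Linux PPC */
}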
Running it under gdb shows that the main thread has
terminated, but that the two blocked threads are still
hanging around:
(gdb) info threads
5 Thread 28734 0xfdb9224 in __syscall_rt_sigsuspend () at soinit.c:59
* 3 Thread 28732 0xfdb9224 in __syscall_rt_sigsuspend () at soinit.c:59
(gdb) where
#0 0xfdb9224 in __syscall_rt_sigsuspend () at soinit.c:59
#1 0xfdb7f18 in __sigsuspend (set=0x7f7ffad4)
at ../sysdeps/unix/sysv/linux/sigsuspend.c:48
#2 0xfeaea68 in __pthread_wait_for_restart_signal (self=0x7f7ffe40)
at pthread.c:785
#3 0xfeaae28 in pthread_cond_wait (cond=0x1001a2e4, mutex=0xfd5a0cc)
at restart.h:26
. . .
(gdb) thread 5
[Switching to thread 5 (Thread 28734)]
#0 0xfdb9224 in __syscall_rt_sigsuspend () at soinit.c:59
59 in soinit.c
(gdb) where
#0 0xfdb9224 in __syscall_rt_sigsuspend () at soinit.c:59
#1 0xfdb7f18 in __sigsuspend (set=0x7f3ffab0)
at ../sysdeps/unix/sysv/linux/sigsuspend.c:48
#2 0xfeaea68 in __pthread_wait_for_restart_signal (self=0x7f3ffe40)
at pthread.c:785
#3 0xfeaae28 in pthread_cond_wait (cond=0x1004ba84, mutex=0xfd5a0cc)
at restart.h:26
. . .
It looks like the system failed to shut down the pthread library
properly.
The system is a Mac G3 with a Yellow Dog 2.1 distro; glibc 2.1.3 (??
I thought the latest version was 2.1.2). I also tried this on a
slightly older RedHat(?) distro using glibc 2.1.1 and encountered
exactly the same behaviour.
Has anyone seen this before? Any ideas what's happening?
I'll be reading the ng for followups, but I'd appreciate a cc to
my email, too.
thx/peter
------------------------------
From: David Highley <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.portable
Subject: Re: RedHat 6.2 Autofs Broken
Date: Wed, 07 Jun 2000 17:49:11 -0700
Bob Tennent wrote:
> On Wed, 07 Jun 2000 06:43:34 -0700, David Highley wrote:
> >The auto.master file has this entry:
> >/home auto.home
> >
> Try
>
> /home /etc/auto.home
>
Not if it is being defined by NIS.
>
> And then an /etc/auto.home should be created to specify
> the mount points, fs types and devices. And remember to
> create the specified mount points in /home.
>
> Bob T.
--
Regards,
David Highley
Highley Recommended, Inc.
2927 SW 339th Street
Federal Way, WA 98023-7732
Phone: (206) 669-0081
FAX: (253) 838-8509
Email: [EMAIL PROTECTED]
WEB: http://www.highley-recommended.com
------------------------------
From: [EMAIL PROTECTED] (Kaz Kylheku)
Subject: Re: exit()/pthreads bug on Linux PPC
Reply-To: [EMAIL PROTECTED]
Date: Thu, 08 Jun 2000 01:41:02 GMT
On Thu, 08 Jun 2000 00:18:06 GMT, Peter Burka <[EMAIL PROTECTED]> wrote:
>I'm porting a program to Linux PPC (Yellow Dog 2.1) and
>I've encountered a strange bug regarding exit(). The
>same program works fine under Linux 386 and Linux ARM,
>and on other PPC platforms (Neutrino and AIX).
>
>The program does the following:
>
>- create several pthreads
>- one of the pthreads blocks on a mutex
>- one of the pthreads waits on a condvar
>- the main thread calls exit()
Sounds like this program is small enough to include in a posting,
hint hint.
>I put a printf right before the call to exit(), and it
>is being reached. However the program doesn't always
>exit. It will usually (about 90% of the time) hang at
>this point.
>
>Running it under gdb shows that the main thread has
>terminated, but that the two blocked threads are still
>hanging around:
This should not happen; the main thread ought to block until the other two
threads terminate (that is LinuxThreads behavior, not something required by POSIX).
When your main thread calls exit(), the exit() function will eventually call a
registered handler within LinuxThreads. This handler spits a special process
exit request into the thread manager's pipe and waits for a reply. The manager
kills all the other threads and then unblocks the requesting thread which
then goes on to complete the process termination. (Disclaimer: I haven't
looked at that code in a few weeks, but it looks like that's about to change.)
So it is very odd that the exiting thread should disappear, but the other
threads are still running. It could be a kernel problem.
>The system is a Mac G3 with a Yellow Dog 2.1 distro; glibc 2.1.3 (??
>I thought the latest version was 2.1.2). I also tried this on a
No, 2.1.3 came out in March or so. It has many LinuxThreads
related fixes.
>slightly older RedHat(?) distro using glibc 2.1.1 and encountered
>exactly the same behaviour.
Hmmm. That could be relevant information, good that you tried.
As far as glibc-2.1.3 goes, we know that this bug happens on one PPC
installation of a particular distro and that you can't repro it on other PPC
installations (also Yellow Dog 2.1?). So nothing can be ruled out, including a
corrupt installation or installation media.
Nevertheless, you might want to submit a report with glibcbug.
--
#exclude <windows.h>
------------------------------
From: [EMAIL PROTECTED] (Stefaan A Eeckels)
Crossposted-To: comp.os.linux.networking
Subject: Re: Linux TCP/IP stack internal
Date: Thu, 8 Jun 2000 02:08:27 +0200
In article <[EMAIL PROTECTED]>,
Sebastien Dessimoz <[EMAIL PROTECTED]> writes:
> I was just measuring the "internal" throughput of the Linux TCP/IP stack
> (kernel 2.2.12) with NetPipe 2.3.
> That is, I run a client and a server on the same host (localhost)
> and measure the throughput between them. I did that for different
> block sizes (the data size for a write() system call), socket buffer
> sizes and MTU. An example graph is given in the attachment.
>
> A very interesting result is that the maximum throughput is at about 48
> kB whatever the MTU or the socket buffer size is.
Since when is 450Mb/s (450 mega-bits per second) 48 kilo-bytes?
Your graph _does_ show a maximum throughput of 450Mb/s with packets
of 32kB and 64kB, and by my arithmetic, that's about 45 mega-bytes
per second.
> I think that this magic number has something to do with the Linux
> internals, as it doesn't depend on the MTU or socket buffer.
> Maybe it is related to a cache size or the MMU? Does somebody know why?
> Please can you help me?
Write a little program that reads from memory, writes to memory,
and gives an idea of the memory to memory transfer speed. Compare
that with your throughput using sockets, then wonder if a lot of
process switching might not explain the apparent performance
of the TCP/IP stack.
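Something as crude as this will do (buffer size and repetition count are
arbitrary; take the resulting number with a grain of salt):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

/* Crude memory-to-memory copy benchmark: copy a buffer repeatedly
 * and report the achieved bandwidth. */
int main(void)
{
    const size_t size = 48 * 1024;          /* same order as the NetPipe block size */
    const int reps = 100000;
    char *src = malloc(size);
    char *dst = malloc(size);
    struct timeval t0, t1;
    double secs;
    int i;

    if (src == NULL || dst == NULL)
        return 1;
    memset(src, 1, size);

    gettimeofday(&t0, NULL);
    for (i = 0; i < reps; i++)
        memcpy(dst, src, size);
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.1f MB/s\n", (double)size * reps / secs / 1e6);
    return 0;
}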
--
Stefaan
--
--PGP key available from PGP key servers (http://www.pgp.net/pgpnet/)--
Ninety-Ninety Rule of Project Schedules:
The first ninety percent of the task takes ninety percent of
the time, and the last ten percent takes the other ninety percent.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************