Linux-Development-System Digest #736, Volume #8 Sun, 20 May 01 16:13:10 EDT
Contents:
Re: Sharing the virtual address space? ("Karl Heyes")
Re: mmap() and msync() extensions? (At150bogomips)
Re: Sharing the virtual address space? (At150bogomips)
Does /sbin/dump in RedHat 7.1 ever hang for anyone? (Anonymous)
Re: SIGSEGV is not blocking (Victor Wagner)
Re: xml api (Victor Wagner)
Re: SIGSEGV is not blocking (David Schwartz)
unresolved symbol: open_namei (Sumithra Ramanathan)
fork, pthread_create and the FP stack ("Dale Johnson")
Re: PID and fork() ("morpheus")
Re: Time to read the Disk (Joern Engel)
deadlocks on slow machine (486) (Hans)
debugging init function in shared library? (Wong Ka Chun)
Re: USB Development
Re: How to get a number of processors (John Beardmore)
Re: Best PC config for linux/crosscompiler development (John Beardmore)
----------------------------------------------------------------------------
From: "Karl Heyes" <[EMAIL PROTECTED]>
Subject: Re: Sharing the virtual address space?
Date: Sat, 19 May 2001 21:11:58 +0100
In article <[EMAIL PROTECTED]>, "At150bogomips"
<[EMAIL PROTECTED]> wrote:
> Josef Moellers <[EMAIL PROTECTED]> wrote:
>>This is called "shared memory".
>>Also, applications executing the same code share the same virtual address
>>space of the code.
> I thought shared memory 'merely' shared the mapping for each code page.
> I.e. different programs use address space 0x00..00 and following. I am
> referring to having a single page table. This would in theory allow all
> code to be a common library--accessible to any program.
> Paul A. Clayton
> just a fool, not in love
>
Correct: shared memory is listed separately in each process's page tables. What
you are describing sounds like a thread, although the stack, which is also in
the page table, is not shared between threads.
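As a minimal illustration of that distinction (a sketch in C with POSIX
threads, not from the original posts): the two flows of control share one set
of page tables, so a store in one thread is visible at the same virtual
address in the other, while each thread still gets its own stack.

#include <pthread.h>
#include <stdio.h>

static int shared = 0;               /* one copy, one translation */

static void *worker(void *arg)
{
    shared = 42;                     /* writes the same pages main() sees */
    return NULL;
}

int main(void)
{
    pthread_t t;                     /* compile with gcc ... -lpthread */
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    printf("shared = %d\n", shared); /* prints 42; after fork(), by
                                        contrast, each process would keep
                                        its own copy-on-write pages */
    return 0;
}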
karl.
------------------------------
From: [EMAIL PROTECTED] (At150bogomips)
Date: 19 May 2001 22:00:43 GMT
Subject: Re: mmap() and msync() extensions?
Dragon Cvetkovic <[EMAIL PROTECTED]> wrote:
>You mean something like madvise(3C)?
[snip]
Yes !!! . . . and no.
That Solaris feature is in some ways more than what I was talking about (being
able to communicate more information to the kernel) and in some ways less--it
does not show how an application can tell the kernel not to immediately map
all of a file into physical memory.
(I suppose this is another reason why Solaris is considered worth big bucks.)
E.g., suppose one had a database file: it might be nice to map the whole file
into virtual memory and let the OS worry about how much real memory should be
used. I suspect that mmap() tries to place the entire requested region into
memory at one time. Rather, it would be nice if one could simply indicate that
one wishes to use virtual addresses to handle file access rather than a file
handle (with kernel buffers).
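For what it's worth, Linux already behaves much like this: mmap() demand-pages
the mapping rather than pulling the whole file in at once, and madvise()
supplies access hints. A minimal sketch (the file name is hypothetical):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.db", O_RDONLY);   /* hypothetical database file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the entire file; pages are faulted in only when touched. */
    char *base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Hint: access will be random, so skip aggressive read-ahead. */
    madvise(base, st.st_size, MADV_RANDOM);

    /* ... use base[offset] in place of lseek()/read() ... */

    munmap(base, st.st_size);
    close(fd);
    return 0;
}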
Paul A. Clayton
just a fool, not in love
------------------------------
From: [EMAIL PROTECTED] (At150bogomips)
Date: 19 May 2001 22:22:42 GMT
Subject: Re: Sharing the virtual address space?
Karl Heyes <[EMAIL PROTECTED]> wrote:
>Correct, shared memory would be listed in different page tables. It sounds
>like what you are describing is a thread, although the stack which is also in
>the page table is not shared.
Well, not exactly sharing page tables (the permissions could be different). I
am referring to sharing the address translation. If the virtual address space
were shared, translation resources could be shared and virtual caches would
not have to be flushed at process switches.
This would also argue for using _permission_ segments to indicate what
permissions a process has in particular memory areas, which would reduce the
problem of TLB misses. A typical process might have only a common/heap segment
and a stack segment (far fewer entries than a TLB requires--and the entries
would be simpler: read/write permission bits and a segment size rather than an
address translation).
(This would not have made sense on earlier machines because the cache was
off-die and relatively small. More recent machines have two levels of on-die
cache, and most memory accesses would not really need to be translated--if one
could be assured that virtual addresses were not aliased.)
WRT instructions, this would simplify permissions--one could have a single
code segment with execute permission for all contents.
Just a wacky idea.
Paul A. Clayton
just a fool, not in love
------------------------------
Date: Sat, 19 May 2001 18:47:36 -0400
Subject: Does /sbin/dump in RedHat 7.1 ever hang for anyone?
From: Anonymous <[EMAIL PROTECTED]>
Sometimes /sbin/dump hangs when I dump a filesystem, especially when the dump
target is a file located on an NFS server on another machine.
dump -T "Wed Dec 31 19:00:00 1969" -B 1500000 -M -f
/auto/backup-c700-1/dump/time-range/from-1969-12-31/19-00-00--0500/to-2001-05-19/14-48-26--0400/celeron600-0.xxxxxxxx.com/windows.dump
/windows
Has anyone run into a similar problem with dump in RedHat 7.1 and found a
solution?
Thanks.
------------------------------
From: [EMAIL PROTECTED] (Victor Wagner)
Crossposted-To: comp.os.linux.development.apps,comp.programming.threads
Subject: Re: SIGSEGV is not blocking
Date: 18 May 2001 08:40:00 +0400
Linus Torvalds <[EMAIL PROTECTED]> wrote:
:>What bothers me is that SIGSEGV was blocked, and yet still
:>delivered, and I want to know why.
: Because the kernel had two choices: deliver it despite the programmer
: telling it not to, or killing the process outright because the process
: is no longer doing anything useful.
: Now, you might say that killing is better. I might agree. We can vote on
: it, I don't personally particularly care one way or the other. But you
: only get those two choices: kill the dang thing outright, or just ignore
: the blocking. We just don't have any other good choices.
From the kernel point of view - definitely.
From the application point of view there is a possibility of dealing with
SIGSEGV: a non-local exit. I.e. the application defines an exception handler
somewhere that knows what to do if a particular routine fails with SIGSEGV,
and provides a signal handler that transfers control there.
In C this is done with the setjmp/longjmp functions (or their
sigsetjmp/siglongjmp variants); in higher-level languages (C++, Java) it is
done with exceptions.
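A minimal sketch of that pattern in C (recovering after a genuine wild-pointer
fault is formally undefined behaviour, so treat this as a demonstration rather
than a robust technique):

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf recovery_point;

static void segv_handler(int sig)
{
    /* Jump back to the recovery point; the saved signal mask is
       restored so SIGSEGV can be caught again next time. */
    siglongjmp(recovery_point, 1);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = segv_handler;
    sigaction(SIGSEGV, &sa, NULL);

    if (sigsetjmp(recovery_point, 1) == 0) {
        volatile int *p = NULL;
        *p = 42;                       /* faults; handler jumps back */
        puts("not reached");
    } else {
        puts("recovered from SIGSEGV via non-local exit");
    }
    return 0;
}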
But a better way to cope with SIGSEGV is to ensure that it never happens, i.e.
use sufficiently high-level languages where one doesn't need to deal with
pointers and allocation issues - Perl, Tcl, Python, Scheme, ML, Haskell,
Erlang.
--
ECRC has no lynx-compatible pages; something like that simply HAS to go bust ;-)=
-- Getty on #LinuxGER
------------------------------
From: [EMAIL PROTECTED] (Victor Wagner)
Crossposted-To: comp.os.linux.development.apps
Subject: Re: xml api
Date: 18 May 2001 08:58:10 +0400
Karim A. <[EMAIL PROTECTED]> wrote:
: Hi all,
: I'm looking for a XML api that exist both on Linux/Unix and Windows.
: Does anyone know ?
It is much harder to find XML libraries for Unix that have not been ported to
Windows. The only kind of XML library that is not portable is the Windows-only
kind.
So, define what you mean by an XML API, and look for any library that
satisfies the requirement. Typically, XML handling is just manipulation of
text strings, which doesn't require anything OS-specific.
Start with expat.sourceforge.net
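As a quick sketch of how little OS-specific code is involved, a minimal Expat
parser in C (link with -lexpat; the same source builds on Unix and Windows):

#include <stdio.h>
#include <string.h>
#include <expat.h>

static void start(void *data, const XML_Char *el, const XML_Char **attr)
{
    printf("start element: %s\n", el);
}

static void end(void *data, const XML_Char *el)
{
    printf("end element: %s\n", el);
}

int main(void)
{
    const char doc[] = "<greeting><hello who=\"world\"/></greeting>";
    XML_Parser p = XML_ParserCreate(NULL);
    XML_SetElementHandler(p, start, end);
    if (!XML_Parse(p, doc, (int)strlen(doc), 1))   /* 1 = final chunk */
        fprintf(stderr, "parse error: %s\n",
                XML_ErrorString(XML_GetErrorCode(p)));
    XML_ParserFree(p);
    return 0;
}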
--
If you lose your temper at a newspaper columnist, he'll get rich,
or famous or both.
------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.programming.threads
Subject: Re: SIGSEGV is not blocking
Date: Sat, 19 May 2001 17:27:06 -0700
Victor Wagner wrote:
> But better way to cope with SIGSEGV is to ensure that it will never
> happen. I.e. use high enough level languages, where one doesn't need
> to cope with pointers and allocation issues - perl, tcl, python, scheme,
> ML, Haskell, erlang.
Sometimes, if a NULL pointer is very rare in code that's performance-critical,
it can be more efficient to trap the SIGSEGV than to check the pointer for
NULL before dereferencing it. But this is something one would do only if
profiling showed it was needed.
DS
------------------------------
From: Sumithra Ramanathan <[EMAIL PROTECTED]>
Subject: unresolved symbol: open_namei
Date: Sun, 20 May 2001 00:23:37 -0500
Reply-To: [EMAIL PROTECTED]
Hello
I recently upgraded to Redhat 7.1 and recompiled an ethernet device driver on
it. There were no compile-time errors, but when I insmod the module, the error
"unresolved symbol: open_namei" is reported.
I had the same driver working on Redhat 7.0 with gcc downgraded to 2.95.
My questions are:
1. Do I have to downgrade to egcs 1.1.2? If so, which packages need to be
downgraded?
2. What does this error mean, and why is open_namei not exported?
Thanks for any suggestions
Sumithra
------------------------------
From: "Dale Johnson" <[EMAIL PROTECTED]>
Subject: fork, pthread_create and the FP stack
Date: Sun, 20 May 2001 06:49:58 GMT
What happens to the values on the Intel FP stack when a fork occurs? Does the
new process inherit the same values? Is this different from the behaviour on a
pthread_create?
Also, when a page fault happens, are the FP values pushed onto a process
context area?
Dale.
------------------------------
From: "morpheus" <[EMAIL PROTECTED]>
Subject: Re: PID and fork()
Date: Sun, 20 May 2001 07:48:32 GMT
Ah-ah! I've understood that! Thanks everyone!
------------------------------
From: [EMAIL PROTECTED] (Joern Engel)
Subject: Re: Time to read the Disk
Date: 20 May 2001 13:56:16 GMT
Reply-To: [EMAIL PROTECTED]
Hi!
> Reordering can be done by the kernel or the
> hardware, but only if the disk accesses are
> asynchronous. When a program writes data it
> doesn't have to know whether the write is
> asynchronous or synchronous. But in this case
> it is read accesses, which can only be done
> asynchronously if the application is written
> to handle that.
>
> Doing asynchronous reads is an optimization
> that is not done by the compiler; it has to
> be done by the programmer. It wouldn't be
> possible for the compiler to make this kind
> of optimization reliably.
I am not sure I understand you correctly. You seem to imply that internal
reordering of the read requests would result in reordered delivery of the
results.
As with CPU pipelines, this is not necessary. Assuming this is done by the hdd
logic, the 3rd request in the queue can be handled in advance if its blocks
are closer to the r/w head, stored in the disk cache, and delivered as soon as
the 1st and 2nd have been delivered. This is standard in CPU design, but I am
not sure about disk drives.
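On the application side, a minimal sketch of programmer-driven asynchronous
reads using POSIX AIO (link with -lrt; the file name is arbitrary). Once both
requests are queued, the kernel and drive are free to service them in
whichever order is cheapest:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hosts", O_RDONLY);    /* any readable file */
    if (fd < 0) { perror("open"); return 1; }

    char buf1[512], buf2[512];
    struct aiocb req1, req2;
    memset(&req1, 0, sizeof req1);
    memset(&req2, 0, sizeof req2);
    req1.aio_fildes = fd; req1.aio_buf = buf1;
    req1.aio_nbytes = sizeof buf1; req1.aio_offset = 0;
    req2.aio_fildes = fd; req2.aio_buf = buf2;
    req2.aio_nbytes = sizeof buf2; req2.aio_offset = 512;

    aio_read(&req1);                 /* queue both reads at once; */
    aio_read(&req2);                 /* completion order is up to the OS */

    while (aio_error(&req1) == EINPROGRESS ||
           aio_error(&req2) == EINPROGRESS)
        usleep(1000);                /* poll until both complete */

    printf("read %zd and %zd bytes\n",
           aio_return(&req1), aio_return(&req2));
    close(fd);
    return 0;
}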
Joern
------------------------------
From: [EMAIL PROTECTED] (Hans)
Subject: deadlocks on slow machine (486)
Date: 20 May 2001 14:26:07 GMT
I'm not sure it is the kernel, but it first appeared when I was using 2.4.x
kernels. I thought it was gone with the 2.4.5-pre3 kernel, but it just seems
to happen less frequently (once since I installed it on Thursday).
I'm often working in a shell over SSH on an amd486 machine at 133 MHz.
Sometimes the shell freezes completely and starts working again after a minute
or so. Meanwhile I can access the same machine in another xterm without any
problem. The machine also does not seem to be very busy at that time.
I have a 100 Mbps hub connected to a 3Com 905b in a P-II 233 (the desktop) and
to the AMD486 machine through a Compex NE-2000 NIC. I have tried a 905b in the
486 machine, but it crashed (or maybe locked up for a very long time) when I
copied a large amount of data. Besides, the HD is too slow (1500 kB/s), so I
couldn't access the machine much faster over 100 Mbps anyway.
Does anyone have (heard of) similar problems?
Maybe I should install a faster PC. I had a Pentium 120 running for a while
without any problems.
Hans Bausewein
------------------------------
From: Wong Ka Chun <[EMAIL PROTECTED]>
Subject: debugging init function in shared library?
Date: 20 May 2001 14:32:49 GMT
Hello, how can I use gdb to set a breakpoint in the init function of a shared
library loaded by my program?
TIA.
- Wong
------------------------------
From: <[EMAIL PROTECTED]>
Subject: Re: USB Development
Date: Sun, 20 May 2001 08:14:04 -0700
Well Simon, I'll tell you what I did, and maybe someone else can suggest a
less awkward approach. What I did was get into the submit_urb call, analyze
it, and insert printk's in interesting places until I could get an
understanding of what was going on. I am presuming you are using UHCI rather
than OHCI, which is less common, as my experience is with the UHCI USB driver.
In order to understand USB messaging, look into submit_urb, which maps to the
function uhci_submit_urb, a dispatching function for the four USB pipe types.
You will have to study usb.h and get a feel for the contents of a urb
structure. From there you can take the urb structure apart and examine its
data. You can add printk's to the calls that send (or receive) CONTROL, BULK,
INTERRUPT or ISOCHRONOUS pipe data. You should then be able to work out what
doesn't work in your driver by comparing the data flowing through submit_urb
with that of a working driver on another piece of hardware.
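For example, a sketch of such an instrumentation helper (field names and pipe
macros follow the 2.4-era usb.h; double-check them against your kernel
version):

/* Dump the basics of a urb as it passes through uhci_submit_urb(). */
static void dump_urb(struct urb *urb)
{
    printk(KERN_DEBUG "urb %p: pipe type %d dev %d len %d\n",
           urb,
           usb_pipetype(urb->pipe),      /* PIPE_CONTROL, PIPE_BULK,
                                            PIPE_INTERRUPT or
                                            PIPE_ISOCHRONOUS */
           usb_pipedevice(urb->pipe),
           urb->transfer_buffer_length);
}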
Hope this helps.
You'll need to study usb.c, usb-uhci.c, and maybe hub.c.
<[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> >Does anyone know of any resources on the above subject
> >I have an ACM communications device which doesn't
> >work with the existing acm driver, and I need to know
> >how to access the notification pipe.
>
> >The kernel based acm driver doesn't fully implement the
> >control interface, anyone know where I can find
> >out how to do this?
>
> Specifically I am wondering how to obtain the response
> to control messages I send using usb_control_msg.
------------------------------
From: John Beardmore <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: How to get a number of processors
Date: Sun, 20 May 2001 20:02:22 +0100
In message <[EMAIL PROTECTED]>, Eric P. McCoy
<[EMAIL PROTECTED]> writes
>John Beardmore <[EMAIL PROTECTED]> writes:
>
>> > While it's certainly possible for an app to figure out
>> >what other apps are doing, it's bad for it to base its behavior on any
>> >of that information because it has _no idea_ what the other app is
>> >doing.
>
>> Pardon ?
>
>Wow. That makes no sense to me, either.
>
>My best guess is that I meant to say: while app A may be able to tell how
>many system resources app B is using, app A can't make any predictions
>about how long those resources are going to be in use, what patterns
>they're going to follow, and so on. The kernel is still limited
>in this respect, but it has a better chance of guessing, assuming some
>heinously complicated logic, based on the syscalls being used.
>
>In fact, it might not be that tough, I just haven't thought about it
>enough to have a solid guess.
I can't think of any way to guess from sys calls.
>> >If you can guarantee that your Magic Parallelizing App is going to be
>> >the only process running at any given time,
>
>> Well, the only process requiring significant resource.
>
>> > then you can damn well
>> >guarantee you'll have an operator smart enough to know how many CPUs
>> >are in the machine (and how to tell your app about it).
>
>> So anybody writing // apps can (has to!) afford a better class of
>> operator ??
>
>If they can afford a dedicated computer, they can afford to pay
>someone $10 to explain to a group of people how to configure the
>program properly. I'm talking just about large bunches of computers,
>here; for small numbers (say, <10) the programmer/admin can do it
>himself.
Hmm. OK, I'm happy to concede that people will always be able to do a
better job, but I still see no reason to stop the app deciding on the
basis of the hardware if that will do the job the developers want.
>> > If you can't
>> >guarantee that, then you need to leave process management up to the
>> >kernel, which can understand the situation far better than any
>> >individual app.
>
>> If you know the circumstances in which the app will be run, why not
>> give it an indication of the number of CPUs ? Why give the operator
>> something else they can get wrong ?
>
>Because they won't. If necessary, print up a bunch of labels and
>stick them on the front of the computers. Still not a really great
>solution, but at least it's no longer a platform-dependent one.
So long as the call to reveal the number of CPUs works, that is a step
towards platform independence, insofar as it lets the app adapt to //
hardware, albeit imperfectly.
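On Linux with glibc, that call is a one-liner; a minimal sketch (note that
_SC_NPROCESSORS_ONLN is a glibc extension rather than standard POSIX, which
is exactly the imperfection in question):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpus < 1)
        ncpus = 1;                   /* unsupported: assume one CPU */
    printf("online CPUs: %ld\n", ncpus);
    return 0;
}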
>> >Seems like what you're _really_ advocating is some powerful process
>> >management stuff in the kernel.
>
>> No. That would be the good long term solution. What I'm advocating,
>> or at least endorsing is a horrible bodge to give a better behaviour
>> in a limited set of circumstances.
>
>Ah, so we agree it's a bad thing.
It's a poor substitute for "some powerful process management stuff in
the kernel" but better than nothing.
> I'll grant you it may sometimes be
>necessary under certain conditions; my only original complaint was
>that it seemed like a hack.
Hacks are only bad if they stand in the way of better long term
solutions.
>If we agree on that, we're settled. I hope we meet up a year or so
>down the road rewriting the Linux scheduler, or something. Although
>if you see my name in the project, you should probably tell Linus to
>come over and personally kick my ass, as I have no idea how to even
>begin programming a scheduler.
:) I did it once for a Z80 process controller. Worked real well.
Access to the metal is something I miss. You can probably tell !
So much to do ! So little time !
Cheers, J/.
--
John Beardmore
------------------------------
From: John Beardmore <[EMAIL PROTECTED]>
Subject: Re: Best PC config for linux/crosscompiler development
Date: Sun, 20 May 2001 20:12:51 +0100
In message <9e3ilk$p1kp$[EMAIL PROTECTED]>, Sheldon Daigle
<[EMAIL PROTECTED]> writes
>My company has decided to start PowerPC development for ppc860 using Linux
>as the host environment. I've been tasked with deciding what type of PCs to
>purchase for Engineering to do their development on.
>
>We want to run Redhat linux, probably 7.x but I'm not sure what the best CPU
>config would be. Does anyone know if a software development environment
>would have the best performance on a single 1+ GHz CPU system or maybe a
>multi-CPU system, like dual 750s or something.
As I recall, GNU make won't compile in parallel by default, a huge defect in
my view, but if you can be bothered to organise // compilation (e.g. with
make's -j flag), throwing CPUs at it might help so long as disk or memory
access doesn't become a bottleneck.
Is PPC a great platform ? My guess is that the Intel implementation
will be best / most optimised.
Cheers, J/.
--
John Beardmore
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to the
comp.os.linux.development.system newsgroup.
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************