Linux-Development-System Digest #733, Volume #8 Fri, 18 May 01 16:13:22 EDT
Contents:
Re: interrupts too frequent? ("Barry Smyth")
Best PC config for linux/crosscompiler development ("Sheldon Daigle")
Re: passing messages to a forked thread (Greg Copeland)
Re: isa mem in 0xA0000-0xFFFFFF, Paging fault? ("Jason M. LaPenta")
Reinitializing PCI bus on x86 ([EMAIL PROTECTED])
Kernel memory maps --- what are the limits? ([EMAIL PROTECTED])
Re: SIGSEGV is not blocking ([EMAIL PROTECTED])
recompilation of glib-1.2.8, gtk+1.2.8 for Redhat 6.2 ("Frank")
Re: Trouble using large amount of memory with Redhat 7 (Cary Jamison)
Behavior of read() w.r.t. memory protection ("Arthur H. Gold")
----------------------------------------------------------------------------
From: "Barry Smyth" <[EMAIL PROTECTED]>
Subject: Re: interrupts too frequent?
Date: Fri, 18 May 2001 17:14:39 +0100
Using rdtscl I get around 75,000 as the maximum number of cycles to execute the
interrupt routine. I have worked out the time by dividing this by the CPU
speed, which is ~900 MHz, to get a time of about 0.08 ms, which seems to be
satisfactorily within the 1 ms limit. Is this right?
Thanks,
Barry
"Robert Redelmeier" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Barry Smyth wrote:
> >
> > I'm not sure how long it takes to process each interrupt; but each
> > involves several 'readl' and 'writel' calls followed by a copy of 4096
> > bytes from a DMA buffer to a vmalloc'd area whose pages have been locked
> > down.
> >
> > <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
> > > In article <[EMAIL PROTECTED]>,
> > > Barry Smyth <[EMAIL PROTECTED]> wrote:
> > >
> > > >Could this be because the operating system is doing something in the
> > > >background which lasts a few milliseconds leading to missing my
> > > >interrupts?
> > >
> > > How long do you take to process each interrupt?
>
>
> Best to check via `rdtscl()`. This reads the time-stamp counter;
> subtracting readings at entry and exit gives you the number of cycles
> your routine took. Add 200-1000 cycles for interrupt overhead outside
> your function.
>
> -- Robert
------------------------------
From: "Sheldon Daigle" <[EMAIL PROTECTED]>
Subject: Best PC config for linux/crosscompiler development
Date: Fri, 18 May 2001 12:24:51 -0400
My company has decided to start PowerPC development for ppc860 using Linux
as the host environment. I've been tasked with deciding what type of PCs to
purchase for Engineering to do their development on.
We want to run Red Hat Linux, probably 7.x, but I'm not sure what the best CPU
config would be. Does anyone know whether a software development environment
would perform best on a single 1+ GHz CPU system or on a multi-CPU system,
like dual 750s or something?
Thanks,
Sheldon
------------------------------
Subject: Re: passing messages to a forked thread
From: Greg Copeland <[EMAIL PROTECTED]>
Date: 18 May 2001 12:26:07 -0500
"Darren LS" <[EMAIL PROTECTED]> writes:
> Greg Copeland <[EMAIL PROTECTED]> wrote in message
>
>
> >
> > See ipc, semget, etc., and pipe for more info on the above.
>
>
> hmm. sounds good but i may not own all the processes
Do you mean that you may not be the process owner, or that the
process may not have been written by you? If you mean the former,
you can set permissions on SYS V IPC resources. If, on the other hand,
you are limited to someone else's implementation, you are obviously
going to have to conform to it somehow.
> > > >
> > > >
> > > > | 3. must be able to send signals to other processes both local
> > > > | and remote
> > > >
> > > > What do you mean by remote? On a different computer?
> > >
> > > yes
> >
> > So you are wanting to develop a message passing scheme? CORBA? RPC?
> > SOAP? Something simpler? I don't think there is enough information to
> > directly answer your question.
>
> RPC? That's Microsoft, isn't it? No, I am just looking for some common
> message piping system that runs on Linux and can be used remotely as well
> as locally. I mean, let's face it, that's what Unix is about :-)
No, RPC was created by Sun; NFS, for example, is implemented via RPC. If you
need common (RPC is) and simple (IMHO, RPC is not), you may want to look at,
again, sockets and/or SOAP. I do know that SOAP now exists for Perl, Python,
I think C and C++, and even Microsoft has stated that they wish to adopt SOAP
for .NET. So, I'd *guess* that support is available for Win platforms too.
Other than that, you'll need to look back to sockets (surprise), RPC, or even
something heavy duty like CORBA.
>
> >
> > Long story short, you need to make sure you have a reliable means to
> > convey this signal via a network path. It sure would stink to have
> > someone send bogus signals which cause all of your applications to shut
> > down because you couldn't validate the source or intent. This could even
> > be as simple as sending a GPG encrypted message which the recipient
> > decrypts and verifies the signature.
>
> I agree but first things first :-)
>
>
> > The message could be the signal to use. This is just one
> > example, but I hope you can see the importance of verifying who a message
> > is coming from if signals are directly acted upon from network events.
>
> I have my own encryption algorithms sorted; I could use them
Excellent. If you know how to do crypto at the API level, I'd certainly
recommend that!
> >
> > If process monitoring is what you are trying to do, the previous poster
> > is absolutely correct. You need to have a parent process start your
> > child, which then waits for it to exit, logging that it did, which means
> > you can also monitor why it terminated as well.
>
> I am with you on the logging-it-exits bit, but how would the parent know
> if the child terminates?
See wait() and waitpid(). Both allow you to obtain the reason why the child
terminated. You can determine not only the return code (via exit or the last
return value), but also whether it died from an unhandled signal and what the
signal was. Using these mechanisms, you can create very powerful process
monitoring facilities.
If you are looking for distributed cross-platform middleware, which is what
it sounds like you are describing, you need to think about
something like MQSeries (http://www-4.ibm.com/software/ts/mqseries/) or the
like. XIPC is probably another option, but I will *NEVER* recommend them,
as I've actually used their products. Ack!
--
Greg Copeland, Principal Consultant
Copeland Computer Consulting
==================================================
PGP/GPG Key at http://www.keyserver.net
DE5E 6F1D 0B51 6758 A5D7 7DFE D785 A386 BD11 4FCD
==================================================
------------------------------
From: "Jason M. LaPenta" <[EMAIL PROTECTED]>
Subject: Re: isa mem in 0xA0000-0xFFFFFF, Paging fault?
Date: Fri, 18 May 2001 13:47:12 -0400
Thanks a bunch for your help.
I also found a little bit more under
/usr/src/linux/Documentation/IO-mapping.txt. Seems isa_readw() works
without doing the remap, but I'm going to try the remap thing you
suggested anyway, because it is the right way to go.
I'm still coming up short on documentation for all these different
memory mapping functions. Are there any other places I should look? I
have the book "Linux Device Drivers", but it seems to be getting
out of date fast.
Jason
------------------------------
From: [EMAIL PROTECTED]
Subject: Reinitializing PCI bus on x86
Date: 19 May 2001 03:21:14 +1000
Reply-To: [EMAIL PROTECTED]
For a semi-embedded project I am working on, it would be really nice
if the memory ranges assigned to PCI cards would be below 0xc0000000.
That would allow me to mmap() them into a process's address space
with a 1:1 mapping (which is required, as this process is an emulator,
and the emulated machine's memory is supposed to map 1:1 on the
process's, and *it* is supposed to look at the PCI config space
for addresses).
However, from my kernel readings, it appears that the PCI bus is set
up by the BIOS on the x86 platform, and that linux simply uses the
address ranges supplied by the bios.
Is there a way to force a re-initialization at boot time (preferably
*before* starting any device drivers :). And if so, is there a way to
force those address ranges below 0xc0000000?
I don't mind if it involves patching the kernel itself; the kernel I am
using is already pretty badly patched to provide some extra functionality.
However, I am a bit reluctant to simply insert something like the following
at the start of the kernel's PCI handling:
max_addr = 0;
for (each card c)
    for (each memory base a on card c)
        max_addr = max(max_addr, a + len(a));
for (each card c)
    for (each memory base a on card c)
        membase(a) -= max_addr - 0xb0000000;
It probably would do the job, but still....
Bernie
--
Older men declare war. But it is youth who must fight and die
Herbert Hoover
US President 1929-33
At the Republican National Convention, 27 June 1944
------------------------------
From: [EMAIL PROTECTED]
Subject: Kernel memory maps --- what are the limits?
Date: 19 May 2001 03:30:25 +1000
Reply-To: [EMAIL PROTECTED]
Reading through the kernel startup code, I recently discovered that
the mem= commandline parameter can be used to build all sorts of weird
memory maps. GREAT! Especially as I was in need of a way to restrict
the memory Linux accesses....
However, it seems that Linux implicitly uses some memory even if a user
supplied memory map does not include it. Also, there seems to be a need
for at least some "low" memory, or things get slow.
I was wondering whether someone with a better view of the big picture could
shed some light on the following effects:
* boot with "mem=16M@16M". Booted like this, Linux runs extremely
slowly. I can't quite work out why. This would be my preferred memory
map.
* boot with "mem=12M@16M mem=4M@8M". Booted like this, Linux will be just
fine. However, it appears to still be using some memory in the
1M-4M range: if my user-space program overwrites that (physical)
address range, the kernel will crash.
This is with the latest 2.4.4 kernel, booted via lilo. Especially the
second case seems to be a bug --- the kernel should either complain
that it can't live without that memory, or leave the memory
alone. Accepting the absence of memory, yet still using it, is just
wrong.
Can anybody explain to me why these two effects happen? And possibly, how
I can overcome them?
Thanks,
Bernie
--
Thou wilt show my head to the people: it is worth showing
Georges Jacques Danton
French revolutionary
To his executioner, 5 April 1794
------------------------------
From: [EMAIL PROTECTED]
Subject: Re: SIGSEGV is not blocking
Date: 19 May 2001 03:39:43 +1000
Reply-To: [EMAIL PROTECTED]
"Arthur H. Gold" <[EMAIL PROTECTED]> writes:
>Is there a Good Reason why valid addresses (i.e. addresses within
>the VM space of a process) but with incompatible page protections
>for the purpose, cause system calls to just fail with EINVALs as
>opposed to causing a SIGSEGV?
And while we are at it --- I recently botched up a setjmp/longjmp based
multithreading implementation, and ended up with temporarily invalid
stack pointers. Of course, if at that point a signal arrives, things
go horribly wrong.
Eventually, I ended up having a SIGSEGV delivered to my SIGSEGV handler.
What surprised me was that the cr2 field in the sigcontext structure that
came along had *not* been updated. Not that it would have been any good ---
the SIGSEGV was delivered upon exit from a system call, and thus the
instruction pointer wasn't pointing at the faulting instruction, either.
Strangely enough, simply returning from the SIGSEGV handler would, well,
"work" (as in "it didn't immediately generate a new SIGSEGV, and my program
ran on", sort of).
Is this the intended behaviour? If so, where does that SIGSEGV get generated,
and is there a way to tell it apart from a "real" one (i.e. one where .cr2
and .eip actually contain valid info)?
Regards,
Bernie
--
There cannot be a crisis next week. My schedule is already full
Henry Kissinger
American politician
New York Times Magazine, 1 June 1969
------------------------------
Reply-To: "Frank" <[EMAIL PROTECTED]>
From: "Frank" <[EMAIL PROTECTED]>
Subject: recompilation of glib-1.2.8, gtk+1.2.8 for Redhat 6.2
Date: Fri, 18 May 2001 21:34:33 +0200
Hi!
Trying this newsgroup as I didn't get any response in alt.os.linux:
I'm new to recompiling downloaded source code on Linux.
I have Red Hat 6.2, and have rpm-ed glib-1.2.8 and gtk+-1.2.8 on it.
I have to recompile glib/gtk in order to achieve something else in Ada.
I have downloaded the tar.gz files for glib/gtk 1.2.8, put them in
/root/prog, and untarred them there.
As far as I can tell, the glib/gtk compile and install leave no error
messages, but when I progress to the Ada recompile, it asks for files
matching the patterns
gdk*.o, gtk*.o and glib*.o
to be linked in during the process.
In the Gtk install directory I find several gtk/gdk *.o files, which I have
copied into the directory the Ada installation expects, and the Ada
installation looks happier; but I can't find any file matching the pattern
glib*.o, so it still doesn't succeed.
My question in this forum is: should there be some glib*.o files after the
installation of glib from the tar.gz files?
Does anyone have a good suggestion as to why I don't get any such files?
Frank
------------------------------
From: Cary Jamison <[EMAIL PROTECTED]>
Subject: Re: Trouble using large amount of memory with Redhat 7
Date: Fri, 18 May 2001 14:05:41 -0600
"J. P. Montgomery" wrote:
>
> I have a K7 AMD 1.2 G machine with 1.5 G of ram using an ASUS motherboard
> and am running Redhat 7. I use a Portland Fortran compiler (and sometimes
> g77 and gcc). I have found that when I write a simple program I can only
> dimension a complex array by about 72000000 before the job will core dump (a
> job size of about 549M as verified by computation and running top). The
> Portland people point to the OS and say that perhaps I can recompile the
> kernel. Okay ... so I've checked the Redhat site and done searching on
> newsgroups, etc. The hint I have found is that my stack size is 8M in the
> kernel. I can certainly recompile ... but I am trying to figure if this
> will solve the problem. I have noted that g77 has a different limit
> (somewhat lower ... the Portland guy said however, that they have no such
> limitation in the compiler). Also a coworker uses a Lahey compiler on a
> Win98 machine with 512M of memory. He can dimension the array mentioned
> above by 110000000 or about 840M.
>
> Can anyone point me in the right direction and explain what is happening? I
> have written a small C code which pulls all of the resource limits, and most
> are set at unlimited except the stack and pipe. Top and other means of
> examining the memory indicate that all of the memory is recognized by the
> system ... so I am inclined to agree with the Portland guy.
>
> Thanks for the help,
> J Montgomery
An array that size is not going to be on your stack, so I don't think
that increasing your stack size will help. What error do you get when
it core dumps?
Cary
------------------------------
Date: Fri, 18 May 2001 15:00:35 -0500
From: "Arthur H. Gold" <[EMAIL PROTECTED]>
Subject: Behavior of read() w.r.t. memory protection
I've asked this before (quite a while ago) but got no response:
Consider the following program (assume it's run with some non-trivial
file redirected to stdin):
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#define MMAP_FAILED ((void *)-1)
int main( void ) {
    size_t page_size = getpagesize();
    /* mmap two pages */
    char * buf = mmap( 0, 2 * page_size, PROT_READ | PROT_WRITE
                     , MAP_ANONYMOUS | MAP_PRIVATE
                     , -1, 0 );
    if ( buf == MMAP_FAILED ) {
        perror( "mmap failed" );
        exit( 1 );
    }
    /* now we mprotect the second page of this mapping to PROT_NONE */
    else if ( mprotect( buf + page_size, page_size, PROT_NONE ) ) {
        perror( "mprotect failed" );
        exit( 1 );
    }
    else {
        /* we point pbuf at the beginning of the second page
         * (i.e. the page that's protected PROT_NONE)
         */
        char * pbuf = buf + page_size;
        int i;
        /* and we step the target pointer backwards ... */
        for ( i = 0; i < 10; i++ ) {
            /* ... and see what the result of read() is */
            printf( "pbuf = %x, read() returned %d\n"
                  , pbuf - i
                  , (int)read( STDIN_FILENO, pbuf - i, 20 ) );
        }
    }
    return 0;
}
Run on Linux, it behaves as follows:
pbuf = 40015000, read() returned -1
pbuf = 40014fff, read() returned -1
pbuf = 40014ffe, read() returned -1
pbuf = 40014ffd, read() returned -1
pbuf = 40014ffc, read() returned 4
pbuf = 40014ffb, read() returned 4
pbuf = 40014ffa, read() returned 4
pbuf = 40014ff9, read() returned 4
pbuf = 40014ff8, read() returned 8
pbuf = 40014ff7, read() returned 8
i.e. it will happily read as many four-byte chunks as will fit in
unprotected memory (failing if there are not at least 4 bytes of such
memory available).
On FreeBSD 4.3, however, it seems as if the entire buffer is checked first,
and the results are:
pbuf = 280fc000, read() returned -1
pbuf = 280fbfff, read() returned -1
pbuf = 280fbffe, read() returned -1
pbuf = 280fbffd, read() returned -1
pbuf = 280fbffc, read() returned -1
pbuf = 280fbffb, read() returned -1
pbuf = 280fbffa, read() returned -1
pbuf = 280fbff9, read() returned -1
pbuf = 280fbff8, read() returned -1
pbuf = 280fbff7, read() returned -1
It seems that the FreeBSD behavior is `more' correct -- at least you can
tell that something's wrong. On Linux, in the case where there are at
least 4 bytes of unprotected memory available, the result is
indistinguishable from a situation where there are only that many bytes
available in the file to be read.
Any comments? Anyone?
Thanks!
--ag
--
Artie Gold, Austin, TX (finger the cs.utexas.edu account for more info)
mailto:[EMAIL PROTECTED] or mailto:[EMAIL PROTECTED]
--
I am looking for work. Contact me.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to the
comp.os.linux.development.system newsgroup.
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************