Linux-Development-Sys Digest #194, Volume #7 Tue, 14 Sep 99 16:14:25 EDT
Contents:
Re: LILO and System.map (Horst von Brand)
Going to write a new USB camera driver - help needed! (Remco van den Berg)
named pipes / select / fifo ([EMAIL PROTECTED])
Re: Figure Out The MS Source Code Yourself (Dave Newton)
Re: Max threads and TCP connections? (Miquel van Smoorenburg)
Re: unix98 pty's problems (Jonathan Stott)
Re: Win95 is a bloody pain in th ass(after I installed linux)!! (Tranceport)
skbuff related issues ? ([EMAIL PROTECTED])
Re: threads (David Schwartz)
Re: Embedded X-server anyone ? ? (Jonathan A. Buzzard)
Re: X Windows developement (Sami Tikka)
FREE Like Sybase Central ("Apple")
glibc-2.1.2 RPM ("Lawrence K. Chen, P.Eng.")
Re: threads (Leslie Mikesell)
Re: 497.2 days ought to be enough for everybody (bill davidsen)
Re: threads (Joseph H Allen)
PCI Memory Access Problems ("Chris Naylor")
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Horst von Brand)
Subject: Re: LILO and System.map
Date: 14 Sep 1999 10:26:59 GMT
On 6 Sep 1999 17:02:18 +0200, Bram Bouwens <[EMAIL PROTECTED]> wrote:
>Allin Cottrell <[EMAIL PROTECTED]> writes:
>>and also note that current klogd will happily read
>>/boot/System.map-<kernel version>, e.g.
>>/boot/System.map-2.2.12
>>which allows you to keep more than one map in play
>>if you wish.
>That's quite a useful remark!
>And if I would have several variants of the same version number?
Look at EXTRAVERSION in the top-level Makefile of recent kernels.
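For reference, the variant tag goes at the very top of the kernel source Makefile; for a 2.2.12 tree it looks something like this (the "-smp" value is just an example tag, not anything shipped):

```makefile
VERSION = 2
PATCHLEVEL = 2
SUBLEVEL = 12
EXTRAVERSION = -smp
```

A kernel built that way reports 2.2.12-smp from `uname -r`, so klogd can pick up /boot/System.map-2.2.12-smp alongside the maps for your other variants.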
--
Horst von Brand [EMAIL PROTECTED]
Casilla 9G, Viña del Mar, Chile +56 32 672616
------------------------------
From: [EMAIL PROTECTED] (Remco van den Berg)
Subject: Going to write a new USB camera driver - help needed!
Date: 14 Sep 1999 13:44:27 GMT
Linux developers,
I'm thinking about writing a USB driver for a Philips USB camera.
I have some questions:
Currently I'm running Linux 2.2.12. Can I use this kernel or do I have
to upgrade to 2.3.xx releases?
Is it possible to use a debugger on kernel modules? If yes, how?
Where can I find information about writing USB drivers?
Thanks for any help....
- Remco van den Berg
PS I'll have to write the code in my spare time. It's not an official Philips
project.
============================================================================
Philips Semiconductors B.V. tel:(+31 40 27)22031 fax:22764 Room: BE-345
mailto:[EMAIL PROTECTED] seri: rvdberg@nlsce1
home Email: [EMAIL PROTECTED] (non Philips related) ICQ: 47514668
============================================================================
Microsoft and Lotus Notes free. Don't send me any Microsoft attachments.
============================================================================
------------------------------
From: [EMAIL PROTECTED]
Subject: named pipes / select / fifo
Date: Tue, 14 Sep 1999 13:32:11 GMT
Hello,
I am porting an application from x86 Solaris and am having some
problems with data remaining in the FIFO (named pipe). It seems that
even though data is in the FIFO, the select() does not return. The
application worked fine on x86 Solaris, and since I am new to Linux I
was wondering if anyone had any ideas as to what compile or link
options I may need to use to get around this.
The application consists of the following. The client and server open
two named pipes. Each application opens both for read/write, but only
uses the read or the write side, so that I do not generate a SIGPIPE
if the other application is not started. The server also uses pipes
to communicate with a third application via its stdin and stdout file
descriptors. I start the server; it communicates with the third
process and sends the results to the client, which I start after the
server. When the client writes back to the server on the client's
write FIFO, the server's select() on that FIFO does not return, even
though I know there is data in the FIFO (I can see it with an ls -l
on the FIFO).
Any thoughts or ideas would be greatly appreciated.
Thanks,
Pete
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: Dave Newton <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.misc
Subject: Re: Figure Out The MS Source Code Yourself
Date: Tue, 14 Sep 1999 13:30:28 GMT
d s f o x @ c o g s c i . u c s d . e d u (David Fox) wrote:
> It sounds like you aren't concerned about the law because you don't
> expect that it will be enforced.
No, I'm not concerned about the law because it's ridiculous. If someone
decides to enforce this law against me when I'm not doing anything
blatantly stupid (hi, I reverse engineered your code and sold the
source and some libraries using it), I will go to court, argue my
case, and expect to win. If I don't, hey, I made my bed, I get to lie
in it, and I'll take it from there.
If you want to never reverse engineer a piece of code, for whatever
reason, be my guest. For me, that approach doesn't work. I will
continue to do what I want, fully realizing the potential consequences
of my actions.
I don't understand what is confusing about any of this.
Dave
------------------------------
From: [EMAIL PROTECTED] (Miquel van Smoorenburg)
Subject: Re: Max threads and TCP connections?
Date: 14 Sep 1999 16:03:16 +0200
In article <[EMAIL PROTECTED]>,
Joseph H Allen <[EMAIL PROTECTED]> wrote:
>Actually, I've been wanting to rewrite news for years. The articles should
>be stored in a big circular buffer made out of a raw disk partition instead
>of as separate files.
You mean like what the current INN 2.x code does ;) [at least, the
CNFS storage method]
Mike.
--
... somehow I have a feeling the hurting hasn't even begun yet
-- Bill, "The Terrible Thunderlizards"
------------------------------
From: [EMAIL PROTECTED] (Jonathan Stott)
Subject: Re: unix98 pty's problems
Date: 14 Sep 1999 14:15:56 GMT
Reply-To: [EMAIL PROTECTED]
In article <[EMAIL PROTECTED]>,
Mike Dowling <[EMAIL PROTECTED]> wrote:
>Does this mean that there is supposed to be some kind of entry in
>/etc/fstab for mounting something on /dev/devpts? Something like
>
>none /dev/pts devpts defaults 0 0
Yes.
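For a one-shot test without editing /etc/fstab, the equivalent manual mount (assuming devpts support is compiled into your kernel) is:

```shell
mkdir -p /dev/pts
mount -t devpts none /dev/pts
```

After that, programs that allocate Unix98 ptys via /dev/ptmx should see their slave devices appear under /dev/pts.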
-JS
--
Northeastern University Jonathan Stott
Center for Electromagnetics Research [EMAIL PROTECTED]
usmail://360 Huntington Ave./235 FR/Boston/MA/02115
------------------------------
From: Tranceport <[EMAIL PROTECTED]>
Subject: Re: Win95 is a bloody pain in th ass(after I installed linux)!!
Date: Tue, 14 Sep 1999 14:20:17 GMT
hehe...
That was funny.
I hear ya! Same here. I spent $10 to buy Red Hat 6.0 and the fucker
wouldn't install on my PC. I found out that for some reason my CD
burner does not like the Red Hat installation program.
After angrily tweaking the hardware for 2 days (I got sore knees from
kneeling over my gutted PC) trying to figure out which jumper I screwed
up, I gave up. I found out that RH 5.2 installs fine, even if it takes
60 minutes to format the hard drive as a Linux partition (on my other,
CD-burner-less PC it takes 2 minutes, go figure!!).
After installing 5.2 I upgraded it to 6.0, only to find out that I
could not telnet to the new Linux box from anywhere!!!
I reformatted the hard drive and restarted. Twice. The second time
telnet worked. I didn't do anything different. Blah!
In defense of Linux I can say that:
1 - it's free
2 - it's as unreliable as windows (not worse nor better)
3 - it's free
:)
Everything done by man is bound to be a screwup, given the right
conditions.
Just DON'T GIVE UP!!
------------------------------
From: [EMAIL PROTECTED]
Subject: skbuff related issues ?
Date: Tue, 14 Sep 1999 15:09:18 GMT
Hi all,
I am trying to write a network protocol for clustered computing. My
questions are:
i) In alloc_skb() the skbuff control part is put at the top of the
data part. Why? (The reason given is cache optimization, but I can't
see how.)
ii) I have to copy a block from kernel space to user space. How can I
do it the fastest? Aligning the start of the block at a 16-byte
boundary is one of the ways, I guess. What about others?
iii) Can I get rid of copying overheads by mapping user memory to the
skbuff data area? The problem is that even for a 10-byte packet I
would have to map an entire page.
iv) Where can I get docs about the cache architecture of Intel
motherboards? Also about DMA?
Thanks in advance,
joy ganguly
------------------------------
From: David Schwartz <[EMAIL PROTECTED]>
Subject: Re: threads
Date: Tue, 14 Sep 1999 10:52:53 -0700
Leslie Mikesell wrote:
> In article <[EMAIL PROTECTED]>,
> David Schwartz <[EMAIL PROTECTED]> wrote:
>
> > We are talking about optimization here, and you don't optimize for the
> >"the CPU is doing nothing when all of a sudden one request comes in"
> >case. In that case, who cares what you do -- anything will work. We care
> >about what happens when the CPU and the system are overwhelmed.
>
> But you are making the assumption that the system is going to be
> overwhelmed by the load on *this* particular program. If the
> load comes instead from a proliferation of many different programs
> your select()/read() combination just continues to add extra
> system calls as you block in select, context switch off to
> another task, and then have to dive back to the kernel to read()
> on every I/O that completes for you, perhaps taking resources away
> from the programs generating a larger load.
If the system is loaded by other programs, I had better do as much work
as possible in my timeslice. Otherwise, I'm going to slow those other
programs down. Or, to put it another way, the more loaded the system is,
the more work there is to do at any one time.
A single-process model means that you can do more work when there is
more work to do. A process-per-connection model means that when you have
more work to do, you incur extra penalties and context switches.
It really is that simple.
> If you are benchmarking your program with an artificial load you
> might demonstrate that you can handle things better than the
> kernel scheduler, but it may not be true in normal operation.
It is. Really. I'm afraid I can't do better than present argumentation
for why this should be so (fewer blocks, fewer stacks, fewer context
switches) and point to years of experience that demonstrates that it is.
> >> Huh? Why did select not return when the first read could complete?
> >
> > Because the receiving of a packet that generates I/O that satisfies the
> >select and thus makes the process ready to run takes place before the
> >process actually gets scheduled. In some cases, way before. On
> >uniprocessor, we may be busy doing work for other connections that are
> >part of the same server.
>
> What kind of work is this server doing besides waiting for i/o to
> complete?
If all the server is doing is waiting, then efficiency doesn't matter.
You can't make a machine wait any faster. The assumption is that we're
trying to make performance better, and to do that, we have to begin at
the point where the server has reached its limit, figure out what that
limit is, and fix it.
> > The only case where we wakeup instantly is when the CPU isn't busy. And
> >who cares about that case? It's senseless to optimize for that case
> >since anything is equally good when there is CPU to burn.
>
> Some other program may need those CPU cycles while you are burning
> them making 2 system calls instead of one for every read() coming
> your way. I don't want to make the assumption that because *this*
> program isn't overloaded that no other program is either.
If another program needs those CPU cycles, then it can have them. And
while it's using them, a few more packets will arrive. That will allow
me to do the work of ten connections with fewer expensive system calls
and fewer context switches. That will make that other program happier.
> >> Don't all system calls involve a context switch into the kernel, and
> >> isn't that a good time to come back directly into the program that
> >> is ready to complete it's read()? I run lots of different programs
> >> at once and there is no reason to think this one is going to complete
> >> some operation before another equally important program.
> >
> > If you're asking if all system calls are equally expensive, no. System
> >calls that don't require an extra pass through the scheduler are cheaper
> >than those that do. We're talking about the total number of process or
> >thread context switches here. That's usually what you need to minimize
> >to boost performance.
>
> And how do you measure this?
You can directly inspect the code (for open source OSes). It's really
obvious. Or you can ask the designers. Or you can benchmark.
> >> My question is, at what point can we see a difference, and what
> >> software will demonstrate it? I've run squid and apache as reverse
> >> proxies to accelerate access to the static portions of a web site
> >> (somewhere over a million hits a day, mostly in a 4 hour period)
> >> and couldn't see a dramatic difference between the machine load
> >> of 200 httpd's with 1 or 2 runnable vs 1 squid sitting in select.
> >> The actual setup wasn't exactly the same so it isn't a great
> >> comparison, but still, I don't think there was any real advantage
> >> with that kind of load.
> >
> > 200 is nothing. Try 5,000. Or 16,000.
>
> The nature of the http protocol is such that you don't need to
> run that many at once unless you are responding slowly and
> apache has some awkward things happening in accept() that would
> be a problem long before you see any difference from the
> rest of the process model.
I disagree entirely. Suppose you are serving a 50MB file to people
over 33.6k modem connections. Each connection will remain open for as
long as it takes to transfer the whole file (over three hours at that
speed).
But it's still hard to push a web server to its limits. In general,
serving web pages is so simple that optimization is only necessary if
you want to look nice for benchmarks. I like to joke that a '486 can
saturate a T3 with simple static web pages.
This is why web servers are bad examples for learning good server
design. You can get away with murder and still have a reasonably decent
product.
You will, however, start running into trouble if you mix CPU bound
loads in. For example, if you add a web page that queries a database or
requires a complex search, you will start to see CPU limits.
> I'd like to see something measurable
> to test the performance difference, though, and also to measure
> the impact on other programs on the same machine. Are there
> any programs that do the same thing each way?
It's hard to keep everything else the same when you're talking about
fundamental architectural differences. And, of course, the same
architecture is not the right decision for every server.
DS
------------------------------
From: [EMAIL PROTECTED] (Jonathan A. Buzzard)
Crossposted-To: comp.os.linux.x,comp.windows.x,comp.sys.palmtops.pilot
Subject: Re: Embedded X-server anyone ? ?
Date: Mon, 13 Sep 1999 19:53:42 +0100
In article <7r7nr2$hvl$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Peter Samuelson) writes:
> [Nash Aragam <[EMAIL PROTECTED]>]
>> Am lookin' for any/all info that might be available on the
>> idea/concept/design/implementation of a SMALL_FOOTPRINT X-server for
>> use in embedded OSs and embedded/handheld palmtops.
>
> Considering the complexity of the X protocol, I'm not sure even a
> bare-bones implementation would be small. I could be wrong.
>
Most of the size of, say, the XF86_SVGA binary (about 3.2MB) comes from
having drivers for just about every chipset under the sun in it.
Given the complexity of compiling the server yourself, the fact that
disk space is cheap, and that Linux uses demand paging (so the unused
drivers are never loaded into memory), it is generally not a problem.
If you were to compile a server with just the chipset you needed, it
would be less than 1MB at a guess. You could also try XFree86 4.0, which
dynamically loads only the driver for the chipset you need, so no
recompiling.
JAB.
--
Jonathan A. Buzzard Email: [EMAIL PROTECTED]
Northumberland, United Kingdom. Tel: +44(0)1661-832195
------------------------------
From: [EMAIL PROTECTED] (Sami Tikka)
Subject: Re: X Windows developement
Date: Sun, 12 Sep 1999 00:47:55 +0300
Reply-To: [EMAIL PROTECTED]
On Fri, 20 Aug 1999 18:00:46 GMT, Tranceport <[EMAIL PROTECTED]> wrote:
>I want to understand from a high level perspective what
>components X is it made of, what do they do, how they interact together.
Programming X is done using libraries at different abstraction levels,
sometimes mixing them. The lowest-level library is called Xlib. It offers
you basic things like windows, bitmaps and drawing commands.
Then there are libraries that offer you user-interface components like
buttons, scrollbars and text editors. Some of these are called Tk, Qt, GTK,
Motif, Xaw, Andrew, ... too many to list. These are called widget libraries.
Some of these also use an intermediate library (the X Toolkit Intrinsics,
Xt) that takes care of some things between the widget library and Xlib.
Other widget libraries are built directly on top of Xlib; these are
usually much lighter.
>In any case to answer your question, I have to develop Xaw (X athena
>widgets, i think)
You poor thing. I think you must be the only one using Xaw nowadays.
>> Get some of the O'Reilly books for starters (the first couple
>> volumes anyway). They cover moderately low level stuff like
>> xlib. If you want to diddle at the X-protocol level, then I
>> don't know where to point you.
>
>Thanks for the suggestion. I'll take a look at them.
The X protocol level is also covered by another book by O'Reilly. But at
least the edition of the book I used 7 years ago was almost a verbatim copy
of the X protocol document from the X11R5 distribution but with the stylish
O'Reilly paperback covers slapped on. I felt ripped off but I must admit it
looks better on the bookshelf than a bunch of paper stapled together from
one corner...
--
Sami Tikka, [EMAIL PROTECTED], http://www.iki.fi/sti/
"There is no spoon."
------------------------------
From: "Apple" <[EMAIL PROTECTED]>
Crossposted-To:
alt.uu.comp.os.linux.questions,comp.databases.sybase,comp.os.linux.development.apps,sybase.public.sqlserver.linux
Subject: FREE Like Sybase Central
Date: Tue, 14 Sep 1999 17:45:54 +0200
FREE FREE FREE FREE FREE FREE
A new version of a free Sybase Central-like tool
to manage Sybase dataservers and Replication Server.
Supports Data Server V10.x & V11.x (including 11.9.2)
and RS V10 & V11.x.
Download now at http://perso.wanadoo.fr/laserquest/linux
FREE FREE FREE FREE FREE FREE
------------------------------
From: "Lawrence K. Chen, P.Eng." <[EMAIL PROTECTED]>
Crossposted-To: linux.redhat.misc
Subject: glibc-2.1.2 RPM
Date: Tue, 14 Sep 1999 10:37:16 -0400
Is there a release glibc-2.1.2 RPM somewhere for RedHat 6.0?
Hopefully it's not glibc-2.1.2-9 under Rawhide at rufus.w3.org.... I played
around with this and Asynchronous I/O doesn't work. (aio_suspend never
returns)
--
Who: Lawrence Chen, P.Eng. Email: [EMAIL PROTECTED]
What: Software Developer URL: http://www.opentext.com/basis
Where: Open Text, BASIS Division Phone: 614-761-7449
5080 Tuttle Crossing Blvd. Fax: 614-761-7269
Dublin, OH 43016 ICQ: 12129673
------------------------------
From: [EMAIL PROTECTED] (Leslie Mikesell)
Subject: Re: threads
Date: 14 Sep 1999 11:09:27 -0500
In article <[EMAIL PROTECTED]>,
David Schwartz <[EMAIL PROTECTED]> wrote:
>> That takes a lot of imagination. What has the CPU been doing to let
>> ten accumulate?
>
> Quite a few things. Running other tasks. Perhaps doing the work
>necessary to service the I/O that is currently active.
>
> We are talking about optimization here, and you don't optimize for the
>"the CPU is doing nothing when all of a sudden one request comes in"
>case. In that case, who cares what you do -- anything will work. We care
>about what happens when the CPU and the system are overwhelmed.
But you are making the assumption that the system is going to be
overwhelmed by the load on *this* particular program. If the
load comes instead from a proliferation of many different programs
your select()/read() combination just continues to add extra
system calls as you block in select, context switch off to
another task, and then have to dive back to the kernel to read()
on every I/O that completes for you, perhaps taking resources away
from the programs generating a larger load.
If you are benchmarking your program with an artificial load you
might demonstrate that you can handle things better than the
kernel scheduler, but it may not be true in normal operation.
>> Huh? Why did select not return when the first read could complete?
>
> Because the receiving of a packet that generates I/O that satisfies the
>select and thus makes the process ready to run takes place before the
>process actually gets scheduled. In some cases, way before. On
>uniprocessor, we may be busy doing work for other connections that are
>part of the same server.
What kind of work is this server doing besides waiting for i/o to
complete?
> The only case where we wakeup instantly is when the CPU isn't busy. And
>who cares about that case? It's senseless to optimize for that case
>since anything is equally good when there is CPU to burn.
Some other program may need those CPU cycles while you are burning
them making 2 system calls instead of one for every read() coming
your way. I don't want to make the assumption that because *this*
program isn't overloaded that no other program is either.
>> Don't all system calls involve a context switch into the kernel, and
>> isn't that a good time to come back directly into the program that
>> is ready to complete it's read()? I run lots of different programs
>> at once and there is no reason to think this one is going to complete
>> some operation before another equally important program.
>
> If you're asking if all system calls are equally expensive, no. System
>calls that don't require an extra pass through the scheduler are cheaper
>than those that do. We're talking about the total number of process or
>thread context switches here. That's usually what you need to minimize
>to boost performance.
And how do you measure this?
>> My question is, at what point can we see a difference, and what
>> software will demonstrate it? I've run squid and apache as reverse
>> proxies to accelerate access to the static portions of a web site
>> (somewhere over a million hits a day, mostly in a 4 hour period)
>> and couldn't see a dramatic difference between the machine load
>> of 200 httpd's with 1 or 2 runnable vs 1 squid sitting in select.
>> The actual setup wasn't exactly the same so it isn't a great
>> comparison, but still, I don't think there was any real advantage
>> with that kind of load.
>
> 200 is nothing. Try 5,000. Or 16,000.
The nature of the http protocol is such that you don't need to
run that many at once unless you are responding slowly and
apache has some awkward things happening in accept() that would
be a problem long before you see any difference from the
rest of the process model. I'd like to see something measurable
to test the performance difference, though, and also to measure
the impact on other programs on the same machine. Are there
any programs that do the same thing each way?
Les Mikesell
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (bill davidsen)
Subject: Re: 497.2 days ought to be enough for everybody
Date: 14 Sep 1999 19:09:21 GMT
In article <[EMAIL PROTECTED]>,
David Schwartz <[EMAIL PROTECTED]> wrote:
|
| There were numerous jiffy wrap bugs. It's possible that when the wrap
| occurred, you just weren't near any of the problem code.
Hum, I may have the longest running Linux server on the planet, then.
I'll have to try to guess how long it has been up, uptime has
undoubtedly long since died.
--
bill davidsen <[EMAIL PROTECTED]> CTO, TMR Associates, Inc
I thought I had forgotten how to throw a boomerang, but it's
all coming back to me...
------------------------------
From: [EMAIL PROTECTED] (Joseph H Allen)
Subject: Re: threads
Date: Thu, 9 Sep 1999 16:39:50 GMT
In article <[EMAIL PROTECTED]>,
David Schwartz <[EMAIL PROTECTED]> wrote:
> Forgive me for replying to myself, but it occurred to me that this
>really is an important point and I don't think I covered it in enough
>detail.
> In my previous post I pointed out that a web server that uses one
>process per connection will have to have 10 context switches to do a
>little bit of work on 10 connections. I asserted that a threads
>architecture could reduce this but didn't really go into detail.
> Imagine for a moment that you have a web server that does everything
>in one big select loop. Obviously, this can handle any number of
>connections with no context switches. From a performance standpoint, this
>would seem optimal. But it has several problems:
> 1) We have to gather together everything we might ever wait for in one
>place. And if we ever have to wait for something we can't 'select' on,
>such as disk I/O, we're in trouble.
One cheesy solution to the disk I/O problem is to use a separate thread or
process for just disk I/O, and have it send a message to the process with
the big select() when it's complete. Supposedly Solaris uses kernel threads
in just this way to hack non-blocking disk I/O onto the system.
> 3) We have to keep 'saving our place' to go back to it. We can't easily
> >use the stack to keep track of what we're doing (as most programs do).
You can have multiple stacks in a cooperative user-space multi-threading
environment. Basically this means that thread switching only happens during
I/O waits and never preemptively by the kernel. It's nice because you almost
never have to use any more locks than you would in the normal event-driven
I/O model. I have written a widget library for X which uses this technique:
see ftp://ftp.worcester.com/pub/joe/notif-0.2.tar.Z
> I hope this makes the point more clear. Rather than comparing threads
>to a 'one process per connection' model, compare it to a 'one process
>for everything' model. Then look at how it solves the problems with that
>architecture without imposing too many penalties of its own.
One problem with the one thread per connection model is that you may have
100s of clients that are doing nothing most of the time, and each operating
system thread takes up a lot of stack space. For example, the default stack
may be 1MB- you can then have only 2000 connections before you run out of
memory (or maybe worse, given whatever kernel resources are taken by the
threads). It would seem that you really want threads for making best use of
the available CPU power, and not for code structuring. Perhaps you have one
thread for quick I/O processing feeding a pool of threads which do the
actual work (perhaps each thread does a single transaction). Depending on
the nature of the work, the pool would consist of maybe 3 or 4 threads per
actual processor.
--
/* [EMAIL PROTECTED] (192.74.137.5) */ /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}
------------------------------
From: "Chris Naylor" <[EMAIL PROTECTED]>
Subject: PCI Memory Access Problems
Date: Tue, 14 Sep 1999 12:06:54 -0700
Hello All!
Still working on my first linux device driver and am having the following
problem. I have a memory space I am trying to access which is in
base_address[1]. I ioremap() this address and then try to write and read
(using writew and readw) to this new address. The write goes in fine - but
when I try to read back the same address the system freezes completely.
The code looks something like this:
/* base_address[] holds the raw BAR value, so mask off the low flag
   bits before remapping (PCI_BASE_ADDRESS_MEM_MASK is in <linux/pci.h>) */
unsigned long mem = dev->base_address[1] & PCI_BASE_ADDRESS_MEM_MASK;
char *baseptr = ioremap(mem, 1024*1024);
writew(0x55, baseptr);
unsigned int data = readw(baseptr); /* CRASH HERE */
Any ideas would be a GREAT help! Thanks!
Chris
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************