Re: The IO problem on multiple PCI busses

2001-03-06 Thread Tony Mantler

At 5:01 PM -0600 3/6/2001, Oliver Xymoron wrote:
>On Fri, 2 Mar 2001, David S. Miller wrote:
>
>>  > On PPC, we don't have an "IO" space neither, all we have is a range of
>>  > memory addresses that will cause IO cycles to happen on the PCI bus.
>>
>> This is precisely what the "next MMAP is XXX space" ioctl I've
>> suggested is for.  I think I've addressed this concern in my
>> proposal already.  Look:
>>
>>  fd = open("/proc/bus/pci/${BUS}/${DEV}", ...);
>>  if (fd < 0)
>>  return -errno;
>>  err = ioctl(fd, PCI_MMAP_IO, 0);
>
>I know I'm coming in on this late, but wouldn't it be cleaner to have
>separate files for memory and io cycles, eg ${BUS}/${DEV}.(io|mem)?
>They're logically different so they might as well be embodied separately.

If I were designing this (and I'm not), I would do it thus:

/proc/bus/pci/${BUS}/${DEV} stays the same as it always is
/proc/bus/pci/${BUS}/${DEV}.d/io.n for IO resources, where n is the number
of the IO resource
/proc/bus/pci/${BUS}/${DEV}.d/mem.n for Mem resources, where n is...
/proc/bus/pci/${BUS}/${DEV}.d/ints for interrupts, which would block on
read when no interrupts are pending; once an interrupt is triggered, the
data returned by the read would be some sort of information about the
interrupt.

And that should (in theory) be all you need for writing a basic userspace
PCI device driver. (You wouldn't really be able to set up DMA or such, but
at that point I think "put the damn driver in the kernel" would be an
appropriate utterance.)
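
To make that a bit more concrete, here's a rough sketch of what a userspace
driver loop might look like against that layout. The paths and the .d/io.0
and .d/ints files are purely hypothetical (they just follow the proposal
above and don't exist in any kernel):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Hypothetical paths following the layout sketched above. */
        int iofd  = open("/proc/bus/pci/00/0a.0.d/io.0", O_RDWR);
        int intfd = open("/proc/bus/pci/00/0a.0.d/ints", O_RDONLY);
        if (iofd < 0 || intfd < 0) {
            perror("open");
            return 1;
        }

        /* Map the device's first IO resource into our address space. */
        volatile uint8_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, iofd, 0);
        if (regs == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        for (;;) {
            char info[256];
            /* Blocks until an interrupt is pending; the data read is
               whatever description the kernel chooses to hand back. */
            ssize_t n = read(intfd, info, sizeof(info));
            if (n <= 0)
                break;
            /* ... prod the device's registers in response ... */
            (void)regs[0];
        }
        return 0;
    }

Nothing fancy, but it shows why the blocking read on ints is the piece that
makes the whole thing usable without inventing any new syscalls.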


This is just off the top of my head, so no warranties expressed or implied
about the sanity of this kind of system.

Come to think of it, is /proc really the best place to put all this stuff?
It would be a pain to put it in /dev and mess with assigning majors and
minors and making sure all the special devices get created and stuff...
Makes me wish Linux had an /hw fs like on IRIX. (I suppose devfs is close,
but I don't personally like the idea of completely replacing /dev with an
automatic filesystem)

Anyways...


Cheers - Tony 'Nicoya' Mantler :)


--
Tony "Nicoya" Mantler - Renaissance Nerd Extraordinaire - [EMAIL PROTECTED]
Winnipeg, Manitoba, Canada   --   http://nicoya.feline.pp.se/





Re: [ANNOUNCE] Darkstar Development Project

2000-09-11 Thread Tony Mantler

At 5:13 PM -0500 9/11/2000, Larry McVoy wrote:
[...]
>> > > > over a 384Kbits/sec link.
>
>That's a 48Kbyte/sec link.  Hardly a "horribly fast network".  In fact,
>the bandwidth to FSMlabs.com and innominate.org seems to be identical,
>I suspect that my link is the bottleneck, not either of theirs.
>In order for the link to be the reason for the performance difference,
>innominate.org would need to be a .9Kbyte/second link.  I kinda doubt
>that seeing as a 128MB cvs checkout took 23 minutes (works out to around
>90Kbyte/sec uncompressed, so cvs must compress it, so I'd guesstimate
>they have around a 30Kbyte/sec link or so).
[...]

"It's the latency, stupid". I wouldn't care to argue whether CVS is slower
than BK or not, but consider that if you had a router between you and the
CVS server that was dropping even 5% of your packets, or even just bumping
the latency by a quarter second (and I've seen routers that do that. evil
things), the timing numbers will jump *significantly*.
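
Just to put a rough number on that (these figures are made up for
illustration, not measured against either server): if a checkout ends up
doing one request/response round trip per file, every quarter second of
added latency gets paid once per file.

    #include <stdio.h>

    int main(void)
    {
        /* Made-up numbers: a checkout touching 5000 files, one
           request/response round trip per file, 0.25s added latency. */
        double files = 5000.0;
        double added_latency = 0.25;              /* seconds per trip */
        double penalty = files * added_latency;   /* 1250 seconds */

        printf("extra time from latency alone: %.0f s (~%.0f min)\n",
               penalty, penalty / 60.0);          /* ~21 minutes */
        return 0;
    }

That's the same order of magnitude as the whole 23-minute checkout quoted
above, without a single byte per second of bandwidth difference.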

The best way to test network performance between the two protocols would be
to get yourself a good ol' fashioned serial cable and connect your test
client and test server to each other via PPP at ~9600bps or so, *then*
do your tests. Equal (though low) latency, equal (definitely low)
bandwidth, equal server and client performance.

Of course, all the CVS work I do is either over local 10/100 ethernet or
through my 6d/1uMbit cablemodem (which actually gets 6d/1u, up here in the
great white north), so what do I care? 8)


Cheers - Tony 'Nicoya' Mantler :)


--
Tony "Nicoya" Mantler - Renaissance Nerd Extraordinaire - [EMAIL PROTECTED]
Winnipeg, Manitoba, Canada   --   http://nicoya.feline.pp.se/


Re: hfs support for blocksize != 512

2000-08-29 Thread Tony Mantler

At 8:09 PM -0500 8/29/2000, Roman Zippel wrote:
>So lets get back to the vfs interface

Yes, let's do that.

Every time I hear someone talking about implementing a filesystem, the
words "you are doomed" are usually to be heard somewhere along the line.

Now, the bits on disk aren't usually the part that kills you - heck, I
repaired an HFS drive with a hex editor once (don't try that at home, kids)
- it's the evil and miserable FS driver APIs that get you. Big ugly
structs, coherency problems with layers upon layers of xyz-cache, locking
nightmares, etc.

So, when my boss dropped a multiple-compressed-backed ramdisk filesystem in
my lap and said "make it use less memory", the words "I am doomed" floated
through my head.

Thankfully for the sake of both myself and my sanity, the platform of
choice was QNX 4.

(Obligatory disclaimer: QNX is an embedded operating system; both its
architecture and target market are considerably different from Linux's)

QNX's filesystem interfaces make it so painfully easy to write a filesystem
that it puts everything else to shame. You can easily write a fully
functioning, race-free, completely coherent filesystem in less than a week;
it's that simple.

When I wanted to make my compressed-backed ramdisk filesystem attach to
multiple points in the namespace with separate and multiple backings on
each point, in only a single instance of the driver, it was as easy as
changing 10 lines of code.

Now, for those of you who don't have convenient access to QNX4 or QNX
Neutrino (which has an even nicer interface, mostly cleaning up on the QNX4
stuff), here's the Disneyfied version of how it all works:

When your filesystem starts up, it tells the FS api "hey you, fs api. if
someone needs something under directory FOO, call me". Your filesystem then
wanders off and sleeps in the background until someone needs it.

Now, let's say you do an 'ls' on the FOO directory. The FS api would tap
your filesystem on the shoulder and ask "Hey you, what's in the FOO
directory?". Your filesystem would reply "BAR and BAZ".

Now, when you do 'cat FOO/BAZ >/dev/null', the FS api taps your filesystem
on the shoulder and says "someone wants to open FOO/BAZ". Your filesystem
replies "Yeah, got it open, here's an FD for you". The FS layer then comes
back again and says "I'll take blocks x, y, and z from the file on this
FD", to which your filesystem replies "Ok, here it is".

Etc etc, you get the point.
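
For the curious, the Neutrino flavour of "hey you, fs api, call me about
FOO" boils down to a resource manager skeleton roughly like this (written
from memory, so treat the exact flags and signatures as approximate; the
real details are in the book mentioned below):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/iofunc.h>
    #include <sys/dispatch.h>

    static resmgr_connect_funcs_t connect_funcs;
    static resmgr_io_funcs_t      io_funcs;
    static iofunc_attr_t          attr;

    int main(void)
    {
        dispatch_t         *dpp;
        dispatch_context_t *ctp;
        resmgr_attr_t       resmgr_attr;

        dpp = dispatch_create();
        memset(&resmgr_attr, 0, sizeof(resmgr_attr));
        resmgr_attr.nparts_max   = 1;
        resmgr_attr.msg_max_size = 2048;

        /* Start with the library's default open/read/write/stat handlers,
           then override only what your filesystem cares about, e.g.
           io_funcs.read = my_read; */
        iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                         _RESMGR_IO_NFUNCS, &io_funcs);
        iofunc_attr_init(&attr, S_IFDIR | 0555, NULL, NULL);

        /* "hey you, fs api. if someone needs something under directory
           FOO, call me" */
        if (resmgr_attach(dpp, &resmgr_attr, "/FOO", _FTYPE_ANY,
                          _RESMGR_FLAG_DIR, &connect_funcs, &io_funcs,
                          &attr) == -1) {
            perror("resmgr_attach");
            return EXIT_FAILURE;
        }

        /* Wander off and sleep in the background until someone needs us. */
        ctp = dispatch_context_alloc(dpp);
        for (;;) {
            ctp = dispatch_block(ctp);
            dispatch_handler(ctp);
        }
    }

Attaching the same handlers at a second path is just one more
resmgr_attach() call, which is more or less why the multiple-attach trick
above only cost a handful of lines.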

So what does it all mean? Basically, if you want hugely complex dentries,
and inodes as big as your head, you can do that. If you don't, more power
to you. It's all entirely contained inside your specific FS code, the FS
api doesn't care one bit. It just asks you for files.

It also means that you can do cute things like use the exact same API for
block/char/random devices as you do for filesystems. No big fuss over
special files, procfs, devfs, or dead chickens: your device driver just
calls up the FS api and says "hey, I'm /dev/dsp" or "hey, I'll be taking
care of /proc/cpuinfo" and it all "just works".

Also, it means that if you want to represent your multiforked filesystem as
files-as-directories, (can-o-worms: open) you can just do it. No changes to
the FS api, no other filesystems break, etc. Everything "just works".


If someone, ANYONE, could bring this kind of painfully simple FS api to
linux, and make it work, not only would I be eternally in their debt, I
would personally send them a box of genuine canadian maple-sugar candies as
a small token of my infinite thanks.

Even failing that, I urge anyone who would want to look at (re)designing
any filesystem API to look at how QNX does it. It's really a beautiful
thing. Further reading can be found in "Getting Started with QNX Neutrino
2: A Guide for Realtime Programmers", ISBN 0968250114.


I should apologise here for this email being particularly fluffy. It's
getting a bit late here, and I don't want to switch my brain on again
before I go to sleep.

For those of you who would rather not have read through this entire email,
here's the condensed version: VFS is inherently a wrong-level API; QNX does
it much better. Flame on. :)


Cheers - Tony 'Nicoya' Mantler :)


--
Tony "Nicoya" Mantler - Renaissance Nerd Extraordinaire - [EMAIL PROTECTED]
Winnipeg, Manitoba, Canada   --   http://nicoya.feline.pp.se/

