Hi!
And no, I don't actually think that sendfile() is all that hot. It was
_very_ easy to implement, and can be considered a 5-minute hack to give
a feature that fit very well in the MM architecture, and that the Apache
folks had already been using on other architectures.
The current
Hi Dave,
How are the VB withdrawal symptoms going? :)
> Anton, why are you always returning -1 (which means error for the
> smb_message[] array functions) when using sendfile?
Returning -1 tells the higher level code that we actually sent the bytes
out ourselves and not to bother doing it.
Anton Blanchard writes:
diff -u -u -r1.257 reply.c
--- source/smbd/reply.c 2001/01/24 19:34:53 1.257
+++ source/smbd/reply.c 2001/01/26 05:38:53
@@ -2383,6 +2391,51 @@
...
+while(nread) {
+int nwritten;
+nwritten = sendfile(smbd_server_fd(),
> Do you have it at a URL?
The patch is small so I have attached it to this email. It should apply
to the samba CVS tree. Remember this is still a hack and I need to add
code to ensure the file is not truncated and we sendfile() less than we
promised. (After talking to tridge and davem, this
On Thu, 25 Jan 2001, bert hubert wrote:
> On Thu, Jan 25, 2001 at 09:06:33AM +, James Sutherland wrote:
>
> > performance than it would for an httpd, because of the long-lived
> > sessions, but rewriting it as a state machine (no forking, threads or
> > other crap, just use non-blocking
On Thu, 25 Jan 2001, Anton Blanchard wrote:
> I have patches for samba to do sendfile. Making a tux module does not make
> sense to me, especially since we are nowhere near the limits of samba in
> userspace. Once userspace samba can run no faster, then we should think
> about other options.
Do
> No plans for samba to use sendfile? Even better make it a tux-like module?
> (that would enable Netware-Linux like performance with the standard
> kernel... would be cool afterall ;)
I have patches for samba to do sendfile. Making a tux module does not make
sense to me, especially since we
On Thu, Jan 25, 2001 at 09:06:33AM +, James Sutherland wrote:
> performance than it would for an httpd, because of the long-lived
> sessions, but rewriting it as a state machine (no forking, threads or
> other crap, just use non-blocking I/O) would probably make much more
> sense.
From a
On Thu, 25 Jan 2001, Alan Cox wrote:
> > I think, that is not what we need. Once Ingo wrote, that since HTTP
> > serving can also be viewed as a kind of fileserving, it should be
> possible to create a TUX like module for the same framework, that serves
> > using the SMB protocol instead of
On Wed, 24 Jan 2001, Sasi Peter wrote:
> > AIUI, Jeff Merkey was working on loading "userspace" apps into the
> kernel
> > to tackle this sort of problem generically. I don't know if he's
> tried it
> > with Samba - the forking would probably be a problem...
>
> I think, that is not what we
> AIUI, Jeff Merkey was working on loading "userspace" apps into the
kernel
> to tackle this sort of problem generically. I don't know if he's
tried it
> with Samba - the forking would probably be a problem...
I think, that is not what we need. Once Ingo wrote, that since HTTP
serving can
On Wed, 24 Jan 2001, Sasi Peter wrote:
> On 14 Jan 2001, Linus Torvalds wrote:
>
> > The only obvious use for it is file serving, and as high-performance
> > file serving tends to end up as a kernel module in the end anyway (the
> > only hold-out is samba, and that's been discussed too),
I think, that is not what we need. Once Ingo wrote, that since HTTP
serving can also be viewed as a kind of fileserving, it should be
possible to create a TUX like module for the same framework, that serves
using the SMB protocol instead of HTTP...
Kernel SMB is basically not a sane idea.
On 14 Jan 2001, Linus Torvalds wrote:
> The only obvious use for it is file serving, and as high-performance
> file serving tends to end up as a kernel module in the end anyway (the
> only hold-out is samba, and that's been discussed too), "sendfile()"
> really is more a proof of concept than
On Tue, 23 Jan 2001, Helge Hafting wrote:
> James Sutherland wrote:
> >
> > On Mon, 22 Jan 2001, Helge Hafting wrote:
> >
> > > And when the next user wants the same webpage/file you read it from
> > > the RAID again? Seems to me you lose the benefit of caching stuff in
> > > memory with this
On Mon, 22 Jan 2001 12:01:23 -0800 (PST), David Lang <[EMAIL PROTECTED]> wrote:
> how about always_defragment (or whatever the option is now called) so that
> your routing box always reassembles packets and then fragments them to the
> correct size for the next segment? wouldn't this do the job?
> From: Val Henson <[EMAIL PROTECTED]>
> To: David Lang <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED], Linus Torvalds <[EMAIL PROTECTED]>
> Subject: Re: Is sendfile all that sexy?
>
> On Mon, Jan 22, 2001 at 10:27:58AM -0800, David Lang wrote:
On Mon, Jan 22, 2001 at 10:27:58AM -0800, David Lang wrote:
> On Mon, 22 Jan 2001, Val Henson wrote:
>
> > There is a use for an optimized socket->socket transfer - proxying
> > high speed TCP connections. It would be exciting if the zerocopy
> > networking framework led to a decent
On Mon, 22 Jan 2001, Val Henson wrote:
> On Wed, Jan 17, 2001 at 11:32:35AM -0800, Linus Torvalds wrote:
> >
> > However, for socket->socket, we would not have such an advantage. A
> > socket->socket sendfile() would not avoid any copies the way the
> > networking is done today. That _may_
On Mon, 22 Jan 2001, Val Henson wrote:
> There is a use for an optimized socket->socket transfer - proxying
> high speed TCP connections. It would be exciting if the zerocopy
> networking framework led to a decent socket->socket transfer.
if you are proxying connections you should really be
On Wed, Jan 17, 2001 at 11:32:35AM -0800, Linus Torvalds wrote:
> In article <[EMAIL PROTECTED]>,
> Ben Mansell <[EMAIL PROTECTED]> wrote:
> >
> >The current sendfile() has the limitation that it can't read data from
> >a socket. Would it be another 5-minute hack to remove this limitation, so
>
On Mon, 22 Jan 2001, Helge Hafting wrote:
> And when the next user wants the same webpage/file you read it from
> the RAID again? Seems to me you lose the benefit of caching stuff in
> memory with this scheme. Sure - the RAID controller might have some
> cache, but it is usually smaller than
James Sutherland wrote:
>
> On Sat, 20 Jan 2001, Linus Torvalds wrote:
>
> >
> >
> > On Sat, 20 Jan 2001, Roman Zippel wrote:
> > >
> > > On Sat, 20 Jan 2001, Linus Torvalds wrote:
> > >
> > > > But point-to-point also means that you don't get any real advantage from
> > > > doing things like
On Sat, 20 Jan 2001, Linus Torvalds wrote:
> There's no no-no here: you can even create the "struct page"s on demand,
> and create a dummy local zone that contains them that they all point back
> to. It should be trivial - nobody else cares about those pages or that
> zone anyway.
>
> This is
FYI -
Another use sendfile(2) might be used for. Suppose you were to generate
large amounts of data -- maybe kernel profiling data, audit data, whatever,
in the kernel.
You want to pull that data out as fast as possible and write it to
a disk or network socket. Normally, I
Hello!
> "struct page" tricks, some macros etc WILL NOT WORK. In particular, we do
> not currently have a good "page_to_bus/phys()" function. That means that
> anybody trying to do DMA to this page is currently screwed, simply because
> he has no good way of getting the physical address.
We
On Sun, 21 Jan 2001, James Sutherland wrote:
> For many applications, yes - but think about a file server for a
> moment. 99% of the data read from the RAID (or whatever) is really
> aimed at the appropriate NIC - going via main memory would just slow
> things down.
patently wrong. Compare the
On Sat, 20 Jan 2001, Linus Torvalds wrote:
>
>
> On Sat, 20 Jan 2001, Roman Zippel wrote:
> >
> > On Sat, 20 Jan 2001, Linus Torvalds wrote:
> >
> > > But point-to-point also means that you don't get any real advantage from
> > > doing things like device-to-device DMA. Because the links are
> >
Hi,
On Sat, 20 Jan 2001, Linus Torvalds wrote:
> But think like a good hardware designer.
>
> In 99% of all cases, where do you want the results of a read to end up?
> Where do you want the contents of a write to come from?
>
> Right. Memory.
>
> Now, optimize for the common case. Make the
Hi,
On Sat, 20 Jan 2001, Linus Torvalds wrote:
> Now, there are things to look out for: when you do these kinds of dummy
> "struct page" tricks, some macros etc WILL NOT WORK. In particular, we do
> not currently have a good "page_to_bus/phys()" function. That means that
> anybody trying to do
> I'm _not_ seeing the point for a high-performance link to have a generic
> packet buffer.
>
> Linus
Well suppose your RAID controller can take over control of disks
distributed throughout your I/O subsystem. If you assume the bandwidth of
the I/O subsystem is not the
On Sat, 20 Jan 2001, Roman Zippel wrote:
>
> On Sat, 20 Jan 2001, Linus Torvalds wrote:
>
> > But point-to-point also means that you don't get any real advantage from
> > doing things like device-to-device DMA. Because the links are
> > asynchronous, you need buffers in between them anyway,
On Sat, 20 Jan 2001, Roman Zippel wrote:
>
> AFAIK as long as that dummy page struct is only used in the page cache,
> that should work, but you get new problems as soon as you map the page
> also into a user process (grep for CONFIG_DISCONTIGMEM under
> include/asm-mips64 to see the needed
Hi,
On Sat, 20 Jan 2001, Linus Torvalds wrote:
> But point-to-point also means that you don't get any real advantage from
> doing things like device-to-device DMA. Because the links are
> asynchronous, you need buffers in between them anyway, and there is no
> bandwidth advantage of not going
Hi,
On Sat, 20 Jan 2001, Linus Torvalds wrote:
> There's no no-no here: you can even create the "struct page"s on demand,
> and create a dummy local zone that contains them that they all point back
> to. It should be trivial - nobody else cares about those pages or that
> zone anyway.
AFAIK as
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Today, Linus Torvalds ([EMAIL PROTECTED]) wrote:
> Just wait. My crystal ball is infallible.
One of these days, that line will be your downfall :-)
*grins*
Mo.
- --
Mo McKinlay
[EMAIL PROTECTED]
-
On 20 Jan 2001, Kai Henningsen wrote:
>
> Then again, I could easily see those I/O devices go the general embedded
> route, which in a decade or two could well mean they run some sort of
> embedded Linux on the controller.
>
> Which would make some features rather easy to implement.
I'm
[EMAIL PROTECTED] (Linus Torvalds) wrote on 18.01.01 in
<[EMAIL PROTECTED]>:
> (Short and sweet: most high-performance people want point-to-point serial
> line IO with no hops, because it's a known art to make that go fast. No
> general-case routing in hardware - if you want to go as fast as
On Sat, 20 Jan 2001 [EMAIL PROTECTED] wrote:
> > Actually, as long as there is no "struct page" there _are_ problems.
> > This is why the NUMA stuff was brought up - it would require that there
> > be a mem_map for the PCI pages.. (to do ref-counting etc).
>
> I see.
>
> Is this strong
Hello!
> Actually, as long as there is no "struct page" there _are_ problems.
> This is why the NUMA stuff was brought up - it would require that there
> be a mem_map for the PCI pages.. (to do ref-counting etc).
I see.
Is this strong "no-no-no"? What is obstacle to allow "struct page"
to sit
In article <[EMAIL PROTECTED]>,
<[EMAIL PROTECTED]> wrote:
>Hello!
>
>> It's about direct i/o from/to pages,
>
>Yes. Formally, there are no problems to send to tcp directly from io space.
Actually, as long as there is no "struct page" there _are_ problems.
This is why the NUMA stuff was brought
Hello!
> It's about direct i/o from/to pages,
Yes. Formally, there is no problem sending to tcp directly from io space.
But could someone explain one thing to me: does bus-mastering
from io really work? And if it does, is it fast enough?
At least, looking at my book on pci, I do not understand
Linus Torvalds wrote:
> > I wrote a driver for a zoran-chipset frame-grabber card. The "natural"
> > way to save a video stream was exactly the way it came out of the
> > card. And the card was structured that you could put on an "mpeg
> > decoder" (or encoder) chip, and you could DMA the stream
On Fri, Jan 19, 2001 at 11:58:03AM +0100, Rogier Wolff wrote:
> Now if we design the NUMA support correctly, just filling in "disk has
> a seek-time of 10ms, and 20Mb per second throughput when accessed
> linearly" NUMA may on it's own "tune" the swapper to do the right
> thing. And once
Linus Torvalds wrote:
> I do not know of _any_ disk controllers that let you map the controller
> buffers over PCI. Which means that with current hardware, you have to
> assume that the disk is the initiator of the PCI-PCI DMA requests. Agreed?
I personally don't have driver-writing experience
On Fri, 19 Jan 2001, Roman Zippel wrote:
> Hi,
>
> On Thu, 18 Jan 2001, Linus Torvalds wrote:
>
> > > I agree, it's device dependent, but such hardware exists.
> >
> > Show me any practical case where the hardware actually exists.
>
> http://www.augan.com
>
> > I do not know of _any_ disk
Hi,
On Thu, 18 Jan 2001, Linus Torvalds wrote:
> > I agree, it's device dependent, but such hardware exists.
>
> Show me any practical case where the hardware actually exists.
http://www.augan.com
> I do not know of _any_ disk controllers that let you map the controller
> buffers over PCI.
> Which in turn implies that the non-disk target hardware has to be able to
> have a PCI-mapped memory buffer for the source or the destination, AND
> they have to be able to cope with the fact that the data you get off the
> disk will have to be the raw data at 512-byte granularity.
And that
In article <[EMAIL PROTECTED]>,
Russell Leighton <[EMAIL PROTECTED]> wrote:
>
>"copy this fd to that one, and optimize that if you can"
>
>... isn't this Larry M's "splice" (http://www.bitmover.com/lm/papers/splice.ps)?
We talked extensively about "splice()" with Larry. It was one of the