On Sat, 17 Feb 2001, Anton Blanchard wrote:

> Hmm. Yeah, I think that may be one of the problems (Intel's card isn't
> supported afaik; if I have to I'll switch to 3com, or hopelessly try to
> implement support). I'm looking for a patch to implement sendfile in
> Samba, as Alan suggested. That seems like a good first step.

As Alan said,
Tom Sightler wrote:

> My testing showed that the lowlatency patches absolutely destroy a system's
> throughput under heavy disk IO.

I'm surprised - I've been keeping an eye on that.

Here's the result of a bunch of back-to-back `dbench 12' runs
on UP, alternating with and without LL:

With:
On Wed, 14 Feb 2001, Tom Sightler wrote:

Quoting "Gord R. Lamb" <[EMAIL PROTECTED]>:
> On Wed, 14 Feb 2001, Jeremy Jackson wrote:
> > "Gord R. Lamb" wrote:
> > > in etherchannel bond, running
> > > linux-2.4.1+smptimers+zero-copy+lowlatency)

Not related to network, but why would you have lowlatency patches on this box?
My testing showed that the lowlatency patches absolutely destroy a system's
throughput under heavy disk IO.
On Wed, 14 Feb 2001, Jeremy Jackson wrote:

"Gord R. Lamb" wrote:
> Hi everyone,
>
> I'm trying to optimize a box for samba file serving (just contiguous block
> I/O for the moment), and I've now got both CPUs maxxed out with system
> load.
>
> (For background info, the system is a 2x933 Intel, 1gb system memory,
> 133mhz FSB, 1gbit 64bit/66mhz FC card, 2x 1gbit 64/66 ... in etherchannel
> bond, running linux-2.4.1+smptimers+zero-copy+lowlatency)
>
> When reading the profiler results, the largest consuming kernel (calls?)
> are file_read_actor and csum_partial_copy_generic, by a longshot (about
> 70% and 20% respectively).
>
> Presumably, the csum_partial_copy_generic should be eliminated (or at
> least reduced) by David Miller's zerocopy patch,
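For context on why csum_partial_copy_generic dominates that profile: on the transmit path the kernel folds the Internet checksum calculation into the data copy, so the buffer is touched once instead of twice; checksum-offloading NICs plus the zerocopy patch remove even that single pass. Below is a toy sketch of a combined copy-and-checksum loop (not the kernel's implementation, which is hand-tuned assembly; `copy_and_csum` is a made-up name, and the length is assumed even for brevity):

```c
/* Toy illustration of copying data while accumulating the RFC 1071
 * ones'-complement checksum over it, the idea behind the kernel's
 * csum_partial_copy_generic. Length is assumed even here. */
#include <stdint.h>
#include <stddef.h>

static uint16_t copy_and_csum(uint8_t *dst, const uint8_t *src, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2) {
        /* Accumulate the data as big-endian 16-bit words... */
        sum += (uint32_t)((src[i] << 8) | src[i + 1]);
        /* ...while performing the copy in the same pass. */
        dst[i] = src[i];
        dst[i + 1] = src[i + 1];
    }
    /* Fold the carries back into 16 bits and complement (RFC 1071). */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}
```

The combined loop is why eliminating the copy (zerocopy + hardware checksumming) removes the checksum cost too, rather than merely moving it.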