Phillip Susi wrote:
Mark Lord wrote:
Phillip Susi wrote:
Sounds like this is a serious bug in the WD firmware.
For personal systems, yes. For servers, probably not a bug.
Disabling readahead means faster execution of queued commands,
since it doesn't have to "linger" and do unwanted read-ahead.
So this bug is a "feature" for random access servers.
Paa Paa wrote:
Q: What conclusion can I make on "hdparm -t" results or can I make
any conclusions? Do I really have lower performance with NCQ or not?
If I do, is this because of my HD or because of the kernel?
What IO scheduler are you using? If AS or CFQ, could you try with
deadline?
I was using CFQ. I now
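For reference, the active I/O scheduler can be checked and switched at
runtime through sysfs (a minimal sketch, assuming the drive shows up as
sda; run as root):

  # Show the available schedulers; the active one is in brackets
  cat /sys/block/sda/queue/scheduler
  # -> noop anticipatory deadline [cfq]

  # Switch to deadline for the comparison
  echo deadline > /sys/block/sda/queue/scheduler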
On Thu, Apr 05, 2007 at 12:11:57PM -0400, Mark Lord wrote:
> For personal systems, yes. For servers, probably not a bug.
>
> Disabling readahead means faster execution of queued commands,
> since it doesn't have to "linger" and do unwanted read-ahead.
> So this bug is a "feature" for random access servers.
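The effect Mark describes can be approximated on most drives by turning
off the drive's own read-lookahead with hdparm (a sketch, assuming the
disk is /dev/sda; this toggles the ATA read-lookahead feature directly,
rather than relying on the NCQ side effect):

  hdparm -A0 /dev/sda   # disable the drive's internal read-ahead
  hdparm -t /dev/sda    # sequential throughput should now drop
  hdparm -A1 /dev/sda   # restore it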
Mark Lord wrote:
This is mostly a problem with the WD Raptor drive, and some other WD
drives.
I have not yet encountered/noticed the problem with other brands.
Sounds like this is a serious bug in the WD firmware.
For personal systems, yes. For servers, probably not a bug.
In my case the
Phillip Susi wrote:
Mark Lord wrote:
This is mostly a problem with the WD Raptor drive, and some other WD
drives.
I have not yet encountered/noticed the problem with other brands.
Sounds like this is a serious bug in the WD firmware.
For personal systems, yes. For servers, probably not a bug.
Mark Lord wrote:
The drive firmware readahead is inherently *way* more effective than
other forms, and without it, sequential read performance really suffers,
regardless of how software tries to compensate.
Why? As the platter spins under the head, the drive can either read or
ignore the
Phillip Susi wrote:
Mark Lord wrote:
But WD drives, in particular the Raptor series, have a firmware "feature"
that disables "drive readahead" whenever NCQ is in use.
Why is this an issue? Shouldn't the kernel be sending down its own
readahead requests to keep the disk busy?
The drive
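The kernel-side readahead Phillip mentions is a separate knob from the
drive's internal one, and can be inspected and tuned per device (a
minimal sketch, assuming /dev/sda):

  blockdev --getra /dev/sda              # current readahead, in 512-byte sectors
  blockdev --setra 512 /dev/sda          # request 256 KB of kernel readahead
  cat /sys/block/sda/queue/read_ahead_kb # the same setting, in kilobytes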
I'm using Linux 2.6.20.4. I noticed that I get lower SATA hard drive
throughput with 2.6.20.4 than with 2.6.19. The reason was that 2.6.20
enables NCQ by default (queue_depth = 31/32 instead of 0/32). Transfer rate
was measured using "hdparm -t":
With NCQ (queue_depth == 31): 50MB/s.
Without
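NCQ can be switched off per device to reproduce the comparison, since
libata exposes the queue depth through sysfs (a sketch, assuming the
disk is sda; run as root):

  cat /sys/block/sda/device/queue_depth       # 31 while NCQ is active
  echo 1 > /sys/block/sda/device/queue_depth  # depth 1 effectively disables NCQ
  hdparm -t /dev/sda                          # repeat the sequential read test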