Holger Kiehl wrote:
top - 08:39:11 up 2:03, 2 users, load average: 23.01, 21.48, 15.64
Tasks: 102 total, 2 running, 100 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0% us, 17.7% sy, 0.0% ni, 0.0% id, 78.9% wa, 0.2% hi, 3.1% si
Mem: 8124184k total, 8093068k used, 31116k free,
On Wed, 31 Aug 2005, Holger Kiehl wrote:
On Thu, 1 Sep 2005, Nick Piggin wrote:
Holger Kiehl wrote:
meminfo.dump:
MemTotal: 8124172 kB
MemFree: 23564 kB
Buffers: 7825944 kB
Cached: 19216 kB
SwapCached: 0 kB
Active: 25708 kB
On Tue, Aug 30, 2005 at 08:06:21PM +, Holger Kiehl wrote:
How does one determine the PCI-X bus speed?
Usually only the card (in your case the Symbios SCSI controller) can
tell. If it does, it'll be most likely in 'dmesg'.
There is nothing in dmesg:
Fusion MPT base driver 3.01.20
On Wed, Aug 31 2005, Vojtech Pavlik wrote:
On Tue, Aug 30, 2005 at 08:06:21PM +, Holger Kiehl wrote:
How does one determine the PCI-X bus speed?
Usually only the card (in your case the Symbios SCSI controller) can
tell. If it does, it'll be most likely in 'dmesg'.
There is nothing in dmesg:
On Wed, Aug 31 2005, Holger Kiehl wrote:
Ok, I did run the following dd command in different combinations:
dd if=/dev/zero of=/dev/sd?1 bs=4k count=500
I think a bs of 4k is way too small and will cause huge CPU overhead.
Can you try with something like 4M? Also, you can use
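Jens's suggestion amounts to rerunning the write test with a 4 MB block size. A sketch follows; the target here is a scratch file placeholder, the thread itself wrote to /dev/sdc1 ... /dev/sdf1 in parallel:

```shell
# Write test with bs=4M instead of bs=4k to cut per-request CPU overhead.
# "targets" is a placeholder; substitute the real partitions, e.g.
# "/dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1".
targets="/tmp/dd-bs-test"
for t in $targets; do
    dd if=/dev/zero of="$t" bs=4M count=4 conv=fsync 2>/dev/null &
done
wait
```

With real disks, a larger count (the thread used count=500 at bs=4k) gives a long enough run for vmstat to show steady-state throughput.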
Holger Kiehl wrote:
3236497 total 1.4547
2507913 default_idle 52248.1875
158752 shrink_zone 43.3275
121584 copy_user_generic_c 3199.5789
34271 __wake_up_bit
On Wed, 31 Aug 2005, Vojtech Pavlik wrote:
On Tue, Aug 30, 2005 at 08:06:21PM +, Holger Kiehl wrote:
How does one determine the PCI-X bus speed?
Usually only the card (in your case the Symbios SCSI controller) can
tell. If it does, it'll be most likely in 'dmesg'.
There is nothing in dmesg:
On Wed, 31 Aug 2005, Jens Axboe wrote:
Nothing sticks out here either. There's plenty of idle time. It smells
like a driver issue. Can you try the same dd test, but read from the
drives instead? Use a bigger blocksize here, 128 or 256k.
I used the following command reading from all 8 disks in parallel:
dd if=/dev/sd?1 of=/dev/null bs=256k count=78125
From linux-kernel mailing list.
Don't do this. BLKDEV_MIN_RQ sets the size of the mempool reserved
requests and will only get slightly used in low memory conditions, so
most memory will probably be wasted.
Change /sys/block/xxx/queue/nr_requests
Tom Callahan
TESSCO Technologies
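Tom's sysfs knob can be inspected and raised at runtime without recompiling; a minimal sketch, where "sda" is only an example device name:

```shell
# Read and raise the per-queue request count for one block device.
# The 2.6 default is 128; "sda" is an example name.
dev=sda
q="/sys/block/$dev/queue/nr_requests"
if [ -r "$q" ]; then
    cat "$q"            # current value
fi
if [ -w "$q" ]; then
    echo 512 > "$q"     # needs root; takes effect immediately
fi
```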
I'll try this approach as well. On 2.4.X kernels, I had to change nr_requests to achieve performance, but I noticed it didn't seem to work as well on 2.6.X. I'll retry the change with nr_requests on 2.6.X.
Thanks
Jeff
Tom Callahan wrote:
From linux-kernel mailing list.
Don't do this.
On Wed, Aug 31 2005, jmerkey wrote:
I have seen an 80MB/sec limitation in the kernel unless this value is changed in the SCSI I/O layer for 3Ware and other controllers during testing of 2.6.X series kernels.
Change these values in include/linux/blkdev.h and performance goes from
512 is not enough. It has to be larger. I just tried 512 and it still
limits the data rates.
Jeff
Jens Axboe wrote:
On Wed, Aug 31 2005, jmerkey wrote:
I have seen an 80MB/sec limitation in the kernel unless this value is changed in the SCSI I/O layer for 3Ware and other controllers
On Wed, Aug 31 2005, jmerkey wrote:
512 is not enough. It has to be larger. I just tried 512 and it still
limits the data rates.
Please don't top post.
512 wasn't the point, setting it properly is the point. If you need more
than 512, go ahead. This isn't Holger's problem, though, the
On Wed, Aug 31 2005, Holger Kiehl wrote:
# ./oread /dev/sdX
and it will read 128k chunks direct from that device. Run on the same
drives as above, reply with the vmstat info again.
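oread is Jens's own test helper, not part of the kernel tree; its behaviour (sequential 128k reads with O_DIRECT) can be approximated with GNU dd's iflag=direct. A sketch, using a scratch file as a stand-in for /dev/sdX:

```shell
# Approximate oread: 128k direct reads, bypassing the page cache.
# A scratch file stands in for the real device here.
src=./oread-scratch.img
dd if=/dev/zero of="$src" bs=128k count=8 2>/dev/null
dd if="$src" of=/dev/null bs=128k iflag=direct 2>/dev/null ||
    dd if="$src" of=/dev/null bs=128k 2>/dev/null  # fallback if the fs rejects O_DIRECT
rm -f "$src"
```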
Using kernel 2.6.12.5 again, here the results:
[snip]
Ok, reads as expected, like the buffered io but using
Holger Kiehl wrote:
On Wed, 31 Aug 2005, Jens Axboe wrote:
On Wed, Aug 31 2005, Holger Kiehl wrote:
[]
I used the following command reading from all 8 disks in parallel:
dd if=/dev/sd?1 of=/dev/null bs=256k count=78125
Here vmstat output (I just cut something out in the middle):
join the party. ;)
8 400GB SATA disks on the same Marvell 8-port PCIX-133 card. P4 CPU.
Supermicro SCT board.
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [multipath] [raid6] [raid10] [faulty]
md0 : active raid0 sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
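For reference, an md0 like the one in the /proc/mdstat output above could have been created with something along these lines (hypothetical reconstruction, and destructive to the listed disks, hence the guard):

```shell
# Hypothetical re-creation of the 8-disk raid0 shown above. This wipes
# the member devices, so it only runs when mdadm and the disks exist.
if command -v mdadm >/dev/null 2>&1 && [ -b /dev/sda ]; then
    mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/sd[a-h]
    cat /proc/mdstat
fi
```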
On Thu, 1 Sep 2005, Nick Piggin wrote:
Holger Kiehl wrote:
meminfo.dump:
MemTotal: 8124172 kB
MemFree: 23564 kB
Buffers: 7825944 kB
Cached: 19216 kB
SwapCached: 0 kB
Active: 25708 kB
Inactive: 7835548 kB
HighTotal:
On Mon, 29 Aug 2005, Mark Hahn wrote:
The U320 SCSI controller has a 64 bit PCI-X bus for itself; there is no other device on that bus. Unfortunately I was unable to determine at what speed it is running. Here is the output from lspci -vv:
...
Status: Bus=2 Dev=4 Func=0 64bit+
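The PCI-X capability block that lspci decodes carries the width and frequency bits alongside the Status line above; a quick filter, assuming pciutils is installed (the grep pattern is a heuristic, and the exact wording depends on the controller):

```shell
# Pull out PCI-X capability lines that report bus width or frequency.
lspci -vv 2>/dev/null | grep -iE 'pci-x|64bit|133MHz' || true
```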
On Mon, 29 Aug 2005, Al Boldi wrote:
Holger Kiehl wrote:
Why do I only get 247 MB/s for writing and 227 MB/s for reading (from the
bonnie++ results) for a Raid0 over 8 disks? I was expecting to get nearly
three times those numbers if you take the numbers from the individual
disks.
What limit
8 SCSI U320 (15000 rpm) disks where 4 disks (sdc, sdd, sde, sdf)
figure each is worth, say, 60 MB/s, so you'll peak (theoretically) at
240 MB/s per channel.
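Mark's back-of-the-envelope figure works out as follows; two such channels would then give roughly 480 MB/s in theory, well above the roughly 247 MB/s observed:

```shell
# Quoted estimate: 4 disks per U320 channel at ~60 MB/s sequential each.
per_disk=60
disks_per_channel=4
echo "$((per_disk * disks_per_channel)) MB/s per channel"   # prints "240 MB/s per channel"
```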
The U320 SCSI controller has a 64 bit PCI-X bus for itself; there is no other device on that bus. Unfortunately I was unable to determine at what speed it is running.
Holger == Holger Kiehl [EMAIL PROTECTED] writes:
Holger> Hello, I have a system with the following setup:
(4-way CPUs, 8 spindles on two controllers)
Try using XFS.
See http://scalability.gelato.org/DiskScalability_2fResults --- ext3
is single threaded and tends not to get the full