Re: [git patches] IDE update

2007-05-09 Thread Jeff Garzik

Bartlomiej Zolnierkiewicz wrote:

Bartlomiej Zolnierkiewicz (11):
  ide: fix UDMA/MWDMA/SWDMA masks (v3)
  ide: rework the code for selecting the best DMA transfer mode (v3)
  ide: add ide_tune_dma() helper
  ide: make /proc/ide/ optional
  ide: split off ioctl handling from IDE settings (v2)
  ide: move IDE settings handling to ide-proc.c
  ide: cable detection fixes (take 2)
  ide: add initializing argument to ide_register_hw()
  ide: add ide_proc_register_port()
  ide: legacy PCI bus order probing fixes
  ide: fix PIO setup on resume for ATAPI devices


[ sorry but the patch is too big to be included ]



The limit was raised to 400K IIRC.

Jeff


-
To unsubscribe from this list: send the line "unsubscribe linux-ide" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [git patches] IDE update

2007-05-09 Thread Jeff Garzik

Bartlomiej Zolnierkiewicz wrote:

* the code for selecting and programming the best DMA transfer mode
  has been reworked to be cleaner, more generic and more libata-like,
  (~500 LOCs gone and this change allows the change described below)



Bartlomiej Zolnierkiewicz (11):
  ide: rework the code for selecting the best DMA transfer mode (v3)



Has this seen testing/exposure in -mm tree?

Jeff




Re: [git patches] IDE update

2007-05-09 Thread Jeff Garzik

Andrew Morton wrote:

Jeff Garzik [EMAIL PROTECTED] wrote:

Has this seen testing/exposure in -mm tree?



argh.  If this was in a file called
ide-rework-the-code-for-selecting-the-best-DMA-transfer-mode.patch then it
would be so easy.

logs into hera

greps

ah, it's hidden in ide-max-dma-mode-v3.patch.

akpm:/usr/src/25 grep ide-max-dma-mode-v3 pc/*series
pc/2.6.21-mm1-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-mm2-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc2-mm2-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc3-mm1-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc3-mm2-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc4-mm1-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc5-mm3-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc5-mm4-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc6-mm1-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc7-mm1-series:ide-ide-max-dma-mode-v3.patch
pc/2.6.21-rc7-mm2-series:ide-ide-max-dma-mode-v3.patch

No probs.


Cool beans.  I'm happy then.

Jeff





Re: [git patches] IDE update

2007-05-09 Thread Jeff Garzik

Bartlomiej Zolnierkiewicz wrote:

On Thursday 10 May 2007, Jeff Garzik wrote:

The limit was raised to 400K IIRC.



That's (good) news to me, here goes the actual 150K patch:


Thanks.  I did in fact receive copies from vger, so it went through.

Jeff





Re: [git patches] IDE update

2007-05-09 Thread David Miller
From: Jeff Garzik [EMAIL PROTECTED]
Date: Wed, 09 May 2007 18:46:16 -0400

 Bartlomiej Zolnierkiewicz wrote:
  Bartlomiej Zolnierkiewicz (11):
ide: fix UDMA/MWDMA/SWDMA masks (v3)
ide: rework the code for selecting the best DMA transfer mode (v3)
ide: add ide_tune_dma() helper
ide: make /proc/ide/ optional
ide: split off ioctl handling from IDE settings (v2)
ide: move IDE settings handling to ide-proc.c
ide: cable detection fixes (take 2)
ide: add initializing argument to ide_register_hw()
ide: add ide_proc_register_port()
ide: legacy PCI bus order probing fixes
ide: fix PIO setup on resume for ATAPI devices
  
  
  [ sorry but the patch is too big to be included ]
 
 
 The limit was raised to 400K IIRC.

On lkml but not linux-ide, I'll go and bump it up there too.

There, done...


Re: [git patches] ide update

2005-08-19 Thread Bartlomiej Zolnierkiewicz
On 8/19/05, Alan Cox [EMAIL PROTECTED] wrote:
 On Gwe, 2005-08-19 at 11:02 +0200, Bartlomiej Zolnierkiewicz wrote:
  lkml.org/lkml/2005/1/27/20
 
  AFAIK the CS5535 driver was never ported to 2.6.x.  Somebody needs to
  port it to the 2.6.x kernel, clean it up to match kernel coding standards, and test it.
 
 That was done some time ago and posted to various people.

It's good news that the CS5535 driver was ported.

BTW, posting to various people is really not the best method of submitting
kernel patches (in this case, linux-ide@vger.kernel.org + cc: me is).

Bartlomiej


Re: [git patches] ide update

2005-08-18 Thread Nish Aravamudan
On 8/18/05, Linus Torvalds [EMAIL PROTECTED] wrote:
 
 
 On Thu, 18 Aug 2005, Bartlomiej Zolnierkiewicz wrote:
 
  3 obvious fixes + support for 2 new controllers
  (just new PCI IDs).
 
 Btw, things like this:
 
 +#define IDEFLOPPY_TICKS_DELAY  HZ/20   /* default delay for ZIP 100 (50ms) */
 
 are just bugs waiting to happen.

snip

Shouldn't this be msecs_to_jiffies(50) to avoid the rounding issues I
mentioned on LKML a bit ago?

Thanks,
Nish


Re: [git patches] ide update

2005-08-18 Thread Mark Lord

Linus Torvalds wrote:


Btw, things like this:

+#define IDEFLOPPY_TICKS_DELAY  HZ/20   /* default delay for ZIP 100 (50ms) */

are just bugs waiting to happen.


Needs parentheses: ((HZ)/20)

Or one could just use the msecs_to_jiffies() macro.

Cheers


Re: [git patches] IDE update

2005-07-11 Thread Alan Cox
On Maw, 2005-07-05 at 20:14, Jens Axboe wrote:
 IDE still has much lower overhead per command than your average SCSI
 hardware. SATA with FIS even improves on this, definitely a good thing!

But SCSI overlaps them while in PATA they are dead time. That's why PATA
is so demanding of large I/O block sizes.



Re: [git patches] IDE update

2005-07-08 Thread Ingo Molnar

* Jens Axboe [EMAIL PROTECTED] wrote:

 But! I used hdparm -t solely, 2.6 was always ~5% faster than 2.4. But 
 using -Tt slowed down the hd speed by about 30%. So it looks like some 
 scheduler interaction, perhaps the memory timing loops gets it marked 
 as batch or something?

to check whether that could be the case, could you try:

nice -n -20 hdparm -t /dev/hdc

does that produce different results?

Ingo


Re: [git patches] IDE update

2005-07-08 Thread Erik Slagter
On Fri, 2005-07-08 at 10:06 +1000, Grant Coady wrote:

 I've not been able to get dual channel I/O speed faster than single 
 interface speed, either as 'md' RAID0 or simultaneous reading or 
 writing done the other day:
 
 Time to write or read 500MB file:
 
 summary         2.4.31-hf1      2.6.12.2
 boxen \ time     w      r       w      r
 ----------------------------------------
 ...
 peetoo          33     20      26.5   22
 (simultaneous   57     37.5    52     38.5)
 
 MB/s            2.4.31-hf1      2.6.12.2
                  w      r       w      r
 single          15     25      19     23
 dual            17.5   27      19     26
 
 These timings show very little happening in parallel, is that normal?

me too ;-)




Re: [git patches] IDE update

2005-07-08 Thread Jens Axboe
On Fri, Jul 08 2005, Ingo Molnar wrote:
 
 * Jens Axboe [EMAIL PROTECTED] wrote:
 
  But! I used hdparm -t solely, 2.6 was always ~5% faster than 2.4. But 
  using -Tt slowed down the hd speed by about 30%. So it looks like some 
  scheduler interaction, perhaps the memory timing loops gets it marked 
  as batch or something?
 
 to check whether that could be the case, could you try:
 
   nice -n -20 hdparm -t /dev/hdc
 
 does that produce different results?

Same result, see my next mail, it turned out to be a read-ahead bug.

-- 
Jens Axboe



Re: [git patches] IDE update

2005-07-07 Thread Bartlomiej Zolnierkiewicz
On 7/6/05, Bill Davidsen [EMAIL PROTECTED] wrote:
 Ondrej Zary wrote:
  Jens Axboe wrote:
 
  On Tue, Jul 05 2005, Ondrej Zary wrote:
 
  André Tomt wrote:
 
  Al Boldi wrote:
 
 
  Bartlomiej Zolnierkiewicz wrote:
 
 
  On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
  Hdparm -tT gives 38mb/s in 2.4.31
  Cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle
 
  Hdparm -tT gives 28mb/s in 2.6.12
  Cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT
 
 
 
  The "hdparm doesn't get as high scores as in 2.4" issue is an old,
  discussed-to-death problem on LKML. So far nobody has been able to show it
  affects anything but that pretty useless quasi-benchmark.
 
 
  No, it's not a problem with hdparm. hdparm only shows that there is
  _really_ a problem:
 
  2.6.12
  [EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=512
  count=1048576
  1048576+0 records in
  1048576+0 records out
 
  real0m32.339s
  user0m1.500s
  sys 0m14.560s
 
  2.4.26
  [EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=512
  count=1048576
  1048576+0 records in
  1048576+0 records out
 
  real0m23.858s
  user0m1.750s
  sys 0m15.180s
 
 
 
  Perhaps some read-ahead bug. What happens if you use bs=128k for
  instance?
 
  Nothing - it's still the same.
 
  [EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=128k
  count=4096
  4096+0 records in
  4096+0 records out
 
  real0m32.832s
  user0m0.040s
  sys 0m15.670s
 
 Why is the system time so high? I tried that test here, and got:
 
 oddball:root time dd if=/dev/hda of=/dev/null bs=128k count=4096
 4096+0 records in
 4096+0 records out
 
 real0m37.927s
 user0m0.025s
 sys 0m6.547s
 oddball:root uname -rn
 oddball.prodigy.com 2.6.11ac7
 
 Now this is one of the slowest CPUs still in use (which is why I test
 responsiveness on it), and it uses far less CPU time.
 cat /proc/cpuinfo
 processor   : 0
 vendor_id   : GenuineIntel
 cpu family  : 6
 model   : 5
 model name  : Pentium II (Deschutes)
 stepping: 1
 cpu MHz : 348.507
 cache size  : 512 KB
 fdiv_bug: no
 hlt_bug : no
 f00f_bug: no
 coma_bug: no
 fpu : yes
 fpu_exception   : yes
 cpuid level : 2
 wp  : yes
 flags   : fpu vme de tsc msr pae mce cx8 apic sep mtrr pge mca
 cmov pat pse36 mmx fxsr
 bogomips: 686.08
 
 
 The first post said it felt like running PIO; it certainly is using CPU
 like it as well.
 
 Now here's some dmesg from this system...
 
 PIIX4: IDE controller at PCI slot :00:07.1
 PIIX4: chipset revision 1
 PIIX4: not 100% native mode: will probe irqs later
  ide0: BM-DMA at 0x1080-0x1087, BIOS settings: hda:DMA, hdb:pio
  ide1: BM-DMA at 0x1088-0x108f, BIOS settings: hdc:DMA, hdd:pio
 Probing IDE interface ide0...
 hda: Maxtor 90845D4, ATA DISK drive
 hdb: WDC AC31600H, ATA DISK drive
 hdb: Disabling (U)DMA for WDC AC31600H (blacklisted)
 ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
 Probing IDE interface ide1...
 hdc: NEC CD-ROM DRIVE:28C, ATAPI CD/DVD-ROM drive
 ide1 at 0x170-0x177,0x376 on irq 15
 hda: max request size: 128KiB
 hda: 16514064 sectors (8455 MB) w/512KiB Cache, CHS=16383/16/63, UDMA(33)
 hda: cache flushes not supported
   hda: hda1 hda2 hda3 hda4  hda5 
 hdb: max request size: 128KiB
 hdb: 3173184 sectors (1624 MB) w/128KiB Cache, CHS=3148/16/63
 hdb: cache flushes not supported
   hdb: hdb1 hdb2 hdb3
 hdc: ATAPI 32X CD-ROM drive, 128kB Cache, UDMA(33)
 Uniform CD-ROM driver Revision: 3.20
 
 
 And indeed it does show hda as dma, and hdb as pio (older versions of
 the kernel let me set hdb to dma and it worked fine...). But in the
 posted dmesg the BIOS settings show pio for hda. Is this in any way
 relevant, given that UDMA(33) appears later?

BIOS setting is irrelevant and ~14MB/s for UDMA33 is OK.
CPU cycles are wasted somewhere else...


Re: [git patches] IDE update

2005-07-07 Thread Ondrej Zary

Bill Davidsen wrote:

snip

hdb: Disabling (U)DMA for WDC AC31600H (blacklisted)


I've had AC31600H drive too and it used to run fine in DMA mode for ages 
(at least in Windows). No idea why it is blacklisted.




Ondrej:

snip


Unless Ondrej's CPU is slower than a PII-350, maybe it is running pio for 
some reason, in spite of hdparm -i showing udma2 mode.


I just mentioned it because the CPU usage doesn't seem compatible with 
DMA operation.



Yes, the CPU is slower. It's Cyrix MII PR300 (225MHz):

[EMAIL PROTECTED]:/home/rainbow# cat /proc/cpuinfo
processor   : 0
vendor_id   : CyrixInstead
cpu family  : 6
model   : 2
model name  : M II 3x Core/Bus Clock
stepping: 4
cpu MHz : 225.004
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no

Re: [git patches] IDE update

2005-07-07 Thread Bill Davidsen

Ondrej Zary wrote:

snip

[EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=128k count=4096
4096+0 records in
4096+0 records out

real    0m32.832s
user    0m0.040s
sys     0m15.670s


Why is the system time so high? I tried that test here, and got:

oddball:root time dd if=/dev/hda of=/dev/null bs=128k count=4096
4096+0 records in
4096+0 records out

real0m37.927s
user0m0.025s
sys 0m6.547s
oddball:root uname -rn
oddball.prodigy.com 2.6.11ac7

Now this is one of the slowest CPUs still in use (which is why I test 
responsiveness on it), and it uses far less CPU time.

cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 5
model name  : Pentium II (Deschutes)
stepping: 1
cpu MHz : 348.507
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 mmx fxsr

bogomips: 686.08


The first post said it felt like running PIO; it certainly is using CPU 
like it as well.


Now here's some dmesg from this system...

PIIX4: IDE controller at PCI slot :00:07.1
PIIX4: chipset revision 1
PIIX4: not 100% native mode: will probe irqs later
ide0: BM-DMA at 0x1080-0x1087, BIOS settings: hda:DMA, hdb:pio
ide1: BM-DMA at 0x1088-0x108f, BIOS settings: hdc:DMA, hdd:pio
Probing IDE interface ide0...
hda: Maxtor 90845D4, ATA DISK drive
hdb: WDC AC31600H, ATA DISK drive
hdb: Disabling (U)DMA for WDC AC31600H (blacklisted)
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
Probing IDE interface ide1...
hdc: NEC CD-ROM DRIVE:28C, ATAPI CD/DVD-ROM drive
ide1 at 0x170-0x177,0x376 on irq 15
hda: max request size: 128KiB
hda: 16514064 sectors (8455 MB) w/512KiB Cache, CHS=16383/16/63, UDMA(33)
hda: cache flushes not supported
 hda: hda1 hda2 hda3 hda4  hda5 
hdb: max request size: 128KiB
hdb: 3173184 sectors (1624 MB) w/128KiB Cache, CHS=3148/16/63
hdb: cache flushes not supported
 hdb: hdb1 hdb2 hdb3
hdc: ATAPI 32X CD-ROM drive, 128kB Cache, UDMA(33)
Uniform CD-ROM driver Revision: 3.20


And indeed it does show hda as dma, and hdb as pio (older versions of 
the kernel let me set hdb to dma and it worked fine...). But in the 
posted dmesg the BIOS settings show pio for hda. Is this in any way 
relevant, given that UDMA(33) appears later?



Ondrej:

PIIX4: IDE controller at PCI slot 00:07.1
PIIX4: chipset revision 1
PIIX4: not 100% native mode: will probe irqs later
ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:pio, hdb:pio
ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:pio, hdd:pio
hda: WDC WD300BB-00AUA1, ATA DISK drive
blk: queue c03b3360, I/O limit 4095Mb (mask 0x)
hdd: MSI CD-RW MS-8340S, ATAPI CD/DVD-ROM drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
hda: attached ide-disk driver.
hda: host protected area = 1
hda: 58633344 sectors (30020 MB) w/2048KiB Cache, CHS=3649/255/63, UDMA(33)


Unless Ondrej's CPU is slower than a PII-350, maybe it is running pio for 
some reason, in spite of hdparm -i showing udma2 mode.


I just mentioned it because the CPU usage doesn't seem compatible with 
DMA operation.




Re: [git patches] IDE update

2005-07-07 Thread Mark Lord

Note:

hdparm can also use O_DIRECT for the -t timing test.

Eg.  hdparm --direct -t /dev/hda


Re: [git patches] IDE update

2005-07-07 Thread Grant Coady
On Thu, 07 Jul 2005 18:32:52 -0400, Mark Lord [EMAIL PROTECTED] wrote:

hdparm can also use O_DIRECT for the -t timing test.

I've not been able to get dual channel I/O speed faster than single 
interface speed, either as 'md' RAID0 or simultaneous reading or 
writing done the other day:

Time to write or read 500MB file:

summary         2.4.31-hf1      2.6.12.2
boxen \ time     w      r       w      r
----------------------------------------
...
peetoo          33     20      26.5   22
(simultaneous   57     37.5    52     38.5)

MB/s            2.4.31-hf1      2.6.12.2
                 w      r       w      r
single          15     25      19     23
dual            17.5   27      19     26

These timings show very little happening in parallel, is that normal?

Thanks,
--Grant.



Re: [git patches] IDE update

2005-07-05 Thread Ondrej Zary

André Tomt wrote:

Al Boldi wrote:


Bartlomiej Zolnierkiewicz wrote:


On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
Hdparm -tT gives 38mb/s in 2.4.31
Cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle

Hdparm -tT gives 28mb/s in 2.6.12
Cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT



The "hdparm doesn't get as high scores as in 2.4" issue is an old,
discussed-to-death problem on LKML. So far nobody has been able to show
it affects anything but that pretty useless quasi-benchmark.




No, it's not a problem with hdparm. hdparm only shows that there is 
_really_ a problem:


2.6.12
[EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=512
count=1048576
1048576+0 records in
1048576+0 records out

real0m32.339s
user0m1.500s
sys 0m14.560s

2.4.26
[EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=512
count=1048576
1048576+0 records in
1048576+0 records out

real0m23.858s
user0m1.750s
sys 0m15.180s

--
Ondrej Zary


Re: [git patches] IDE update

2005-07-05 Thread Ondrej Zary

Jens Axboe wrote:

On Tue, Jul 05 2005, Ondrej Zary wrote:


snip



Perhaps some read-ahead bug. What happens if you use bs=128k for
instance?


Nothing - it's still the same.

[EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=128k 
count=4096

4096+0 records in
4096+0 records out

real0m32.832s
user0m0.040s
sys 0m15.670s

--
Ondrej Zary


Re: [git patches] IDE update

2005-07-05 Thread Jens Axboe
On Tue, Jul 05 2005, Ondrej Zary wrote:
 Jens Axboe wrote:
 On Tue, Jul 05 2005, Ondrej Zary wrote:
 
 snip
 
 
 Perhaps some read-ahead bug. What happens if you use bs=128k for
 instance?
 
 Nothing - it's still the same.
 
 [EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=128k 
 count=4096
 4096+0 records in
 4096+0 records out
 
 real0m32.832s
 user0m0.040s
 sys 0m15.670s

Can you post full dmesg of 2.4 and 2.6 kernel boot? What does hdparm
-I/-i say for both kernels?

-- 
Jens Axboe



Re: [git patches] IDE update

2005-07-05 Thread Ondrej Zary

Jens Axboe wrote:

On Tue, Jul 05 2005, Ondrej Zary wrote:


snip



Can you post full dmesg of 2.4 and 2.6 kernel boot? What does hdparm
-I/-i say for both kernels?



The 2.4.26 kernel is the one from Slackware 10.0 bootable install CD.
dmesg outputs attached, hdparm -i and hdparm -I shows the same in both
kernels (compared using diff) - attached too.

--
Ondrej Zary

Linux version 2.4.26 ([EMAIL PROTECTED]) (gcc version 3.3.4) #6 Mon Jun 14 
19:07:27 PDT 2004
BIOS-provided physical RAM map:
 BIOS-e820:  - 0009fc00 (usable)
 BIOS-e820: 0009fc00 - 000a (reserved)
 BIOS-e820: 000f - 0010 (reserved)
 BIOS-e820: 0010 - 07ff (usable)
 BIOS-e820: 07ff - 07ff3000 (ACPI NVS)
 BIOS-e820: 07ff3000 - 0800 (ACPI data)
 BIOS-e820:  - 0001 (reserved)
127MB LOWMEM available.
On node 0 totalpages: 32752
zone(0): 4096 pages.
zone(1): 28656 pages.
zone(2): 0 pages.
Kernel command line: BOOT_IMAGE=/kernels/bare.i/bzImage initrd=initrd.img 
load_ramdisk=1 prompt_ramdisk=0 ramdisk_size=6464 rw root=/dev/ram 
SLACK_KERNEL=bare.i
Initializing CPU#0
Detected 225.001 MHz processor.
Console: colour VGA+ 80x25
Calibrating delay loop... 448.92 BogoMIPS
Memory: 123608k/131008k available (1844k kernel code, 7012k reserved, 618k 
data, 120k init, 0k highmem)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
Dentry cache hash table entries: 16384 (order: 5, 131072 bytes)
Inode cache hash table entries: 8192 (order: 4, 65536 bytes)
Mount cache hash table entries: 512 (order: 0, 4096 bytes)
Buffer cache hash table entries: 4096 (order: 2, 16384 bytes)
Page-cache hash table entries: 32768 (order: 5, 131072 bytes)
CPU: After generic, caps: 0080a135   0004
CPU: Common caps: 0080a135   0004
CPU: Cyrix M II 3x Core/Bus Clock stepping 04
Checking 'hlt' instruction... OK.
POSIX conformance testing by UNIFIX
mtrr: v1.40 (20010327) Richard Gooch ([EMAIL PROTECTED])
mtrr: detected mtrr type: Cyrix ARR
PCI: PCI BIOS revision 2.10 entry at 0xfb020, last bus=0
PCI: Using configuration type 1
PCI: Probing PCI hardware
PCI: Probing PCI hardware (bus 00)
PCI: Using IRQ router PIIX/ICH [8086/7110] at 00:07.0
PCI: Device 00:3a not found by BIOS
PCI: Device 00:3b not found by BIOS
Limiting direct PCI/PCI transfers.
Linux NET4.0 for Linux 2.4
Based upon Swansea University Computer Society NET3.039
Initializing RT netlink socket
Starting kswapd
VFS: Disk quotas vdquot_6.5.1
Journalled Block Device driver loaded
pty: 512 Unix98 ptys configured
Serial driver version 5.05c (2001-07-08) with HUB-6 MANY_PORTS MULTIPORT 
SHARE_IRQ SERIAL_PCI enabled
ttyS00 at 0x03f8 (irq = 4) is a 16550A
ttyS01 at 0x02f8 (irq = 3) is a 16550A
Real Time Clock Driver v1.10f
Floppy drive(s): fd0 is 1.44M
FDC 0 is a post-1991 82077
RAMDISK driver initialized: 16 RAM disks of 6464K size 1024 blocksize
loop: loaded (max 8 devices)
Uniform Multi-Platform E-IDE driver Revision: 7.00beta4-2.4
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
PIIX4: IDE controller at PCI slot 00:07.1
PIIX4: chipset revision 1
PIIX4: not 100% native mode: will probe irqs later
ide0: BM-DMA at 0xf000-0xf007, BIOS settings: hda:pio, hdb:pio
ide1: BM-DMA at 0xf008-0xf00f, BIOS settings: hdc:pio, hdd:pio
hda: WDC WD300BB-00AUA1, ATA DISK drive
blk: queue c03b3360, I/O limit 4095Mb (mask 0x)
hdd: MSI CD-RW MS-8340S, ATAPI CD/DVD-ROM drive
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
hda: 

Re: [git patches] IDE update

2005-07-05 Thread Jens Axboe
On Tue, 2005-07-05 at 14:35 +0200, Ondrej Zary wrote:
 2.4.26
 [EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=512
 count=1048576
 1048576+0 records in
 1048576+0 records out
 
 real0m23.858s
 user0m1.750s
 sys 0m15.180s
 
 
 Perhaps some read-ahead bug. What happens if you use bs=128k for
 instance?
 
 
 Nothing - it's still the same.
 
 [EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=128k 
 count=4096
 4096+0 records in
 4096+0 records out
 
 real0m32.832s
 user0m0.040s
 sys 0m15.670s
  
  
  Can you post full dmesg of 2.4 and 2.6 kernel boot? What does hdparm
  -I/-i say for both kernels?
  
 
 The 2.4.26 kernel is the one from Slackware 10.0 bootable install CD.
 dmesg outputs attached, hdparm -i and hdparm -I shows the same in both
 kernels (compared using diff) - attached too.

Ok, looks alright for both. Your machine is quite slow, perhaps that is
showing the slower performance. Can you try and make HZ 100 in 2.6 and
test again? 2.6.13-recent has it as a config option, otherwise edit
include/asm/param.h appropriately.


-- 
Jens Axboe [EMAIL PROTECTED]

-
To unsubscribe from this list: send the line "unsubscribe linux-ide" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [git patches] IDE update

2005-07-05 Thread Ondrej Zary

Jens Axboe wrote:

On Tue, 2005-07-05 at 14:35 +0200, Ondrej Zary wrote:


2.4.26
[EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=512
count=1048576
1048576+0 records in
1048576+0 records out

real0m23.858s
user0m1.750s
sys 0m15.180s



Perhaps some read-ahead bug. What happens if you use bs=128k for
instance?



Nothing - it's still the same.

[EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=128k 
count=4096

4096+0 records in
4096+0 records out

real0m32.832s
user0m0.040s
sys 0m15.670s



Can you post full dmesg of 2.4 and 2.6 kernel boot? What does hdparm
-I/-i say for both kernels?



The 2.4.26 kernel is the one from Slackware 10.0 bootable install CD.
dmesg outputs attached, hdparm -i and hdparm -I shows the same in both
kernels (compared using diff) - attached too.



Ok, looks alright for both. Your machine is quite slow, perhaps that is
showing the slower performance. Can you try and make HZ 100 in 2.6 and
test again? 2.6.13-recent has it as a config option, otherwise edit
include/asm/param.h appropriately.



I forgot to write that my 2.6.12 kernel is already compiled with HZ 100 
(it makes the system more responsive).
I've just tried 2.6.8.1 with HZ 1000 and there is no difference in HDD 
performance comparing to 2.6.12.


--
Ondrej Zary


Re: [git patches] IDE update

2005-07-05 Thread Ondrej Zary

Jens Axboe wrote:

On Tue, 2005-07-05 at 15:02 +0200, Ondrej Zary wrote:


Ok, looks alright for both. Your machine is quite slow, perhaps that is
showing the slower performance. Can you try and make HZ 100 in 2.6 and
test again? 2.6.13-recent has it as a config option, otherwise edit
include/asm/param.h appropriately.



I forgot to write that my 2.6.12 kernel is already compiled with HZ 100 
(it makes the system more responsive).
I've just tried 2.6.8.1 with HZ 1000 and there is no difference in HDD 
performance comparing to 2.6.12.



OK, interesting. You could try and boot with profile=2 and do

# readprofile -r
# dd if=/dev/hda of=/dev/null bs=128k 
# readprofile > prof_output


for each kernel and post it here, so we can see if anything sticks out.

Here are the profiles (used dd with count=4096) from 2.4.26 and 2.6.12 
(nothing from 2.6.8.1 because I don't have the .map file anymore).


--
Ondrej Zary
   448 default_idle   5.6000
 4 __switch_to0.0192
 1 ret_from_sys_call  0.0588
 3 handle_IRQ_event   0.0268
 6 schedule   0.0072
 8 __wake_up  0.0833
 2 __constant_memcpy  0.0078
 4 add_page_to_hash_queue 0.0625
 2 __remove_inode_page0.0156
 6 add_to_page_cache_unique   0.0375
 7 page_cache_read0.0398
 2 unlock_page0.0179
 7 generic_file_readahead 0.0168
11 do_generic_file_read   0.0101
  1366 file_read_actor8.5375
 2 generic_file_read  0.0048
 1 kmem_slab_destroy  0.0042
 3 kmem_cache_grow0.0057
 2 kmem_cache_alloc   0.0625
28 kmem_cache_free0.8750
 2 kmem_cache_reap0.0042
12 __kmem_cache_alloc 0.0536
 4 lru_cache_add  0.0357
 1 __lru_cache_del0.0078
 3 delta_nr_inactive_pages0.0312
 5 delta_nr_cache_pages   0.0521
17 shrink_cache   0.0180
 1 refill_inactive0.0031
16 __free_pages_ok0.0200
20 rmqueue0.0284
 6 __alloc_pages  0.0091
 1 __free_pages   0.0208
 3 sys_read   0.0117
 1 sys_write  0.0039
 5 __remove_from_lru_list 0.0391
 3 __remove_from_queues   0.0625
 7 __put_unused_buffer_head   0.0729
 3 get_unused_buffer_head 0.0234
24 set_bh_page0.2143
32 create_buffers 0.1429
 4 try_to_release_page0.0500
 4 create_empty_buffers   0.0417
24 block_read_full_page   0.0375
27 try_to_free_buffers0.1055
10 max_block  0.0694
 9 blkdev_get_block   0.1406
 2 blkdev_direct_IO   0.0417
 1 blkdev_readpage0.0312
 2 init_buffer_head   0.0312
25 __constant_c_and_count_memset  0.1736
 2 write_profile  0.0417
   114 __make_request 0.0642
27 generic_make_request   0.0844
25 submit_bh  0.0977
 5 ide_inb0.3125
 7 ide_outb   0.4375
 2 ide_outl   0.1250
 1 ide_execute_command0.0078
 2 SELECT_DRIVE   0.0312
 4 ide_start_request  0.0083
 2 ide_do_request 0.0050
 2 ide_get_queue  0.0312
 4 ide_intr   0.0139
 6 ide_dma_intr   0.0312
30 ide_build_sglist   0.0481
 5 ide_build_dmatable 0.0120
 1 __ide_dma_read 0.0042
 1 __ide_dma_count0.0312
 3 __constant_c_and_count_memset  0.0208
67 idedisk_end_request0.3490
 2 __generic_copy_to_user 0.0312
 0 *unknown*
  2499 total 
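
The listing above is raw `readprofile` output: one line per kernel symbol with its tick count, name, and normalized load. A throwaway sketch (hypothetical helper, not part of the thread) of how such a listing can be reduced to its hottest non-idle symbols:

```python
def top_symbols(profile_text, n=3):
    """Return the n busiest symbols, skipping idle time and the total line."""
    rows = []
    for line in profile_text.strip().splitlines():
        fields = line.split()
        ticks, symbol = fields[0], fields[1]
        if symbol in ("total", "default_idle", "*unknown*"):
            continue  # not real work: idle loop, summary line, unresolved ticks
        rows.append((int(ticks), symbol))
    rows.sort(reverse=True)
    return rows[:n]

# A few lines from the 2.4.26 profile above:
sample = """\
448 default_idle 5.6000
1366 file_read_actor 8.5375
114 __make_request 0.0642
67 idedisk_end_request 0.3490
2499 total
"""
print(top_symbols(sample))
```

On the 2.4.26 profile this immediately singles out `file_read_actor`, the copy to user space.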

Re: [git patches] IDE update

2005-07-05 Thread Jens Axboe
On Tue, Jul 05 2005, Ondrej Zary wrote:
 Jens Axboe wrote:
 On Tue, 2005-07-05 at 15:02 +0200, Ondrej Zary wrote:
 
 Ok, looks alright for both. Your machine is quite slow, perhaps that is
 showing the slower performance. Can you try and make HZ 100 in 2.6 and
 test again? 2.6.13-recent has it as a config option, otherwise edit
 include/asm/param.h appropriately.
 
 
 I forgot to write that my 2.6.12 kernel is already compiled with HZ 100 
 (it makes the system more responsive).
 I've just tried 2.6.8.1 with HZ 1000 and there is no difference in HDD 
 performance comparing to 2.6.12.
 
 
 OK, interesting. You could try and boot with profile=2 and do
 
 # readprofile -r
 # dd if=/dev/hda of=/dev/null bs=128k 
 # readprofile > prof_output
 
 for each kernel and post it here, so we can see if anything sticks out.
 
 Here are the profiles (used dd with count=4096) from 2.4.26 and 2.6.12 
 (nothing from 2.6.8.1 because I don't have the .map file anymore).

Looks interesting, 2.6 spends oodles of time copying to user space.
Let's check if raw reads perform ok, please try and time this app in 2.4
and 2.6 as well.

# gcc -Wall -O2 -o oread oread.c
# time ./oread /dev/hda

-- 
Jens Axboe

#include <stdio.h>
#include <unistd.h>
#define __USE_GNU
#include <fcntl.h>
#include <stdlib.h>

#define BS	(131072)
#define BLOCKS	(4096)
/* round the malloc'ed buffer up to a 4096-byte boundary for O_DIRECT */
#define ALIGN(buf)	(char *) (((unsigned long) (buf) + 4095) & ~(4095))

int main(int argc, char *argv[])
{
	char *buffer;
	int fd, i;

	if (argc < 2) {
		printf("%s: device\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd == -1) {
		perror("open");
		return 2;
	}

	buffer = ALIGN(malloc(BS + 4095));

	for (i = 0; i < BLOCKS; i++) {
		int ret = read(fd, buffer, BS);

		if (!ret)
			break;
		else if (ret < 0) {
			perror("read infile");
			break;
		}
	}

	return 0;
}


Re: [git patches] IDE update

2005-07-05 Thread Ondrej Zary

Jens Axboe wrote:

On Tue, Jul 05 2005, Ondrej Zary wrote:


Jens Axboe wrote:


On Tue, 2005-07-05 at 15:02 +0200, Ondrej Zary wrote:



Ok, looks alright for both. Your machine is quite slow, perhaps that is
showing the slower performance. Can you try and make HZ 100 in 2.6 and
test again? 2.6.13-recent has it as a config option, otherwise edit
include/asm/param.h appropriately.



I forgot to write that my 2.6.12 kernel is already compiled with HZ 100 
(it makes the system more responsive).
I've just tried 2.6.8.1 with HZ 1000 and there is no difference in HDD 
performance comparing to 2.6.12.



OK, interesting. You could try and boot with profile=2 and do

# readprofile -r
# dd if=/dev/hda of=/dev/null bs=128k 
# readprofile > prof_output


for each kernel and post it here, so we can see if anything sticks out.



Here are the profiles (used dd with count=4096) from 2.4.26 and 2.6.12 
(nothing from 2.6.8.1 because I don't have the .map file anymore).



Looks interesting, 2.6 spends oodles of times copying to user space.
Lets check if raw reads perform ok, please try and time this app in 2.4
and 2.6 as well.

# gcc -Wall -O2 -o oread oread.c
# time ./oread /dev/hda

oread is faster than dd, but still not as fast as 2.4. In 2.6.12, HDD 
led is blinking, in 2.4 it's solid on during the read.


2.6.12:
[EMAIL PROTECTED]:/home/rainbow# time ./oread /dev/hda

real0m25.082s
user0m0.000s
sys 0m0.680s

2.4.26:
[EMAIL PROTECTED]:/home/rainbow# time ./oread /dev/hda

real0m23.513s
user0m0.000s
sys 0m2.360s

--
Ondrej Zary
  2016 default_idle  25.2000
 1 system_call0.0156
 2 handle_IRQ_event   0.0179
 1 schedule   0.0012
 1 __run_task_queue   0.0104
 4 follow_page0.0312
 3 get_user_pages 0.0063
 4 mark_dirty_kiobuf  0.0357
 1 handle_mm_fault0.0048
 2 generic_file_direct_IO 0.0026
 1 lru_cache_add  0.0089
 2 init_buffer0.0625
12 set_bh_page0.1071
 3 create_buffers 0.0134
30 generic_direct_IO  0.0457
17 brw_kiovec 0.0186
11 max_block  0.0764
 9 blkdev_get_block   0.1406
 2 blkdev_direct_IO   0.0417
 1 write_profile  0.0208
 2 generic_unplug_device  0.0312
93 __make_request 0.0524
25 generic_make_request   0.0781
20 submit_bh  0.0781
 5 ide_inb0.3125
12 ide_outb   0.7500
 1 ide_outl   0.0625
 1 ide_wait_stat  0.0030
 4 ide_execute_command0.0312
 2 ide_do_request 0.0050
 4 ide_get_queue  0.0625
 2 ide_intr   0.0069
 2 ide_dma_intr   0.0104
10 ide_build_sglist   0.0160
 1 ide_build_dmatable 0.0024
 1 __ide_dma_read 0.0042
 1 __constant_c_and_count_memset  0.0069
 1 __ide_do_rw_disk   0.0007
42 idedisk_end_request0.2188
 4 __rdtsc_delay  0.1250
 0 *unknown*
  2356 total  0.0013
 1 sched_clock0.0057
 3 cond_resched_lock  0.0312
 2 write_profile  0.0312
 1 __wake_up_bit  0.0208
 2 __generic_file_aio_read0.0038
 1 generic_file_read  0.0052
 1 mempool_alloc  0.0037
 1 set_page_dirty 0.0125
 3 set_page_dirty_lock0.0625
 4 __follow_page  0.0227
 7 get_user_pages 0.0072
 2 do_wp_page 0.0025
 8 __bio_add_page 0.0227
 5 bio_add_page   0.1562
 1 bio_set_map_data   0.0208
 1 update_atime   0.0057
 1 dio_get_page   0.0156
 1 dio_bio_submit 0.0069
 2 dio_bio_complete   

Re: [git patches] IDE update

2005-07-05 Thread Linus Torvalds


On Tue, 5 Jul 2005, Jens Axboe wrote:
 
 Looks interesting, 2.6 spends oodles of times copying to user space.
 Lets check if raw reads perform ok, please try and time this app in 2.4
 and 2.6 as well.

I think it's just that 2.4.x used to allow longer command queues. I think
MAX_NR_REQUESTS is 1024 in 2.4.x, and just 128 in 2.6.x or something like
that.

Also, the congestion thresholds are questionable: we consider a queue
congested if it is within 12% of full, but then we consider it uncongested
whenever it falls to within 18% of full, which I bet means that for some
streaming loads we have just a 6% window that we keep adding new
requests to (we wait when we're almost full, but then we start adding
requests again when we're _still_ almost full). Jens, we talked about this
long ago, but I don't think we ever did any timings.
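
As a back-of-the-envelope check of those percentages (a sketch assuming a queue depth of 128 requests and simple integer math, not the kernel's actual code):

```python
def congestion_marks(nr_requests, on_pct=12, off_pct=18):
    # Queue is flagged congested once free slots fall below on_pct percent,
    # and the flag is only cleared once free slots rise above off_pct percent.
    congested_at = nr_requests - nr_requests * on_pct // 100
    cleared_at = nr_requests - nr_requests * off_pct // 100
    return congested_at, cleared_at

on, off = congestion_marks(128)
print(on, off, on - off)  # marks at 113 and 105: an 8-request band
```

With 128 requests the two marks sit 8 requests apart, i.e. the roughly 6% window described above.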

Making things worse, things like this are only visible on stupid hardware
that has long latencies to get started (many SCSI controllers used to have
horrid latencies), so you'll never even see any difference on a lot of 
hardware.

It's probably worth testing with a bigger request limit. I forget what the 
/proc interfaces are (and am too lazy to look it up), Jens can tell us ;)

Linus


Re: [git patches] IDE update

2005-07-05 Thread Jens Axboe
On Tue, Jul 05 2005, Linus Torvalds wrote:
 
 
 On Tue, 5 Jul 2005, Jens Axboe wrote:
  
  Looks interesting, 2.6 spends oodles of times copying to user space.
  Lets check if raw reads perform ok, please try and time this app in 2.4
  and 2.6 as well.
 
 I think it's just that 2.4.x used to allow longer command queues. I think
 MAX_NR_REQUESTS is 1024 in 2.4.x, and just 128 in 2.6.x or something like
 that.

But for this case, you only have one command in flight. hdparm is highly
synchronous, my oread case is as well.

 Also, the congestion thresholds are questionable: we consider a queue
 congested if it is within 12% of full, but then we consider it uncongested
 whenever it falls to within 18% of full, which I bet means that for some
 streaming loads we have just a 6% window that we keep adding new
 requests to (we wait when we're almost full, but then we start adding
 requests again when we're _still_ almost full). Jens, we talked about this
 long ago, but I don't think we ever did any timings.

In theory, the ioc batching should handle that case. But as you can see
from recent commits, I'm not very happy with how this currently works.
It should not impact this testing, though.

 Making things worse, things like this are only visible on stupid hardware
 that has long latencies to get started (many SCSI controllers used to have
 horrid latencies), so you'll never even see any difference on a lot of 
 hardware.

IDE still has much lower overhead per command than your average SCSI
hardware. SATA with FIS even improves on this, definitely a good thing!

 It's probably worth testing with a bigger request limit. I forget what the 
 /proc interfaces are (and am too lazy to look it up), Jens can tell us ;)

It's /sys/block/device/queue/nr_requests now, can be changed at will.

-- 
Jens Axboe



Re: [git patches] IDE update

2005-07-05 Thread Jens Axboe
On Tue, Jul 05 2005, Ondrej Zary wrote:
 Jens Axboe wrote:
 On Tue, Jul 05 2005, Ondrej Zary wrote:
 
 Jens Axboe wrote:
 
 On Tue, 2005-07-05 at 15:02 +0200, Ondrej Zary wrote:
 
 
 Ok, looks alright for both. Your machine is quite slow, perhaps that is
 showing the slower performance. Can you try and make HZ 100 in 2.6 and
 test again? 2.6.13-recent has it as a config option, otherwise edit
 include/asm/param.h appropriately.
 
 
 I forgot to write that my 2.6.12 kernel is already compiled with HZ 100 
 (it makes the system more responsive).
 I've just tried 2.6.8.1 with HZ 1000 and there is no difference in HDD 
 performance comparing to 2.6.12.
 
 
 OK, interesting. You could try and boot with profile=2 and do
 
 # readprofile -r
 # dd if=/dev/hda of=/dev/null bs=128k 
 # readprofile > prof_output
 
 for each kernel and post it here, so we can see if anything sticks out.
 
 
 Here are the profiles (used dd with count=4096) from 2.4.26 and 2.6.12 
 (nothing from 2.6.8.1 because I don't have the .map file anymore).
 
 
 Looks interesting, 2.6 spends oodles of times copying to user space.
 Lets check if raw reads perform ok, please try and time this app in 2.4
 and 2.6 as well.
 
 # gcc -Wall -O2 -o oread oread.c
 # time ./oread /dev/hda
 
 oread is faster than dd, but still not as fast as 2.4. In 2.6.12, HDD 
 led is blinking, in 2.4 it's solid on during the read.
 
 2.6.12:
 [EMAIL PROTECTED]:/home/rainbow# time ./oread /dev/hda
 
 real0m25.082s
 user0m0.000s
 sys 0m0.680s
 
 2.4.26:
 [EMAIL PROTECTED]:/home/rainbow# time ./oread /dev/hda
 
 real0m23.513s
 user0m0.000s
 sys 0m2.360s

Hmm, still not as fast, not so good. 2.6 shows more idle time than 2.4,
about 20% more. I seem to remember Ken Chen saying that 2.6 direct io
was still a little slower than 2.4, your really slow hardware could be
showing this to a much greater effect.

I'll try and play with this tomorrow!

-- 
Jens Axboe



Re: [git patches] IDE update

2005-07-05 Thread Jens Axboe
On Tue, Jul 05 2005, Ondrej Zary wrote:
 oread is faster than dd, but still not as fast as 2.4. In 2.6.12, HDD 
 led is blinking, in 2.4 it's solid on during the read.

Oh, and please do test 2.6 by first setting the deadline scheduler for
hda. I can see you are using the 'as' scheduler right now.

# echo deadline > /sys/block/hda/queue/scheduler

Thanks!

-- 
Jens Axboe



Re: [git patches] IDE update

2005-07-05 Thread Ondrej Zary

Jens Axboe wrote:

On Tue, Jul 05 2005, Ondrej Zary wrote:

oread is faster than dd, but still not as fast as 2.4. In 2.6.12, HDD 
led is blinking, in 2.4 it's solid on during the read.



Oh, and please do test 2.6 by first setting the deadline scheduler for
hda. I can see you are using the 'as' scheduler right now.

# echo deadline > /sys/block/hda/queue/scheduler



No change, still 25 seconds.

--
Ondrej Zary


Re: [git patches] IDE update

2005-07-05 Thread Ondrej Zary

Jens Axboe wrote:

On Tue, Jul 05 2005, Linus Torvalds wrote:



On Tue, 5 Jul 2005, Jens Axboe wrote:


Looks interesting, 2.6 spends oodles of times copying to user space.
Lets check if raw reads perform ok, please try and time this app in 2.4
and 2.6 as well.


I think it's just that 2.4.x used to allow longer command queues. I think
MAX_NR_REQUESTS is 1024 in 2.4.x, and just 128 in 2.6.x or something like
that.



But for this case, you only have one command in flight. hdparm is highly
synchronous, my oread case is as well.



Also, the congestion thresholds are questionable: we consider a queue
congested if it is within 12% of full, but then we consider it uncongested
whenever it falls to within 18% of full, which I bet means that for some
streaming loads we have just a 6% window that we keep adding new
requests to (we wait when we're almost full, but then we start adding
requests again when we're _still_ almost full). Jens, we talked about this
long ago, but I don't think we ever did any timings.



In theory, the ioc batching should handle that case. But as you can see
from recent commits, I'm not very happy with how this currently works.
It should not impact this testing, though.



Making things worse, things like this are only visible on stupid hardware
that has long latencies to get started (many SCSI controllers used to have
horrid latencies), so you'll never even see any difference on a lot of 
hardware.



IDE still has much lower overhead per command than your average SCSI
hardware. SATA with FIS even improves on this, definitely a good thing!


It's probably worth testing with a bigger request limit. I forget what the 
/proc interfaces are (and am too lazy to look it up), Jens can tell us ;)



It's /sys/block/device/queue/nr_requests now, can be changed at will.


Tested with default 128, 1024 and 4 (minimum) and no change.

--
Ondrej Zary


Re: [git patches] IDE update

2005-07-05 Thread Grant Coady
On Tue, 5 Jul 2005 16:21:26 +0200, Jens Axboe [EMAIL PROTECTED] wrote:
# gcc -Wall -O2 -o oread oread.c
# time ./oread /dev/hda

Executive Summary
-----------------
Comparing 'oread' with hdparm -tT on latest 2.4 vs 2.6 stable on 
various x86 boxen.  Performance drops for 2.6, sometimes:
            2.4.31-hf1   2.6.12.2
peetoo: 26MB/s   -> 20MB/s,  oread similar     120GB + 80GB
silly:  22MB/s   -> 8.5MB/s, oread similar     13GB
tosh:   35MB/s   -> 23MB/s,  oread similar     40GB 2.5
pooh:   17MB/s   -> 14MB/s,  oread 30 -> 24    40GB
menace: 11.5MB/s -> 13MB/s,  oread similar     6GB 2.5

--Grant

The details, config, dmesg, etc from linked resources:

Intel SE440BX-2 with pII 400/100/512/2.0 CPU/FSB/L2/Vccp 
512MB SDRAM on 440BX + PIIX4
  http://scatter.mine.nu/test/linux-2.6/peetoo/

peetoo:~$ uname -r
2.4.31-hf1
peetoo:~$ time /home/share/install/oread /dev/hda

real0m20.065s
user0m0.010s
sys 0m0.810s
peetoo:~$ time /home/share/install/oread /dev/hdc

real0m18.484s
user0m0.000s
sys 0m0.800s
peetoo:~$ hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   444 MB in  2.00 seconds = 222.00 MB/sec
 Timing buffered disk reads:   84 MB in  3.06 seconds =  27.45 MB/sec
peetoo:~$ hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   456 MB in  2.01 seconds = 226.87 MB/sec
 Timing buffered disk reads:   76 MB in  3.00 seconds =  25.33 MB/sec
peetoo:~$ hdparm -tT /dev/hdc

/dev/hdc:
 Timing cached reads:   464 MB in  2.01 seconds = 230.85 MB/sec
 Timing buffered disk reads:   76 MB in  3.00 seconds =  25.33 MB/sec
peetoo:~$ hdparm -tT /dev/hdc

/dev/hdc:
 Timing cached reads:   460 MB in  2.00 seconds = 230.00 MB/sec
 Timing buffered disk reads:   82 MB in  3.00 seconds =  27.33 MB/sec
- - -
peetoo:~$ uname -r
2.6.12.2b
peetoo:~$ time /home/share/install/oread /dev/hda

real0m21.428s
user0m0.003s
sys 0m0.436s
peetoo:~$ time /home/share/install/oread /dev/hdc

real0m17.588s
user0m0.001s
sys 0m0.455s

peetoo:~$ cat /sys/block/hda/queue/scheduler
noop [deadline]
peetoo:~$ cat /sys/block/hdc/queue/scheduler
noop [deadline]
peetoo:~$ time /home/share/install/oread /dev/hda

real0m21.417s
user0m0.005s
sys 0m0.462s
peetoo:~$ time /home/share/install/oread /dev/hda

real0m18.626s
user0m0.006s
sys 0m0.440s
peetoo:~$ time /home/share/install/oread /dev/hdc

real0m17.555s
user0m0.005s
sys 0m0.422s
peetoo:~$ df
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/hda3  2586348563000   2023348  22% /
/dev/hdc3  2586348   1582716   1003632  62% /usr
/dev/hdc6  2586348   1217568   1368780  48% /usr/src
/dev/hda9 20562504  10821500   9741004  53% /home/install
/dev/hdc9 20562504   3634700  16927804  18% /home/public
/dev/hda1041446344  39676256   1770088  96% /home/archive
deltree:/home/share2064256   1042952   1021304  51% /home/share

peetoo:~$ hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   448 MB in  2.00 seconds = 223.81 MB/sec
 Timing buffered disk reads:   62 MB in  3.07 seconds =  20.17 MB/sec
peetoo:~$ hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   472 MB in  2.01 seconds = 234.40 MB/sec
 Timing buffered disk reads:   54 MB in  3.01 seconds =  17.97 MB/sec
peetoo:~$ hdparm -tT /dev/hdc

/dev/hdc:
 Timing cached reads:   456 MB in  2.01 seconds = 226.56 MB/sec
 Timing buffered disk reads:   62 MB in  3.08 seconds =  20.11 MB/sec
peetoo:~$ hdparm -tT /dev/hdc

o o o

EPoX EP-61LXA-M: Intel 440LX chipset with pentiumII/266 
on 66MHz FSB (4 x 66) with 128MB SDRAM
  http://scatter.mine.nu/test/linux-2.6/silly/

[EMAIL PROTECTED]:~# uname -r
2.4.31-hf1
[EMAIL PROTECTED]:~# time /home/share/install/oread /dev/hda

real0m23.657s
user0m0.000s
sys 0m1.300s
[EMAIL PROTECTED]:~# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   356 MB in  2.02 seconds = 176.24 MB/sec
 Timing buffered disk reads:   68 MB in  3.04 seconds =  22.37 MB/sec
[EMAIL PROTECTED]:~# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   344 MB in  2.00 seconds = 172.00 MB/sec
 Timing buffered disk reads:   68 MB in  3.03 seconds =  22.44 MB/sec
- - -
[EMAIL PROTECTED]:~# uname -r
2.6.12.2a
[EMAIL PROTECTED]:~# time /home/share/install/oread /dev/hda

real0m23.569s
user0m0.005s
sys 0m0.563s

[EMAIL PROTECTED]:~# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   336 MB in  2.00 seconds = 167.77 MB/sec
 Timing buffered disk reads:   50 MB in  3.05 seconds =  16.37 MB/sec
[EMAIL PROTECTED]:~# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   340 MB in  2.01 seconds = 169.01 MB/sec
 Timing buffered disk reads:   28 MB in  3.22 seconds =   8.70 MB/sec
[EMAIL PROTECTED]:~# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   336 MB in  2.01 seconds = 167.44 MB/sec
 Timing buffered disk reads:   26 MB in  3.05 seconds =   8.52 MB/sec
[EMAIL PROTECTED]:~# echo deadline > /sys/block/hda/queue/scheduler
[EMAIL PROTECTED]:~# hdparm -tT /dev/hda


RE: [git patches] IDE update

2005-07-05 Thread Al Boldi
Linus Torvalds wrote: {
On Wed, 6 Jul 2005, Grant Coady wrote:
 
 Executive Summary

Btw, can you try this same thing (or at least a subset) with a large file on
a filesystem? Does that show the same pattern, or is it always just the raw
device?
}

Linus,
cat /dev/hda > /dev/null and cat /tmp/tst.dsk > /dev/null show the same
symptoms.
The problem shows most when the cpu is slow and the hd is fast.
When the cpu is fast and the hd is slow the cpu will make up for lost cycles
and the problem will not show!




Re: [git patches] IDE update

2005-07-05 Thread Grant Coady
On Tue, 5 Jul 2005 17:51:50 -0700 (PDT), Linus Torvalds [EMAIL PROTECTED] 
wrote:

Btw, can you try this same thing (or at least a subset) with a large file
on a filesystem? Does that show the same pattern, or is it always just the 
raw device?

Sure, takes a while longer to vary by block size.  One effect that seems
wrong is the interaction between /dev/hda and /dev/hdc in 'peetoo':
are the IDE channels not independent?

write:  time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
read:   time dd of=/dev/null bs=1M if=/zeroes

summary         2.4.31-hf1      2.6.12.2
boxen \ time -> w       r       w       r
-----------------------------------------
menace          58.5    50      57      47.5
pooh            24      24      22.5    27
peetoo          33      20      26.5    22
(simultaneous   57      37.5    52      38.5)
silly           54      24      49      25
tosh            30      19.5    27      19.5
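
Since every run in the table moves 500 MB (`dd bs=1M count=500`), the wall-clock times convert directly to throughput. A small illustrative helper:

```python
def mb_per_sec(seconds, megabytes=500):
    # Each run above transfers 500 MB, so MB/s is just size over elapsed time.
    return round(megabytes / seconds, 1)

# menace under 2.4.31-hf1: 58.5 s write, 50 s read
print(mb_per_sec(58.5), mb_per_sec(50.0))
```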

filesystem: reiserfs 3.6, distro: slackware-10.1 + updates
hardware config, etc: http://scatter.mine.nu/test/

--Grant

the long story:

[EMAIL PROTECTED]:~# uname -r
2.4.31-hf1
[EMAIL PROTECTED]:~# df
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/hda3  1991992   1074516917476  54% /
deltree:/home/share2064256   1042968   1021288  51% /home/share

[EMAIL PROTECTED]:~# time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
500+0 records in
500+0 records out

real0m57.830s
user0m0.050s
sys 0m20.940s
[EMAIL PROTECTED]:~# time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
500+0 records in
500+0 records out

real0m59.041s
user0m0.030s
sys 0m21.780s
[EMAIL PROTECTED]:~# time dd of=/dev/null bs=1M if=/zeroes
500+0 records in
500+0 records out

real0m49.963s
user0m0.000s
sys 0m15.510s
- - -
[EMAIL PROTECTED]:~# uname -r
2.6.12.2a
[EMAIL PROTECTED]:~# time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
500+0 records in
500+0 records out

real0m57.199s
user0m0.022s
sys 0m15.040s
[EMAIL PROTECTED]:~# time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
500+0 records in
500+0 records out

real0m56.825s
user0m0.024s
sys 0m14.893s
[EMAIL PROTECTED]:~# time dd of=/dev/null bs=1M if=/zeroes
500+0 records in
500+0 records out

real0m47.560s
user0m0.017s
sys 0m15.533s
[EMAIL PROTECTED]:~# time dd of=/dev/null bs=1M if=/zeroes
500+0 records in
500+0 records out

real0m47.842s
user0m0.012s
sys 0m15.647s

o o o
[EMAIL PROTECTED]:~# df
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/hda3  3084380   2018240   1066140  66% /
/dev/hda6  2056220   1049544   1006676  52% /usr/src
/dev/hda7   256996 34260222736  14% /usr/local
/dev/hda8   256996 33896223100  14% /home
/dev/hda1420562536 32840  20529696   1% /home/pooh
deltree:/home/share2064256   1042968   1021288  51% /home/share

[EMAIL PROTECTED]:~# uname -r
2.4.31-hf1
[EMAIL PROTECTED]:~# time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
500+0 records in
500+0 records out

real0m23.912s
user0m0.010s
sys 0m19.820s
[EMAIL PROTECTED]:~# time dd of=/dev/null bs=1M if=/zeroes
500+0 records in
500+0 records out

real0m24.302s
user0m0.020s
sys 0m16.760s
- - -
[EMAIL PROTECTED]:~# uname -r
2.6.12.2a
[EMAIL PROTECTED]:~# time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
500+0 records in
500+0 records out

real0m22.449s
user0m0.017s
sys 0m13.576s
[EMAIL PROTECTED]:~# time dd of=/dev/null bs=1M if=/zeroes
500+0 records in
500+0 records out

real0m26.780s
user0m0.010s
sys 0m13.398s

o o o

peetoo:~$ df
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/hda3  2586348   1075652   1510696  42% /
/dev/hdc3  2586348   2044228542120  80% /usr
/dev/hdc6  2586348   1217568   1368780  48% /usr/src
/dev/hda9 20562504  10821500   9741004  53% /home/install
/dev/hdc9 20562504   4329320  16233184  22% /home/public
/dev/hda1041446344  39676256   1770088  96% /home/archive
deltree:/home/share2064256   1042968   1021288  51% /home/share

peetoo:~$ uname -r
2.4.31-hf1
peetoo:~$ time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
500+0 records in
500+0 records out

real0m33.322s
user0m0.000s
sys 0m13.650s
peetoo:~$ time $(dd if=/dev/zero bs=1M count=500 of=/zeroes; sync)
500+0 records in
500+0 records out

real0m32.983s
user0m0.010s
sys 0m13.740s
peetoo:~$ time $(dd if=/dev/zero bs=1M count=500 of=/usr/zeroes; sync)
500+0 records in
500+0 records out

real0m30.775s
user0m0.000s
sys 0m13.600s
peetoo:~$ time $(dd if=/dev/zero bs=1M count=500 of=/usr/zeroes; sync)
500+0 records in
500+0 records out

real0m33.077s
user0m0.010s
sys 0m13.740s
peetoo:~$ time dd of=/dev/null bs=1M if=/zeroes
500+0 records in
500+0 records out

real

RE: [git patches] IDE update

2005-07-04 Thread Al Boldi
Bartlomiej Zolnierkiewicz wrote: {
Please pull from:
rsync://rsync.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6.git
}

Does it fix the idedriver int/dma problem?

Al



Re: [git patches] IDE update

2005-07-04 Thread Bartlomiej Zolnierkiewicz
On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
 Bartlomiej Zolnierkiewicz wrote: {
 Please pull from:
 rsync://rsync.kernel.org/pub/scm/linux/kernel/git/bart/ide-2.6.git
 }
 
 Does it fix the idedriver int/dma problem?

What is the int/dma problem?


RE: [git patches] IDE update

2005-07-04 Thread Al Boldi
Bartlomiej Zolnierkiewicz wrote: {
What is the int/dma problem?
}

hdparm -tT gives 38 MB/s in 2.4.31
cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle

hdparm -tT gives 28 MB/s in 2.6.12
cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT

It feels like DMA is not being applied properly in 2.6.12.

Your comments please.

Thanks!
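One way to sanity-check the "DMA not applied" suspicion (not from the original thread; the commands assume a 2.4/2.6-era hdparm, and `dma_state` is a hypothetical helper for scripting the check):

```shell
# On the affected box (assumed-available commands, not from the thread):
#   hdparm -d /dev/hda    # prints the using_dma flag
#   hdparm -i /dev/hda    # '*' marks the currently selected PIO/DMA mode
#
# Tiny parser for the `hdparm -d` output line, so the check can be scripted:
dma_state() {
    # $1 = a line like " using_dma    =  1 (on)"
    echo "$1" | awk '/using_dma/ { if ($3 == 1) print "on"; else print "off" }'
}

dma_state " using_dma    =  1 (on)"
dma_state " using_dma    =  0 (off)"
```

If `using_dma` is 0, the 28 MB/s vs 38 MB/s gap would have a much simpler explanation than a block-layer regression.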



Re: [git patches] IDE update

2005-07-04 Thread Bartlomiej Zolnierkiewicz
On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
 Bartlomiej Zolnierkiewicz wrote: {
 What is the int/dma problem?
 }
 
 hdparm -tT gives 38 MB/s in 2.4.31
 cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle

 hdparm -tT gives 28 MB/s in 2.6.12
 cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT
 
 It feels like DMA is not being applied properly in 2.6.12.
 
 Your comments please.

Are earlier 2.6.x kernels okay?

dmesg output?

Bartlomiej


RE: [git patches] IDE update

2005-07-04 Thread Al Boldi
Bartlomiej Zolnierkiewicz wrote: {
 On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
 hdparm -tT gives 38 MB/s in 2.4.31
 cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle

 hdparm -tT gives 28 MB/s in 2.6.12
 cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT
 
 It feels like DMA is not being applied properly in 2.6.12.

Are earlier 2.6.x kernels okay?

dmesg output?
}

Same on 2.6.10,11,12.
No errors though, only sluggish system.



Re: [git patches] IDE update

2005-07-04 Thread Ondrej Zary

Al Boldi wrote:

Bartlomiej Zolnierkiewicz wrote: {


On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
hdparm -tT gives 38 MB/s in 2.4.31
cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle

hdparm -tT gives 28 MB/s in 2.6.12
cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT

It feels like DMA is not being applied properly in 2.6.12.



Are earlier 2.6.x kernels okay?

dmesg output?
}

Same on 2.6.10,11,12.
No errors though, only sluggish system.


Something like this http://lkml.org/lkml/2005/6/13/1 ?

--
Ondrej Zary


Re: [git patches] IDE update

2005-07-04 Thread Bartlomiej Zolnierkiewicz
On 7/4/05, Ondrej Zary [EMAIL PROTECTED] wrote:
 Al Boldi wrote:
  Bartlomiej Zolnierkiewicz wrote: {
 
 On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
 hdparm -tT gives 38 MB/s in 2.4.31
 cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle

 hdparm -tT gives 28 MB/s in 2.6.12
 cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT
 
 It feels like DMA is not being applied properly in 2.6.12.
 
 
  Are earlier 2.6.x kernels okay?
 
  dmesg output?
  }
 
  Same on 2.6.10,11,12.
  No errors though, only sluggish system.

What about earlier kernels?

Please try to narrow down the problem to a specific kernel version.

 Something like this http://lkml.org/lkml/2005/6/13/1 ?

It doesn't seem like IDE regression but...

Bartlomiej


Re: [git patches] IDE update

2005-07-04 Thread Bartlomiej Zolnierkiewicz
On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
 Bartlomiej Zolnierkiewicz wrote: {
  
  On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
  hdparm -tT gives 38 MB/s in 2.4.31
  cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle

  hdparm -tT gives 28 MB/s in 2.6.12
  cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT
  
  It feels like DMA is not being applied properly in 2.6.12.
  
   Same on 2.6.10,11,12.
   No errors though, only sluggish system.
 
 What about earlier kernels?
 Please try to narrow down the problem to a specific kernel version.
 }
 
 Don't know about 2.6.0-2.6.9, but 2.4.31 is ok.

2.4 -> 2.6 means zillions of changes.
 
 Bartlomiej,
 When you compare 2.4.31 with 2.6.12 don't you see this problem on your
 machine?

Unfortunately I've never encountered this problem on any machine.

 If you have a fast system the slowdown won't show, but your IOWAIT will be
 higher anyway!

AFAIK high iowait is a known block-layer problem.

 It feels like running on PIO instead of DMA.


Re: [git patches] IDE update

2005-07-04 Thread André Tomt

Al Boldi wrote:

Bartlomiej Zolnierkiewicz wrote: {


On 7/4/05, Al Boldi [EMAIL PROTECTED] wrote:
hdparm -tT gives 38 MB/s in 2.4.31
cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle

hdparm -tT gives 28 MB/s in 2.6.12
cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle 73% IOWAIT


That hdparm doesn't get as high scores as in 2.4 is an old,
discussed-to-death problem on LKML. So far nobody has been able to show
it affects anything but that pretty useless quasi-benchmark.



It feels like DMA is not being applied properly in 2.6.12.


Same on 2.6.10,11,12.
No errors though, only sluggish system.


Really sluggish or just benchmark-sluggish? If the former, try
selecting a different I/O elevator/scheduler. If the latter, it doesn't
matter much, at least not with the very simple hdparm test :-)




What about earlier kernels?
Please try to narrow down the problem to a specific kernel version.
}


Don't know about 2.6.0-2.6.9, but 2.4.31 is ok.

Bartlomiej,
When you compare 2.4.31 with 2.6.12 don't you see this problem on your
machine?
If you have a fast system the slowdown won't show, but your IOWAIT will be
higher anyway!


Nothing wrong with 73% iowait; I'd even consider it very low while
putting load on a hard drive. It's just time spent waiting for data to
be returned from disk, so I usually expect no lower than ~98-99% while
stressing any disk. Hard disks are _slow as snails_ compared to CPU
cycles ;-)


Beware: 2.4 didn't export that statistic to userspace at all, so most
2.6-aware reporting tools report 0% iowait when run on 2.4.
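The accounting difference described above can be sketched like this (assumptions, not from the thread: the standard /proc/stat aggregate `cpu` line layout, where iowait is the fifth counter on 2.6 and simply absent on 2.4; `iowait_jiffies` is a hypothetical helper):

```shell
# Extract the iowait jiffies from an aggregate /proc/stat "cpu" line.
# Field layout: cpu user nice system idle [iowait irq softirq ...];
# 2.4 stops at idle, which is why 2.6-aware tools show 0% iowait there.
iowait_jiffies() {
    echo "$1" | awk '{ if (NF >= 6) print $6; else print 0 }'
}

iowait_jiffies "cpu  4705 150 1120 16250 8520 12 30"   # 2.6-style line
iowait_jiffies "cpu  4705 150 1120 16250"              # 2.4-style line
```

On 2.4 the second form is all the tool gets, so its iowait percentage is 0 by construction, not by measurement.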

