Re: [zfs-discuss] Thank you!

2007-07-16 Thread Scott Lovenberg
 eric kustarz wrote:
  On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:

   You sir, are a gentleman and a scholar!  Seriously, this is exactly
   the information I was looking for, thank you very much!

   Would you happen to know if this has improved since build 63 or if
   chipset has any effect one way or the other?

  Naw.  Without having information on how exactly the controller/disk
  firmware really works, we're merely speculating that the firmware is
  where the problem is.  Getting that information from the disk vendors
  is ahem tricky.

 Unfortunately, testing has not supported the theory that the problem is
 with the controller hardware, driver, disk, or disk firmware.  So far
 every valid measurement using FPDMA READ/WRITE (NCQ) vs. READ/WRITE DMA
 EXT has shown anywhere from less than 1% improvement using NCQ up to 22%
 improvement.  The biggest improvements are seen when the disk caches are
 disabled, but I have measured up to 19% improvement w.r.t. time spent
 waiting for I/Os to complete with the caches enabled.

  More investigation is needed.

 
 Absolutely more investigation is needed.
  eric

Just a thought or two off the top of my head: is the caching daemon (bdflush or 
something to that effect) running when you are performing these tests?  I think 
it flushes every 20 or 30 seconds by default, IIRC.

I'm not sure, but this sounds like a buffering issue where it's waiting for a 
full buffer before flushing the changes.  Are these disks on ATA 
(DMA/UDMA/PIO), SATA, or SCSI interfaces?  Are these disks Western Digitals?  
I've heard their caching algorithms aren't optimized at all (strictly hearsay).

It could be a delay on the channel if it's PATA and the other drive on the 
channel is being accessed...  

Perhaps this is a cache coherency problem (what's the architecture: x86, IA1/2, 
SPARC, PPC... single processor or SMP... memory timings?).
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thank you!

2007-07-16 Thread Brian D. Horn
Scott Lovenberg wrote:
 eric kustarz wrote:
  On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:

   You sir, are a gentleman and a scholar!  Seriously, this is exactly
   the information I was looking for, thank you very much!

   Would you happen to know if this has improved since build 63 or if
   chipset has any effect one way or the other?

  Naw.  Without having information on how exactly the controller/disk
  firmware really works, we're merely speculating that the firmware is
  where the problem is.  Getting that information from the disk vendors
  is ahem tricky.

 Unfortunately, testing has not supported the theory that the problem is
 with the controller hardware, driver, disk, or disk firmware.  So far
 every valid measurement using FPDMA READ/WRITE (NCQ) vs. READ/WRITE DMA
 EXT has shown anywhere from less than 1% improvement using NCQ up to
 22% improvement.  The biggest improvements are seen when the disk
 caches are disabled, but I have measured up to 19% improvement w.r.t.
 time spent waiting for I/Os to complete with the caches enabled.

  More investigation is needed.

 Absolutely more investigation is needed.

 eric

 Just a thought or two off the top of my head: is the caching daemon (bdflush 
 or something to that effect) running when you are performing these tests?  I 
 think it flushes every 20 or 30 seconds by default, IIRC.

 I'm not sure, but this sounds like a buffering issue where it's waiting for a 
 full buffer before flushing the changes.  Are these disks on ATA 
 (DMA/UDMA/PIO), SATA, or SCSI interfaces?  Are these disks Western Digitals?  
 I've heard their caching algorithms aren't optimized at all (strictly 
 hearsay).
   
Given that we are talking about NCQ, which is a SATA-only feature, we are 
talking about SATA controllers and disks.  Also, these I/Os are being 
scheduled to be done immediately, and the discussion concerned sequential 
reads from one or more ZFS files.  No writes.
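
[A minimal sketch of the kind of sequential-read measurement being discussed; 
the file path and block size are hypothetical, and a real test would need a 
file much larger than RAM so the reads aren't served from the ARC:]

```python
import time


def sequential_read_mbps(path, block_size=128 * 1024):
    """Read `path` start to finish and return throughput in MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    # Guard against a zero-length interval on very small files.
    elapsed = max(time.monotonic() - start, 1e-9)
    return (total / (1024 * 1024)) / elapsed


# Hypothetical ZFS file; compare runs with NCQ enabled vs. disabled:
# print(sequential_read_mbps("/tank/testfile"))
```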
 It could be a delay on the channel if it's PATA and the other drive on the 
 channel is being accessed...  

 Perhaps this is a cache coherency problem (what's the architecture: x86, 
 IA1/2, SPARC, PPC... single processor or SMP... memory timings?).
   
The vast majority of tests were done on Opteron-based machines (mostly 
Sun Fire x4500), but not exclusively.
  
  
   



Re: [zfs-discuss] Thank you!

2007-07-15 Thread Brian D. Horn
eric kustarz wrote:
 On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:

   
 You sir, are a gentleman and a scholar!  Seriously, this is exactly  
 the information I was looking for, thank you very much!

 Would you happen to know if this has improved since build 63 or if  
 chipset has any effect one way or the other?
 

 Naw.  Without having information on how exactly the controller/disk  
 firmware really works, we're merely speculating that the firmware is  
 where the problem is.  Getting that information from the disk vendors  
 is ahem tricky.
   

Unfortunately, testing has not supported the theory that the problem is with
the controller hardware, driver, disk, or disk firmware.  So far every
valid measurement using FPDMA READ/WRITE (NCQ) vs. READ/WRITE DMA EXT
has shown anywhere from less than 1% improvement using NCQ up to 22%
improvement.  The biggest improvements are seen when the disk caches are
disabled, but I have measured up to 19% improvement w.r.t. time spent
waiting for I/Os to complete with the caches enabled.
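
[The percentages above are relative reductions in I/O wait time; with 
hypothetical wait times chosen to match the 19% figure, the arithmetic is:]

```python
# Hypothetical wait times (seconds spent waiting for I/Os to complete),
# chosen only to illustrate how a figure like 19% is computed.
wait_dma_ext = 10.0  # READ/WRITE DMA EXT, disk caches enabled
wait_ncq = 8.1       # FPDMA READ/WRITE (NCQ), disk caches enabled

# Improvement w.r.t. time spent waiting for I/Os to complete.
improvement_pct = (wait_dma_ext - wait_ncq) / wait_dma_ext * 100
print(f"NCQ improvement: {improvement_pct:.1f}%")  # 19.0%
```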
 More investigation is needed.
   

Absolutely more investigation is needed.
 eric




Re: [zfs-discuss] Thank you!

2007-07-09 Thread eric kustarz

On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:

 You sir, are a gentleman and a scholar!  Seriously, this is exactly  
 the information I was looking for, thank you very much!

 Would you happen to know if this has improved since build 63 or if  
 chipset has any effect one way or the other?

Naw.  Without having information on how exactly the controller/disk  
firmware really works, we're merely speculating that the firmware is  
where the problem is.  Getting that information from the disk vendors  
is ahem tricky.

More investigation is needed.

eric
