Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Dupuy, Robert
My take on the responses I've received over the last few days is that the
conversation isn't genuine.

From: Tim Cook [mailto:t...@cook.ms] 
Sent: 2009-10-20 20:57
To: Dupuy, Robert
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Sun Flash Accelerator F20

 

On Tue, Oct 20, 2009 at 3:58 PM, Robert Dupuy rdu...@umpublishing.org
wrote:

there is no consistent latency measurement in the industry

You bring up an important point, as did another poster earlier
in the thread, and certainly it's an issue that needs to be addressed.


I'd be surprised if anyone could answer such a question while
simultaneously being credible.


http://download.intel.com/design/flash/nand/extreme/extreme-sata-ssd-product-brief.pdf

Intel:  X-25E read latency 75 microseconds

http://www.sun.com/storage/disk_systems/sss/f5100/specs.xml

Sun:  F5100 read latency 410 microseconds

http://www.fusionio.com/PDFs/Data_Sheet_ioDrive_2.pdf

Fusion-IO:  read latency less than 50 microseconds

Fusion-IO lists theirs as .05ms


I find the latency measures to be useful.

I know it isn't perfect, and I agree benchmarks can be
deceiving, heck I criticized one vendors benchmarks in this thread
already :)

But I did find that, for me, a very simple single-threaded,
read-as-fast-as-you-can approach, counting the # of random accesses per
second, is one type of measurement that gives you some data on the raw
access ability of the drive.

No doubt in some cases, you want to test multithreaded IO too,
but my application is very latency sensitive, so this initial test was
telling.

As I got into the actual performance of my app, the lower-latency drives
performed better than the higher-latency drives...all of this was on SSD.

(I did not test the F5100 personally, I'm talking about the SSD
drives that I did test).

So, yes, SSD and HDD are different, but latency is still
important.



Timeout, rewind, etc.  What workload do you have where 410-microsecond
latency is detrimental?  More to the point, what workload do you have
where you'd rather have 5-microsecond latency with 1/10th the IOPS?
Whatever it is, I've never run across such a workload in the real world.
It sounds like you're comparing paper numbers for the sake of
comparison, rather than to solve a real-world problem...

BTW, latency does not give you the # of random accesses per second.
5-microsecond latency for one access != # of random accesses per second,
sorry.
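
(A quick back-of-the-envelope sketch of that distinction, using the latency
figures quoted earlier in the thread: with only one request outstanding, the
best a single thread can do is 1/latency accesses per second.)

# One request at a time: throughput is bounded by 1 / latency.
for name, latency_us in [("410 us (F5100 spec)", 410), ("75 us (X-25E spec)", 75)]:
    print("%s -> at most %.0f accesses/sec from one thread" % (name, 1e6 / latency_us))
# Roughly 2,439 and 13,333 respectively, far below published IOPS figures,
# which are measured with many requests in flight at once.
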
--Tim 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Dupuy, Robert
I've already explained how you can scale up IOPS numbers, and unless that is
your real workload, you won't see those numbers in practice.

See, that scaling comes from running a high # of parallel jobs spread evenly
across the device.

I don't find the conversation genuine, so I'm not going to continue it.


-Original Message-
From: Richard Elling [mailto:richard.ell...@gmail.com] 
Sent: 2009-10-20 16:39
To: Dupuy, Robert
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Sun Flash Accelerator F20

On Oct 20, 2009, at 1:58 PM, Robert Dupuy wrote:

 there is no consistent latency measurement in the industry

 You bring up an important point, as did another poster earlier in  
 the thread, and certainly it's an issue that needs to be addressed.

 I'd be surprised if anyone could answer such a question while  
 simultaneously being credible.


http://download.intel.com/design/flash/nand/extreme/extreme-sata-ssd-product-brief.pdf

 Intel:  X-25E read latency 75 microseconds

... but they don't say where it was measured or how big it was...

 http://www.sun.com/storage/disk_systems/sss/f5100/specs.xml

 Sun:  F5100 read latency 410 microseconds

... for 1M transfers... I have no idea what the units are, though...  
bytes?

 http://www.fusionio.com/PDFs/Data_Sheet_ioDrive_2.pdf

 Fusion-IO:  read latency less than 50 microseconds

 Fusion-IO lists theirs as .05ms

...at the same time they quote 119,790 IOPS @ 4KB.  By my calculator,
that is 8.3 microseconds per IOP, so clearly the latency itself doesn't
have a direct impact on IOPs.

 I find the latency measures to be useful.

Yes, but since we are seeing benchmarks showing 1.6 MIOPS (mega-IOPS :-)
on a system which claims 410 microseconds of latency, it really isn't
clear to me how to apply the numbers to capacity planning. To wit, there
is some limit to the number of concurrent IOPS that can be processed per
device, so do I need more devices, faster devices, or devices which can
handle more concurrent IOPS?
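
(One rough way to reconcile those numbers is Little's law: the average number
of I/Os in flight equals throughput times latency. A small sketch using the
figures quoted in this thread:)

# Little's law: average I/Os in flight = throughput x latency.
print("%.0f" % (1.6e6 * 410e-6))   # ~656 concurrent I/Os behind 1.6 MIOPS at 410 us
print("%.0f" % (119790 * 50e-6))   # ~6 concurrent I/Os behind 119,790 IOPS at 50 us
# So the planning question becomes how many requests the device (and the host
# stack feeding it) can keep in flight, not just how fast one request completes.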

 I know it isn't perfect, and I agree benchmarks can be deceiving,  
 heck I criticized one vendors benchmarks in this thread already :)

 But I did find that, for me, a very simple single-threaded,
 read-as-fast-as-you-can approach, counting the # of random accesses per
 second, is one type of measurement that gives you some data on the raw
 access ability of the drive.

 No doubt in some cases, you want to test multithreaded IO too, but  
 my application is very latency sensitive, so this initial test was  
 telling.

cool.

 As I got into the actual performance of my app, the lower-latency drives
 performed better than the higher-latency drives...all of this was on SSD.

Note: the F5100 has SAS expanders which add latency.
  -- richard

 (I did not test the F5100 personally, I'm talking about the SSD  
 drives that I did test).

 So, yes, SSD and HDD are different, but latency is still important.
 -- 
 This message posted from opensolaris.org


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-21 Thread Dupuy, Robert
 This is one of the skimpiest specification sheets that I have ever 
seen for an enterprise product.

At least it shows the latency.

This is some kind of technology cult I've wandered into.


I won't respond further.

-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us] 
Sent: 2009-10-20 21:54
To: Richard Elling
Cc: Dupuy, Robert; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Sun Flash Accelerator F20

On Tue, 20 Oct 2009, Richard Elling wrote:
 
 Intel:  X-25E read latency 75 microseconds

 ... but they don't say where it was measured or how big it was...

Probably measured using a logic analyzer and measuring the time from 
the last bit of the request going in, to the first bit of the response 
coming out.  It is not clear if this latency is a minimum, maximum, 
median, or average.  It is not clear if this latency is while the 
device is under some level of load, or if it is in a quiescent state.
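
(To illustrate how much those choices can matter: if you record per-request
latencies yourself, say from a test like the one described earlier in the
thread, the minimum, median, mean, and tail can tell quite different stories.
A minimal summary sketch, assuming a list of per-I/O latencies in
microseconds:)

import statistics

def summarize(latencies_us):
    # latencies_us: per-I/O latencies, in microseconds, from your own test run
    s = sorted(latencies_us)
    p99 = s[min(len(s) - 1, int(0.99 * len(s)))]   # crude 99th percentile
    print("min %.0f  median %.0f  mean %.0f  p99 %.0f  max %.0f (us)"
          % (s[0], statistics.median(s), statistics.mean(s), p99, s[-1]))

# A spec sheet that quotes a single "read latency" could be any one of these,
# and under load they can differ by an order of magnitude.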

This is one of the skimpiest specification sheets that I have ever 
seen for an enterprise product.

 Sun:  F5100 read latency 410 microseconds

 ... for 1M transfers... I have no idea what the units are, though...
bytes?

Sun's testing is likely done while attached to a system and done with 
some standard loading factor rather than while in a quiescent state.

 ...at the same time they quote 119,790 IOPS @ 4KB.  By my calculator,
 that is 8.3 microseconds per IOP, so clearly the latency itself doesn't
 have a direct impact on IOPs.

I would be interested to know how many IOPS an OS like Solaris is able 
to push through a single device interface.  The normal driver stack is 
likely limited as to how many IOPS it can sustain for a given LUN 
since the driver stack is optimized for high latency devices like disk 
drives.  If you are creating a driver stack, the design decisions you 
make when requests will be satisfied in about 12ms would be much 
different than if requests are satisfied in 50us.  Limitations of 
existing software stacks are likely reasons why Sun is designing 
hardware with more device interfaces and more independent devices.
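
(A rough user-land way to probe that limit, not Solaris-specific and not how
anyone in this thread measured it: run the earlier single-threaded random-read
loop from several threads against one device, reusing the same hypothetical
target path, and watch where the aggregate IOPS stops scaling.)

import os, random, threading, time

PATH = "/path/to/device-or-large-file"  # hypothetical test target, as before
BLOCK = 4096
COUNT = 20000                           # reads per thread
THREADS = 8                             # raise until aggregate IOPS stops climbing

def worker(results, i):
    fd = os.open(PATH, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    for _ in range(COUNT):
        # os.pread releases the GIL during the syscall, so threads overlap on I/O
        os.pread(fd, BLOCK, random.randrange(size // BLOCK) * BLOCK)
    os.close(fd)
    results[i] = COUNT

results = [0] * THREADS
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(THREADS)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
wall = time.time() - start
print("%d threads: %.0f aggregate IOPS" % (THREADS, sum(results) / wall))
# The point where adding threads stops adding IOPS is (roughly) the combined
# limit of the device and the driver stack for this request size.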

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us,
http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss