Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-23 Thread Kjetil Torgrim Homme
Miles Nordin car...@ivy.net writes: kth == Kjetil Torgrim Homme kjeti...@linpro.no writes: kth the SCSI layer handles the replaying of operations after a kth reboot or connection failure. how? I do not think it is handled by SCSI layers, not for SAS nor iSCSI. sorry, I was

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-22 Thread Kjetil Torgrim Homme
Miles Nordin car...@ivy.net writes: There will probably be clients that might seem to implicitly make this assumption by mishandling the case where an iSCSI target goes away and then comes back (but comes back less whatever writes were in its write cache). Handling that case for NFS was

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-22 Thread Miles Nordin
kth == Kjetil Torgrim Homme kjeti...@linpro.no writes: kth basically iSCSI just defines a reliable channel for SCSI. pft. AIUI a lot of the complexity in real stacks is ancient protocol arcana for supporting multiple initiators and TCQ regardless of whether the physical target supports

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-20 Thread Richard Elling
On Feb 18, 2010, at 4:55 AM, Phil Harman wrote: This discussion is very timely, but I don't think we're done yet. I've been working on using NexentaStor with Sun's VDI stack. The demo I've been playing with glues SunRays to VirtualBox instances using ZFS zvols over iSCSI for the boot

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still unsafe (i.e. if my iSCSI client assumes all writes are synchronised to nonvolatile storage,

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ross Walker
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still unsafe (i.e. if my iSCSI

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Phil Harman
On 19/02/2010 21:57, Ragnar Sundblad wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still unsafe (i.e. if my iSCSI client assumes all writes

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 23.20, Ross Walker wrote: On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ragnar Sundblad
On 19 feb 2010, at 23.22, Phil Harman wrote: On 19/02/2010 21:57, Ragnar Sundblad wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workarounds are still

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Brent Jones
On Wed, Feb 17, 2010 at 11:03 PM, Matt registrat...@flash.shanje.com wrote: No SSD Log device yet.  I also tried disabling the ZIL, with no effect on performance. Also - what's the best way to test local performance?  I'm _somewhat_ dumb as far as opensolaris goes, so if you could provide

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Markus Kovero
No one has said if they're using dsk, rdsk, or file-backed COMSTAR LUNs yet. I'm using file-backed COMSTAR LUNs, with ZIL currently disabled. I can get between 100-200MB/sec, depending on random/sequential and block sizes. Using dsk/rdsk, I was not able to see that level of performance at

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
Hi Matt. Are you seeing low speeds on writes only, or on both read AND write? Are you seeing low speeds just with iSCSI, or also with NFS or CIFS? I've tried updating to COMSTAR (although I'm not certain that I'm actually using it). To check, do this: # svcs -a | grep iscsi If
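For reference, the legacy target and COMSTAR register as different SMF services, so the svcs output tells you which stack is in play (service names per the OpenSolaris documentation of the era):

  # svcs -a | grep -i iscsi

The legacy user-land target shows up as svc:/system/iscsitgt:default, while the COMSTAR target is svc:/network/iscsi/target:default. If the COMSTAR service is online, its logical units can be listed with:

  # stmfadm list-lu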

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Günther
hello, there is a new beta v. 0.220 of napp-it, the free webgui for nexenta(core) 3. new: bonnie benchmarks included (see screenshot: http://www.napp-it.org/bench.png), bug fixes. if you look at the benchmark screenshot: pool daten: zfs3 of 7 x wd 2TB raid

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Tomas Ögren
On 18 February, 2010 - Günther sent me these 1,1K bytes: hello, there is a new beta v. 0.220 of napp-it, the free webgui for nexenta(core) 3. new: bonnie benchmarks included (see screenshot: http://www.napp-it.org/bench.png), bug fixes. if you look at

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Günther
hello, my intention was to show how you can tune up a pool of drives (how much you can reach when using sas compared to 2 TB high-capacity drives), and now the other results with same config and sas drives: wd 2TB x 7, z3, dedup and compress on, no ssd, daten 12.6T start

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Eugen Leitl
On Wed, Feb 17, 2010 at 11:21:07PM -0800, Matt wrote: Just out of curiosity - what Supermicro chassis did you get? I've got the following items shipping to me right now, with SSD drives and 2TB main drives coming as soon as the system boots and performs normally (using 8 extra 500GB

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Phil Harman
This discussion is very timely, but I don't think we're done yet. I've been working on using NexentaStor with Sun's VDI stack. The demo I've been playing with glues SunRays to VirtualBox instances using ZFS zvols over iSCSI for the boot image, with all the associated ZFS snapshot/clone

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Matt
Responses inline : Hi Matt. Are you seeing low speeds on writes only, or on both read AND write? Low speeds both reading and writing. Are you seeing low speeds just with iSCSI, or also with NFS or CIFS? Haven't gotten NFS or CIFS to work properly. Maybe I'm just too dumb to figure it

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Matt
One question though: Just this one SAS adaptor? Are you connecting to the drive backplane with one cable for the 4 internal SAS connectors? Are you using SAS or SATA drives? Will you be filling up 24 slots with 2 TByte drives, and are you sure you won't be oversubscribed with just 4x

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Marc Nicholas
On Thu, Feb 18, 2010 at 10:49 AM, Matt registrat...@flash.shanje.com wrote: Here's IOStat while doing writes :

    r/s    w/s  kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0  256.9   3.0 2242.9  0.3  0.1    1.3    0.5  11  12 c0t0d0
    0.0  253.9   0.0 2242.9  0.3  0.1    1.0

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Matt
Also - still looking for the best way to test local performance. I'd love to make sure that the volume is actually able to perform at a level locally that saturates gigabit. If it can't do it internally, why should I expect it to work over GbE? -- This message posted from opensolaris.org
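A crude local sanity check is a large sequential dd against the pool, sized bigger than RAM so the ARC doesn't mask the disks; the dataset name here is illustrative, and compression must be off or the /dev/zero input will compress to nothing:

  # zfs create -o compression=off tank/test
  # dd if=/dev/zero of=/tank/test/big bs=1M count=16384
  # dd if=/tank/test/big of=/dev/null bs=1M

Gigabit Ethernet tops out around 110-120 MB/s, so if the pool can't sustain that locally, iSCSI never will.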

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Bob Friesenhahn
On Thu, 18 Feb 2010, Günther wrote: i was surprised about the sequential write/rewrite result. the wd 2 TB drives perform very well only in sequential character writes but are horribly bad in blockwise write/rewrite. the 15k sas drives with ssd read cache perform 20x better (10MB/s - 200

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Marc Nicholas
Run Bonnie++. You can install it with the Sun package manager and it'll appear under /usr/benchmarks/bonnie++ Look for the command line I posted a couple of days back for a decent set of flags to truly rate performance (using sync writes). -marc On Thu, Feb 18, 2010 at 11:05 AM, Matt
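Marc's exact flags aren't reproduced in this digest; a representative sync-write invocation (directory, size and user are illustrative) would be something like:

  # /usr/benchmarks/bonnie++/bonnie++ -d /tank/test -s 16g -n 0 -u root -b

where -s is the working-set size (ideally about twice RAM), -b forces an fsync after every write so synchronous I/O is actually exercised, and -n 0 skips the small-file creation phase.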

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
Hi Matt Haven't gotten NFS or CIFS to work properly. Maybe I'm just too dumb to figure it out, but I'm ending up with permissions errors that don't let me do much. All testing so far has been with iSCSI. So until you can test NFS or CIFS, we don't know if it's a general performance problem,

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
Another thing you could check, which has been reported to cause problems, is whether network or disk drivers share an interrupt with a slow device, like say a USB device. So try: # echo ::interrupts -d | mdb -k ... and look for multiple driver names on an INT#. Regards Nigel Smith -- This message
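To narrow that down, pipe the mdb output through grep for the suspects; the driver names here (e1000g for the NIC, uhci/ehci for USB) are only examples, substitute whatever your system uses:

  # echo ::interrupts -d | mdb -k | egrep 'e1000g|uhci|ehci'

An INT# line listing both the NIC and a USB controller would be the smoking gun.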

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-17 Thread Matt
Just wanted to add that I'm in the exact same boat - I'm connecting from a Windows system and getting just horrid iSCSI transfer speeds. I've tried updating to COMSTAR (although I'm not certain that I'm actually using it) to no avail, and I tried updating to the latest DEV version of

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-17 Thread Brent Jones
On Wed, Feb 17, 2010 at 10:42 PM, Matt registrat...@flash.shanje.com wrote: I've got a very similar rig to the OP showing up next week (plus an infiniband card). I'd love to get this performing up to gigabit Ethernet speeds, otherwise I may have to abandon the iSCSI project if I can't get it to

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-17 Thread Matt
No SSD Log device yet. I also tried disabling the ZIL, with no effect on performance. Also - what's the best way to test local performance? I'm _somewhat_ dumb as far as opensolaris goes, so if you could provide me with an exact command line for testing my current setup (exactly as it
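For the record, on builds of this era disabling the ZIL was a global /etc/system tunable (a sketch of the then-current method; it affects every pool on the host, needs a reboot, and is unsafe for anything that relies on synchronous write semantics):

  * in /etc/system:
  set zfs:zil_disable = 1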

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-17 Thread Matt
Just out of curiosity - what Supermicro chassis did you get? I've got the following items shipping to me right now, with SSD drives and 2TB main drives coming as soon as the system boots and performs normally (using 8 extra 500GB Barracuda ES.2 drives as test drives).

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-16 Thread Richard Elling
On Feb 15, 2010, at 11:34 PM, Ragnar Sundblad wrote: On 15 feb 2010, at 23.33, Bob Beverage wrote: On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I've seen exactly the same thing. Basically, terrible transfer rates with Windows and the server sitting there

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-16 Thread Brian E. Imhoff
Some more back story. I initially started with Solaris 10 u8, and was getting 40ish MB/s reads, and 65-70MB/s writes, which was still a far cry from the performance I was getting with OpenFiler. I decided to try Opensolaris 2009.06, thinking that since it was more state-of-the-art and up to date

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-16 Thread Richard Elling
On Feb 16, 2010, at 9:44 AM, Brian E. Imhoff wrote: Some more back story. I initially started with Solaris 10 u8, and was getting 40ish MB/s reads, and 65-70MB/s writes, which was still a far cry from the performance I was getting with OpenFiler. I decided to try Opensolaris 2009.06,

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-16 Thread Eric D. Mudama
On Tue, Feb 16 at 9:44, Brian E. Imhoff wrote: But, at the end of the day, this is quite a bomb: A single raidz2 vdev has about as many IOs per second as a single disk, which could really hurt iSCSI performance. If I have to break 24 disks up into multiple vdevs to get the expected

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-15 Thread Peter Tribble
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN box, and am experiencing absolutely poor / unusable performance. ... From here, I discover the iscsi target on our Windows server 2008 R2

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-15 Thread Bob Beverage
On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I've seen exactly the same thing. Basically, terrible transfer rates with Windows and the server sitting there completely idle. I am also seeing this behaviour. It started somewhere around snv111 but I am not

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-15 Thread Ragnar Sundblad
On 15 feb 2010, at 23.33, Bob Beverage wrote: On Wed, Feb 10, 2010 at 10:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I've seen exactly the same thing. Basically, terrible transfer rates with Windows and the server sitting there completely idle. I am also seeing this behaviour.

[zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Brian E. Imhoff
I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN box, and am experiencing absolutely poor / unusable performance. Where to begin... The Hardware setup: Supermicro 4U 24 Drive Bay Chassis Supermicro X8DT3 Server Motherboard 2x Xeon E5520 Nehalem 2.26GHz Quad Core CPUs

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Will Murnane
On Wed, Feb 10, 2010 at 17:06, Brian E. Imhoff beimh...@hotmail.com wrote: I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN box, and am experiencing absolutely poor / unusable performance. I then create a zpool, using raidz2, using all 24 drives, 1 as a hotspare:

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Frank Cusack
On 2/10/10 2:06 PM -0800 Brian E. Imhoff wrote: I then create a zpool, using raidz2, using all 24 drives, 1 as a hotspare: zpool create tank raidz2 c1t0d0 c1t1d0 [...] c1t22d0 spare c1t23d0 Well there's one problem anyway. That's going to be horribly slow no matter what.
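The standard fix is several narrower raidz2 vdevs in one pool, since random IOPS scale with the number of top-level vdevs; a sketch of one possible 24-drive layout (3 x 8-disk raidz2, device names following the poster's c1tNd0 convention, hotspare omitted):

  # zpool create tank \
      raidz2 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0  c1t5d0  c1t6d0  c1t7d0 \
      raidz2 c1t8d0  c1t9d0  c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
      raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0

This trades a few data disks of capacity for roughly three times the random IOPS.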

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread David Dyer-Bennet
On Wed, February 10, 2010 16:28, Will Murnane wrote: On Wed, Feb 10, 2010 at 17:06, Brian E. Imhoff beimh...@hotmail.com wrote: I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN box, and am experiencing absolutely poor / unusable performance. I then create a

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Tim Cook
On Wed, Feb 10, 2010 at 4:06 PM, Brian E. Imhoff beimh...@hotmail.com wrote: I am in the proof-of-concept phase of building a large ZFS/Solaris based SAN box, and am experiencing absolutely poor / unusable performance. Where to begin... The Hardware setup: Supermicro 4U 24 Drive Bay Chassis

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Bob Friesenhahn
On Wed, 10 Feb 2010, Frank Cusack wrote: On 2/10/10 2:06 PM -0800 Brian E. Imhoff wrote: I then create a zpool, using raidz2, using all 24 drives, 1 as a hotspare: zpool create tank raidz2 c1t0d0 c1t1d0 [...] c1t22d0 spare c1t23d0 Well there's one problem anyway. That's going to be

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Marc Nicholas
Definitely use Comstar as Tim says. At home I'm using 4*WD Caviar Blacks on an AMD Phenom x4 @ 1.Ghz and only 2GB of RAM. I'm running snv132. No HBA - onboard SB700 SATA ports. I can, with IOmeter, saturate GigE from my WinXP laptop via iSCSI. Can you toss the RAID controller aside and use

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Kjetil Torgrim Homme
Bob Friesenhahn bfrie...@simple.dallas.tx.us writes: On Wed, 10 Feb 2010, Frank Cusack wrote: The other three commonly mentioned issues are: - Disable the Nagle algorithm on the windows clients. for iSCSI? shouldn't be necessary. - Set the volume block size so that it matches the
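Matching the block size has to happen at zvol creation time, since volblocksize is immutable afterwards; a sketch matching a 4 KiB NTFS cluster size (name and size are illustrative):

  # zfs create -V 100g -o volblocksize=4k tank/iscsi-lun0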

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Marc Nicholas
How does lowering the flush interval help? If he can't ingress data fast enough, faster flushing is a Bad Thing(tm). -marc On 2/10/10, Kjetil Torgrim Homme kjeti...@linpro.no wrote: Bob Friesenhahn bfrie...@simple.dallas.tx.us writes: On Wed, 10 Feb 2010, Frank Cusack wrote: The other three
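The "flush interval" here presumably means the ZFS transaction-group sync interval; assuming zfs_txg_timeout is the knob being discussed (it defaulted to 30 seconds on builds of this vintage), it was lowered via /etc/system:

  * in /etc/system:
  set zfs:zfs_txg_timeout = 5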

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Brent Jones
On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas geekyth...@gmail.com wrote: How does lowering the flush interval help? If he can't ingress data fast enough, faster flushing is a Bad Thing(tm). -marc On 2/10/10, Kjetil Torgrim Homme kjeti...@linpro.no wrote: Bob Friesenhahn

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Marc Nicholas
This is a Windows box, not a DB that flushes every write. The drives are capable of over 2000 IOPS (albeit with high latency, as it's NCQ that gets you there), which would mean, even with sync flushes, 8-9MB/sec. -marc On 2/10/10, Brent Jones br...@servuhome.net wrote: On Wed, Feb 10, 2010 at
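The 8-9MB/sec figure is straightforward arithmetic, assuming roughly 4 KiB transferred per synchronous I/O:

  2000 IOPS x 4 KiB/IO = 8000 KiB/s, i.e. about 8 MB/s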

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Brent Jones
On Wed, Feb 10, 2010 at 4:05 PM, Brent Jones br...@servuhome.net wrote: On Wed, Feb 10, 2010 at 3:12 PM, Marc Nicholas geekyth...@gmail.com wrote: How does lowering the flush interval help? If he can't ingress data fast enough, faster flushing is a Bad Thing(tm). -marc On 2/10/10, Kjetil

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-10 Thread Kjetil Torgrim Homme
[please don't top-post, please remove CC's, please trim quotes. it's really tedious to clean up your post to make it readable.] Marc Nicholas geekyth...@gmail.com writes: Brent Jones br...@servuhome.net wrote: Marc Nicholas geekyth...@gmail.com wrote: Kjetil Torgrim Homme kjeti...@linpro.no