Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Brent Jones
On Wed, Feb 17, 2010 at 11:03 PM, Matt wrote: > No SSD Log device yet.  I also tried disabling the ZIL, with no effect on > performance. > > Also - what's the best way to test local performance?  I'm _somewhat_ dumb as > far as opensolaris goes, so if you could provide me with an exact command

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Markus Kovero
> No one has said if they're using dsk, rdsk, or file-backed COMSTAR LUNs yet. > I'm using file-backed COMSTAR LUNs, with ZIL currently disabled. > I can get between 100-200MB/sec, depending on random/sequential and block > sizes. > > Using dsk/rdsk, I was not able to see that level of performanc
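(For reference, the two flavours are created along these lines; pool, sizes and names here are placeholders rather than the poster's actual setup. A zvol-backed LU points sbdadm at the raw zvol device, a file-backed one just points it at a plain file.)
  # zfs create -V 100G tank/iscsi0
  # sbdadm create-lu /dev/zvol/rdsk/tank/iscsi0
  # stmfadm add-view <lu-guid-printed-by-create-lu>
  # mkfile 100g /tank/luns/file0
  # sbdadm create-lu /tank/luns/file0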

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Daniel Carosone
On Wed, Feb 17, 2010 at 11:37:54PM -0500, Ethan wrote: > > It seems to me that you could also use the approach of 'zpool replace' for > That is true. It seems like it would then have to rebuild from parity for every > drive, though, which I think would take rather a long while, wouldn't it? No longer th

Re: [zfs-discuss] zfs snapshot of zone fails with permission denied (EPERM [sys_mount])

2010-02-18 Thread Pavel Heimlich
filed http://defect.opensolaris.org/bz/show_bug.cgi?id=14648 -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
Hi Matt Are you seeing low speeds on writes only or on both read AND write? Are you seeing low speed just with iSCSI or also with NFS or CIFS? > I've tried updating to COMSTAR > (although I'm not certain that I'm actually using it) To check, do this: # svcs -a | grep iscsi If 'svc:/system/i
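(For what it's worth, the service names to look for, from memory and worth double-checking on your build, are svc:/network/iscsi/target:default for the COMSTAR kernel target and svc:/system/iscsitgt:default for the older user-land target; only the COMSTAR one should be online if your LUNs were created with sbdadm/stmfadm. A rough sketch of switching over, assuming those FMRIs:)
  # svcs -a | grep -i iscsi
  # svcadm disable iscsitgt
  # svcadm enable -r svc:/network/iscsi/target:default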

Re: [zfs-discuss] Best practice for setting ACL

2010-02-18 Thread CD
It's been a while, and finally I got the time to do some testing -- Actually I only knew about aclinherit -- which I've found is best set as passthrough. Setting aclmode to passthrough solved the issues I experienced earlier. Wonderful! Thanks a lot! -- This message posted from opensolaris.org

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Günther
hello there is a new beta v. 0.220 of napp-it, the free webgui for nexenta(core) 3 new: -bonnie benchmarks included (see screenshot: http://www.napp-it.org/bench.png) -bug fixes if you look at the benchmark screenshot: -pool daten: zfs3 of 7 x wd 2TB raid edition (WD2002FYPS), de

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Tomas Ögren
On 18 February, 2010 - Günther sent me these 1,1K bytes: > hello > there is a new beta v. 0.220 of napp-it, the free webgui for nexenta(core) 3 > > new: > -bonnie benchmarks included (see screenshot: http://www.napp-it.org/bench.png) > -bug fixes > > if you look at the benchma

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Günther
hello my intention was to show how you can tune up a pool of drives (how much you can reach when using sas compared to 2 TB high-capacity drives) and now the other results with the same config and sas drives: wd 2TB x 7, z3, dedup and compress on, no ssd daten 12.6T start 2010.02.

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Eugen Leitl
On Wed, Feb 17, 2010 at 11:21:07PM -0800, Matt wrote: > Just out of curiosity - what Supermicro chassis did you get? I've got the > following items shipping to me right now, with SSD drives and 2TB main drives > coming as soon as the system boots and performs normally (using 8 extra 500GB > Bar

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Phil Harman
This discussion is very timely, but I don't think we're done yet. I've been working on using NexentaStor with Sun's DVI stack. The demo I've been playing with glues SunRays to VirtualBox instances using ZFS zvols over iSCSI for the boot image, with all the associated ZFS snapshot/clone goodness

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
Ok, I've done all the tests I plan to complete. For highest performance, it seems: - The measure I think is the most relevant for typical operation is the fastest random read/write/mix. (Thanks Bob, for suggesting I do this test.) The winner is clearly striped mirrors in ZFS.
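(For anyone wanting to reproduce the winning layout: striped mirrors are just several two-way mirror vdevs in a single pool, roughly like the line below; disk names are placeholders, not the ones used in the tests.)
  # zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0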

Re: [zfs-discuss] ZFS confused about disks?

2010-02-18 Thread Peter Eriksson
I figured I'd post the solution to this problem here also. Anyway, I solved the problem the old-fashioned way: tell Solaris to fake the disk device IDs... I added the following to /kernel/drv/ssd.conf:
  ssd-config-list=
  "EUROLOGC", "unsupported-hack";
  unsupported-hack=1,0x8,0,0,

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-18 Thread Maurice Volaski
For those who've been suffering this problem and who have non-Sun jbods, could you please let me know what model of jbod and cables (including length thereof) you have in your configuration. For those of you who have been running xVM without MSI support, could you please confirm whether the devic

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Matt
Responses inline : > Hi Matt > Are you seeing low speeds on writes only or on both > read AND write? Low speeds both reading and writing. > Are you seeing low speed just with iSCSI or also with > NFS or CIFS? Haven't gotten NFS or CIFS to work properly. Maybe I'm just too dumb to figure i

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Matt
> One question though: Just this one SAS adaptor? Are you connecting to the drive backplane with one cable for the 4 internal SAS connectors? Are you using SAS or SATA drives? Will you be filling up 24 slots with 2 TByte drives, and are you sure you won't be oversubscribed wit

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Marc Nicholas
On Thu, Feb 18, 2010 at 10:49 AM, Matt wrote:
> Here's IOStat while doing writes:
>
>   r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
>   1.0  256.9    3.0  2242.9   0.3   0.1     1.3     0.5  11  12  c0t0d0
>   0.0  253.9    0.0  2242.9   0.3   0.1     1.0     0.4  10  11  c0t1d0
>   1.0

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Matt
Also - still looking for the best way to test local performance - I'd love to make sure that the volume is actually able to perform at a level locally to saturate gigabit. If it can't do it internally, why should I expect it to work over GbE? -- This message posted from opensolaris.org ___
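(A crude local sanity check, not a real benchmark: a large sequential dd against a scratch dataset. Use a file bigger than RAM or you are mostly measuring the ARC, and turn compression off since /dev/zero compresses to nothing. Dataset name and size are placeholders.)
  # zfs create -o compression=off tank/perftest
  # dd if=/dev/zero of=/tank/perftest/big bs=1024k count=32768
  # dd if=/tank/perftest/big of=/dev/null bs=1024k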

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance - napp-it + benchmarks

2010-02-18 Thread Bob Friesenhahn
On Thu, 18 Feb 2010, Günther wrote: i was surprised about the sequential write/rewrite result. the wd 2 TB drives perform very well only in sequential write of characters but are horribly bad in blockwise write/rewrite; the 15k sas drives with ssd read cache perform 20x better (10MB/s -> 200

Re: [zfs-discuss] zpool status output confusing

2010-02-18 Thread Tonmaus
Hello Cindy, I have got my LSI controllers and exchanged them for the Areca. The result is stunning: 1. exported pool (in this strange state I reported here) 2. changed controller and re-ordered the drives as before posting this matter (c-b-a back to a-b-c) 3. Booted Osol 4. imported pool Resu

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Marc Nicholas
Run Bonnie++. You can install it with the Sun package manager and it'll appear under /usr/benchmarks/bonnie++ Look for the command line I posted a couple of days back for a decent set of flags to truly rate performance (using sync writes). -marc On Thu, Feb 18, 2010 at 11:05 AM, Matt wrote: > Al
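(Marc's exact flags aren't reproduced in this digest, but a typical sync-write bonnie++ run looks something like the line below; -s should be at least twice RAM, -b forces an fsync after every write, and -n 0 skips the small-file tests. The target directory is a placeholder.)
  # /usr/benchmarks/bonnie++/bonnie++ -d /tank/perftest -s 16g -n 0 -u root -b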

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
Hi Matt > Haven't gotten NFS or CIFS to work properly. > Maybe I'm just too dumb to figure it out, > but I'm ending up with permissions errors that don't let me do much. > All testing so far has been with iSCSI. So until you can test NFS or CIFS, we don't know if it's a general performance probl
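(If permissions are what is blocking the NFS/CIFS comparison, a deliberately wide-open test share can be set up roughly like this; the dataset name is a placeholder, and sharesmb additionally needs svc:/network/smb/server enabled.)
  # zfs create tank/sharetest
  # chmod 777 /tank/sharetest
  # zfs set sharenfs=on tank/sharetest
  # zfs set sharesmb=on tank/sharetest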

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Bob Friesenhahn
On Thu, 18 Feb 2010, Edward Ned Harvey wrote: Ok, I've done all the tests I plan to complete. For highest performance, it seems: - The measure I think is the most relevant for typical operation is the fastest random read/write/mix. (Thanks Bob, for suggesting I do this test.) Th

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Nigel Smith
Another thing you could check, which has been reported to cause a problem, is if network or disk drivers share an interrupt with a slow device, like say a usb device. So try: # echo ::interrupts -d | mdb -k ... and look for multiple driver names on an INT#. Regards Nigel Smith -- This message p

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Ethan
On Thu, Feb 18, 2010 at 04:14, Daniel Carosone wrote: > On Wed, Feb 17, 2010 at 11:37:54PM -0500, Ethan wrote: > > > It seems to me that you could also use the approach of 'zpool replace' > for > > That is true. It seems like it would then have to rebuild from parity for every > > drive, though, which

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Cindy Swearingen
Hi Ethan, Great job putting this pool back together... I would agree with the disk-by-disk replacement by using the zpool replace command. You can read about this command here: http://docs.sun.com/app/docs/doc/817-2271/gazgd?a=view Having a recent full backup of your data before making any mor

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Victor Latushkin
Ethan wrote: So, current plan: - export the pool. - format c9t1d0 to have one slice being the entire disk. - import. should be degraded, missing c9t1d0p0. - replace missing c9t1d0p0 with c9t1d0 (should this be c9t1d0s0? my understanding is that zfs will treat the two about the same, since it ad
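(If the plan goes ahead, the replace step itself should amount to something like the sketch below; the pool name 'q' is taken from elsewhere in the thread, and zpool status shows the resilver progress.)
  # zpool export q
  (relabel c9t1d0 in format so slice 0 covers the whole disk)
  # zpool import q
  # zpool replace q c9t1d0p0 c9t1d0
  # zpool status q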

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Ethan
On Thu, Feb 18, 2010 at 13:22, Victor Latushkin wrote: > Ethan wrote: > >> So, current plan: >> - export the pool. >> - format c9t1d0 to have one slice being the entire disk. >> - import. should be degraded, missing c9t1d0p0. >> - replace missing c9t1d0p0 with c9t1d0 (should this be c9t1d0s0? my >

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Victor Latushkin
Ethan wrote: On Thu, Feb 18, 2010 at 13:22, Victor Latushkin wrote: Ethan wrote: So, current plan: - export the pool. - format c9t1d0 to have one slice being the entire disk. - import. should be degraded, missing c9t1d0p0.

[zfs-discuss] zfs recv -e

2010-02-18 Thread Anatoly Legkodymov
Good day, I'm trying the new 'zfs recv -e' option, but it doesn't work correctly for incremental streams. First I do: zfs create data/a zfs create data/a/b zfs create data/to zfs snapshot data/a/b@a zfs send -R data/a/b@a | zfs recv -e -d data/to This stage works fine. Second stage: zfs snapshot d
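(The second stage is cut off above; an incremental follow-up would presumably look something like the sketch below. The snapshot name is my guess, not the poster's exact commands.)
  # zfs snapshot data/a/b@b
  # zfs send -R -i @a data/a/b@b | zfs recv -e -d data/to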

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-18 Thread John
>> For those who've been suffering this problem and who have non-Sun jbods, could you please let me know what model of jbod and cables (including length thereof) you have in your configuration. We are seeing the problem on both Sun and non-Sun hardware. On our Sun thumper x454

Re: [zfs-discuss] Error: value too large for defined data type

2010-02-18 Thread Axelle Apvrille
Incidentally, I just wanted to let you know that solved my problem :) I can now parse my samba mounts which were complaining about value too large. Changed to the standard ls. Yet, this does not completely solve my problem, because file parsers do not manage to list files of my mount, even if I d

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-18 Thread martyscholes
Well said. -original message- Subject: Re: [zfs-discuss] Proposed idea for enhancement - damage control From: Bob Friesenhahn Date: 02/17/2010 11:10 On Wed, 17 Feb 2010, Marty Scholes wrote: > > Bob, the vast majority of your post I agree with. At the same time, I might > disagree with a coupl

[zfs-discuss] Problems with sudden zfs capacity loss on snv_79a

2010-02-18 Thread Julius Roberts
Yes snv_79a is old, yes we're working separately on migrating to snv_111b or later. But I need to solve this problem ASAP to buy me some more time for that implementation. We pull data from a variety of sources onto our zpool called Backups, then we snapshot them. We keep around 20 or so and the
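(Not a fix, but a quick way to see where the space has gone is to compare pool and snapshot accounting; the pool name is taken from the message above. Snapshots pin every block that has since been rewritten or deleted, so twenty retained snapshots of churning backup data can account for a large, sudden jump in usage.)
  # zpool list Backups
  # zfs list -r -t snapshot Backups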

Re: [zfs-discuss] How to resize ZFS partition or add a new one?

2010-02-18 Thread Frank Thieme
Hi! I tried this guide as my setup was similar. I had a Linux installation (extended partitions) followed by an OpenSolaris (snv_132) installation. 1. Add the current NFS partition as a mirror of the existing ZFS one: eg zpool attach rpool c0d0s0 c0d0p1 This worked, 1.
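(For the record, the usual shape of this attach-and-migrate dance once the attach succeeds, sketched from memory rather than from the guide itself: wait for the resilver, put boot blocks on the new device, and only then detach the old one. The installgrub target below is a placeholder for whatever slice or partition the new rpool half lives on.)
  # zpool status rpool
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/<new-boot-device>
  # zpool detach rpool c0d0s0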

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-18 Thread Miles Nordin
> "dc" == Daniel Carosone writes: dc> single-disk laptops are a pretty common use-case. It does not help this case. It helps the case where a single laptop disk fails and you recover it with dd conv=noerror,sync. This case is uncommon because few people know how to do it, or bother.

[zfs-discuss] improve meta data performance

2010-02-18 Thread Chris Banal
We have a SunFire X4500 running Solaris 10U5 which does about 5-8k nfs ops of which about 90% are metadata. In hindsight it would have been significantly better to use a mirrored configuration but we opted for 4 x (9+2) raidz2 at the time. We can not take the downtime necessary to change the zpo

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Daniel Carosone
On Thu, Feb 18, 2010 at 12:42:58PM -0500, Ethan wrote: > On Thu, Feb 18, 2010 at 04:14, Daniel Carosone wrote: > Although I do notice that right now, it imports just fine using the p0 > devices using just `zpool import q`, no longer having to use import -d with > the directory of symlinks to p0 de

Re: [zfs-discuss] Killing an EFI label

2010-02-18 Thread Cindy Swearingen
Hi David, It's a life-long curse to describe the format utility. Trust me. :-) I think you want to relabel some disks from an EFI label to an SMI label to be used in your ZFS root pool, and you have overlapping slices on one disk. I don't think ZFS would let you attach this disk. To fix the overlap
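(The relabel itself is normally done from format's expert mode, which is destructive to whatever is on the disk, so only do it on a disk you intend to reuse: run the command below, select the disk, choose 'label', pick the SMI label type, then lay out slice 0 with the 'partition' menu.)
  # format -e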

Re: [zfs-discuss] improve meta data performance

2010-02-18 Thread Tomas Ögren
On 18 February, 2010 - Chris Banal sent me these 1,8K bytes: > We have a SunFire X4500 running Solaris 10U5 which does about 5-8k nfs ops > of which about 90% are meta data. In hind sight it would have been > significantly better to use a mirrored configuration but we opted for 4 x > (9+2) raidz2

Re: [zfs-discuss] improve meta data performance

2010-02-18 Thread Andrey Kuzmin
Try an inexpensive MLC SSD (Intel/Micron) for L2ARC. Won't help metadata updates, but should boost reads. Regards, Andrey On Thu, Feb 18, 2010 at 11:23 PM, Chris Banal wrote: > We have a SunFire X4500 running Solaris 10U5 which does about 5-8k nfs ops > of which about 90% are metadata. In hin
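(A cache device is a one-line addition, and on builds that support it the L2ARC can be told to keep only metadata if that is what dominates; note that cache devices need a newer pool/OS version than the 10U5 mentioned above, so an upgrade may be required first. Pool and device names are placeholders.)
  # zpool add tank cache c5t0d0
  # zfs set secondarycache=metadata tank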

Re: [zfs-discuss] How to resize ZFS partition or add a new one?

2010-02-18 Thread Cindy Swearingen
Frank, I can't comment on everything happening here, but please review the ZFS root partition information in this section: http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Replacing/Relabeling the Root Pool Disk The p0 partition identifies the larger Solaris partition,

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Ethan
On Thu, Feb 18, 2010 at 15:31, Daniel Carosone wrote: > On Thu, Feb 18, 2010 at 12:42:58PM -0500, Ethan wrote: > > On Thu, Feb 18, 2010 at 04:14, Daniel Carosone wrote: > > Although I do notice that right now, it imports just fine using the p0 > > devices using just `zpool import q`, no longer h

Re: [zfs-discuss] Killing an EFI label

2010-02-18 Thread David Dyer-Bennet
On Thu, February 18, 2010 14:39, Cindy Swearingen wrote: > It's a life-long curse to describe the format utility. Trust me. :-) Oh, I believe it! But thanks for confirming. > I think you want to relabel some disks from an EFI label to an SMI label > to be used in your ZFS root pool, and you have o

[zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-18 Thread Robert
At the risk of getting myself flamed with my very first post, will someone please point me to the 'Idiots Guide to Running a NAS with ZFS/OpenSolaris'? - - - sig - - - ...What I lack in knowledge I try to make up in witty humor. -- This message posted from opensolaris.

Re: [zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-18 Thread David Dyer-Bennet
On Thu, February 18, 2010 16:21, Robert wrote: > At the risk of getting myself flamed with my very first post, will someone > please point me to the 'Idiots Guide to Running a NAS with > ZFS/OpenSolaris'? I wish. I especially wish it had been around the summer of 2006. This is one of the better

Re: [zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-18 Thread Nigel Smith
Hi Robert Have a look at these links: http://delicious.com/nwsmith/opensolaris-nas Regards Nigel Smith -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/

Re: [zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-18 Thread Günther
hello you could try my napp-it zfs server. it's a minimal opensolaris-based server installation together with a web-gui - suitable also for non-solaris people - easy to install, ready to run. use either nexenta (new version based on snv 133) or opensolaris/eon - at the moment i would

Re: [zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-18 Thread Thomas Burgess
On Thu, Feb 18, 2010 at 5:21 PM, Robert wrote: > At the risk of getting myself flamed with my very first post, will someone > please point me to the 'Idiots Guide to Running a NAS with ZFS/OpenSolaris'? > > - - - sig - - - > ...What I lack in knowledge I try to make up

Re: [zfs-discuss] Problems with sudden zfs capacity loss on snv_79a

2010-02-18 Thread Giovanni Tirloni
On Thu, Feb 18, 2010 at 1:19 AM, Julius Roberts wrote: > Yes snv_79a is old, yes we're working separately on migrating to > snv_111b or later. But i need to solve this problem ASAP to buy me > some more time for that implementation. > > We pull data from a variety of sources onto our zpool called

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-18 Thread James C. McPherson
On 19/02/10 12:51 AM, Maurice Volaski wrote: For those who've been suffering this problem and who have non-Sun jbods, could you please let me know what model of jbod and cables (including length thereof) you have in your configuration. For those of you who have been running xVM without MSI suppo

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-18 Thread Adam Leventhal
Hey Bob, > My own conclusions (supported by Adam Leventhal's excellent paper) are that > > - maximum device size should be constrained based on its time to > resilver. > > - devices are growing too large and it is about time to transition to > the next smaller physical size. I don't disagre

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-18 Thread Bob Friesenhahn
On Thu, 18 Feb 2010, Adam Leventhal wrote: It is unreasonable to spend more than 24 hours to resilver a single drive. Why? Human factors. People usually go to work once per day so it makes sense that they should be able to perform at least one maintenance action per day. Ideally the re

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
> A most excellent set of tests. We could use some units in the PDF > file though. Oh, hehehe. ;-) The units are written in the raw txt files. On your tests, the units were ops/sec, and in mine, they were Kbytes/sec. If you like, you can always grab the xlsx and modify it to your tastes, and

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Edward Ned Harvey
> A most excellent set of tests. We could use some units in the PDF > file though. Oh, by the way, you originally requested the 12G file to be used in benchmark, and later changed to 4G. But by that time, two of the tests had already completed on the 12G, and I didn't throw away those results, b

Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-18 Thread Yuri Vorobyev
Hello. We are seeing the problem on both Sun and non-Sun hardware. On our Sun thumper x4540, we can reproduce it on all 3 devices. Our configuration is large stripes with only 2 vdevs. Doing a simple scrub will show the typical mpt timeout. We are running snv_131. Somebody observed similar p

Re: [zfs-discuss] zpool status output confusing

2010-02-18 Thread Sanjeev
Moshe, You might want to check if you have multiple paths to these disks. - Sanjeev On Wed, Feb 17, 2010 at 07:59:28PM -0800, Moshe Vainer wrote: > I have another very weird one, looks like a reoccurance of the same issue but > with the new firmware. > > We have the following disks: > > AVAILA
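(Two quick ways to check whether MPxIO is presenting the same disks down more than one path; both should be available on a current OpenSolaris build.)
  # mpathadm list lu
  # stmsboot -L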

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Bob Friesenhahn
On Thu, 18 Feb 2010, Edward Ned Harvey wrote: Actually, that's easy. Although the "zpool create" happens instantly, all the hardware raid configurations required an initial resilver. And they were exactly what you expect. Write 1 Gbit/s until you reach the size of the drive. I watched the pro

Re: [zfs-discuss] ZFS performance benchmarks in various configurations

2010-02-18 Thread Daniel Carosone
On Thu, Feb 18, 2010 at 10:39:48PM -0600, Bob Friesenhahn wrote: > This sounds like an initial 'silver' rather than a 'resilver'. Yes, in particular it will be entirely sequential. ZFS resilver is in txg order and involves seeking. > What I am interested in is the answer to these sort of questio

[zfs-discuss] ZFS mirrored boot disks

2010-02-18 Thread Terry Hull
I have a machine with the Supermicro 8 port SATA card installed. I have had no problem creating a mirrored boot disk using the oft-repeated scheme: prtvtoc /dev/rdsk/c4t0d0s2 | fmthard -s - /dev/rdsk/c4t1d0s2 zpool attach rpool c4t0d0s0 c4t1d0s0 wait for sync installgrub -m /boot/grub/stage1 /bo
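(The quoted recipe is cut off at installgrub; the usual full form of that step, so the second half of the mirror is actually bootable, is shown below, using the device names from the attach command above. A zpool status afterwards confirms the resilver has finished.)
  # installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t1d0s0
  # zpool status rpool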