Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Axel Denfeld
Hi, I think the local ZFS filesystem with raidz on the 7210 is not the problem (assuming the drives are fast), but you can test it with e.g. bonnie++ (downloadable at sunfreeware.com). NFS should also not be the problem, because iSCSI is also very slow (isn't it?). Some other ideas are: Network
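
A minimal local-throughput test along the lines suggested above, assuming bonnie++ is installed and /pool/test is a placeholder dataset on the raidz pool; the 16g size assumes roughly 8 GB of RAM so the ARC cannot hide disk speed:

    zfs create pool/test
    bonnie++ -d /pool/test -s 16g -u root   # sequential and random I/O against the local pool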

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Saso Kiselkov
If I remember correctly, ESX always uses synchronous writes over NFS. If so, adding a dedicated log device (such as a DDRdrive) might help you out here. You should be able to test it by disabling the ZIL for a short while and see if performance
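
A hedged sketch of the ZIL test described above; on builds with the sync property (roughly snv_140 and later) there is a per-dataset knob, while older builds only had the global zil_disable tunable. Either change is for benchmarking only and risks data loss on power failure; the dataset name is a placeholder:

    # newer builds: disable synchronous write semantics on one dataset, test, then restore
    zfs set sync=disabled pool/vmstore
    zfs set sync=standard pool/vmstore
    # older builds: global tunable, takes effect for filesystems mounted afterwards
    echo zil_disable/W0t1 | mdb -kw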

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Simone Caldana
Hi, In a setup similar to yours I changed from a single 15-disk raidz2 to 7 mirrors of 2 disks each. The change in performance was stellar. The key point in serving things for VMware is that it always issues synchronous writes, whether over iSCSI or NFS. When you have tens of VMs the resulting
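
A rough sketch of the layout change described above (a stack of two-way mirrors instead of one wide raidz2), with hypothetical disk names:

    # pool of 7 two-way mirrors; each mirror is an independent vdev,
    # so random-write IOPS scale with the number of mirrors
    zpool create vmpool \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0 \
      mirror c1t8d0 c1t9d0 \
      mirror c1t10d0 c1t11d0 \
      mirror c1t12d0 c1t13d0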

Re: [zfs-discuss] ZFS with SAN's and HA

2010-08-27 Thread LaoTsao 老曹
On 8/27/2010 12:25 AM, Michael Dodwell wrote: Lao, I had a look at HAStoragePlus etc. and from what I understand that's to mirror local storage across 2 nodes for services to be able to access, 'DRBD style'. Not true; HAS+ uses shared storage. In this case, since ZFS is not a clustered FS, so

[zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
For quite some time I've been bitten by the fact that on my laptop (currently running self-built snv_147) zpool status rpool and format disagree about the device name of the root disk: r...@masaya 14 zpool status rpool pool: rpool state: ONLINE status: The pool is formatted using an older on-disk

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread LaoTsao 老曹
Hi, maybe boot a live CD, then export and import the zpool? Regards. On 8/27/2010 8:27 AM, Rainer Orth wrote: For quite some time I've been bitten by the fact that on my laptop (currently running self-built snv_147) zpool status rpool and format disagree about the device name of the root disk:

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Mark J Musante
On Fri, 27 Aug 2010, Rainer Orth wrote: zpool status thinks rpool is on c1t0d0s3, while format (and the kernel) correctly believe it's c11t0d0(s3) instead. Any suggestions? Try removing the symlinks or using 'devfsadm -C' as suggested here:
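
The cleanup referred to above, roughly; devfsadm -C removes dangling /dev links and -v reports what it changes (run from a root shell):

    devfsadm -Cv          # clean up stale /dev/dsk and /dev/rdsk links
    zpool status rpool    # check whether the reported device name now matches format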

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
LaoTsao 老曹 laot...@gmail.com writes: maybe boot a live CD then export and import the zpool? I've already tried all sorts of contortions to regenerate /etc/path_to_inst, to no avail. This is simply a case of `should not happen'. Rainer --

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Eff Norwood
Saso is correct - ESX/i always uses F_SYNC for all writes, and that is for sure your performance killer. Do a snoop | grep sync and you'll see the sync write calls from VMware. We use DDRdrives in our production VMware storage and they are excellent for solving this problem. Our cluster supports
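
A sketch of the kind of capture suggested above, assuming the NFS traffic arrives on interface e1000g0 (the interface name and grep pattern are placeholders; ESX sync writes over NFSv3 typically show up as stable WRITEs and COMMITs):

    snoop -r -d e1000g0 port 2049 | grep -i -e sync -e commit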

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread David Magda
On Fri, August 27, 2010 08:46, Eff Norwood wrote: Saso is correct - ESX/i always uses F_SYNC for all writes, and that is for sure your performance killer. Do a snoop | grep sync and you'll see the sync write calls from VMware. We use DDRdrives in our production VMware storage and they are

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Eff Norwood
David asked me what I meant by "filled up". If you make the unwise decision to use an SSD as your ZIL, at some point days to weeks after you install it, all of the pages will be allocated and you will suddenly find the device to be slower than a conventional disk drive. This is due to the way

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Mark J Musante mark.musa...@oracle.com writes: On Fri, 27 Aug 2010, Rainer Orth wrote: zpool status thinks rpool is on c1t0d0s3, while format (and the kernel) correctly believe it's c11t0d0(s3) instead. Any suggestions? Try removing the symlinks or using 'devfsadm -C' as suggested here:

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ross Walker
On Aug 27, 2010, at 1:04 AM, Mark markwo...@yahoo.com wrote: We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I installed I selected the best bang for the buck on the speed vs. capacity chart. We run about 30 VMs on it, across 3 ESX 4 servers. Right now, it's all

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Sean Sprague
Rainer, devfsadm -C alone didn't make a difference, but clearing out /dev/*dsk and running devfsadm -Cv did help. I am glad it helped; but removing anything from /dev/*dsk is a kludge that cannot be accepted/condoned/supported. Regards... Sean.

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Sean, I am glad it helped; but removing anything from /dev/*dsk is a kludge that cannot be accepted/condoned/supported. No doubt about this: two parts of the kernel (zfs vs. devfs?) disagreeing about devices mustn't happen. Rainer --

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 05:51:38AM -0700, David Magda wrote: On Fri, August 27, 2010 08:46, Eff Norwood wrote: Saso is correct - ESX/i always uses F_SYNC for all writes, and that is for sure your performance killer. Do a snoop | grep sync and you'll see the sync write calls from VMware. We

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-27 Thread Bob Friesenhahn
On Thu, 26 Aug 2010, George Wilson wrote: David Magda wrote: On Wed, August 25, 2010 23:00, Neil Perrin wrote: Does a scrub go through the slog and/or L2ARC devices, or only the primary storage components? A scrub will go through slogs and primary storage devices. The L2ARC device is

[zfs-discuss] labelfix binary for sparc

2010-08-27 Thread Benjamin Brumaire
Hi, some time ago Jeff Bonwick provided source code and an x86 binary for a tool to recover detached disks. Refs at http://www.opensolaris.org/jive/thread.jspa?messageID=229969 or http://opensolaris.org/jive/thread.jspa?messageID=303895#303895 Does someone have a binary for SPARC at hand? I

[zfs-discuss] Unrecoverable error in zpool and finding faulty hardware

2010-08-27 Thread Warren Strange
I have been running some large VirtualBox guest images on OpenSolaris (b134) and have on three occasions had my zpool develop unrecoverable errors. The corruption developed in the VirtualBox disk image files. These are large files with intense activity, so there is a better chance of seeing errors. I
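
A few standard checks for narrowing down faulty hardware after errors like these, assuming an OpenSolaris b134 system; the pool name is a placeholder and the commands are generic, not specific to this report:

    zpool scrub tank          # re-read and verify every block in the pool
    zpool status -v tank      # list the files affected by unrecoverable errors
    fmdump -eV | less         # fault-management error telemetry (driver/transport events)
    iostat -En                # per-device hard/soft/transport error counters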

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Cindy Swearingen
Hi Rainer, I'm no device expert but we see this problem when firmware updates or other device/controller changes change the device ID associated with the devices in the pool. In general, ZFS can handle controller/device changes if the driver generates or fabricates device IDs. You can view
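
One way to look at the path and device ID that ZFS has recorded in a vdev label, shown only as an illustration (the slice name is the one from this thread; label fields vary by build):

    zdb -l /dev/rdsk/c11t0d0s3 | egrep 'path|devid'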

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Rainer Orth
Hi Cindy, I'll investigate more next week since I'm in a hurry to leave, but one point now: I'm no device expert but we see this problem when firmware updates or other device/controller changes change the device ID associated with the devices in the pool. This is the internal disk in a

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Eric D. Mudama
On Fri, Aug 27 at 6:16, Eff Norwood wrote: David asked me what I meant by filled up. If you make the unwise decision to use an SSD as your ZIL, at some point days to weeks after you install it, all of the pages will be allocated and you will suddenly find the device to be slower than a

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Mark
Hey, thanks for the replies everyone. Sadly most of those options will not work: since we are using a Sun Unified Storage 7210, the only option is to buy the Sun SSDs for it, which is about $15k USD for a pair. We also don't have the ability to shut off the ZIL or any of the other options that

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Marion Hakanson
markwo...@yahoo.com said: So the question is, with a proper ZIL SSD from Sun and a RAID10, would I be able to support all the VMs, or would it still be pushing the limits of a 44-disk pool? If it weren't a closed 7000-series appliance, I'd suggest running the zilstat script. It should make it
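
For reference, zilstat is a DTrace-based script (by Richard Elling) that reports how much data is going through the ZIL; a typical invocation, assuming it is saved as zilstat.ksh and that the usual interval/count arguments apply:

    ./zilstat.ksh 10 6    # assumed usage: 10-second intervals, 6 samples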

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 11:57:17AM -0700, Marion Hakanson wrote: markwo...@yahoo.com said: So the question is, with a proper ZIL SSD from Sun and a RAID10, would I be able to support all the VMs, or would it still be pushing the limits of a 44-disk pool? If it weren't a closed

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Mark
It does; it's on a pair of large APCs. Right now we're using NFS for our ESX servers. The only iSCSI LUNs I have are mounted inside a couple of Windows VMs. I'd have to migrate all our VMs to iSCSI, which I'm willing to do if it would help and not cause other issues. So far the 7210

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 12:46:42PM -0700, Mark wrote: It does; it's on a pair of large APCs. Right now we're using NFS for our ESX servers. The only iSCSI LUNs I have are mounted inside a couple of Windows VMs. I'd have to migrate all our VMs to iSCSI, which I'm willing to do if it would

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread John
Wouldn't it be possible to saturate the SSD ZIL with enough backlogged sync writes? What I mean is, doesn't the ZIL eventually need to make it to the pool, and if the pool as a whole (spinning disks) can't keep up with 30+ VMs' worth of write requests, couldn't you fill up the ZIL that way? --

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ware Adams
On Aug 27, 2010, at 2:32 PM, Mark wrote: Sadly most of those options will not work: since we are using a Sun Unified Storage 7210, the only option is to buy the Sun SSDs for it, which is about $15k USD for a pair. We also don't have the ability to shut off the ZIL or any of the other

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 01:22:15PM -0700, John wrote: Wouldn't it be possible to saturate the SSD ZIL with enough backlogged sync writes? What I mean is, doesn't the ZIL eventually need to make it to the pool, and if the pool as a whole (spinning disks) can't keep up with 30+ VMs' worth of write

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Paul Choi
No. From what I've seen, ZFS will periodically flush writes from the ZIL to disk. You may run into a read starvation situation where ZFS is so busy flushing to disk that you won't get reads. If you have VMs where developers expect low latency interactivity, they get unhappy. Trust me. :)

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Mertol Özyöney
Hi Mark, I have installed several 7000 series systems, some running hundreds of VMs. I will try to help you, but to find where exactly the problem is I may need more information. I understand that you have no ZIL devices, so most probably you are using the 7110 with 250 GB drives. All 7000

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Eff Norwood
By all means please try it to validate it yourself, and post your results from hour one, day one and week one. In a ZIL use case, although the data set is small, it is always writing a small, ever-changing (from the SSD's perspective) data set. The SSD does not know to release previously written

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ray Van Dolson
On Fri, Aug 27, 2010 at 03:51:39PM -0700, Eff Norwood wrote: By all means please try it to validate it yourself, and post your results from hour one, day one and week one. In a ZIL use case, although the data set is small, it is always writing a small, ever-changing (from the SSD's perspective)

[zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Robert Milkowski
Hi, When I set readonly=on on a dataset, no new files are allowed to be created. However, writes to already-opened files are allowed. This is rather counterintuitive - if I set a filesystem as read-only I would expect it not to allow any modifications to it. I think it shouldn't behave
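
The behaviour being described can be reproduced from a shell roughly as follows (the dataset name is a placeholder; the last write succeeding is the reported behaviour):

    zfs create rpool/rotest
    exec 3>/rpool/rotest/f            # open a file for writing and keep the descriptor
    zfs set readonly=on rpool/rotest
    touch /rpool/rotest/g             # fails: read-only file system
    echo "still writable" >&3         # succeeds: the already-open descriptor still accepts writes
    exec 3>&-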

Re: [zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Ian Collins
On 08/28/10 11:13 AM, Robert Milkowski wrote: Hi, When I set readonly=on on a dataset, no new files are allowed to be created. However, writes to already-opened files are allowed. This is rather counterintuitive - if I set a filesystem as read-only I would expect it not to allow any

Re: [zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Ian Collins
On 08/28/10 12:05 PM, Ian Collins wrote: On 08/28/10 11:13 AM, Robert Milkowski wrote: Hi, When I set readonly=on on a dataset, no new files are allowed to be created. However, writes to already-opened files are allowed. This is rather counterintuitive - if I set a filesystem as

Re: [zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Nicolas Williams
On Sat, Aug 28, 2010 at 12:05:53PM +1200, Ian Collins wrote: Think of this from the perspective of an application. How would write failure be reported? open(2) returns EACCES if the file cannot be written, but there isn't a corresponding return from write(2). Any open file descriptors would

Re: [zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Edward Ned Harvey
Ian Collins wrote: However, writes to already opened files are allowed. Think of this from the perspective of an application. How would write failure be reported? Both very good points. But I

[zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-27 Thread Darin Perusich
Hello All, I'm sure this has been discussed previously but I haven't been able to find an answer to this. I've added another raidz1 vdev to an existing storage pool and the increased available storage isn't reflected in the 'zfs list' output. Why is this? The system in question is running
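
A sketch of the sequence being described, with hypothetical pool and device names; comparing 'zpool list' (raw capacity) with 'zfs list' (usable space after raidz parity) is usually where the apparent discrepancy comes from:

    zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
    zpool list tank     # raw pool size grows immediately
    zfs list tank       # AVAIL is reported net of raidz parity overhead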

Re: [zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Ian Collins
On 08/28/10 12:45 PM, Edward Ned Harvey wrote: Another specific example ... Suppose you zfs send from a primary server to a backup server. You want the filesystems to be readonly on the backup fileserver, in order to receive incrementals. If you make a mistake, and start writing to the backup

Re: [zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Edward Ned Harvey
Ian Collins wrote: so it should behave in the same way as an unmount in the presence of open files. +1 You can unmount lazily, or force it, or by default the unmount fails in the presence of open

[zfs-discuss] get quota showed in precision of byte?

2010-08-27 Thread Fred Liu
Hi, Is it possible to do zfs get -??? quota filesystem? Thanks. Fred

Re: [zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Edward Ned Harvey
Ian Collins [i...@ianshome.com] wrote: On 08/28/10 12:45 PM, Edward Ned Harvey wrote: Another specific example ... Suppose you zfs send from a primary server to a backup server. You want the filesystems to be readonly on the backup fileserver, in order to receive

[zfs-discuss] native ZFS on Linux

2010-08-27 Thread David Magda
This just popped up: In terms of how native ZFS for Linux is being handled by [KQ Infotec], they are releasing their ported ZFS code under the Common Development and Distribution License (CDDL) and will not be attempting to go for mainline integration. Instead, this company will just be releasing

Re: [zfs-discuss] get quota showed in precision of byte?

2010-08-27 Thread Fred Liu
I get the answer: -p. -Original Message- From: Fred Liu Sent: Saturday, August 28, 2010 9:00 To: zfs-discuss@opensolaris.org Subject: get quota showed in precision of byte? Hi, Is it possible to do zfs get -??? quota filesystem? Thanks. Fred
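
For completeness, the -p (parseable) flag makes zfs get print exact numeric values instead of human-readable suffixes; the dataset name below is a placeholder:

    zfs get -p quota tank/home    # prints e.g. 10737418240 rather than 10G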

Re: [zfs-discuss] native ZFS on Linux

2010-08-27 Thread Anurag Agarwal
Hi, Thanks for posting information about this port here. Just to add a few points: * Currently we are planning to do a closed beta of this port, which is based on b121; we will be doing a proper release around the end of this year, which will be based on the latest build, b141. If you are interested in

Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-27 Thread Edho P Arief
On Sat, Aug 28, 2010 at 7:54 AM, Darin Perusich darin.perus...@cognigencorp.com wrote: Hello All, I'm sure this has been discussed previously but I haven't been able to find an answer to this. I've added another raidz1 vdev to an existing storage pool and the increased available storage isn't

Re: [zfs-discuss] Write Once Read Many on ZFS

2010-08-27 Thread jason matthews
I can think of two rather ghetto ways to go. 1. Write data, then set the read-only property; if you need to make updates, cycle back to rw, write data, and set read-only again. 2. Write data, snapshot the fs, and expose the snapshot instead of the r/w file system. Your mileage may vary depending on the
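
A minimal sketch of both approaches, with placeholder dataset names:

    # 1. toggle the read-only property around updates
    zfs set readonly=on tank/worm
    zfs set readonly=off tank/worm     # only while writing updates, then set back to on
    # 2. keep the live filesystem private and expose a snapshot instead
    zfs snapshot tank/data@2010-08-27
    # clients read the immutable copy under /tank/data/.zfs/snapshot/2010-08-27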