Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-08-31 Thread Henrik Bjornstrom - Sun Microsystems
Hi ! Has anyone given an answer to this that I have missed? I have a customer who has the same question, and I want to give him a correct answer. /Henrik Ketan wrote: I created a snapshot and subsequent clone of a zfs volume. But now I'm not able to remove the snapshot; it gives me

Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread Bob Friesenhahn
On Mon, 31 Aug 2009, en...@businessgrade.com wrote: Hi. I've been doing some simple read/write tests using filebench on a mirrored pool. Essentially, I've been scaling up the number of disks in the pool between tests: 4, 8, and 12. I've noticed that for individual disks, ZFS write

Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread Mertol Ozyoney
Hi; You may be hitting a bottleneck at your HBA. Try using multiple HBAs or drive channels. Mertol Mertol Ozyoney Storage Practice - Sales Manager Sun Microsystems, TR Istanbul TR Phone +902123352200 Mobile +905339310752 Fax +90212335 Email mertol.ozyo...@sun.com -Original

Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread Richard Elling
There are around a zillion possible reasons for this. In my experience, most folks don't or can't create enough load. Make sure you have enough threads creating work. Other than that, the scientific method would suggest creating experiments, making measurements, running regressions, etc. --
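
One way to generate that kind of load, sketched in shell with hypothetical pool and file names: launch many concurrent readers and watch whether the work actually spreads across every disk in the pool.

    # Start 16 concurrent sequential readers; a single reader stream
    # often cannot keep 8 or 12 mirrored disks busy.
    i=1
    while [ $i -le 16 ]; do
      dd if=/tank/test/file$i of=/dev/null bs=128k &
      i=$((i + 1))
    done
    wait
    # Meanwhile, in another terminal, watch per-vdev activity:
    #   zpool iostat -v tank 5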

Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread eneal
Quoting Bob Friesenhahn bfrie...@simple.dallas.tx.us: On Mon, 31 Aug 2009, en...@businessgrade.com wrote: Hi. I've been doing some simple read/write tests using filebench on a mirrored pool. Essentially, I've been scaling up the number of disks in the pool between tests: 4, 8

Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread eneal
Quoting Mertol Ozyoney mertol.ozyo...@sun.com: Hi; You may be hitting a bottleneck at your HBA. Try using multiple HBAs or drive channels Mertol I'm pretty sure it's not an HBA issue. As I commented, my per-disk write throughput stayed pretty consistent for 4-, 8- and 12-disk pools and

Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-08-31 Thread Richard Elling
From the ZFS man page: Clones can only be created from a snapshot. When a snapshot is cloned, it creates an implicit dependency between the parent and child. Even though the clone is created somewhere else in the dataset hierarchy, the original snapshot cannot be
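
In other words, the snapshot cannot be destroyed while either dataset still depends on its blocks; even zfs promote only reverses which dataset owns the snapshot. A sketch of keeping both datasets and dropping the snapshot, with hypothetical names, is to turn one side into an independent copy first:

    zfs destroy tank/vol@snap                 # fails while tank/clone depends on it
    zfs snapshot tank/clone@tmp               # make the clone's contents sendable
    zfs send tank/clone@tmp | zfs receive tank/clone2   # independent full copy
    zfs destroy -r tank/clone                 # drop the dependent clone (and @tmp)
    zfs destroy tank/vol@snap                 # now succeeds
    zfs destroy tank/clone2@tmp               # remove the temporary snapshot

The price is that tank/clone2 no longer shares blocks with tank/vol, so it consumes its own full space.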

Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-08-31 Thread Lori Alt
On 08/31/09 08:30, Henrik Bjornstrom - Sun Microsystems wrote: Hi ! Has anyone given an answer to this that I have missed? I have a customer who has the same question, and I want to give him a correct answer. /Henrik Ketan wrote: I created a snapshot and subsequent clone of a zfs

Re: [zfs-discuss] Status/priority of 6761786

2009-08-31 Thread Pawel Jakub Dawidek
On Thu, Aug 27, 2009 at 01:37:11PM -0600, Dave wrote: Can anyone from Sun comment on the status/priority of bug ID 6761786? Seems like this would be a very high priority bug, but it hasn't been updated since Oct 2008. Has anyone else with thousands of volume snapshots experienced the hours

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-08-31 Thread Scott Meilicke
As I understand it, when you expand a pool, the data do not automatically migrate to the other disks. You will have to rewrite the data somehow, usually a backup/restore. -Scott
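
A sketch of that rewrite done in place with send/receive, dataset names hypothetical; the received copy is written fresh, so its blocks land across all vdevs, including the newly added ones:

    zfs snapshot tank/data@move
    zfs send tank/data@move | zfs receive tank/data.new   # rewritten onto all vdevs
    zfs destroy -r tank/data                              # remove the old copy
    zfs rename tank/data.new tank/data
    zfs destroy tank/data@move                            # drop the migration snapshot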

Re: [zfs-discuss] Status/priority of 6761786

2009-08-31 Thread Menno Lageman
On 08/31/09 19:54, Pawel Jakub Dawidek wrote: On Thu, Aug 27, 2009 at 01:37:11PM -0600, Dave wrote: Can anyone from Sun comment on the status/priority of bug ID 6761786? Seems like this would be a very high priority bug, but it hasn't been updated since Oct 2008. Has anyone else with

Re: [zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-31 Thread Lori Alt
On 08/29/09 05:41, Robert Milkowski wrote: casper@sun.com wrote: Randall Badilla wrote: Hi all: First, is it possible to modify the boot zpool rpool after OS installation...? I installed the OS on the whole 72GB hard disk. It is mirrored, so if I want to decrease the rpool; for example
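
Shrinking rpool in place is not supported, but a mirrored rpool can be grown by swapping in larger disks one side at a time. A rough sketch for x86, with hypothetical disk names, assuming the new disk carries an SMI label with the space in slice 0:

    zpool attach rpool c0t0d0s0 c0t2d0s0   # attach the larger disk to the mirror
    zpool status rpool                     # wait until the resilver completes
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0   # make it bootable
    zpool detach rpool c0t0d0s0            # drop the smaller disk

Depending on the release, the added capacity may only appear after a reboot or with the pool's autoexpand property set.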

[zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Jason
I've been looking to build my own cheap SAN to explore HA scenarios with VMware hosts, though not for a production environment. I'm new to OpenSolaris but I am familiar with other clustered HA systems. The features of ZFS seem like they would fit right in with attempting to build an HA

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Tim Cook
On Mon, Aug 31, 2009 at 3:42 PM, Jason wheelz...@hotmail.com wrote: I've been looking to build my own cheap SAN to explore HA scenarios with VMware hosts, though not for a production environment. I'm new to OpenSolaris but I am familiar with other clustered HA systems. The features of ZFS

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Jason
Well, I knew a guy who was involved in a project to do just that for a production environment. Basically they abandoned that approach because there was a huge performance hit using ZFS over NFS. I didn’t get the specifics but his group is usually pretty sharp. I’ll have to check back with him.

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Tim Cook
On Mon, Aug 31, 2009 at 4:26 PM, Jason wheelz...@hotmail.com wrote: Well, I knew a guy who was involved in a project to do just that for a production environment. Basically they abandoned that approach because there was a huge performance hit using ZFS over NFS. I didn’t get the specifics but

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Jason
Specifically, I remember Storage VMotion, as well as jumbo frames, being supported on NFS last. Just the impression I get from past features; perhaps they are doing better with that. I know the performance problem had specifically to do with ZFS and the way it handled something. I know lots of

Re: [zfs-discuss] zpool scrub started resilver, not scrub

2009-08-31 Thread Albert Chin
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote: # cat /etc/release Solaris Express Community Edition snv_105 X86 Copyright 2008 Sun Microsystems, Inc. All Rights Reserved. Use is subject to license terms.

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread David Magda
On Aug 31, 2009, at 17:29, Tim Cook wrote: I've got MASSIVE deployments of VMware on NFS over 10g that achieve stellar performance (admittedly, it isn't on zfs). Without a separate ZIL device, ZFS would probably be slower with NFS -- hence why Sun's own appliances use SSDs.
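
Adding the slog itself is a single command; a sketch with hypothetical device names, mirrored because current pools cannot easily remove or survive the loss of a log device:

    zpool add tank log mirror c2t0d0 c3t0d0   # dedicated SSD-backed ZIL (slog)
    zpool status tank                         # the log vdev is listed separately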

Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-08-31 Thread Jorgen Lundman
The mv8 is a Marvell-based chipset, and it appears there are no Solaris drivers for it. There doesn't appear to be any movement from Sun or Marvell to provide any either. Do you mean specifically Marvell 6480 drivers? I use both DAC-SATA-MV8 and AOC-SAT2-MV8, which use Marvell MV88SX and

Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-08-31 Thread Tim Cook
On Mon, Aug 31, 2009 at 8:26 PM, Jorgen Lundman lund...@gmo.jp wrote: The mv8 is a Marvell-based chipset, and it appears there are no Solaris drivers for it. There doesn't appear to be any movement from Sun or Marvell to provide any either. Do you mean specifically Marvell 6480 drivers? I