Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-08-31 Thread Henrik Bjornstrom - Sun Microsystems

Hi !

Has anyone given an answer to this that I have missed? I have a 
customer who has the same question, and I want to give him a correct 
answer.


/Henrik

Ketan wrote:
I created a snapshot and subsequent clone of a ZFS volume, but now I'm not able to remove the snapshot. It gives me the following error:


zfs destroy newpool/ldom2/zdi...@bootimg
cannot destroy 'newpool/ldom2/zdi...@bootimg': snapshot has dependent clones
use '-R' to destroy the following datasets:
newpool/ldom2/zdisk0

And if I promote the clone, then the original volume becomes the dependent 
clone. Is there a way to destroy just the snapshot, leaving the clone and 
original volume intact?



--
Henrik Bjornstrom

Sun Microsystems        Email:  henrik.bjornst...@sun.com
Box 51                  Phone:  +46 8 631 1315
164 94 KISTA
SWEDEN


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread Bob Friesenhahn

On Mon, 31 Aug 2009, en...@businessgrade.com wrote:

Hi. I've been doing some simple read/write tests using filebench on a 
mirrored pool. Essentially, I've been scaling up the number of disks in the 
pool before each test between 4, 8 and 12. I've noticed that for individual 
disks, ZFS write performance scales very well between 4, 8 and 12 disks. This 
may be due to the fact that I'm using an SSD as a logging device. But I'm 
seeing individual disk read performance drop by as much as 14 MB/s per disk 
between 4 and 12 disks. Across the entire pool that means I've lost 168 MB/s 
of raw throughput just by adding two mirror sets. I'm curious to know if 
there are any dials I can turn to improve this. System details are below:


Sun is currently working on several prefetch bugs (complete loss of 
prefetch and insufficient prefetch) which have been identified. 
Perhaps you were not on this list in July, when a huge amount of 
discussion traffic was dominated by the topic "Why is Solaris 10 ZFS 
performance so terrible?", 
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/029340.html. 
It turned out that the subject was over-specific, since current 
OpenSolaris suffers from the same issues, as proven by test results run 
by many people on a wide variety of hardware.


Eventually Rich Morris posted a preliminary analysis of the 
performance problem at 
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/030169.html


Hopefully Sun will get the prefetch algorithm and timing perfected so 
that we may enjoy the full benefit of our hardware.
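
One dial that is sometimes turned while these bugs are open is ZFS file-level 
prefetch itself. A minimal sketch of disabling it as a diagnostic (the 
zfs_prefetch_disable tunable is the standard one for that era of Solaris; 
whether it helps any given workload is an open question):

  * /etc/system: disable ZFS file-level prefetch (takes effect after a reboot)
  set zfs:zfs_prefetch_disable = 1

  # or flip it on a live system
  echo zfs_prefetch_disable/W0t1 | mdb -kw

Re-running the same filebench workload with and without prefetch at least 
isolates whether the regression is prefetch-related.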


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread Mertol Ozyoney
Hi;

You may be hitting a bottleneck at your HBA. Try using multiple HBAs or
drive channels.
Mertol 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com



-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of
en...@businessgrade.com
Sent: Monday, August 31, 2009 5:16 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS read performance scalability

Hi. I've been doing some simple read/write tests using filebench on a 
mirrored pool. Essentially, I've been scaling up the number of disks 
in the pool before each test between 4, 8 and 12. I've noticed that 
for individual disks, ZFS write performance scales very well between 
4, 8 and 12 disks. This may be due to the fact that I'm using an SSD as 
a logging device. But I'm seeing individual disk read performance drop by 
as much as 14 MB/s per disk between 4 and 12 disks. Across the entire 
pool that means I've lost 168 MB/s of raw throughput just by adding two 
mirror sets. I'm curious to know if there are any dials I can turn to 
improve this. System details are below:

HW: dual quad-core 2.33 GHz Xeon, 8 GB RAM
Disks: Seagate Savvio 10K 146 GB on an LSI 1068e HBA with the latest firmware
OS: SXCE snv_121

Thanks in advance.












___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread Richard Elling

There are around a zillion possible reasons for this. In my experience,
most folks don't or can't create enough load. Make sure you have
enough threads creating work.  Other than that, the scientific method
would suggest creating experiments, making measurements, running
regressions, etc.
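
One crude way to confirm that the load generator, not the pool, is the limit 
is to run several concurrent streaming readers and watch iostat while they 
run. A minimal sketch (file paths are hypothetical, pre-created test files):

  # eight concurrent sequential readers against the pool under test
  for i in 1 2 3 4 5 6 7 8; do
    dd if=/tank/fbtest/file$i of=/dev/null bs=128k &
  done
  wait

  # in another terminal, watch per-device utilization
  iostat -xn 5

If per-disk %b stays low while throughput flattens, the bottleneck is 
upstream of the disks.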
 -- richard

On Aug 31, 2009, at 7:16 AM, en...@businessgrade.com wrote:

Hi. I've been doing some simple read/write tests using filebench on 
a mirrored pool. Essentially, I've been scaling up the number of 
disks in the pool before each test between 4, 8 and 12. I've noticed 
that for individual disks, ZFS write performance scales very well 
between 4, 8 and 12 disks. This may be due to the fact that I'm 
using an SSD as a logging device. But I'm seeing individual disk read 
performance drop by as much as 14 MB/s per disk between 4 and 12 disks. 
Across the entire pool that means I've lost 168 MB/s of raw throughput 
just by adding two mirror sets. I'm curious to know if there are any 
dials I can turn to improve this. System details are below:


HW: dual quad-core 2.33 GHz Xeon, 8 GB RAM
Disks: Seagate Savvio 10K 146 GB on an LSI 1068e HBA with the latest firmware
OS: SXCE snv_121

Thanks in advance.













___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread eneal

Quoting Bob Friesenhahn bfrie...@simple.dallas.tx.us:


On Mon, 31 Aug 2009, en...@businessgrade.com wrote:

Hi. I've been doing some simple read/write tests using filebench on 
a mirrored pool. Essentially, I've been scaling up the number of 
disks in the pool before each test between 4, 8 and 12. I've noticed 
that for individual disks, ZFS write performance scales very well 
between 4, 8 and 12 disks. This may be due to the fact that I'm 
using an SSD as a logging device. But I'm seeing individual disk read 
performance drop by as much as 14 MB/s per disk between 4 and 12 disks. 
Across the entire pool that means I've lost 168 MB/s of raw throughput 
just by adding two mirror sets. I'm curious to know if there are any 
dials I can turn to improve this. System details are below:


Sun is currently working on several prefetch bugs (complete loss of
prefetch and insufficient prefetch) which have been identified. Perhaps
you were not on this list in July, when a huge amount of discussion
traffic was dominated by the topic "Why is Solaris 10 ZFS performance
so terrible?",
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/029340.html. It
turned out that the subject was over-specific, since current OpenSolaris
suffers from the same issues, as proven by test results run by many
people on a wide variety of hardware.

Eventually Rich Morris posted a preliminary analysis of the performance
problem at
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/030169.html

Hopefully Sun will get the prefetch algorithm and timing perfected so
that we may enjoy the full benefit of our hardware.



Thanks Bob. Can you or anyone else comment on how this bug would 
interact with a zvol that's being remotely accessed? I can see clearly 
how this would come into play on a local ZFS filesystem, but how 
about a remote system using the zvol through iSCSI or FC?








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance scalability

2009-08-31 Thread eneal

Quoting Mertol Ozyoney mertol.ozyo...@sun.com:


Hi;

You may be hitting a bottleneck at your HBA. Try using multiple HBA's or
drive channels
Mertol




I'm pretty sure it's not an HBA issue. As I commented, my per-disk 
write throughput stayed pretty consistent for 4-, 8- and 12-disk pools 
and varied between 80 and 90 MB/s. The overall rough average was about 
85 MB/s per disk.








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-08-31 Thread Richard Elling

From the ZFS man page:
     Clones can only be created from a snapshot. When a snapshot
     is cloned, it creates an implicit dependency between the
     parent and child. Even though the clone is created somewhere
     else in the dataset hierarchy, the original snapshot cannot
     be destroyed as long as a clone exists. The "origin" property
     exposes this dependency, and the destroy command lists any
     such dependencies, if they exist.

     The clone parent-child dependency relationship can be
     reversed by using the "promote" subcommand. This causes the
     "origin" file system to become a clone of the specified file
     system, which makes it possible to destroy the file system
     that the clone was created from.
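
A minimal sketch of that snapshot/clone/promote sequence (dataset names are 
hypothetical):

  zfs snapshot tank/vol1@snap1
  zfs clone tank/vol1@snap1 tank/clone1
  # promote reverses the dependency: tank/vol1 becomes a clone of
  # tank/clone1, and the snapshot moves to tank/clone1@snap1
  zfs promote tank/clone1
  # the snapshot still cannot be destroyed while either dataset depends on it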

This implies that the question being asked is incomplete. What are
they trying to do?
 -- richard

On Aug 31, 2009, at 7:30 AM, Henrik Bjornstrom - Sun Microsystems wrote:


Hi !

Has anyone given an answer to this that I have missed? I have a 
customer who has the same question, and I want to give him a 
correct answer.


/Henrik

Ketan wrote:
I created a snapshot and subsequent clone of a ZFS volume, but now 
I'm not able to remove the snapshot. It gives me the following error:

zfs destroy newpool/ldom2/zdi...@bootimg
cannot destroy 'newpool/ldom2/zdi...@bootimg': snapshot has  
dependent clones

use '-R' to destroy the following datasets:
newpool/ldom2/zdisk0

And if I promote the clone, then the original volume becomes the 
dependent clone. Is there a way to destroy just the snapshot, 
leaving the clone and original volume intact?





--
Henrik Bjornstrom

Sun Microsystems        Email:  henrik.bjornst...@sun.com
Box 51                  Phone:  +46 8 631 1315
164 94 KISTA, SWEDEN


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-08-31 Thread Lori Alt

On 08/31/09 08:30, Henrik Bjornstrom - Sun Microsystems wrote:

Hi !

Has anyone given an answer to this that I have missed? I have a 
customer who has the same question, and I want to give him a correct 
answer.


/Henrik

Ketan wrote:
I created a snapshot and subsequent clone of a ZFS volume, but now I'm 
not able to remove the snapshot. It gives me the following error:

zfs destroy newpool/ldom2/zdi...@bootimg
cannot destroy 'newpool/ldom2/zdi...@bootimg': snapshot has dependent 
clones

use '-R' to destroy the following datasets:
newpool/ldom2/zdisk0

And if I promote the clone, then the original volume becomes the 
dependent clone. Is there a way to destroy just the snapshot, leaving 
the clone and original volume intact?

No. As long as a clone exists, its origin snapshot must exist as well.

lori


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-31 Thread Pawel Jakub Dawidek
On Thu, Aug 27, 2009 at 01:37:11PM -0600, Dave wrote:
 Can anyone from Sun comment on the status/priority of bug ID 6761786? 
 Seems like this would be a very high priority bug, but it hasn't been 
 updated since Oct 2008.
 
 Has anyone else with thousands of volume snapshots experienced the hours 
 long import process?

It might not be a direct ZFS fault. I tried to reproduce this on FreeBSD
and I was able to import a pool with ~2000 ZVOLs and ~1 ZVOL snapshots
in a few minutes. Those were empty ZVOLs and empty snapshots, so keep that
in mind. All in all, creating /dev/ entries might be slow in Solaris;
that's why you experience this behaviour when importing a ZFS pool with many
ZVOLs and many ZVOL snapshots (note that every ZVOL snapshot is a device
entry in /dev/zvol/, unlike file systems, where snapshots are
mounted on .zfs/snapshot/<name> lookup and not at import time).
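
A quick way to gauge how many device nodes an import has to create (pool name 
hypothetical):

  # every ZVOL and every ZVOL snapshot gets a node under /dev/zvol
  find /dev/zvol/rdsk/tank | wc -l
  # compare with the number of snapshots the pool carries
  zfs list -H -t snapshot -o name | wc -l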

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
p...@freebsd.org   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-08-31 Thread Scott Meilicke
As I understand it, when you expand a pool, the data do not automatically 
migrate to the other disks. You will have to rewrite the data somehow, usually 
a backup/restore.
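
One hedged way to force that rewrite without leaving the pool is a local 
send/receive into a new dataset, then a swap (dataset names hypothetical; 
verify the copy before destroying anything):

  zfs snapshot tank/data@rebalance
  zfs send tank/data@rebalance | zfs receive tank/data.new
  # the received copy is written with the expanded pool, so its blocks
  # are spread across all top-level vdevs
  zfs destroy -r tank/data
  zfs rename tank/data.new tank/data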

-Scott
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-31 Thread Menno Lageman

On 08/31/09 19:54, Pawel Jakub Dawidek wrote:

On Thu, Aug 27, 2009 at 01:37:11PM -0600, Dave wrote:
Can anyone from Sun comment on the status/priority of bug ID 6761786? 
Seems like this would be a very high priority bug, but it hasn't been 
updated since Oct 2008.


Has anyone else with thousands of volume snapshots experienced the hours 
long import process?


It might not be a direct ZFS fault. I tried to reproduce this on FreeBSD
and I was able to import a pool with ~2000 ZVOLs and ~1 ZVOL snapshots
in a few minutes. Those were empty ZVOLs and empty snapshots, so keep that
in mind. All in all, creating /dev/ entries might be slow in Solaris;
that's why you experience this behaviour when importing a ZFS pool with many
ZVOLs and many ZVOL snapshots (note that every ZVOL snapshot is a device
entry in /dev/zvol/, unlike file systems, where snapshots are
mounted on .zfs/snapshot/<name> lookup and not at import time).



Indeed, that (devfsadm taking a long time) is probably 6822622, 'zpool 
import with a large number of zvols is very slow'. Alas, the information 
available on b.o.o (bugs.opensolaris.org) is extremely thin.


I ran into this slow import with lots of snapshots of ZVOLs myself some 
builds ago. A boot would take around 20 minutes. A good way to see if 
you are suffering from this problem is to temporarily comment out the 
line '/usr/sbin/zfs volinit' from /lib/svc/method/devices-local. Booting 
should be much faster then. (I have since disabled automatic snapshots 
on ZVOLs and my system boots in reasonable time again).
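
With the zfs-auto-snapshot service of that era, the automatic snapshots can 
also be switched off per dataset through its user property, which keeps the 
snapshot count (and hence the /dev/zvol entries) down. A minimal sketch 
(dataset name hypothetical):

  zfs set com.sun:auto-snapshot=false tank/myvol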


Menno
--
Menno Lageman - Sun Microsystems - http://blogs.sun.com/menno
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-31 Thread Lori Alt

On 08/29/09 05:41, Robert Milkowski wrote:

casper@sun.com wrote:

Randall Badilla wrote:
   

Hi all:
First: is it possible to modify the boot zpool (rpool) after OS 
installation? I installed the OS on the whole 72 GB hard disk. It 
is mirrored, so if I want to decrease the rpool, for example resize 
it to a 36 GB slice, can that be done?
As far as I remember, on UFS/SVM I was able to resize the boot OS disk by 
detaching a mirror (so transforming it to a one-way mirror), adjusting the 
partitions, then attaching the mirror again. After the sync, boot from the 
resized mirror, redo the resize on the remaining mirror, attach the mirror, 
and reboot.

Downtime is reduced to the reboot times.

  
Yes, you can follow the same procedure with ZFS (details will differ, of 
course).



You can actually change the partitions while you're using the slice,
but after changing the size of both slices you may need to reboot.

I've used it also when going from UFS to ZFS for boot.
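
A minimal sketch of that detach/repartition/attach flow on a mirrored root 
pool (device names hypothetical; this works for keeping or growing the slice, 
not for shrinking the pool, as noted below):

  zpool detach rpool c1t1d0s0
  format c1t1d0                          # adjust slice 0 to the new layout
  zpool attach rpool c1t0d0s0 c1t1d0s0
  # reinstall the boot blocks on the re-attached side (x86 shown; installboot on SPARC)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0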

  


But the OP wants to decrease a slice size, which, if it worked at all, 
could lead to loss of data.

You can't decrease the size of a root pool this way (or any way, right 
now). This is just a specific case of bug 4852783 (reduce pool 
capacity). A fix for this is underway, but is not yet available.


Lori






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Jason
I've been looking to build my own cheap SAN to explore HA scenarios with VMware 
hosts, though not for a production environment.  I'm new to opensolaris but I 
am familiar with other clustered HA systems.  The features of ZFS seem like 
they would fit right in with attempting to build an HA storage platform for 
VMware hosts on inexpensive hardware.

Here is what I am thinking.  I want to have at least two clustered nodes 
(which may be virtual, running off the local storage of the VMware host) that 
act as the front end of the SAN.  These will not have any real storage 
themselves, but will be initiators for backend computers with the actual 
disks in them.  I want to be able to add and remove/replace at will, so I 
figure the backends will just be fairly dumb iSCSI targets that present each 
disk.  That way the front ends are close to the hardware, which is where ZFS 
works best, but a RAID set would not be limited to the capacity of a single 
enclosure.

I'd like to present a RAIDZ2 array as a block device to VMware; how would 
that work?  Could that then be clustered so the iSCSI target is HA?  Am I 
completely off base, or is there an easier way?  My goal is to be able to 
kill any one box (or multiple) and still keep the storage available for 
VMware, but still get a better total-to-usable storage ratio than a plain 
mirror (2:1).  I also want to be able to add and remove storage dynamically.  
You know, champagne on a beer budget. :)
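
For the block-device piece, a minimal sketch of how a RAIDZ2-backed volume 
could be exported over iSCSI on 2009-era OpenSolaris (pool, device and size 
names are hypothetical; the legacy shareiscsi property is shown, COMSTAR 
being the newer target framework):

  # pool built from six iSCSI-backed LUNs seen by the front end
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  # carve out a block volume and export it as an iSCSI target
  zfs create -V 500g tank/vmware-lun
  zfs set shareiscsi=on tank/vmware-lun

Making that target HA across two front-end nodes is the hard part, and is 
cluster-framework territory rather than something ZFS gives you by itself.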
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Tim Cook
On Mon, Aug 31, 2009 at 3:42 PM, Jason wheelz...@hotmail.com wrote:

 I've been looking to build my own cheap SAN to explore HA scenarios with
 VMware hosts, though not for a production environment.  I'm new to
 opensolaris but I am familiar with other clustered HA systems.  The features
 of ZFS seem like they would fit right in with attempting to build an HA
 storage platform for VMware hosts on inexpensive hardware.

 Here is what I am thinking.  I want to have at least two clustered nodes
 (may be virtual running off the local storage of the VMware host) that act
 as the front end of the SAN.  These will not have any real storage
 themselves, but will be initiators for backend computers with the actual
 disks in them.  I want to be able to add and remove/replace at will so I
 figure the backends will just be fairly dumb iSCSI targets that just present
 each disk.  That way the front ends are close to the hardware for zfs to
 work best but would not limit a raid set to the capacity of a single
 enclosure.

 I'd like to present a RAIDZ2 array as a block device to VMware, how would
 that work?  Could that then be clustered so the iSCSI target is HA?  Am I
 completely off base or is there an easier way?  My goal is to be able to
 kill any one box (or multiple) and still keep the storage available for
 VMware, but still get a better total storage to usable ratio than just a
 plain mirror (2:1).  I also want to be able to add and remove storage
 dynamically.  You know, champagne on a beer budget. :)


Any particular reason you want to present block storage to VMware?  It works
as well over NFS, if not better, and saves a LOT of headaches.
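
A minimal sketch of the NFS route (dataset name and access list are 
hypothetical; ESX then mounts the export as an NFS datastore):

  zfs create tank/vmware
  zfs set sharenfs=on tank/vmware
  # access can be narrowed with share_nfs options, e.g.
  # sharenfs=rw=@192.168.10.0/24,root=@192.168.10.0/24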

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Jason
Well, I knew a guy who was involved in a project to do just that for a 
production environment.  Basically they abandoned it because there was 
a huge performance hit using ZFS over NFS.  I didn’t get the specifics, but 
his group is usually pretty sharp; I’ll have to check back with him.  So 
mainly I want to avoid that, but also VMware tends to roll out storage 
features on NFS last, after Fibre Channel and iSCSI.

*sorry if this is duplicate... Learning the workings of this discussion forum 
as well*
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Tim Cook
On Mon, Aug 31, 2009 at 4:26 PM, Jason wheelz...@hotmail.com wrote:

 Well, I knew a guy who was involved in a project to do just that for a
 production environment.  Basically they abandoned using that because there
 was a huge performance hit using ZFS over NFS.  I didn’t get the specifics
 but his group is usually pretty sharp.  I’ll have to check back with him.
  So mainly just to avoid that, but also VMware tends to roll out storage
 features on NFS last after fibre and iSCSI.

 *sorry if this is duplicate... Learning the workings of this discussion
 forum as well*
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




That's not true at all.  Dynamic grow and shrink have been available on NFS
forever.  You STILL can't shrink VMFS, and they've JUST added grow
capabilities.  Not to mention it being thin-provisioned by default.  As for
performance, I have a tough time believing his performance issues were
because of NFS, and not some other underlying bug.

I've got MASSIVE deployments of VMware on NFS over 10 GbE that achieve stellar
performance (admittedly, it isn't on ZFS).

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread Jason
Specifically, I remember Storage VMotion being supported on NFS last, as well 
as jumbo frames.  That is just the impression I get from past features; 
perhaps they are doing better with that now.

I know the performance problem had specifically to do with ZFS and the way it 
handled something.  I know of lots of implementations with just straight NFS, 
so I know that works.  I'm not opposed to NFS, but I was hoping what he saw 
was just a combination of ZFS over NFS, as he said he didn't know if it would 
happen over iSCSI.  So I thought I'd try that first.  I'll have to see if I 
can get the details from him tomorrow.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool scrub started resilver, not scrub

2009-08-31 Thread Albert Chin
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
 # cat /etc/release
   Solaris Express Community Edition snv_105 X86
Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
 Use is subject to license terms.
Assembled 15 December 2008

 So, why is a resilver in progress when I asked for a scrub?

Still seeing the same problem with snv_114.
  # cat /etc/release
Solaris Express Community Edition snv_114 X86
 Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
  Use is subject to license terms.
Assembled 04 May 2009

How do I scrub this pool?

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-08-31 Thread David Magda

On Aug 31, 2009, at 17:29, Tim Cook wrote:

I've got MASSIVE deployments of VMware on NFS over 10 GbE that achieve 
stellar performance (admittedly, it isn't on ZFS).


Without a separate ZIL device, NFS on ZFS would probably be slower, which is 
why Sun's own appliances use SSDs.
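
Adding a dedicated log device is a one-liner once an SSD is available (pool 
and device names hypothetical):

  zpool add tank log c3t0d0
  # a mirrored variant: zpool add tank log mirror c3t0d0 c3t1d0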


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-08-31 Thread Jorgen Lundman
The MV8 is a Marvell-based chipset, and it appears there are no Solaris 
drivers for it.  There doesn't appear to be any movement from Sun or 
Marvell to provide any either.




Do you mean specifically Marvell 6480 drivers? I use both DAC-SATA-MV8 
and AOC-SAT2-MV8, which use the Marvell MV88SX and work very well in 
Solaris (package SUNWmv88sx).
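
A quick sanity check on a running system (standard Solaris commands; the 
driver module is marvell88sx):

  pkginfo -l SUNWmv88sx        # is the driver package installed?
  modinfo | grep -i marvell    # is the module loaded?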


Lund

--
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-08-31 Thread Tim Cook
On Mon, Aug 31, 2009 at 8:26 PM, Jorgen Lundman lund...@gmo.jp wrote:

 The mv8 is a marvell based chipset, and it appears there are no Solaris
 drivers for it.  There doesn't appear to be any movement from Sun or marvell
 to provide any either.


 Do you mean specifically Marvell 6480 drivers? I use both DAC-SATA-MV8 and
 AOC-SAT2-MV8, which use Marvell MV88SX and works very well in Solaris.
 (Package SUNWmv88sx).

 Lund

 --
 Jorgen Lundman   | lund...@lundman.net
 Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
 Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
 Japan| +81 (0)3 -3375-1767  (home)


Interesting; there was a big thread about this card over at HardOCP,
and they said it didn't work with 2009.06.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss