[zfs-discuss] How can I make my zpool faulted?

2008-10-17 Thread yuvraj
Hi Friends, I have created my own zpool on Solaris 10 and created a ZFS filesystem on it. Now I want to make that zpool faulted. If you are aware of the commands for this, please reply. You may reply to me at [EMAIL PROTECTED]. Thanks in advance.

Re: [zfs-discuss] How can I make my zpool faulted?

2008-10-17 Thread Sanjeev
Yuvraj, Can you please post the details of the zpool? 'zpool status' should give you that. You could pull out one of the disks. Thanks and regards, Sanjeev. On Thu, Oct 16, 2008 at 11:22:43PM -0700, yuvraj wrote: Hi Friends, I have created my own zpool on Solaris 10 and
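For anyone wanting to reproduce a FAULTED pool without pulling hardware, one common approach (not from this thread; a sketch using a disposable file-backed pool, with all paths and sizes assumed) is to corrupt a vdev's backing store and scrub:

   # create a throwaway pool backed by a plain file
   mkfile 128m /var/tmp/vdev0
   zpool create testpool /var/tmp/vdev0
   # clobber the backing file, labels included, then let a scrub find the damage
   dd if=/dev/urandom of=/var/tmp/vdev0 bs=1024k count=128 conv=notrunc
   zpool scrub testpool
   zpool status testpool

With a single-vdev pool there is no redundancy to repair from, so zpool status should report the vdev (and hence the pool) as FAULTED or UNAVAIL.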

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-17 Thread Ross
Some of that is very worrying, Miles. Do you have bug IDs for any of those problems? I'm guessing the problem of the device being reported ok after the reboot could be this one: http://bugs.opensolaris.org/view_bug.do?bug_id=6582549 And could the errors after the reboot be one of these?

Re: [zfs-discuss] Enable compression on ZFS root

2008-10-17 Thread Darren J Moffat
dick hoogendijk wrote: Vincent Fox wrote: Or perhaps compression should be the default. No way please! Things taking even more memory should never be the default. An installation switch would be nice though. Freedom of choice ;-) Compression does not take more memory, the data is always
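For reference, turning it on is a per-dataset property (the dataset name here is an assumption); in this era compression=on means lzjb, which trades a little CPU on the read/write paths for smaller I/O, not extra memory:

   # enable the default (lzjb) compression on a dataset
   zfs set compression=on rpool/export/home
   # see how well already-written data has compressed
   zfs get compressratio rpool/export/home

Note that only data written after the property is set gets compressed; existing blocks stay as they are.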

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-17 Thread Al Hopper
On Thu, Oct 16, 2008 at 6:52 AM, Tomas Ögren [EMAIL PROTECTED] wrote: On 16 October, 2008 - Ross sent me these 1,1K bytes: I might be misunderstanding here, but I don't see how you're going to improve on zfs set primarycache=metadata. You complain that ZFS throws away 96kb of data if you're
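The setting under discussion, for reference (the dataset name is an assumption):

   # cache only metadata, not file data, in the ARC for this dataset
   zfs set primarycache=metadata tank/export
   # the L2ARC has the analogous knob
   zfs get primarycache,secondarycache tank/export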

Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Ross
Ok, just did some more testing on this machine to try to find where my bottlenecks are. Something very odd is going on here. As best I can tell there are two separate problems now:
- something is throttling network output to 10MB/s
- something is throttling zfs send to around 20MB/s
The
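A quick way to separate the two suspects is to time the stream with the network taken out of the picture (the snapshot name is an assumption):

   # raw zfs send throughput, no network involved
   time zfs send tank/fs@snap | dd of=/dev/null bs=1024k
   # the same stream over the wire for comparison
   time zfs send tank/fs@snap | ssh otherhost "dd of=/dev/null bs=1024k"

If the local run tops out near 20MB/s and the remote one near 10MB/s, the two throttles really are independent.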

Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Dimitri Aivaliotis
Hi Ross, On Fri, Oct 17, 2008 at 1:35 PM, Ross [EMAIL PROTECTED] wrote: Ok, just did some more testing on this machine to try to find where my bottlenecks are. Something very odd is going on here. As best I can tell there are two separate problems now: - something is throttling network

Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Ross
Yup, that's one of the first things I checked when it came out with figures so close to 10MB/s. All three servers are running full duplex gigabit though, as reported by both Solaris and the switch. And both the NFS at 60+MB/s, and the zfs send / receive are all going over the same network link,
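For completeness, this is roughly how the negotiated link settings get checked on the Solaris side (the interface name is an assumption, and exact kstat names vary by driver):

   # dladm reports the negotiated speed and duplex
   dladm show-dev e1000g0
   # link_duplex: 2 = full, 1 = half on many drivers
   kstat -p e1000g:0::link_duplex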

[zfs-discuss] Why does ZFS pool require root privileges to access?

2008-10-17 Thread Todd E. Moore
Issue: Privileged (root) account required to access zpool imported from Mac OS X. Just installed b119 bits onto my OS X 10.5.5 system today in an attempt to share VirtualBox disk image files between my Mac and my OpenSolaris (2008.11 b99) laptop. Install worked well and I was able to

Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Scott Williamson
Hi All, I have opened a ticket with Sun support #66104157 regarding zfs send / receive and will let you know what I find out. Keep in mind that this is for Solaris 10, not OpenSolaris.

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-17 Thread Bob Friesenhahn
On Fri, 17 Oct 2008, Al Hopper wrote:
a) inexpensive, large capacity SATA drives running at 7,200 RPM and providing, approximately, 300 IOPS.
b) expensive, small capacity SAS drives running at 15k RPM and providing, approx, 700 IOPS.
Al, Where are you getting the above IOPS estimates from?
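For context, a common back-of-envelope estimate for a single disk (typical datasheet figures assumed; these numbers are not from the thread):

   IOPS ~= 1 / (avg seek time + avg rotational latency)
   7,200 RPM SATA: 1 / (8.5 ms + 4.2 ms) ~= 79 IOPS
   15,000 RPM SAS: 1 / (3.5 ms + 2.0 ms) ~= 182 IOPS

Figures in the 300/700 range generally assume command queueing (NCQ/TCQ) and favourable, short-stroked access patterns rather than worst-case random seeks.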

Re: [zfs-discuss] Why does ZFS pool require root privileges to access?

2008-10-17 Thread Todd E. Moore
Additional Information after continuing to tinker: After importing the zpool, if I use "root" to manually 'chmod' the file permissions on the zpool's mount point, then non-privilege users can access the pool. This alone doesn't solve the problem since all files in the pool need to be
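The workaround described, sketched out (the mountpoint and username are assumptions); the recursion is exactly why this doesn't scale as a fix:

   # as root, open up the imported pool's mountpoint and contents
   chmod -R a+rX /macpool
   # or simply hand the whole tree to the ordinary user
   chown -R todd /macpool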

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-17 Thread Al Hopper
On Fri, Oct 17, 2008 at 10:51 AM, Bob Friesenhahn [EMAIL PROTECTED] wrote: On Fri, 17 Oct 2008, Al Hopper wrote: a) inexpensive, large capacity SATA drives running at 7,200 RPM and providing, approximately, 300 IOPS. b) expensive, small capacity, SAS drives running at 15k RPM and providing,

Re: [zfs-discuss] Best practice recommendations for ZFS + Virtualbox

2008-10-17 Thread David Abrahams
on Wed Oct 15 2008, Miles Nordin carton-AT-Ivy.NET wrote: s == Steve [EMAIL PROTECTED] writes: s the use of zfs s clones/snapshots encompasses the entire zfs filesystem I use one ZFS filesystem per VDI file. It might be better to use vmdk's and zvol's, but right now that's not

Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Miles Nordin
r == Ross [EMAIL PROTECTED] writes: r figures so close to 10MB/s. All three servers are running r full duplex gigabit though there is one tricky way 100Mbit/s could still bite you, but it's probably not happening to you. It mostly affects home users with unmanaged switches:

Re: [zfs-discuss] Best practice recommendations for ZFS + Virtualbox

2008-10-17 Thread Miles Nordin
da == David Abrahams [EMAIL PROTECTED] writes: da how to deal with backups to my Amazon s3 storage area. Does da zfs send avoid duplicating common data in clones and da snapshots? how can you afford to use something so expensive as S3 for backups? Anyway 'zfs send' does avoid
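The incremental form is what avoids resending blocks shared between snapshots (snapshot and path names are assumptions):

   # full stream of the first snapshot
   zfs send tank/fs@monday > /backup/fs-monday.zfs
   # only the blocks that changed between the two snapshots
   zfs send -i tank/fs@monday tank/fs@tuesday > /backup/fs-tuesday.zfs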

Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Richard Elling
Scott Williamson wrote: Hi All, I have opened a ticket with sun support #66104157 regarding zfs send / receive and will let you know what I find out. Thanks. Keep in mind that this is for Solaris 10 not opensolaris. Keep in mind that any changes required for Solaris 10 will first be

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-17 Thread Marcelo Leal
Hello all, I think he has a point here... maybe that would be an interesting feature for that kind of workload. Caching all the metadata would make the rsync task much faster (for many files). Trying to cache the data is really a waste of time, because the data will not be read again, and will

Re: [zfs-discuss] Best practice recommendations for ZFS + Virtualbox

2008-10-17 Thread David Abrahams
on Fri Oct 17 2008, Miles Nordin carton-AT-Ivy.NET wrote: da == David Abrahams [EMAIL PROTECTED] writes: da how to deal with backups to my Amazon s3 storage area. Does da zfs send avoid duplicating common data in clones and da snapshots? how can you afford to use something so

Re: [zfs-discuss] Improving zfs send performance

2008-10-17 Thread Scott Williamson
On Fri, Oct 17, 2008 at 2:48 PM, Richard Elling [EMAIL PROTECTED] wrote: Keep in mind that any changes required for Solaris 10 will first be available in OpenSolaris, including any changes which may have already been implemented. For me (who uses SOL10) it is the only way I can get

[zfs-discuss] bad ZFS-backed iSCSI target

2008-10-17 Thread Chris Cheetham
I have a volume shared via iSCSI that has become unusable. Both target and initiator nodes are running nevada b99. Running newfs on the initiator node fails immediately with an I/O error (no other details). The pool in which the bad volume resides includes other volumes exported via

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-17 Thread Tano
Do you have an active interface on the OpenSolaris box that is configured for 0.0.0.0 right now? Not anymore. By default, since you haven't configured a TPGT on the iSCSI target, Solaris will return all active interfaces in its SendTargets response. On the ESX side, ESX will attempt to
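A sketch of the TPGT configuration being discussed, restricting the target to a single portal (the address and target name are assumptions):

   # create a target portal group and bind it to one interface
   iscsitadm create tpgt 1
   iscsitadm modify tpgt -i 192.168.1.10 1
   # attach the target to that group so only this portal is advertised
   iscsitadm modify target -p 1 mytarget
   iscsitadm list target -v mytarget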

Re: [zfs-discuss] Tuning for a file server, disabling data cache (almost)

2008-10-17 Thread David Collier-Brown
Marcelo Leal [EMAIL PROTECTED] wrote: Hello all, I think he has a point here... maybe that would be an interesting feature for that kind of workload. Caching all the metadata would make the rsync task much faster (for many files). Trying to cache the data is really a waste of time, because

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-17 Thread Mike La Spina
Hello Tano, The issue here is not the target or VMware but a missing GUID on the target. Observe the target SMF properties using iscsitadm list target -v. You have iSCSI Name: iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi Connections: 0 ACL list: TPGT list:

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-17 Thread Tano
Hi, I rebooted the server after I submitted the information, to release the locks set up on my ESX host. After the reboot I reran iscsitadm list target -v and the GUIDs showed up. The only interesting problem: the GUIDs are identical (any problems with that?) [EMAIL PROTECTED]:~# iscsitadm

Re: [zfs-discuss] Best practice recommendations for ZFS + Virtualbox

2008-10-17 Thread Miles Nordin
da == David Abrahams [EMAIL PROTECTED] writes: da Is there a cheaper alternative that will securely and da persistently store a copy of my data offsite? rented dedicated servers with disks in them? I have not shopped for this, but for backups it just needs to not lose your data at the

Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-10-17 Thread Roch Bourbonnais
Leave the default recordsize. With a 128K recordsize, files smaller than 128K are stored as a single record tightly fitted to the smallest possible number of disk sectors. Reads and writes are then managed with fewer ops. Not tuning the recordsize is very generally more space efficient and more
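For reference, checking or changing the property looks like this (the dataset name is an assumption); per the advice above, you would normally leave it alone:

   # show the current recordsize (128K by default)
   zfs get recordsize tank/mail
   # only worth setting for fixed-record workloads such as databases
   zfs set recordsize=8k tank/mail

The property only affects files written after it is changed; existing files keep their record size.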