Re: [zfs-discuss] One dataset per user?

2010-07-28 Thread Juergen Nickelsen
Edward Ned Harvey solar...@nedharvey.com writes:

 There are legitimate specific reasons to use separate filesystems
 in some circumstances. But if you can't name one reason why it's
 better ... then it's not better for you.

Having separate filesystems per user lets you set per-user quotas and
reservations, lets you allow users to take their own snapshots, and lets
you do zfs send/recv replication of individual home directories (for
backup or migration to another pool), or even delegate that ability to
the users themselves.
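
For example (pool, dataset, and user names here are just placeholders):

# zfs create -o quota=20G -o reservation=1G tank/home/alice
# zfs allow alice snapshot,send tank/home/alice
# zfs snapshot tank/home/alice@monday
# zfs send tank/home/alice@monday | ssh backuphost zfs recv backup/home/alice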

-- 
Usenet is not a right. It is a right, a left, a jab, and a sharp
uppercut to the jaw. The postman hits! You have new mail.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] modified mdb and zdb

2010-07-28 Thread Jonathan Cifuentes

Hi,
I would really appreciate it if any of you could help me get the modified mdb and zdb 
(in any version of OpenSolaris) for digital forensics research purposes. 
Thank you.

Jonathan Cifuentes
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] root pool expansion

2010-07-28 Thread Gary Gendel
Right now I have a machine with a mirrored boot setup.  The SAS drives are 43 GB 
and the root pool is getting full.

I do a backup of the pool nightly, so I feel confident that I don't need to 
mirror the drive and can break the mirror and expand the pool with the detached 
drive.

I understand how to do this on a normal pool, but are there any restrictions on 
doing this with the root pool?  Are there any GRUB issues?

Thanks,
Gary
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] root pool expansion

2010-07-28 Thread Mark J Musante

On Wed, 28 Jul 2010, Gary Gendel wrote:


Right now I have a machine with a mirrored boot setup.  The SAS drives are 43 GB 
and the root pool is getting full.

I do a backup of the pool nightly, so I feel confident that I don't need to 
mirror the drive and can break the mirror and expand the pool with the detached 
drive.

I understand how to do this on a normal pool, but are there any restrictions on 
doing this with the root pool?  Are there any GRUB issues?


You cannot stripe a root pool.  The best you can do in this instance is to 
create a new pool from the detached mirror disk.  You may want to consider 
keeping the redundancy of the mirror config so that ZFS can automatically 
repair any corruption it detects.
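
For example, something along these lines (device and pool names are placeholders):

# zpool detach rpool c1t1d0s0
# zpool create datapool c1t1d0s0

That gives you a second, separate pool rather than a larger root pool.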

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirrored raidz

2010-07-28 Thread Edward Ned Harvey
 From: Richard Elling [mailto:richard.ell...@gmail.com]
 
  http://arc.opensolaris.org/caselog/PSARC/2010/193/mail
 
 Agree.  This is a better solution because some configurable parameters
 are hidden from zfs get all

Forgive me for not seeing it ... That link is extremely dense, and 34 pages
long ...

Is there an option that will capture properties better than 'get all'?
What is the suggested solution?

I don't see anything in man zfs ... but maybe it's only available in a
later version of zfs?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz2 + spare or raidz3 and no spare for nine 1.5 TB SATA disks?

2010-07-28 Thread Richard Elling
On Jul 27, 2010, at 10:37 PM, Jack Kielsmeier wrote:

 The only other zfs pool in my system is a mirrored rpool (two 500 GB disks). 
 This is for my own personal use, so it's not like the data is mission 
 critical in some sort of production environment.
 
 The advantage I can see with going with raidz2 + spare over raidz3 and no 
 spare is I would spend much less time running in a degraded state when a 
 drive fails (I'd have to RMA the drive and wait most likely a week or more 
 for a replacement).

raidz3 with no spare will be better than raidz2+spare in all single-set cases.

 The disadvantage of raidz2 + spare is the event of a triple disk failure. 
 This is most likely not going to occur with 9 disks, but certainly is 
 possible. If 3 disks fail before one can be rebuilt with the spare, the data 
 will be lost.
 
 So, I guess the main question I have is, how much a performance hit is 
 noticed when a raidz3 array is running in a degraded state?

The performance will be similar, but in the non-degraded case, the raidz3 
will perform better for small, random reads.
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz2 + spare or raidz3 and no spare for nine 1.5 TB SATA disks?

2010-07-28 Thread Roy Sigurd Karlsbakk
 The performance will be similar, but in the non-degraded case, the
 raidz3
 will perform better for small, random reads.

Why is this? The two will have the same number of data drives.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of idioms of 
foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] root pool expansion

2010-07-28 Thread Cindy Swearingen

Hi Gary,

If your root pool is getting full, you can replace the root pool
disk with a larger disk. My recommendation is to attach the replacement
disk, let it resilver, install the boot blocks, and then detach the
smaller disk. The system will see the expanded space automatically.

A mirrored root pool configuration is a good suggestion too.

You need to follow the steps in the ZFS troubleshooting wiki (as
suggested) to label and partition the replacement disk.

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Replacing/Relabeling the Root Pool Disk
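
For example, on an x86 system (the device names here are placeholders; on
SPARC you would use installboot instead of installgrub):

# zpool attach rpool c1t0d0s0 c1t2d0s0
  (wait for the resilver to finish; watch 'zpool status rpool')
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0
# zpool detach rpool c1t0d0s0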

Thanks,

Cindy


On 07/28/10 07:32, Gary Gendel wrote:

Right now I have a machine with a mirrored boot setup.  The SAS drives are 43 GB 
and the root pool is getting full.

I do a backup of the pool nightly, so I feel confident that I don't need to 
mirror the drive and can break the mirror and expand the pool with the detached 
drive.

I understand how to do this on a normal pool, but are there any restrictions on 
doing this with the root pool?  Are there any GRUB issues?

Thanks,
Gary

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz2 + spare or raidz3 and no spare for nine 1.5 TB SATA disks?

2010-07-28 Thread Richard Elling
On Jul 28, 2010, at 8:34 AM, Roy Sigurd Karlsbakk wrote:

 The performance will be similar, but in the non-degraded case, the
 raidz3
 will perform better for small, random reads.
 
 Why is this? The two will have the same amount of data drives

The simple small, random read model for homogeneous drives:
I = small, random IOPS of one drive
D = number of data disks
P = number of parity disks
total IOPS = I * (D+P)/D

raidz2: P=2
total IOPS = I * (D+2)/D

raidz3: P=3
total IOPS = I * (D+3)/D
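
As a rough worked example for the nine 1.5 TB disks in this thread (treating
the raidz2 option as an 8-disk raidz2 plus a hot spare, and assuming a round
I = 100 small, random IOPS per drive):

raidz2 + spare: D=6, P=2, total IOPS = 100 * (6+2)/6 ≈ 133
raidz3:         D=6, P=3, total IOPS = 100 * (6+3)/6  = 150

Both have six data drives, but in this model the extra parity drive also
services reads, which is where the raidz3 advantage comes from.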

 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] COMSTAR iscsi replication - dataset busy

2010-07-28 Thread Bruno Sousa
Hi all,

I have two servers in the lab running snv_134, and while doing some
experiments with iSCSI volumes and replication I hit a road-block
that I would like to ask your help with.
On server A I have a LUN created in COMSTAR without any views attached
to it, and I can zfs send it to server B without problems.

Now, on server B, I created a view on that volume and there's a Linux
machine accessing the volume over iSCSI. As soon as I created the view
of the LUN on server B, the incremental zfs send (or a full one, for that
matter) stops working, giving me a 'dataset busy' message.

If I stop the Linux machine from accessing the LUN on server B, the
problem persists; if I remove the COMSTAR view, the problem persists; but
if I remove the LU in COMSTAR, then it works.
Is there any other way to work around this issue?
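
For reference, the only sequence that has worked for me so far looks roughly
like this on server B (the LU GUID, dataset, and file names are placeholders):

# stmfadm delete-lu 600144F0AABBCC0000004C50A1B20001
# zfs recv -F tank/iscsivol < incremental.zfs
# stmfadm import-lu /dev/zvol/rdsk/tank/iscsivol
  (then re-create the view with stmfadm add-view if it does not come back)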


Thanks,
Bruno




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz2 + spare or raidz3 and no spare for nine 1.5 TB SATA disks?

2010-07-28 Thread Jack Kielsmeier
Thanks,

Looks like I'll be using raidz3.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Mike DeMarco
I am trying to give a general user permissions to create zfs filesystems in the 
rpool.

zpool set delegation=on rpool
zfs allow user create rpool

both run without any issues.

zfs allow rpool reports the user does have create permissions.

zfs create rpool/test
cannot create rpool/test : permission denied.

Can you not allow to the rpool?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Cindy Swearingen

Hi Mike,

A couple of things are causing this to fail:

1. The user needs permissions to the underlying mount point.

2. The user needs both create and mount permissions to create ZFS datasets.

See the syntax below, which might vary depending on your Solaris
release.

Thanks,

Cindy

# chmod A+user:cindys:add_subdirectory:fd:allow /rpool
# zfs allow cindys create,mount rpool
# su - cindys
% /usr/sbin/zfs create rpool/cindys


On 07/28/10 11:23, Mike DeMarco wrote:

I am trying to give a general user permissions to create zfs filesystems in the 
rpool.

zpool set delegation=on rpool
zfs allow user create rpool

both run without any issues.

zfs allow rpool reports the user does have create permissions.

zfs create rpool/test
cannot create rpool/test : permission denied.

Can you not allow to the rpool?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Mike DeMarco
Thanks. Adding mount did allow me to create it, but it does not allow me to 
create the mountpoint.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs allow does not work for rpool

2010-07-28 Thread Cindy Swearingen

Mike,

Did you also give the user permissions to the underlying mount point:

# chmod A+user:user-name:add_subdirectory:fd:allow /rpool

If so, please let me see the syntax and error messages.

Thanks,

Cindy

On 07/28/10 12:23, Mike DeMarco wrote:

Thanks. Adding mount did allow me to create it, but it does not allow me to 
create the mountpoint.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-28 Thread Saxon, Will
 -Original Message-
 From: zfs-discuss-boun...@opensolaris.org 
 [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of sol
 Sent: Wednesday, July 28, 2010 3:12 PM
 To: Richard Elling; Gregory Gee
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] Tips for ZFS tuning for NFS store 
 of VM images
 
 Richard Elling wrote:
  Gregory Gee wrote:
   I am using OpenSolaris to host VM images over NFS for 
 XenServer.  I'm looking 
 for tips on what parameters can be set to help optimize my 
 ZFS pool that holds 
 my VM images.
  There is nothing special about tuning for VMs, the normal 
 NFS tuning applies.
 
 
 That's not been my experience. Out of the box VMware server 
 would not work with 
 the VMs stored on a zfs pool via NFS. I've not yet found out 
 why but the 
 analytics showed millions of getattr/access/lookup compared 
 to read/write.
 
 A partial workaround was to turn off access time on the share 
 and to mount with 
 noatime,actimeo=60
 
 But that's not perfect because when left alone the VM got 
 into a stuck state. 
 I've never seen that state before when the VM was hosted on a 
 local disk. 
 Hosting VMs on NFS is not working well so far...

My guess is that it's a VMware Server + NFS client issue, not a VMs on NFS 
issue. I'm using an OpenSolaris b134 system as an 'experimental' NFS datastore 
for VMware vSphere/ESX and it works great. I've had as many as 30 mixed-IO VMs 
running on the system with no reported issues. 

-Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How can a mirror lose a file?

2010-07-28 Thread sol
Hi

Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror.  Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks?  However the
zpool status shows checksum errors not I/O errors and I'm not sure what
that means in this case.

I thought that a zfs mirror would be the ultimate in protection but it's not!
Any ideas why and how to protect against this in the future?

(BTW it's osol official release 2009.06 snv_111b)

# zpool status -v
  pool: liver
 state: ONLINE
status: One or more devices has experienced an error resulting in 
data corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the entire 
pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 3h31m with 1 errors 
config:

NAME  STATE  READ WRITE CKSUM
liver ONLINE  0 0 1
 mirror   ONLINE  0 0 2
  c9d0p0  ONLINE  0 0 2
  c10d0p0 ONLINE  0 0 2

errors: Permanent errors have been detected in the following files:



  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How can a mirror lose a file?

2010-07-28 Thread Richard Elling
On Jul 28, 2010, at 12:41 PM, sol wrote:
 Having just done a scrub of a mirror I've lost a file and I'm curious how this
 can happen in a mirror.  Doesn't it require the almost impossible scenario
 of exactly the same sector being trashed on both disks?  However the
 zpool status shows checksum errors not I/O errors and I'm not sure what
 that means in this case.

It means that the data read back from the disk is not what ZFS thought
it wrote.

 I thought that a zfs mirror would be the ultimate in protection but it's 
 not!

Are you saying you would rather have the data silently corrupted?

 Any ideas why and how to protect against this in the future?

This can happen if there is a failure in a common system component
during the write (eg. main memory, HBA, PCI bus, CPU, bridges, etc.)

 (BTW it's osol official release 2009.06 snv_111b)

On more modern releases, the details of the corruption are shown in the 
FMA dump.  However, this feature does not exist in OpenSolaris 2009.06.
 -- richard

 
 # zpool status -v
  pool: liver
 state: ONLINE
 status: One or more devices has experienced an error resulting in 
 data corruption.  Applications may be affected.
 action: Restore the file in question if possible.  Otherwise restore the 
 entire 
 pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 3h31m with 1 errors 
 config:
 
 NAME  STATE  READ WRITE CKSUM
 liver ONLINE  0 0 1
 mirror   ONLINE  0 0 2
  c9d0p0  ONLINE  0 0 2
  c10d0p0 ONLINE  0 0 2
 
 errors: Permanent errors have been detected in the following files:

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How can a mirror lose a file?

2010-07-28 Thread Ian Collins

On 07/29/10 07:41 AM, sol wrote:

Hi

Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror.  Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks?  However the
zpool status shows checksum errors not I/O errors and I'm not sure what
that means in this case.

I thought that a zfs mirror would be the ultimate in protection but it's not!
Any ideas why and how to protect against this in the future?

   

Bad memory? Use ECC memory and test it.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Tips for ZFS tuning for NFS store of VM images

2010-07-28 Thread Richard Elling
On Jul 28, 2010, at 12:11 PM, sol wrote:

 Richard Elling wrote:
 Gregory Gee wrote:
 I am using OpenSolaris to host VM images over NFS for XenServer.  I'm 
 looking 
 for tips on what parameters can be set to help optimize my ZFS pool that 
 holds 
 my VM images.
 There is nothing special about tuning for VMs, the normal NFS tuning applies.
 
 
 That's not been my experience. Out of the box VMware server would not work 
 with 
 the VMs stored on a zfs pool via NFS.

I do this regularly and know many people who run this way.

 I've not yet found out why but the 
 analytics showed millions of getattr/access/lookup compared to read/write.

These are requests from the client being serviced by the server.
To find out why the client is sending such requests, you'll need to 
look at the client.

 A partial workaround was to turn off access time on the share and to mount 
 with 
 noatime,actimeo=60

Yes, these are common NFS tunables.
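
For example (dataset, server, and mount point names are placeholders), on the
ZFS server:

# zfs set atime=off tank/vmstore

and on the Linux client:

# mount -o noatime,actimeo=60 nfsserver:/tank/vmstore /vmstore
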
 -- richard

 But that's not perfect because when left alone the VM got into a stuck 
 state. 
 I've never seen that state before when the VM was hosted on a local disk. 
 Hosting VMs on NFS is not working well so far...




-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How can a mirror lose a file?

2010-07-28 Thread Cindy Swearingen

Hi Sol,

What kind of disks?

You should be able to use the fmdump -eV command to identify when the
checksum errors occurred.
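
For example, to see just the ZFS checksum ereports (assuming they are reported
under the usual ereport.fs.zfs.checksum class):

# fmdump -eV -c ereport.fs.zfs.checksum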

Thanks,

Cindy



On 07/28/10 13:41, sol wrote:

Hi

Having just done a scrub of a mirror I've lost a file and I'm curious how this
can happen in a mirror.  Doesn't it require the almost impossible scenario
of exactly the same sector being trashed on both disks?  However the
zpool status shows checksum errors not I/O errors and I'm not sure what
that means in this case.

I thought that a zfs mirror would be the ultimate in protection but it's not!
Any ideas why and how to protect against this in the future?

(BTW it's osol official release 2009.06 snv_111b)

# zpool status -v
  pool: liver
 state: ONLINE
status: One or more devices has experienced an error resulting in 
data corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the entire 
pool from backup.

   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 3h31m with 1 errors 
config:


NAME  STATE  READ WRITE CKSUM
liver ONLINE  0 0 1
 mirror   ONLINE  0 0 2
  c9d0p0  ONLINE  0 0 2
  c10d0p0 ONLINE  0 0 2

errors: Permanent errors have been detected in the following files:



  
___

zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirrored raidz

2010-07-28 Thread Edward Ned Harvey
 From: Darren J Moffat [mailto:darr...@opensolaris.org]
 
 It basically says that 'zfs send' gets a new '-b' option so send back
 properties, and 'zfs recv' gets a '-o' and '-x' option to allow
 explicit set/ignore of properties in the stream.  It also adds a '-r'
 option for 'zfs set'.
 
 If/when the approved changes integrate it will look like:
 
 Based on the source code change history for onnv-gate it doesn't appear
 to have integrated yet.

Ahh.  So, for now I'm sticking with 'zpool get all' and 'zfs get all' stored
in a text file, unless somebody has a better idea...
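
Something along these lines, for example (pool name is a placeholder):

# zpool get all tank > /var/tmp/tank.zpool.properties
# zfs get -r all tank > /var/tmp/tank.zfs.properties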


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How can a mirror lose a file?

2010-07-28 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Richard Elling
 
 This can happen if there is a failure in a common system component
 during the write (eg. main memory, HBA, PCI bus, CPU, bridges, etc.)

I bet that's the cause.  Because as sol said ... "Doesn't it require the
almost impossible scenario of exactly the same sector being trashed on both
disks?"

Basically, yeah.  And it's time to start thinking up ways "almost
impossible" isn't quite as impossible as you thought it was.  Regular
scrubs, snapshots, and backups are your friends.
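
For example, a weekly scrub kicked off from root's crontab (pool name is a
placeholder):

0 3 * * 0 /usr/sbin/zpool scrub tank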

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

2010-07-28 Thread Robert Milkowski


fyi

--
Robert Milkowski
http://milek.blogspot.com


 Original Message 
Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
Date:   Mon, 26 Jul 2010 08:38:22 -0600
From:   Tim Haley tim.ha...@oracle.com
To: psarc-...@sun.com
CC: zfs-t...@sun.com



I am sponsoring the following case for George Wilson.  Requested binding
is micro/patch.  Since this is a straight-forward addition of a command
line option, I think it qualifies for self review.  If an ARC member
disagrees, let me know and I'll convert to a fast-track.

Template Version: @(#)sac_nextcase 1.70 03/30/10 SMI
This information is Copyright (c) 2010, Oracle and/or its affiliates.
All rights reserved.
1. Introduction
1.1. Project/Component Working Name:
 zpool import despite missing log
1.2. Name of Document Author/Supplier:
 Author:  George Wilson
1.3  Date of This Document:
26 July, 2010

4. Technical Description

OVERVIEW:

 ZFS maintains a GUID (global unique identifier) on each device and
 the sum of all GUIDs of a pool is stored in the ZFS uberblock.
 This sum is used to determine the availability of all vdevs
 within a pool when a pool is imported or opened.  Pools which
 contain a separate intent log device (e.g. a slog) will fail to
 import when that device is removed or is otherwise unavailable.
 This proposal aims to address this particular issue.

PROPOSED SOLUTION:

 This fast-track introduces a new command line flag to the
 'zpool import' sub-command.  This new option, '-m', allows
 pools to import even when a log device is missing.  The contents
 of that log device are obviously discarded and the pool will
 operate as if the log device were offlined.

MANPAGE DIFFS:

   zpool import [-o mntopts] [-p property=value] ... [-d dir | -c
cachefile]
-  [-D] [-f] [-R root] [-n] [-F] -a
+  [-D] [-f] [-m] [-R root] [-n] [-F] -a


   zpool import [-o mntopts] [-o property=value] ... [-d dir | -c
cachefile]
-  [-D] [-f] [-R root] [-n] [-F] pool |id [newpool]
+  [-D] [-f] [-m] [-R root] [-n] [-F] pool |id [newpool]

   zpool import [-o mntopts] [ -o property=value] ... [-d dir |
- -c cachefile] [-D] [-f] [-n] [-F] [-R root] -a
+ -c cachefile] [-D] [-f] [-m] [-n] [-F] [-R root] -a

   Imports all  pools  found  in  the  search  directories.
   Identical to the previous command, except that all pools

+ -m
+
+Allows a pool to import when there is a missing log device

EXAMPLES:

1). Configuration with a single intent log device:

# zpool status tank
   pool: tank
state: ONLINE
 scan: none requested
 config:

 NAMESTATE READ WRITE CKSUM
 tankONLINE   0 0 0
   c7t0d0ONLINE   0 0 0
 logs
   c5t0d0ONLINE   0 0 0

errors: No known data errors

# zpool import tank
The devices below are missing, use '-m' to import the pool anyway:
 c5t0d0 [log]

cannot import 'tank': one or more devices is currently unavailable

# zpool import -m tank
# zpool status tank
   pool: tank
  state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas
exist for
 the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
   scan: none requested
config:

 NAME   STATE READ WRITE CKSUM
 tank   DEGRADED 0 0 0
   c7t0d0   ONLINE   0 0 0
 logs
   1693927398582730352  UNAVAIL  0 0 0  was
/dev/dsk/c5t0d0

errors: No known data errors

2). Configuration with mirrored intent log device:

# zpool add tank log mirror c5t0d0 c5t1d0
zr...@diskmonster:/dev/dsk# zpool status tank
   pool: tank
  state: ONLINE
   scan: none requested
config:

 NAMESTATE READ WRITE CKSUM
 tankONLINE   0 0 0
   c7t0d0ONLINE   0 0 0
 logs
   mirror-1  ONLINE   0 0 0
 c5t0d0  ONLINE   0 0 0
 c5t1d0  ONLINE   0 0 0

errors: No known data errors

# zpool import 429789444028972405
The devices below are missing, use '-m' to import the pool anyway:
 mirror-1 [log]
   c5t0d0
   c5t1d0

# zpool import -m tank
# zpool status tank
   pool: tank
  state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas
exist for
 the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
   scan: none requested
config:

 NAME

[zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Karol
I appear to be getting between 2-9 MB/s reads from individual disks in my zpool, 
as shown in iostat -v. 
I expect upwards of 100 MB/s per disk, or at least aggregate performance on par 
with the number of disks that I have.

My configuration is as follows:
Two Quad-core 5520 processors
48GB ECC/REG ram
2x LSI 9200-8e SAS HBAs (2008 chipset)
Supermicro 846e2 enclosure with LSI sasx36 expander backplane
20 Seagate Constellation 2 TB SAS hard drives
2x 8GB Qlogic dual-port FC adapters in target mode
4x Intel X25-E 32GB SSDs available (attached via LSI sata-sas interposer)
mpt_sas driver
multipath enabled, all four LSI ports connected for 4 paths available:
f_sym, load-balance logical-block region size 11 on seagate drives
f_asym_sun, load-balance none, on intel ssd drives

currently not using the SSDs in the pools since it seems I have a deeper issue 
here.
Pool configuration is four 2-drive mirror vdevs in one pool, and the same in 
another pool. 2 drives are for OS and 2 drives aren't being used at the moment.

Where should I go from here to figure out what's wrong?
Thank you in advance - I've spent days reading and testing but I'm not getting 
anywhere. 

 P.S: I need the aid of some Genius here.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Richard Jahnel
How many iops per spindle are you getting?

A rule of thumb I use is to expect no more than 125 iops per spindle for 
regular HDDs.

SSDs are a different story of course. :)
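
For a quick check, something like the following (5-second interval, chosen
arbitrarily):

# iostat -xn 5

The r/s and w/s columns are per-device reads and writes per second; kr/s and
kw/s show the corresponding throughput.
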
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Karol
Hi r2ch

The operations column shows about 370 operations for read, per spindle
(between 400-900 for writes).
How should I be measuring IOPS?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

2010-07-28 Thread James Dickens
+1


On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski mi...@task.gda.pl wrote:


 fyi

 --
 Robert Milkowski
 http://milek.blogspot.com


  Original Message 
 Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
 Date:    Mon, 26 Jul 2010 08:38:22 -0600
 From:    Tim Haley tim.ha...@oracle.com
 To:      psarc-...@sun.com
 CC:      zfs-t...@sun.com

 I am sponsoring the following case for George Wilson.  Requested binding
 is micro/patch.  Since this is a straight-forward addition of a command
  line option, I think it qualifies for self review.  If an ARC member
 disagrees, let me know and I'll convert to a fast-track.

 Template Version: @(#)sac_nextcase 1.70 03/30/10 SMI
 This information is Copyright (c) 2010, Oracle and/or its affiliates.
 All rights reserved.
 1. Introduction
 1.1. Project/Component Working Name:
  zpool import despite missing log
 1.2. Name of Document Author/Supplier:
  Author:  George Wilson
 1.3  Date of This Document:
 26 July, 2010

 4. Technical Description

 OVERVIEW:

  ZFS maintains a GUID (global unique identifier) on each device and
  the sum of all GUIDs of a pool is stored in the ZFS uberblock.
  This sum is used to determine the availability of all vdevs
  within a pool when a pool is imported or opened.  Pools which
  contain a separate intent log device (e.g. a slog) will fail to
  import when that device is removed or is otherwise unavailable.
  This proposal aims to address this particular issue.

 PROPOSED SOLUTION:

  This fast-track introduces a new command line flag to the
  'zpool import' sub-command.  This new option, '-m', allows
  pools to import even when a log device is missing.  The contents
  of that log device are obviously discarded and the pool will
  operate as if the log device were offlined.

 MANPAGE DIFFS:

zpool import [-o mntopts] [-p property=value] ... [-d dir | -c
 cachefile]
 -  [-D] [-f] [-R root] [-n] [-F] -a
 +  [-D] [-f] [-m] [-R root] [-n] [-F] -a


zpool import [-o mntopts] [-o property=value] ... [-d dir | -c
 cachefile]
 -  [-D] [-f] [-R root] [-n] [-F] pool |id [newpool]
 +  [-D] [-f] [-m] [-R root] [-n] [-F] pool |id [newpool]

zpool import [-o mntopts] [ -o property=value] ... [-d dir |
 - -c cachefile] [-D] [-f] [-n] [-F] [-R root] -a
 + -c cachefile] [-D] [-f] [-m] [-n] [-F] [-R root] -a

Imports all  pools  found  in  the  search  directories.
Identical to the previous command, except that all pools

 + -m
 +
 +Allows a pool to import when there is a missing log device

 EXAMPLES:

 1). Configuration with a single intent log device:

 # zpool status tank
pool: tank
 state: ONLINE
  scan: none requested
  config:

  NAMESTATE READ WRITE CKSUM
  tankONLINE   0 0 0
c7t0d0ONLINE   0 0 0
  logs
c5t0d0ONLINE   0 0 0

 errors: No known data errors

 # zpool import tank
 The devices below are missing, use '-m' to import the pool anyway:
  c5t0d0 [log]

 cannot import 'tank': one or more devices is currently unavailable

 # zpool import -m tank
 # zpool status tank
pool: tank
   state: DEGRADED
 status: One or more devices could not be opened.  Sufficient replicas
 exist for
  the pool to continue functioning in a degraded state.
 action: Attach the missing device and online it using 'zpool online'.
 see: http://www.sun.com/msg/ZFS-8000-2Q
scan: none requested
 config:

  NAME   STATE READ WRITE CKSUM
  tank   DEGRADED 0 0 0
c7t0d0   ONLINE   0 0 0
  logs
1693927398582730352  UNAVAIL  0 0 0  was
 /dev/dsk/c5t0d0

 errors: No known data errors

 2). Configuration with mirrored intent log device:

 # zpool add tank log mirror c5t0d0 c5t1d0
 zr...@diskmonster:/dev/dsk# zpool status tank
pool: tank
   state: ONLINE
scan: none requested
 config:

  NAMESTATE READ WRITE CKSUM
  tankONLINE   0 0 0
c7t0d0ONLINE   0 0 0
  logs
mirror-1  ONLINE   0 0 0
  c5t0d0  ONLINE   0 0 0
  c5t1d0  ONLINE   0 0 0

 errors: No known data errors

 # zpool import 429789444028972405
 The devices below are missing, use '-m' to import the pool anyway:
  mirror-1 [log]
c5t0d0
c5t1d0

 # zpool import -m tank
 # zpool status tank
pool: tank
   state: DEGRADED
 status: One or more devices could not be opened.  Sufficient replicas
 exist for
  the pool to continue