Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread grant lowe
Ok. I blew it. I didn't add enough information. Here's some more detail:

The disk array is a RAMSAN array with RAID6 and 8K stripes. I'm measuring
performance with the bonnie++ output and comparing it with the zpool iostat
output. It's in the zpool iostat output that I'm not seeing a lot of writes.

Like I said, I'm new to this and if I need to provide anything else I will.
Thanks, all.
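For what it's worth, the kind of run I have in mind looks roughly like this
(pool name and directory are placeholders, and the 2x-RAM sizing is just the
usual rule of thumb so the ARC can't absorb the whole working set):

# bonnie++ -d /pool1/bonnie -s 256g -u nobody    (file set of roughly 2x the 128GB of RAM)
# zpool iostat -v pool1 5                        (in a second window, per-vdev throughput every 5 seconds)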


On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:

 Hi all,

 I've got a Solaris 10 9/10 system running on a T3. It's an Oracle box with
 128GB of memory. I've been trying to load test the box with bonnie++. I can
 get 80 to 90K reads, but can't seem to get more than a couple K for writes.
 Any suggestions? Or should I take this to a bonnie++ mailing list? Any help
 is appreciated. I'm kinda new to load testing. Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Stress test zfs

2012-01-04 Thread grant lowe
Hi all,

I've got a Solaris 10 9/10 system running on a T3. It's an Oracle box with
128GB of memory. I've been trying to load test the box with bonnie++. I can
get 80 to 90K reads, but can't seem to get more than a couple K for writes.
Any suggestions? Or should I take this to a bonnie++ mailing list? Any help
is appreciated. I'm kinda new to load testing. Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool I/O error

2010-03-19 Thread Grant Lowe
Hi all,

I'm trying to delete a zpool and when I do, I get this error:

# zpool destroy oradata_fs1
cannot open 'oradata_fs1': I/O error
# 

The pools I have on this box look like this:

# zpool list
NAME          SIZE   USED  AVAIL    CAP  HEALTH    ALTROOT
oradata_fs1   532G   119K   532G     0%  DEGRADED  -
rpool         136G  28.6G   107G    21%  ONLINE    -
#

Why can't I delete this pool? This is on Solaris 10 5/09 s10s_u7.
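Since the pool shows DEGRADED, my guess is the I/O error comes from an
unavailable device rather than from the destroy itself. A rough sketch of what
I plan to check next (not verified yet):

# zpool status -v oradata_fs1    (see which vdev is faulted or unavailable)
# fmdump -eV | tail              (any FMA error reports for the underlying device?)
# zpool destroy -f oradata_fs1   (force the destroy once the device state is understood)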

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Grant Lowe
Please help me out here. I've got a V240 with the root drive, c2t0d0, mirrored 
to c2t1d0. The mirror is having problems, and I'm unsure of the exact procedure 
to pull the mirrored drive. From various googling I've seen:

zpool replace rpool c2t1d0 c2t1d0

or I've seen simply:

zpool replace rpool c2t1d0

or I've seen the much more complex:

zpool offline rpool c2t1d0
cfgadm -c unconfigure c1::dsk/c2t1d0
(replace the drive)
cfgadm -c configure c1::dsk/c2t1d0
zpool replace rpool c2t1d0s0
zpool online rpool c2t1d0s0

So which is it? Also, do I need to include the slice as in the last example?

Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Grant Lowe
Well, this system is Solaris 10 5/09, with patches from November. No snapshots 
are running and there are no internal controllers. It's a file server attached 
to an HDS disk array. Please help and respond ASAP as this is production! Even 
an IM would be helpful.

--- On Wed, 3/10/10, Cindy Swearingen cindy.swearin...@sun.com wrote:

 From: Cindy Swearingen cindy.swearin...@sun.com
 Subject: Re: [zfs-discuss] Replacing a failed/failed mirrored root disk
 To: Grant Lowe gl...@sbcglobal.net
 Cc: zfs-discuss@opensolaris.org
 Date: Wednesday, March 10, 2010, 1:09 PM
 Hi Grant,
 
 I don't have a v240 to test but I think you might need to unconfigure
 the disk first on this system.

 So I would follow the more complex steps.

 If this is a root pool, then yes, you would need to use the slice
 identifier, and make sure it has an SMI disk label.

 After the zpool replace operation and the disk resilvering is complete,
 apply the boot blocks.
 
 The steps would look like this:
 
 # zpool offline rpool c2t1d0
 # cfgadm -c unconfigure c1::dsk/c2t1d0
 (physically replace the drive)
 (confirm an SMI label and a s0 exists)
 # cfgadm -c configure c1::dsk/c2t1d0
 # zpool replace rpool c2t1d0s0
 # zpool online rpool c2t1d0s0
 # zpool status rpool    (to confirm the replacement/resilver is complete)
 # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0
 
 Thanks,
 
 Cindy
 
 
 On 03/10/10 13:28, Grant Lowe wrote:
  Please help me out here. I've got a V240 with the root
 drive, c2t0d0 mirrored to c2t1d0. The mirror is having
 problems, and I'm unsure of the exact procedure to pull the
 mirrored drive. I see in various googling:
  
  zpool replace rpool c2t1d0 c2t1d0
  
  or I've seen simply:
  
  zpool replace rpool c2t1d0
  
  or I've seen the much more complex:
  
  zpool offline rpool c2t1d0
  cfgadm -c unconfigure c1::dsk/c2t1d0
  (replace the drive)
  cfgadm -c configure c1::dsk/c2t1d0
  zpool replace rpool c2t1d0s0
  zpool online rpool c2t1d0s0
  
  So which is it? Also, do I need to include the slice
 as in the last example?
  
  Thanks.
  
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-28 Thread Grant Lowe
Hi Enda,

This is what I get when I do the boot -L:

{1} ok boot -L

Sun Fire V240, No Keyboard
Copyright 1998-2003 Sun Microsystems, Inc.  All rights reserved.
OpenBoot 4.13.2, 4096 MB memory installed, Serial #61311259.
Ethernet address 0:3:ba:a7:89:1b, Host ID: 83a7891b.



Rebooting with command: boot -L
Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -L
1 s10s_u7wos_08
Select environment to boot: [ 1 - 1 ]: 1

To boot the selected entry, invoke:
boot [root-device] -Z rpool/ROOT/s10s_u7wos_08

Program terminated
{1} ok







- Original Message 
From: Enda O'Connor enda.ocon...@sun.com
To: cindy.swearin...@sun.com
Cc: Grant Lowe gl...@sbcglobal.net; zfs-discuss@opensolaris.org
Sent: Friday, August 28, 2009 8:18:55 AM
Subject: Re: [zfs-discuss] Boot error

Hi
What does boot -L show you?

Enda

On 08/28/09 15:59, cindy.swearin...@sun.com wrote:
 Hi Grant,
 
 I've had no more luck researching this, mostly because the error message can 
 mean different things in different scenarios.
 
 I did try to reproduce it and I can't.
 
 I noticed you are booting using boot -s, which I think means the system will 
 boot from the default boot disk, not the newly added disk.
 
 Can you boot from the secondary boot disk directly by using the boot
 path? On my 280r system, I would boot from the secondary disk like this:
 
 ok boot /p...@8,60/SUNW,q...@4/f...@0,0/d...@0,0
 
 Cindy
 
 
 On 08/27/09 23:54, Grant Lowe wrote:
 Hi Cindy,
 
 I tried booting from DVD but nothing showed up.  Thanks for the ideas, 
 though.  Maybe your other sources might have something?
 
 
 
 - Original Message 
 From: Cindy Swearingen cindy.swearin...@sun.com
 To: Grant Lowe gl...@sbcglobal.net
 Cc: zfs-discuss@opensolaris.org
 Sent: Thursday, August 27, 2009 6:24:00 PM
 Subject: Re: [zfs-discuss] Boot error
 
 Hi Grant,
 
 I don't have all my usual resources at the moment, but I would boot from 
 alternate media and use the format utility to check the partitioning on the
 newly added disk, and look for something like overlapping partitions. Or, 
 possibly, a mismatch between
 the actual root slice and the one you are trying to boot from.
 
 Cindy
 
 - Original Message -
 From: Grant Lowe gl...@sbcglobal.net
 Date: Thursday, August 27, 2009 5:06 pm
 Subject: [zfs-discuss] Boot error
 To: zfs-discuss@opensolaris.org
 
 
 I've got a 240z with Solaris 10 Update 7, all the latest patches from 
 Sunsolve.  I've installed a boot drive with ZFS.  I mirrored the drive with 
 zpool.  I installed the boot block.  The system had been working just fine. 
  But for some reason, when I try to boot, I get the error:
 
 {1} ok boot -s
 Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
 SunOS Release 5.10 Version Generic_141414-08 64-bit
 Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 Division by Zero
 {1} ok
 
 Any ideas?
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-28 Thread Grant Lowe
Well, what I ended up doing was reinstalling Solaris.  Fortunately this is a 
test box for now.  I've since repeatedly pulled both the root drive and the 
mirrored drive, and the system behaved normally.  The trick that worked for me 
was to reinstall but select both drives for ZFS.  Originally I had selected 
only one drive for ZFS.
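For the archives, the non-reinstall route I was originally after would, I
believe, look roughly like this: attach the second drive to the existing root
pool and install the boot block by hand (slice names are from my box and this
exact sequence is untested here):

# zpool attach rpool c2t0d0s0 c2t1d0s0
# zpool status rpool    (wait for the resilver to finish)
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0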



- Original Message 
From: Grant Lowe gl...@sbcglobal.net
To: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 4:05:15 PM
Subject: [zfs-discuss] Boot error

I've got a 240z with Solaris 10 Update 7, all the latest patches from Sunsolve. 
 I've installed a boot drive with ZFS.  I mirrored the drive with zpool.  I 
installed the boot block.  The system had been working just fine.  But for some 
reason, when I try to boot, I get the error: 

{1} ok boot -s
Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
SunOS Release 5.10 Version Generic_141414-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Division by Zero
{1} ok

Any ideas?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Boot error

2009-08-27 Thread Grant Lowe
I've got a 240z with Solaris 10 Update 7, all the latest patches from Sunsolve. 
 I've installed a boot drive with ZFS.  I mirrored the drive with zpool.  I 
installed the boot block.  The system had been working just fine.  But for some 
reason, when I try to boot, I get the error: 

{1} ok boot -s
Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
SunOS Release 5.10 Version Generic_141414-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Division by Zero
{1} ok

Any ideas?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-27 Thread Grant Lowe
Hi Cindy,

I tried booting from DVD but nothing showed up.  Thanks for the ideas, though.  
Maybe your other sources might have something?



- Original Message 
From: Cindy Swearingen cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 6:24:00 PM
Subject: Re: [zfs-discuss] Boot error

Hi Grant,

I don't have all my usual resources at the moment, but I would 
boot from alternate media and use the format utility to check 
the partitioning on the newly added disk, and look for something 
like overlapping partitions. Or, possibly, a mismatch between
the actual root slice and the one you are trying to boot from.

Cindy

- Original Message -
From: Grant Lowe gl...@sbcglobal.net
Date: Thursday, August 27, 2009 5:06 pm
Subject: [zfs-discuss] Boot error
To: zfs-discuss@opensolaris.org

 I've got a 240z with Solaris 10 Update 7, all the latest patches from 
 Sunsolve.  I've installed a boot drive with ZFS.  I mirrored the drive 
 with zpool.  I installed the boot block.  The system had been working 
 just fine.  But for some reason, when I try to boot, I get the error: 
 
 
 {1} ok boot -s
 Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
 SunOS Release 5.10 Version Generic_141414-08 64-bit
 Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
 Use is subject to license terms.
 Division by Zero
 {1} ok
 
 Any ideas?
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS delegation

2009-04-21 Thread Grant Lowe

Hi all,

Is there a simple way to grant blanket permissions on zpools?  I know about the 
individual commands, but I want to give our DBAs the permissions to snapshot, 
clone, promote, rollback, rename, mount, etc. anything within their zpools.  
I'm kind of new to delegations.  Thanks.
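For example, something along these lines is what I'm picturing, with a
hypothetical 'dba' group and pool name, and I'm not sure the permission list
is complete:

# zfs allow -g dba snapshot,clone,promote,rollback,rename,mount,create,destroy oradata
# zfs allow oradata    (display the delegated permissions to check)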
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Destroying a zfs dataset

2009-04-17 Thread Grant Lowe

I was wondering if there is a solution for this.  I've been able to replicate a 
similar problem on a different server.  Basically I'm still unable to use zfs 
destroy on a filesystem that was a parent filesystem and is now a child 
filesystem after a promotion.

bash-3.00# zpool history
History for 'testpool':
2009-04-14.11:30:00 zpool create testpool mirror 
c14t60060160910B1600E492DCF1071EDE11d0 c14t60060160910B1600585DC330081EDE11d0
2009-04-14.11:30:58 zfs create testpool/testfs
2009-04-14.11:31:16 zfs create testpool/testfs2
2009-04-14.11:31:47 zfs set mountpoint=/devel/testfs testpool/testfs
2009-04-14.11:32:10 zfs create testpool/testfs/dir1
2009-04-14.11:32:12 zfs create testpool/testfs/dir2
2009-04-14.11:32:13 zfs create testpool/testfs/dir3
2009-04-14.12:25:37 zfs snapshot testpool/test...@snap1
2009-04-14.12:29:10 zfs rollback testpool/test...@snap1
2009-04-14.12:30:10 zfs destroy testpool/test...@snap1
2009-04-16.12:42:03 zfs snapshot testpool/test...@snap1
2009-04-16.12:45:30 zfs clone testpool/test...@snap1 testpool/testfs2/clone1
2009-04-16.12:46:57 zfs promote testpool/testfs2/clone1

bash-3.00# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
testpool                      1.03M  9.78G    20K  /testpool
testpool/testfs                 76K  9.78G    22K  /devel/testfs
testpool/testfs/dir1            18K  9.78G    18K  /devel/testfs/dir1
testpool/testfs/dir2            18K  9.78G    18K  /devel/testfs/dir2
testpool/testfs/dir3            18K  9.78G    18K  /devel/testfs/dir3
testpool/testfs2               790K  9.78G   758K  /testpool/testfs2
testpool/testfs2/clone1        772K  9.78G   756K  /testpool/testfs2/clone1
testpool/testfs2/clo...@snap1   16K      -   756K  -
bash-3.00# zfs destroy testpool/testfs2
cannot destroy 'testpool/testfs2': filesystem has children
use '-r' to destroy the following datasets:
testpool/testfs2/clo...@snap1
testpool/testfs2/clone1
bash-3.00# zfs destroy -r testpool/test...@snap1
cannot destroy 'testpool/testfs2/clo...@snap1': snapshot is cloned
no snapshots destroyed
bash-3.00# zfs destroy -r testpool/testfs2/clone1
cannot destroy 'testpool/testfs2/clone1': filesystem has dependent clones
use '-R' to destroy the following datasets:
testpool/testfs2
bash-3.00# zfs destroy -R testpool/testfs2/clone1
cannot determine dependent datasets: recursive dependency at 
'testpool/testfs2/clone1'
bash-3.00# 
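The only way out I can think of, and this is a guess I haven't tried on this
box, is to undo the promotion first so the recursive dependency goes away,
roughly:

bash-3.00# zfs promote testpool/testfs2           (snap1 migrates back to testfs2)
bash-3.00# zfs destroy testpool/testfs2/clone1    (clone1 is now an ordinary clone again)
bash-3.00# zfs destroy -r testpool/testfs2        (then the snapshot and testfs2 itself)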


- Original Message 
From: Grant Lowe gl...@sbcglobal.net
To: zfs-discuss@opensolaris.org
Sent: Wednesday, December 17, 2008 2:48:06 PM
Subject: Destroying a zfs dataset

I'm having a very difficult time destroying a clone.  Here's the skinny:

bash-3.00# zfs get origin | grep d01
r12_data/d01                           origin  r12_data/d01/.clone.12052...@12042008  -
r12_data/d01-receive                   origin  -                                      -
r12_data/d01-rece...@a                 origin  -                                      -
r12_data/d01/.clone.12052008           origin  -                                      -
r12_data/d01/.clone.12052...@12042008  origin  -                                      -
bash-3.00# bash-3.00# zfs destroy r12_data/d01/.clone.12052...@12042008
cannot destroy 'r12_data/d01/.clone.12052...@12042008': snapshot has dependent 
clones
use '-R' to destroy the following datasets:
r12_data/d01/.clone.12052008
r12_data/d01
bash-3.00# zfs destroy -R r12_data/d01/.clone.12052...@12042008
cannot determine dependent datasets: recursive dependency at 
'r12_data/d01/.clone.12052...@12042008'
bash-3.00# zfs destroy -r r12_data/d01/.clone.12052...@12042008
cannot destroy 'r12_data/d01/.clone.12052...@12042008': snapshot is cloned
no snapshots destroyed
bash-3.00# 

I must be missing a piece of the puzzle. What is that piece?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Panic

2009-04-09 Thread Grant Lowe

Hi Remco.

Yes, I realize that was asking for trouble.  It wasn't supposed to be a test of 
yanking a LUN.  We needed a LUN for a VxVM/VxFS system and that LUN was 
available.  I was just surprised at the panic, since the system was quiesced at 
the time.  But there is coming a time when we will be doing this.  Thanks for 
the feedback.  I appreciate it.




- Original Message 
From: Remco Lengers re...@lengers.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, April 9, 2009 5:31:42 AM
Subject: Re: [zfs-discuss] ZFS Panic

Grant,

Didn't see a response so I'll give it a go.

Ripping a disk away and silently inserting a new one is asking for trouble, 
imho. I am not sure what you were trying to accomplish, but generally replacing 
a drive/LUN would entail commands like:

# zpool offline tank c1t3d0
# cfgadm | grep c1t3d0
sata1/3::dsk/c1t3d0       disk  connected  configured    ok
# cfgadm -c unconfigure sata1/3
Unconfigure the device at: /devices/p...@0,0/pci1022,7...@2/pci11ab,1...@1:3
This operation will suspend activity on the SATA device
Continue (yes/no)? yes
# cfgadm | grep sata1/3
sata1/3                   disk  connected  unconfigured  ok
Replace the physical disk c1t3d0
# cfgadm -c configure sata1/3
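From memory, and hedged accordingly, the page below finishes the sequence by
putting the new disk back into the pool, roughly:

# zpool replace tank c1t3d0
# zpool online tank c1t3d0
# zpool status tank    (watch the resilver complete)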

Taken from this page:

http://docs.sun.com/app/docs/doc/819-5461/gbbzy?a=view

..Remco

Grant Lowe wrote:
 Hi All,
 
 Don't know if this is worth reporting, as it's human error.  Anyway, I had a 
 panic on my zfs box.  Here's the error:
 
 marksburg /usr2/glowe grep panic /var/log/syslog
 Apr  8 06:57:17 marksburg savecore: [ID 570001 auth.error] reboot after 
 panic: assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, 
 FALSE, FTAG, numbufs, dbp), file: ../../common/fs/zfs/dmu.c, line: 580
 Apr  8 07:15:10 marksburg savecore: [ID 570001 auth.error] reboot after 
 panic: assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, 
 FALSE, FTAG, numbufs, dbp), file: ../../common/fs/zfs/dmu.c, line: 580
 marksburg /usr2/glowe
 
 What we did to cause this is we pulled a LUN from zfs, and replaced it with a 
 new LUN.  We then tried to shutdown the box, but it wouldn't go down.  We had 
 to send a break to the box and reboot.  This is an oracle sandbox, so we're 
 not really concerned.  Ideas?
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Panic

2009-04-08 Thread Grant Lowe

Hi All,

Don't know if this is worth reporting, as it's human error.  Anyway, I had a 
panic on my zfs box.  Here's the error:

marksburg /usr2/glowe grep panic /var/log/syslog
Apr  8 06:57:17 marksburg savecore: [ID 570001 auth.error] reboot after panic: 
assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, FALSE, 
FTAG, numbufs, dbp), file: ../../common/fs/zfs/dmu.c, line: 580
Apr  8 07:15:10 marksburg savecore: [ID 570001 auth.error] reboot after panic: 
assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, FALSE, 
FTAG, numbufs, dbp), file: ../../common/fs/zfs/dmu.c, line: 580
marksburg /usr2/glowe

What we did to cause this is we pulled a LUN from zfs, and replaced it with a 
new LUN.  We then tried to shutdown the box, but it wouldn't go down.  We had 
to send a break to the box and reboot.  This is an oracle sandbox, so we're not 
really concerned.  Ideas?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk usage

2009-03-17 Thread Grant Lowe

Hi Mike,

Yes, d25 is a clone of d24. Here are some data points about it:

bash-3.00# zfs get reservation r12_data/d25
NAME  PROPERTY VALUE SOURCE
r12_data/d25  reservation  none  default
bash-3.00# zfs get quota r12_data/d25
NAME  PROPERTY  VALUE SOURCE
r12_data/d25  quota none  default
bash-3.00#
bash-3.00# zfs list r12_data
NAME   USED  AVAIL  REFER  MOUNTPOINT
r12_data   596G  62.7G  24.5K  none
bash-3.00#
bash-3.00# zfs list -t snapshot r12_data/d...@a
NAME USED  AVAIL  REFER  MOUNTPOINT
r12_data/d...@a   904K  -  39.9G  -
bash-3.00#

Thanks for the response.  Did you need any more data points from me?



- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 12:40:53 AM
Subject: Re: [zfs-discuss] Disk usage

Grant Lowe wrote:
 Hey all,

 I have a question/puzzle with zfs.  See the following:

 bash-3.00# df -h | grep d25 ; zfs list | grep d25

 FILESYSTEM      SIZE   USED  AVAIL  CAPACITY  MOUNTED ON
 r12_data/d25  *659G*   40G  *63G*       39%   /opt/d25/oakwc12
 df -h says the d25 file system is 659GB?; 40GB used and 63GB available?
 r12_data/d24     42G   40G   2.1G       95%   /opt/d24/oakwcr12

 NAME           USED    AVAIL  REFER  MOUNTPOINT
 r12_data/d25   760K  *62.7G*  39.9G  /opt/d25/oakwc12
 zfs list says the d25 file system has 63GB available?
 r12_data/d24  39.9G    2.14G  39.9G  /opt/d24/oakwcr12


 Shouldn't the new filesystem (d25) size be what the clone was allocated?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
Hi Grant,

We'd need more info than that to figure what's actually going on.

Is d25 a clone of something? If so what? Can we see the specs of that as 
well.

Does d25 have any reservations or a quota?

What does zfs list of r12_data show?

Do you have snapshots? zfs list -t all will show you them.

Finally, the clone will only be the size of its delta from the source.

HTH

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk usage

2009-03-17 Thread Grant Lowe

Hi Mike,

Yes, that does help things.  Thanks.

bash-3.00# zfs get compression r12_data/d25
NAME  PROPERTY VALUE SOURCE
r12_data/d25  compression  off   default
bash-3.00# zfs get compression r12_data/d24
NAME  PROPERTY VALUE SOURCE
r12_data/d24  compression  onlocal
bash-3.00#

bash-3.00# df -h | grep d24
r12_data/d24     42G   40G   2.1G   95%   /opt/d24/oakwcr12
bash-3.00# df -h | grep d25
r12_data/d25    659G   40G    63G   39%   /opt/d25/oakwc12
bash-3.00#

When you asked me to do zfs list -o space, which option did you mean?  space 
isn't an option.




- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 8:32:49 AM
Subject: Re: [zfs-discuss] Disk usage

Well, it is kinda confusing...

In short, df -h will always return the size of the WHOLE pool for size 
(unless you've set a quota on the dataset in which case it says that), 
the amount of space that particular dataset is using for used, and the 
total amount of free space on the WHOLE pool for avail (unless you've 
got a quota or reservation set).

So in your original mail, 659G is the size of the DATA on the r12_data 
pool.  zfs list r12_data says you are using 596G. I think this is the 
RAW capacity used, and I reckon you are using compression. (zfs get 
compressratio r12_data will give you something like 1.1).

However, df -h of r12_data/d24 should have the identical 1st and 3rd 
fields, but they don't. (Could you re-run?)

Same goes for the zfs list commands.

Could you try doing zfs list -o space to get a fuller breakdown of how 
the space is being used.
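If your zfs doesn't accept -o space yet, the individual properties give much
the same breakdown, assuming your release is new enough to have them:

# zfs list -o space r12_data/d25
# zfs get used,available,referenced,usedbysnapshots,usedbydataset,usedbychildren r12_data/d25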

Mike



Grant Lowe wrote:
 Hi Mike,

 Yes, d25 is a clone of d24. Here are some data points about it:

 bash-3.00# zfs get reservation r12_data/d25
 NAME  PROPERTY VALUE SOURCE
 r12_data/d25  reservation  none  default
 bash-3.00# zfs get quota r12_data/d25
 NAME  PROPERTY  VALUE SOURCE
 r12_data/d25  quota none  default
 bash-3.00#
 bash-3.00# zfs list r12_data
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 r12_data   596G  62.7G  24.5K  none
 bash-3.00#
 bash-3.00# zfs list -t snapshot r12_data/d...@a
 NAME USED  AVAIL  REFER  MOUNTPOINT
 r12_data/d...@a   904K  -  39.9G  -
 bash-3.00#

 Thanks for the response.  Did you need any more data points from me?



 - Original Message 
 From: Michael Ramchand mich...@ramchand.net
 To: Grant Lowe gl...@sbcglobal.net
 Cc: zfs-discuss@opensolaris.org
 Sent: Tuesday, March 17, 2009 12:40:53 AM
 Subject: Re: [zfs-discuss] Disk usage

 Grant Lowe wrote:
  
 Hey all,

 I have a question/puzzle with zfs.  See the following:

 bash-3.00# df -h | grep d25 ; zfs list | grep d25

 FILESYSTEM      SIZE   USED  AVAIL  CAPACITY  MOUNTED ON
 r12_data/d25  *659G*   40G  *63G*       39%   /opt/d25/oakwc12
 df -h says the d25 file system is 659GB?; 40GB used and 63GB available?
 r12_data/d24     42G   40G   2.1G       95%   /opt/d24/oakwcr12

 NAME           USED    AVAIL  REFER  MOUNTPOINT
 r12_data/d25   760K  *62.7G*  39.9G  /opt/d25/oakwc12
 zfs list says the d25 file system has 63GB available?
 r12_data/d24  39.9G    2.14G  39.9G  /opt/d24/oakwcr12


 Shouldn't the new filesystem (d25) size be what the clone was allocated?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

 Hi Grant,

 We'd need more info than that to figure what's actually going on.

 Is d25 a clone of something? If so what? Can we see the specs of that as 
 well.

 Does d25 have any reservations or a quota?

 What does zfs list of r12_data show?

 Do you have snapshots? zfs list -t all will show you them.

 Finally, the clone will only be the size of its delta from the source.

 HTH

  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk usage

2009-03-17 Thread Grant Lowe

If you meant available, here's the output of that:

bash-3.00# zfs list -o available r12_data
AVAIL
62.7G
bash-3.00# zfs list -o available r12_data/d24
AVAIL
2.14G
bash-3.00# zfs list -o available r12_data/d25
AVAIL
62.7G
bash-3.00#




- Original Message 
From: Michael Ramchand mich...@ramchand.net
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 8:32:49 AM
Subject: Re: [zfs-discuss] Disk usage

Well, it is kinda confusing...

In short, df -h will always return the size of the WHOLE pool for size 
(unless you've set a quota on the dataset in which case it says that), 
the amount of space that particular dataset is using for used, and the 
total amount of free space on the WHOLE pool for avail (unless you've 
got a quota or reservation set).

So in your original mail, 659G is the size of the DATA on the r12_data 
pool.  zfs list r12_data says you are using 596G. I think this is the 
RAW capacity used, and I reckon you are using compression. (zfs get 
compressratio r12_data will give you something like 1.1).

However, df -h of r12_data/d24 should have the identical 1st and 3rd 
fields, but they don't. (Could you re-run?)

Same goes for the zfs list commands.

Could you try doing zfs list -o space to get a fuller breakdown of how 
the space is being used.

Mike



Grant Lowe wrote:
 Hi Mike,

 Yes, d25 is a clone of d24. Here are some data points about it:

 bash-3.00# zfs get reservation r12_data/d25
 NAME  PROPERTY VALUE SOURCE
 r12_data/d25  reservation  none  default
 bash-3.00# zfs get quota r12_data/d25
 NAME  PROPERTY  VALUE SOURCE
 r12_data/d25  quota none  default
 bash-3.00#
 bash-3.00# zfs list r12_data
 NAME   USED  AVAIL  REFER  MOUNTPOINT
 r12_data   596G  62.7G  24.5K  none
 bash-3.00#
 bash-3.00# zfs list -t snapshot r12_data/d...@a
 NAME USED  AVAIL  REFER  MOUNTPOINT
 r12_data/d...@a   904K  -  39.9G  -
 bash-3.00#

 Thanks for the response.  Did you need any more data points from me?



 - Original Message 
 From: Michael Ramchand mich...@ramchand.net
 To: Grant Lowe gl...@sbcglobal.net
 Cc: zfs-discuss@opensolaris.org
 Sent: Tuesday, March 17, 2009 12:40:53 AM
 Subject: Re: [zfs-discuss] Disk usage

 Grant Lowe wrote:
  
 Hey all,

 I have a question/puzzle with zfs.  See the following:

 bash-3.00# df -h | grep d25 ; zfs list | grep d25

 FILESYSTEM      SIZE   USED  AVAIL  CAPACITY  MOUNTED ON
 r12_data/d25  *659G*   40G  *63G*       39%   /opt/d25/oakwc12
 df -h says the d25 file system is 659GB?; 40GB used and 63GB available?
 r12_data/d24     42G   40G   2.1G       95%   /opt/d24/oakwcr12

 NAME           USED    AVAIL  REFER  MOUNTPOINT
 r12_data/d25   760K  *62.7G*  39.9G  /opt/d25/oakwc12
 zfs list says the d25 file system has 63GB available?
 r12_data/d24  39.9G    2.14G  39.9G  /opt/d24/oakwcr12


 Shouldn't the new filesystem (d25) size be what the clone was allocated?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

 Hi Grant,

 We'd need more info than that to figure what's actually going on.

 Is d25 a clone of something? If so what? Can we see the specs of that as 
 well.

 Does d25 have any reservations or a quota?

 What does zfs list of r12_data show?

 Do you have snapshots? zfs list -t all will show you them.

 Finally, the clone will only be the size of its delta from the source.

 HTH

  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Grant Lowe

Another newbie question:

I have a new system with zfs. I create a directory:

bash-3.00# mkdir -p /opt/mis/oracle/data/db1

I do my zpool:

bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1 
c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5 
c2t5006016B306005AAd6 c2t5006016B306005AAd7 c2t5006016B306005AAd8 
c2t5006016B306005AAd9 c2t5006016B306005AAd10 c2t5006016B306005AAd11 
c2t5006016B306005AAd12 c2t5006016B306005AAd13 c2t5006016B306005AAd14 
c2t5006016B306005AAd15 c2t5006016B306005AAd16 c2t5006016B306005AAd17 
c2t5006016B306005AAd18 c2t5006016B306005AAd19
bash-3.00# zfs create oracle/prd_data
bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1

I'm trying to set a mountpoint.  But trying to mount it doesn't work.

bash-3.00# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
oracle   44.0G   653G  25.5K  /oracle
oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
oracle/prd_data/db1  22.5K   697G  22.5K  -
bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
cannot set property for 'oracle/prd_data/db1': 'mountpoint' does not apply to 
datasets of this type
bash-3.00#

What's the correct syntax?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Grant Lowe

Ok, Cindy.  Thanks. I would like to have one big pool and divide it into 
separate file systems for an Oracle database.  What I had before was a separate 
pool for each file system.  So does it look like I have to go back to what I 
had before?



- Original Message 
From: cindy.swearin...@sun.com cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:20:18 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems

Grant,

If I'm following correctly, you can't mount a ZFS resource
outside of the pool from which the resource resides.

Is this a UFS directory, here:

# mkdir -p /opt/mis/oracle/data/db1

What are you trying to do?

Cindy

Grant Lowe wrote:
 Another newbie question:
 
 I have a new system with zfs. I create a directory:
 
 bash-3.00# mkdir -p /opt/mis/oracle/data/db1
 
 I do my zpool:
 
 bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1 
 c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5 
 c2t5006016B306005AAd6 c2t5006016B306005AAd7 c2t5006016B306005AAd8 
 c2t5006016B306005AAd9 c2t5006016B306005AAd10 c2t5006016B306005AAd11 
 c2t5006016B306005AAd12 c2t5006016B306005AAd13 c2t5006016B306005AAd14 
 c2t5006016B306005AAd15 c2t5006016B306005AAd16 c2t5006016B306005AAd17 
 c2t5006016B306005AAd18 c2t5006016B306005AAd19
 bash-3.00# zfs create oracle/prd_data
 bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1
 
 I'm trying to set a mountpoint.  But trying to mount it doesn't work.
 
 bash-3.00# zfs list
 NAME  USED  AVAIL  REFER  MOUNTPOINT
 oracle   44.0G   653G  25.5K  /oracle
 oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
 oracle/prd_data/db1  22.5K   697G  22.5K  -
 bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
 cannot set property for 'oracle/prd_data/db1': 'mountpoint' does not apply to 
 datasets of this type
 bash-3.00#
 
 What's the correct syntax?
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mounting zfs file systems

2009-03-17 Thread Grant Lowe
Great explanation.  Thanks, Lori.





From: Lori Alt lori@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: cindy.swearin...@sun.com; zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:52:04 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems

No, this is an incorrect diagnosis.  The problem is that by using the -V
option, you created a volume, not a file system.  That is, you created a
raw device.  You could then newfs a UFS file system within the volume,
but that is almost certainly not what you want.

Don't use -V when you create the oracle/prd_data/db1 dataset.  Then it
will be a mountable file system.  You will need to give it a mount point,
however, by setting the mountpoint property, since the default mountpoint
won't be what you want.
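Roughly like this, assuming there's nothing on the volume yet so it's safe to
destroy and recreate, and with the 8k recordsize only as an optional tuning
for Oracle data files:

# zfs destroy oracle/prd_data/db1
# zfs create oracle/prd_data/db1
# zfs set recordsize=8k oracle/prd_data/db1
# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1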

Lori


On 03/17/09 15:45, Grant Lowe wrote: 
Ok, Cindy.  Thanks. I would like to have one big pool and divide it into 
separate file systems for an Oracle database.  What I had before was a separate 
pool for each file system.  So does it look like I have to go back to what I 
had before?



- Original Message 
From: cindy.swearin...@sun.com cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:20:18 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems

Grant,

If I'm following correctly, you can't mount a ZFS resource
outside of the pool from which the resource resides.

Is this a UFS directory, here:

# mkdir -p /opt/mis/oracle/data/db1

What are you trying to do?

Cindy

Grant Lowe wrote:
  
Another newbie question:

I have a new system with zfs. I create a directory:

bash-3.00# mkdir -p /opt/mis/oracle/data/db1

I do my zpool:

bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1 
c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5 
c2t5006016B306005AAd6 c2t5006016B306005AAd7 c2t5006016B306005AAd8 
c2t5006016B306005AAd9 c2t5006016B306005AAd10 c2t5006016B306005AAd11 
c2t5006016B306005AAd12 c2t5006016B306005AAd13 c2t5006016B306005AAd14 
c2t5006016B306005AAd15 c2t5006016B306005AAd16 c2t5006016B306005AAd17 
c2t5006016B306005AAd18 c2t5006016B306005AAd19
bash-3.00# zfs create oracle/prd_data
bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1

I'm trying to set a mountpoint.  But trying to mount it doesn't work.

bash-3.00# zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
oracle   44.0G   653G  25.5K  /oracle
oracle/prd_data  44.0G   653G  24.5K  /oracle/prd_data
oracle/prd_data/db1  22.5K   697G  22.5K  -
bash-3.00# zfs set mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
cannot set property for 'oracle/prd_data/db1': 'mountpoint' does not apply to 
datasets of this type
bash-3.00#

What's the correct syntax?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org 
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org 
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Disk usage

2009-03-16 Thread Grant Lowe

Hey all,

I have a question/puzzle with zfs.  See the following:

bash-3.00# df -h | grep d25 ; zfs list | grep d25

FILESYSTEM      SIZE   USED  AVAIL  CAPACITY  MOUNTED ON
r12_data/d25  *659G*   40G  *63G*       39%   /opt/d25/oakwc12
df -h says the d25 file system is 659GB?; 40GB used and 63GB available?
r12_data/d24     42G   40G   2.1G       95%   /opt/d24/oakwcr12

NAME           USED    AVAIL  REFER  MOUNTPOINT
r12_data/d25   760K  *62.7G*  39.9G  /opt/d25/oakwc12
zfs list says the d25 file system has 63GB available?
r12_data/d24  39.9G    2.14G  39.9G  /opt/d24/oakwcr12


Shouldn't the new filesystem (d25) size be what the clone was allocated?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on a SAN

2009-03-12 Thread Grant Lowe

Hi Erik,

A couple of questions about what you said in your email.  In item (2), if 
hostA has gone belly up and is no longer accessible, then a step that is 
implied (or maybe I'm just inferring it) is to go to the SAN and reassign the 
LUN from hostA to hostB.  Correct?



- Original Message 
From: Erik Trimble erik.trim...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Wednesday, March 11, 2009 1:42:06 PM
Subject: Re: [zfs-discuss] ZFS on a SAN

I'm not 100% sure what your question here is, but let me give you a
(hopefully) complete answer:

(1) ZFS is NOT a clustered file system, in the sense that it is NOT
possible for two hosts to have the same LUN mounted at the same time,
even if both are hooked to a SAN and can normally see that LUN.

(2) ZFS can do failover, however.  If you have a LUN from a SAN on
hostA, create a ZFS pool on it, and use it as normal.  Should you wish to
fail over the LUN to hostB, you need to do a 'zpool export zpool' on
hostA, then 'zpool import zpool' on hostB.  If hostA has been lost
completely (hung/died/etc) and you are unable to do an 'export' on it,
you can force the import on hostB via 'zpool import -f zpool'


ZFS requires that you import/export entire POOLS, not just filesystems.
So, given what you seem to want, I'd recommend this:

On the SAN, create (2) LUNs - one for your primary data, and one for
your snapshots/backups.

On hostA, create a zpool on the primary data LUN (call it zpool A), and
another zpool on the backup LUN (zpool B).  Take snapshots on A, then
use 'zfs send' and 'zfs receive' to copy the clone/snapshot over to
zpool B. then 'zpool export B'

On hostB, import the snapshot pool:  'zpool import B'



It might just be as easy to have two independent zpools on each host,
and just do a 'zfs send' on hostA, and 'zfs receive' on hostB to copy
the snapshot/clone over the wire.
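As a rough sketch, with made-up pool and dataset names and assuming ssh
between the hosts:

hostA# zfs snapshot A/oradata@xfer1
hostA# zfs send A/oradata@xfer1 | ssh hostB zfs receive B/oradata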

-Erik



On Wed, 2009-03-11 at 13:18 -0700, Grant Lowe wrote:
 Hi All,
 
 I'm new on ZFS, so I hope this isn't too basic a question.  I have a host 
 where I set up ZFS.  The Oracle DBAs did their thing and I now have a number 
 of ZFS datasets with their respective clones and snapshots on serverA.  I 
 want to export some of the clones to serverB.  Do I need to zone serverB to 
 see the same LUNs as serverA?  Or does it have to have preexisting, empty 
 LUNs to import the clones?  Please help.  Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS on a SAN

2009-03-11 Thread Grant Lowe

Hi All,

I'm new on ZFS, so I hope this isn't too basic a question.  I have a host where 
I set up ZFS.  The Oracle DBAs did their thing and I now have a number of ZFS 
datasets with their respective clones and snapshots on serverA.  I want to 
export some of the clones to serverB.  Do I need to zone serverB to see the 
same LUNs as serverA?  Or does it have to have preexisting, empty LUNs to 
import the clones?  Please help.  Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on a SAN

2009-03-11 Thread Grant Lowe

Hi Eric,

Thanks for the quick response.  Then on hostB, the new LUN will need the same 
amount of disk space for the pool as on hostA, if I'm understanding you 
correctly.  Correct?  Thanks!



- Original Message 
From: Erik Trimble erik.trim...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Wednesday, March 11, 2009 1:42:06 PM
Subject: Re: [zfs-discuss] ZFS on a SAN

I'm not 100% sure what your question here is, but let me give you a
(hopefully) complete answer:

(1) ZFS is NOT a clustered file system, in the sense that it is NOT
possible for two hosts to have the same LUN mounted at the same time,
even if both are hooked to a SAN and can normally see that LUN.

(2) ZFS can do failover, however.  If you have a LUN from a SAN on
hostA, create a ZFS pool on it, and use it as normal.  Should you wish to
fail over the LUN to hostB, you need to do a 'zpool export zpool' on
hostA, then 'zpool import zpool' on hostB.  If hostA has been lost
completely (hung/died/etc) and you are unable to do an 'export' on it,
you can force the import on hostB via 'zpool import -f zpool'


ZFS requires that you import/export entire POOLS, not just filesystems.
So, given what you seem to want, I'd recommend this:

On the SAN, create (2) LUNs - one for your primary data, and one for
your snapshots/backups.

On hostA, create a zpool on the primary data LUN (call it zpool A), and
another zpool on the backup LUN (zpool B).  Take snapshots on A, then
use 'zfs send' and 'zfs receive' to copy the clone/snapshot over to
zpool B. then 'zpool export B'

On hostB, import the snapshot pool:  'zpool import B'



It might just be as easy to have two independent zpools on each host,
and just do a 'zfs send' on hostA, and 'zfs receive' on hostB to copy
the snapshot/clone over the wire.

-Erik



On Wed, 2009-03-11 at 13:18 -0700, Grant Lowe wrote:
 Hi All,
 
 I'm new on ZFS, so I hope this isn't too basic a question.  I have a host 
 where I set up ZFS.  The Oracle DBAs did their thing and I now have a number 
 of ZFS datasets with their respective clones and snapshots on serverA.  I 
 want to export some of the clones to serverB.  Do I need to zone serverB to 
 see the same LUNs as serverA?  Or does it have to have preexisting, empty 
 LUNs to import the clones?  Please help.  Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on a SAN

2009-03-11 Thread Grant Lowe

Hi Eric,

Thanks.  That scenario makes sense.  I have a better idea of how to set things 
up now.  It's a three-step process, which I didn't realize.

grant



- Original Message 
From: Erik Trimble erik.trim...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Wednesday, March 11, 2009 1:42:06 PM
Subject: Re: [zfs-discuss] ZFS on a SAN

I'm not 100% sure what your question here is, but let me give you a
(hopefully) complete answer:

(1) ZFS is NOT a clustered file system, in the sense that it is NOT
possible for two hosts to have the same LUN mounted at the same time,
even if both are hooked to a SAN and can normally see that LUN.

(2) ZFS can do failover, however.  If you have a LUN from a SAN on
hostA, create a ZFS pool on it, and use it as normal.  Should you wish to
fail over the LUN to hostB, you need to do a 'zpool export zpool' on
hostA, then 'zpool import zpool' on hostB.  If hostA has been lost
completely (hung/died/etc) and you are unable to do an 'export' on it,
you can force the import on hostB via 'zpool import -f zpool'


ZFS requires that you import/export entire POOLS, not just filesystems.
So, given what you seem to want, I'd recommend this:

On the SAN, create (2) LUNs - one for your primary data, and one for
your snapshots/backups.

On hostA, create a zpool on the primary data LUN (call it zpool A), and
another zpool on the backup LUN (zpool B).  Take snapshots on A, then
use 'zfs send' and 'zfs receive' to copy the clone/snapshot over to
zpool B. then 'zpool export B'

On hostB, import the snapshot pool:  'zpool import B'



It might just be as easy to have two independent zpools on each host,
and just do a 'zfs send' on hostA, and 'zfs receive' on hostB to copy
the snapshot/clone over the wire.

-Erik



On Wed, 2009-03-11 at 13:18 -0700, Grant Lowe wrote:
 Hi All,
 
 I'm new on ZFS, so I hope this isn't too basic a question.  I have a host 
 where I set up ZFS.  The Oracle DBAs did their thing and I now have a number 
 of ZFS datasets with their respective clones and snapshots on serverA.  I 
 want to export some of the clones to serverB.  Do I need to zone serverB to 
 see the same LUNs as serverA?  Or does it have to have preexisting, empty 
 LUNs to import the clones?  Please help.  Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on a SAN

2009-03-11 Thread Grant Lowe

Hey Richard,

That explanation helps clarify things.  I have another question, but maybe 
it'll be a new topic.  Basically I would like to export some data off a VxFS 
file system on a different host and import it into ZFS.  Is there a way to do 
that?

grant



- Original Message 
From: Richard Elling richard.ell...@gmail.com
To: Erik Trimble erik.trim...@sun.com
Cc: Grant Lowe gl...@sbcglobal.net; zfs-discuss@opensolaris.org
Sent: Wednesday, March 11, 2009 1:52:31 PM
Subject: Re: [zfs-discuss] ZFS on a SAN

Erik Trimble wrote:
 I'm not 100% sure what your question here is, but let me give you a
 (hopefully) complete answer:

 (1) ZFS is NOT a clustered file system, in the sense that it is NOT
 possible for two hosts to have the same LUN mounted at the same time,
 even if both are hooked to a SAN and can normally see that LUN.
  

Need to be clear on the terminology here.  Yes, it is possible for two
systems to have access to a single LUN and have ZFS file systems.
The ZFS limitation is at the vdev level (partition or slice), which is
below the LUN level.


 (2) ZFS can do failover, however.  If you have a LUN from a SAN on
 hostA, create a ZFS pool on it, and use it as normal.  Should you wish to
 fail over the LUN to hostB, you need to do a 'zpool export zpool' on
 hostA, then 'zpool import zpool' on hostB.  If hostA has been lost
 completely (hung/died/etc) and you are unable to do an 'export' on it,
 you can force the import on hostB via 'zpool import -f zpool'
  

LUN masking or reservations occur at the LUN level, which is why it is
often better (safer) to design with the expectation that a LUN will be
only available to one host at a time.  Or, to say it differently, if you 
think
of one vdev/LUN, then you can inoculate yourself from later grief :-)


 ZFS requires that you import/export entire POOLS, not just filesystems.
 So, given what you seem to want, I'd recommend this:

 On the SAN, create (2) LUNs - one for your primary data, and one for
 your snapshots/backups.

 On hostA, create a zpool on the primary data LUN (call it zpool A), and
 another zpool on the backup LUN (zpool B).  Take snapshots on A, then
 use 'zfs send' and 'zfs receive' to copy the clone/snapshot over to
 zpool B. then 'zpool export B'

 On hostB, import the snapshot pool:  'zpool import B'



 It might just be as easy to have two independent zpools on each host,
 and just do a 'zfs send' on hostA, and 'zfs receive' on hostB to copy
 the snapshot/clone over the wire.
  

+1
-- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Destroying a zfs dataset

2008-12-17 Thread Grant Lowe
I'm having a very difficult time destroying a clone.  Here's the skinny:

bash-3.00# zfs get origin | grep d01
r12_data/d01                           origin  r12_data/d01/.clone.12052...@12042008  -
r12_data/d01-receive                   origin  -                                      -
r12_data/d01-rece...@a                 origin  -                                      -
r12_data/d01/.clone.12052008           origin  -                                      -
r12_data/d01/.clone.12052...@12042008  origin  -                                      -
bash-3.00# bash-3.00# zfs destroy r12_data/d01/.clone.12052...@12042008
cannot destroy 'r12_data/d01/.clone.12052...@12042008': snapshot has dependent 
clones
use '-R' to destroy the following datasets:
r12_data/d01/.clone.12052008
r12_data/d01
bash-3.00# zfs destroy -R r12_data/d01/.clone.12052...@12042008
cannot determine dependent datasets: recursive dependency at 
'r12_data/d01/.clone.12052...@12042008'
bash-3.00# zfs destroy -r r12_data/d01/.clone.12052...@12042008
cannot destroy 'r12_data/d01/.clone.12052...@12042008': snapshot is cloned
no snapshots destroyed
bash-3.00# 

I must be missing a piece of the puzzle. What is that piece?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS promotions

2008-12-12 Thread Grant Lowe
Hi All,

We have Oracle installed under ZFS.  I've 
created snapshots and clones. They work as advertised.  Say the DBAs 
want to upgrade to a new version of Oracle, without installing from 
scratch.  I would like them to be able to take the clone, upgrade that, 
and then promote the clones to new file systems.  But the way we have ZFS 
set up right now, you can't promote outside of the ZFS file system. 
For example, here's the current setup:

bash-3.00# zfs list 
NAME   USED  AVAIL  REFER  MOUNTPOINT 
r12_data   594G  65.4G  24.5K  none 
r12_d...@10202008 0  -  24.5K  - 
r12_data/.clone1  0  65.4G  24.5K  none 
r12_data/d01  41.7G  4.32G  41.4G  /opt/d01/oakwcr12 
r12_data/d...@12042008  257M  -  41.5G  - 
r12_data/d01/.clone.12052008  0  4.32G  41.5G  
/opt/d01/oakwcr12/.clone.12052008 
r12_data/d02  39.5G  2.53G  39.4G  /opt/d02/oakwcr12 
r12_data/d...@12042008  115M  -  39.4G  - 
r12_data/d02/.clone.12052008  0  2.53G  39.4G  
/opt/d02/oakwcr12/.clone.12052008 
r12_data/d03  38.8G  3.23G  38.4G  /opt/d03/oakwcr12 
r12_data/d...@12042008  398M  -  38.6G  - 
r12_data/d03/.clone.12052008  0  3.23G  38.6G  
/opt/d03/oakwcr12/.clone.12052008 
r12_data/d04  40.3G  3.69G  40.3G  /opt/d04/oakwcr12 
r12_data/d...@12042008 18.8M  -  40.3G  - 
r12_data/d04/.clone.12052008  0  3.69G  40.3G  
/opt/d04/oakwcr12/.clone.12052008 
r12_data/d05  32.3G  9.73G  32.1G  /opt/d05/oakwcr12 
r12_data/d...@12042008  208M  -  32.2G  - 
r12_data/d05/.clone.12052008  0  9.73G  32.2G  
/opt/d05/oakwcr12/.clone.12052008 
r12_data/d06  39.2G  2.84G  39.0G  /opt/d06/oakwcr12 
r12_data/d...@12042008  129M  -  39.1G  - 
r12_data/d06/.clone.12052008  0  2.84G  39.1G  
/opt/d06/oakwcr12/.clone.12052008 
r12_data/d07  31.1G  10.9G  31.1G  /opt/d07/oakwcr12 
r12_data/d...@12042008 53.9M  -  31.1G  - 
r12_data/d07/.clone.12052008  0  10.9G  31.1G  
/opt/d07/oakwcr12/.clone.12052008 
r12_data/d08  39.8G  2.22G  39.6G  /opt/d08/oakwcr12 
r12_data/d...@12042008  163M  -  39.7G  - 
r12_data/d08/.clone.12052008  0  2.22G  39.7G  
/opt/d08/oakwcr12/.clone.12052008 
r12_data/d09  40.0G  2.03G  39.9G  /opt/d09/oakwcr12 
r12_data/d...@12042008  103M  -  39.9G  - 
r12_data/d09/.clone.1205200817K  2.03G  39.9G  
/opt/d09/oakwcr12/.clone.12052008 
r12_data/d10  41.7G  2.35G  41.5G  /opt/d10/oakwcr12 
r12_data/d...@12042008  112M  -  41.5G  - 
r12_data/d10/.clone.1205200818K  2.35G  41.5G  
/opt/d10/oakwcr12/.clone.12052008 
r12_data/d11  38.9G  3.14G  38.7G  /opt/d11/oakwcr12 
r12_data/d...@12042008  146M  -  38.7G  - 
r12_data/d11/.clone.1205200818K  3.14G  38.7G  
/opt/d11/oakwcr12/.clone.12052008 
r12_data/d12  14.1G  27.9G  13.4G  /opt/d12/oakwcr12 
r12_data/d...@12042008  700M  -  13.4G  - 
r12_data/d12/.clone.1205200817K  27.9G  13.4G  
/opt/d12/oakwcr12/.clone.12052008 
r12_data/d21  36.6G  5.42G  36.6G  /opt/d21/oakwcr12 
r12_data/d...@12042008  258K  -  36.6G  - 
r12_data/d21/.clone.1205200817K  5.42G  36.6G  
/opt/d21/oakwcr12/.clone.12052008 
r12_data/d22  40.1G  1.89G  40.1G  /opt/d22/oakwcr12 
r12_data/d...@12042008  373K  -  40.1G  - 
r12_data/d22/.clone.1205200817K  1.89G  40.1G  
/opt/d22/oakwcr12/.clone.12052008 
r12_data/d23  39.8G  2.16G  39.8G  /opt/d23/oakwcr12 
r12_data/d...@12042008  780K  -  39.8G  - 
r12_data/d23/.clone.1205200817K  2.16G  39.8G  
/opt/d23/oakwcr12/.clone.12052008 
r12_data/d24  39.9G  2.14G  39.9G  /opt/d24/oakwcr12 
r12_data/d...@12042008  582K  -  39.9G  - 
r12_data/d24/.clone.1205200816K  2.14G  39.9G  
/opt/d24/oakwcr12/.clone.12052008 
r12_logz   204K  13.7G20K  /opt/l01/oakwrc12 
r12_l...@10202008 23.5K  -  24.5K  - 
r12_logz/.clone.1205200815K  13.7G18K  
/opt/l01/oakwrc12/.clone.12052008 
r12_oApps 55.9G  22.3G  55.2G  /opt/a01/oakwcr12 
r12_oa...@10202008 369M  -  55.2G  - 
r12_oApps/.clone.12052008  365M  22.3G  55.2G  
/opt/a01/oakwcr12/.clone.12052008 
r12_oWork 8.00G  31.1G  8.00G  /opt/w01/oakwcr12 
r12_ow...@1020200824.5K  -  8.00G  - 
r12_oWork/.clone.12052008   16K  31.1G  8.00G  
/opt/w01/oakwcr12/.clone.12052008 
r12_product   4.77G  13.8G  4.71G  /opt/p01/oakwcr12 
r12_prod...@10202008  28.8M  -  4.71G  - 
r12_product/.clone.12052008   28.3M  13.8G  4.71G  
/opt/p01/oakwcr12/.clone.12052008 
bash-3.00# 

Say the DBAs want to go from R12 to R13.  I