[zfs-discuss] Re: Migrating ZFS pool with zones from one host to another

2007-06-21 Thread mario heimel
Before the zoneadm attach or boot, you must create the configuration on the
second host, either manually or with the detached config from the first host.

zonecfg -z heczone 'create -a /hecpool/zones/heczone'
zoneadm -z heczone attach    (to attach, the requirements must be fulfilled:
pkgs and patches in sync)
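
For reference, a sketch of the whole flow, assuming the zone's zonepath lives
on a ZFS pool (hecpool) that is physically moved between the hosts; names are
taken from the example above, so adjust them for your setup:

  host1# zoneadm -z heczone halt
  host1# zoneadm -z heczone detach      # writes the detached config under the zonepath
  host1# zpool export hecpool

  (move the disks to the second host)

  host2# zpool import hecpool
  host2# zonecfg -z heczone 'create -a /hecpool/zones/heczone'
  host2# zoneadm -z heczone attach      # pkgs and patches must be in sync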
 
 


Re: [zfs-discuss] Re: Best practice for moving FS between pool on same machine?

2007-06-21 Thread Constantin Gonzalez
Hi,

Chris Quenelle wrote:
 Thanks, Constantin!  That sounds like the right answer for me.
 Can I use send and/or snapshot at the pool level?  Or do I have
 to use it on one filesystem at a time?  I couldn't quite figure this
 out from the man pages.

the ZFS team is working on a zfs send -r (recursive) option to be able
to recursively send and receive hierarchies of ZFS filesystems in one go,
including whole pools.

Until that arrives, you'll need to do it one filesystem at a time.

This is not always trivial: if you send a full snapshot, then an incremental
one, and the target filesystem is mounted, you'll likely get an error that the
target filesystem was modified. Make sure the target filesystems are unmounted,
and ideally marked as unmountable, while performing the send/receives. Also,
you may want to use the -F option to receive, which forces a rollback of the
target filesystem to the most recent snapshot.
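
As a minimal per-filesystem sketch (pool and filesystem names are made up,
so adjust them to your layout):

  # initial full copy
  zfs snapshot tank/home@move1
  zfs send tank/home@move1 | zfs receive tank2/home

  # keep the target from being modified between runs
  zfs unmount tank2/home

  # incremental catch-up; -F rolls the target back to the last snapshot first
  zfs snapshot tank/home@move2
  zfs send -i tank/home@move1 tank/home@move2 | zfs receive -F tank2/home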

I've written a script to do all of this, but it's only "works on my system"
certified.

I'd like to get some feedback and validation before I post it on my blog,
so anyone, let me know if you want to try it out.

Best regards,
   Constantin

-- 
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Global Systems Engineering  http://www.sun.de/
Tel.: +49 89/4 60 08-25 91   http://blogs.sun.com/constantin/

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Marcel Schneider, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering


Re: [zfs-discuss] Re: Slow write speed to ZFS pool (via NFS)

2007-06-21 Thread Roch - PAE

Joe S writes:
  After researching this further, I found that there are some known
  performance issues with NFS + ZFS. I tried transferring files via SMB, and
  got write speeds on average of 25MB/s.
  
  So I will have my UNIX systems use SMB to write files to my Solaris server.
  This seems weird, but it's fast. I'm sure Sun is working on fixing this. I
  can't imagine running a Sun box without NFS.
  

Call me picky, but:

There is no NFS over ZFS issue (IMO/FWIW).
There is a ZFS over NVRAM issue; well understood (not related to NFS).
There is a Samba vs NFS issue; not well understood (not related to ZFS).


This last bullet is probably better suited for
[EMAIL PROTECTED]


If ZFS is talking to a storage array with NVRAM, then we have
an issue (not related to NFS) described by:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6462690
6462690 sd driver should set SYNC_NV bit when issuing SYNCHRONIZE CACHE 
to SBC-2 devices

The above bug/RFE lies in the sd driver, but it is very much triggered by
ZFS, particularly when running NFS (though not only then). It affects only
NVRAM-based storage and is being worked on.
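
(As an aside, a workaround often mentioned for the NVRAM case, assuming your
build has the zfs_nocacheflush tunable and the array cache really is
non-volatile, is to stop ZFS from issuing cache flushes at all:

  * in /etc/system; takes effect after a reboot, only safe with NVRAM-backed storage
  set zfs:zfs_nocacheflush = 1

Treat this as a stopgap until the SYNC_NV fix above is integrated.)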

If ZFS is talking to a JBOD, then the slowness is a
characteristic of NFS (not related to ZFS).

So FWIW, on JBOD there is no ZFS+NFS issue, in the sense that I don't know
how we could change ZFS to be significantly better at NFS, nor do I know how
to change NFS in a way that would help ZFS in particular. That doesn't mean
there are none; I just don't know about them. So please ping me if you
highlight such an issue. If one replaces ZFS with some other filesystem and
gets a large speedup, I'm interested (make sure the other filesystem either
runs with the write cache off, or flushes it on NFS commit).

So that leaves us with a Samba vs NFS issue (not related to ZFS). We know
that NFS is able to create files at most at a rate of one file per server
I/O latency. Samba appears better, and this is what we need to investigate.
It might be better in a way that NFS can borrow (maybe through some better
NFSv4 delegation code), or Samba might be better by being careless with
data. If we find such an NFS improvement, it will help all backend
filesystems, not just ZFS.

Which is why I say: There is no NFS over ZFS issue.


-r



[zfs-discuss] Re: [Fwd: What Veritas is saying vs ZFS]

2007-06-21 Thread Craig Morgan
It also introduces the Veritas sfop utility, which is the 'simplified'
front-end to VxVM/VxFS.


As imitation is the sincerest form of flattery, this smacks of a  
desperate attempt to prove to their customers that Vx can be just as  
slick as ZFS.


More details at
http://www.symantec.com/enterprise/products/agents_options_details.jsp?pcid=2245&pvid=203_1&aoid=sf_simple_admin
including a ref. guide ...


Craig

On 21 Jun 2007, at 08:03, Selim Daoud wrote:




From: Ric Hall [EMAIL PROTECTED]
Date: 20 June 2007 22:46:48 BDT
To: DMA Ambassadors [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: What Veritas is saying vs ZFS


Thought it might behoove us all to see this presentation from the
Veritas conference last week, and understand what they are saying vs ZFS
and our storage plans.

Some interesting performance claims to say the least

Ric

[attachment: STG7328_FINAL_June-07-07.pdf]


--
Craig

Craig Morgan
t: +44 (0)791 338 3190
f: +44 (0)870 705 1726
e: [EMAIL PROTECTED]

 






[zfs-discuss] Undo/reverse zpool create

2007-06-21 Thread Joubert Nel
Hi,

If I add an entire disk to a new pool by doing "zpool create", is this
reversible?

I.e. if there was data on that disk (e.g. it was the sole disk in a zpool in
another system), can I get this back, or is "zpool create" destructive?

Joubert
 
 


Re: [zfs-discuss] Z-Raid performance with Random reads/writes

2007-06-21 Thread Mario Goebbels
 Because you have to read the entire stripe (which probably spans all the
 disks) to verify the checksum.

Then I have a wrong idea of what a stripe is. I always thought it's the
interleave block size.

-mg




[zfs-discuss] creating pool on slice which is mounted

2007-06-21 Thread satish s nandihalli
partition> p
Current partition table (original):
Total disk cylinders available: 49771 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  7       home    wm      3814 - 49769     63.11GB    (45956/0/0) 132353280
--- If I run the command "zpool create <pool name> <7th slice>" on the 7th
slice shown above (which is mounted as home), will it cause any harm to the
existing contents of home?

- Satish
 
 


Re: [zfs-discuss] creating pool on slice which is mounted

2007-06-21 Thread Tim Foster
On Thu, 2007-06-21 at 06:16 -0700, satish s nandihalli wrote:
 Part      Tag    Flag     Cylinders         Size            Blocks
   7       home    wm      3814 - 49769     63.11GB    (45956/0/0) 132353280
 
  --- If I run the command "zpool create <pool name> <7th slice>" (shown
  above, which is mounted as home), will it cause any harm to the
  existing contents of home?

If you use "zpool create -f <pool name> <7th slice>", yes, it'll destroy
the filesystem that's on that slice and you won't be able to access any
of the data presently on it.

If you do "zpool create <pool> <7th slice>"

where the slice has a UFS filesystem and is mounted, it should print an
error message saying that

a) there's a filesystem present
b) it's mounted

 - so you need to unmount it, and use the -f flag.

cheers,
tim

In more detail:

(snip newfs of c2t1d0s1 to put a UFS filesystem on it)

# mount /dev/dsk/c2t1d0s1 /tmp/a
# zpool create pool c2t1d0s1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t1d0s1 is currently mounted on /tmp/a. Please see umount(1M).
# umount /tmp/a
#  zpool create pool c2t1d0s1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t1d0s1 contains a ufs filesystem.
# zpool create -f pool c2t1d0s1
# 

-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf



[zfs-discuss] Bug in zpool history

2007-06-21 Thread Niclas Sodergard

Hi,

I was playing around with NexentaCP and its ZFS boot facility. I tried
to figure out what commands to run, and I ran zpool history like
this:

# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
2007-06-20.10:20:03 zfs clone syspool/[EMAIL PROTECTED] syspool/myrootfs
2007-06-20.10:23:21 zfs set bootfs=syspool/myrootfs syspool

As you can see, it says I did a "zfs set bootfs=..." even though the
correct command should have been "zpool set bootfs=...". Of course
this is purely cosmetic. I currently don't have access to a recent
Nevada build, so I just wonder if this is present there as well.
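
For comparison, the command actually run was the pool-level one (same names
as in the history output above):

  # zpool set bootfs=syspool/myrootfs syspool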

cheers,
Nickus

--
Have a look at my blog for sysadmins!
http://aspiringsysadmin.com


[zfs-discuss] Proper way to detach attach

2007-06-21 Thread Gary Gendel
Hi,

I've got some issues with my 5-disk SATA stack using two controllers. Some of 
the ports are acting strangely, so I'd like to play around and change which 
ports the disks are connected to. This means that I need to bring down the 
pool, swap some connections and then bring the pool back up. I may have to 
repeat this several times.

I just wanted to clarify the steps needed to do this so I don't lose everything.

Thanks,
Gary
 
 


[zfs-discuss] Re: marvell88sx error in command 0x2f: status 0x51

2007-06-21 Thread Rob Logan

 [hourly] marvell88sx error in command 0x2f: status 0x51

ah, it's some kind of SMART or FMA query that

model WDC WD3200JD-00KLB0
firmware 08.05J08
serial number  WD-WCAMR2427571
supported features:
 48-bit LBA, DMA, SMART, SMART self-test
SATA1 compatible
capacity = 625142448 sectors

drives do not support but

model ST3750640AS
firmware 3.AAK
serial number 5QD02ES6
supported features:
 48-bit LBA, DMA, Native Command Queueing, SMART, SMART self-test
SATA1 compatible
queue depth 32
capacity = 1465149168 sectors

do...



Re: [zfs-discuss] Z-Raid performance with Random reads/writes

2007-06-21 Thread Roch Bourbonnais


On 20 Jun 2007, at 04:59, Ian Collins wrote:




I'm not sure why, but when I was testing various configurations with
bonnie++, 3 pairs of mirrors did give about 3x the random read
performance of a 6 disk raidz, but with 4 pairs, the random read
performance dropped by 50%:

3x2
Block read:  220464
Random read: 1520.1

4x2
Block read:  295747
Random read:  765.3

Ian



Did you recreate the pool from scratch, or did you add a pair of disks
to the existing mirror?
If starting from scratch, I'm stumped. But for the latter, the problem
might lie in the data population: the newly added mirror might have gotten
a larger share of the added data, and restitution did not target all disks
evenly.

-r



Re: [zfs-discuss] Z-Raid performance with Random reads/writes

2007-06-21 Thread Richard Elling

Mario Goebbels wrote:

Because you have to read the entire stripe (which probably spans all the
disks) to verify the checksum.


Then I have a wrong idea of what a stripe is. I always thought it's the
interleave block size.


Nope.  A stripe generally refers to the logical block as spread across
physical devices.  For most RAID implementations (hardware, firmware,
or software), the interleave size is the stripe width divided by the number
of devices.  In ZFS, dynamic striping doesn't have this restriction, which
is how we can dynamically add physical devices to existing stripes.  Jeff
Bonwick describes this in the context of RAID-Z at
http://blogs.sun.com/bonwick/entry/raid_z
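
A worked example with made-up numbers, following the definition above:

  stripe width (one logical block)   = 512 KB, spread across the set
  number of devices                  = 4
  interleave per device              = 512 KB / 4 = 128 KB

So the per-disk chunk is the interleave; the stripe is the full 512 KB
logical block spanning all four devices.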

 -- richard


Re: [zfs-discuss] Bug in zpool history

2007-06-21 Thread eric kustarz


On Jun 21, 2007, at 8:47 AM, Niclas Sodergard wrote:


Hi,

I was playing around with NexentaCP and its zfs boot facility. I tried
to figure out how what commands to run and I ran zpool history like
this

# zpool history
2007-06-20.10:19:46 zfs snapshot syspool/[EMAIL PROTECTED]
2007-06-20.10:20:03 zfs clone syspool/[EMAIL PROTECTED] syspool/myrootfs
2007-06-20.10:23:21 zfs set bootfs=syspool/myrootfs syspool

As you can see, it says I did a "zfs set bootfs=..." even though the
correct command should have been "zpool set bootfs=...". Of course
this is purely cosmetic. I currently don't have access to a recent
Nevada build, so I just wonder if this is present there as well.


Nice catch... I filed:
6572465 'zpool set bootfs=...' records history as 'zfs set bootfs=...'

Expect a fix today - simply passing 'FALSE' instead of 'TRUE' as
the 'pool' parameter in zpool_log_history().


eric



[zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Joubert Nel
 Joubert Nel wrote:
  Hi,
 
  If I add an entire disk to a new pool by doing "zpool create", is this
  reversible?
 
  I.e. if there was data on that disk (e.g. it was the sole disk in a zpool
  in another system) can I get this back or is "zpool create" destructive?
 
 Short answer: you're stuffed, and no, it's not reversible.
 
 Long answer: see the short answer.

Darn!

 
 If the device was actually in use on another system, I would expect
 that libdiskmgmt would have warned you about this when you ran
 zpool create.

When I ran zpool create, the pool got created without a warning.

What is strange, and maybe I'm naive here, is that there was no formatting of
this physical disk, so I'm optimistic that the data is still recoverable from
it, even though the new pool shadows it.

Or is this way off the mark?

Joubert
 
 


Re: [zfs-discuss] Re: Best practice for moving FS between pool on same machine?

2007-06-21 Thread Chris Quenelle

Sorry I can't volunteer to test your script.
I want to do the steps by hand to make sure I understand them.
If I have to do it all again, I'll get in touch.

Thanks for the advice!

--chris


Constantin Gonzalez wrote:

Hi,

Chris Quenelle wrote:

Thanks, Constantin!  That sounds like the right answer for me.
Can I use send and/or snapshot at the pool level?  Or do I have
to use it on one filesystem at a time?  I couldn't quite figure this
out from the man pages.


the ZFS team is working on a zfs send -r (recursive) option to be able
to recursively send and receive hierarchies of ZFS filesystems in one go,
including whole pools.

Until that arrives, you'll need to do it one filesystem at a time.

This is not always trivial: if you send a full snapshot, then an incremental
one, and the target filesystem is mounted, you'll likely get an error that the
target filesystem was modified. Make sure the target filesystems are unmounted,
and ideally marked as unmountable, while performing the send/receives. Also,
you may want to use the -F option to receive, which forces a rollback of the
target filesystem to the most recent snapshot.

I've written a script to do all of this, but it's only "works on my system"
certified.

I'd like to get some feedback and validation before I post it on my blog,
so anyone, let me know if you want to try it out.

Best regards,
   Constantin




[zfs-discuss] Re: New german white paper on ZFS

2007-06-21 Thread mario heimel
good work!
 
 


Re: [zfs-discuss] Proper way to detach attach

2007-06-21 Thread Will Murnane

Run cfgadm to see what ports are recognized as hotswappable. Run
"cfgadm -c unconfigure <port>" and then make sure it's logically
disconnected with cfgadm, then pull the disk and put it in another
port. Then run "cfgadm -c configure <newport>" and it'll be ready to be
imported again.
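
A sketch of the whole sequence, assuming a pool named tank and SATA
attachment points as reported by cfgadm (the ap_ids below are hypothetical,
so use the ones cfgadm shows on your system):

  # zpool export tank               # quiesce and release the pool
  # cfgadm                          # note the attachment point of the disk to move
  # cfgadm -c unconfigure sata0/1   # logically disconnect it
  (physically move the disk to the new port)
  # cfgadm -c configure sata1/3     # bring up the new port
  # zpool import tank               # ZFS finds its devices by label, not by path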

Will


Re: [zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Eric Schrock
On Thu, Jun 21, 2007 at 11:03:39AM -0700, Joubert Nel wrote:
 
 When I ran zpool create, the pool got created without a warning. 

zpool(1M) will disallow creation on the disk if it contains data in
active use (mounted fs, zfs pool, dump device, swap, etc.). It will warn
if it contains a recognized filesystem (zfs, ufs, etc.) that is not
currently mounted, but allow you to override it with '-f'. What was
previously on the disk?

 What is strange, and maybe I'm naive here, is that there was no
 formatting of this physical disk so I'm optimistic that the data is
 still recoverable from it, even though the new pool shadows it.
 
 Or is this way off mark?

You are guaranteed to have lost all data within the vdev label portions
of the disk (see the on-disk specification on opensolaris.org). How much
else you lost depends on how long the device was active in the pool and
how much data was written to it.
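
If you want to see what is sitting in the label areas now, zdb can dump
them; it will show the new pool's labels, which is exactly why the old
ones are unrecoverable (the device name below is an example, so substitute
your own):

  # zdb -l /dev/dsk/c2t1d0s0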

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] Z-Raid performance with Random reads/writes

2007-06-21 Thread Ian Collins
Roch Bourbonnais wrote:

 On 20 Jun 2007, at 04:59, Ian Collins wrote:


 I'm not sure why, but when I was testing various configurations with
 bonnie++, 3 pairs of mirrors did give about 3x the random read
 performance of a 6 disk raidz, but with 4 pairs, the random read
 performance dropped by 50%:

 3x2
  Block read:  220464
 Random read: 1520.1

 4x2
 Block read:  295747
 Random read:  765.3 

 Did you recreate the pool from scratch, or did you add a pair of disks
 to the existing mirror?
 If starting from scratch, I'm stumped. But for the latter, the problem
 might lie in the data population: the newly added mirror might have gotten
 a larger share of the added data, and restitution did not target all disks
 evenly.

From scratch.  Each test was run on a new pool with one filesystem.

Ian



[zfs-discuss] ZIL on user specified devices?

2007-06-21 Thread Bryan Wagoner
Quick question,

Are there any tunables, or any other way, to specify devices in a pool to use
specifically for the ZIL? I've been thinking through architectures to mitigate
performance problems on SAN and various other storage technologies, where
disabling the ZIL or cache flushes has been necessary to make up for
performance, and was wondering if there is a way to specify a specific device
or set of devices for the ZIL to use, separate from the data devices, so I
wouldn't have to disable it in those circumstances.
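
Roughly the kind of separation I have in mind, assuming a build with separate
intent log ("slog") device support (pool and device names are hypothetical):

  # zpool create tank mirror c1t0d0 c1t1d0 log c3t0d0   # dedicated log device at creation
  # zpool add tank log c3t0d0                           # or add one to an existing pool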

Thanks in advance!
 
 


Re: [zfs-discuss] Re: Undo/reverse zpool create

2007-06-21 Thread Richard Elling

Joubert Nel wrote:

If the device was actually in use on another system, I
would expect that libdiskmgmt would have warned you about
this when you ran zpool create.


AFAIK, libdiskmgmt is not multi-node aware. It does know about local
uses of the disk. Remote uses of the disk, especially disks shared with
other OSes, are a difficult problem to solve where there are no standards.
Reason #84612 why I hate SANs.

When I ran zpool create, the pool got created without a warning. 


If the device was not currently in use, why wouldn't it proceed?


What is strange, and maybe I'm naive here, is that there was no formatting of 
this physical disk so I'm optimistic that the data is still recoverable from it, even 
though the new pool shadows it.

Or is this way off mark?


If you define formatting as writing pertinent information to the disk
such that ZFS works, then it was formatted.  The uberblock and its replicas
only take a few iops.
 -- richard