Re: [zfs-discuss] Replicating ZFS filesystems with non-standard mount points

2008-07-27 Thread Trevor Watson
I have had the same problem, but managed to work around it by setting the 
mountpoint to none before performing the ZFS send. But that only works on 
filesystems you can quiesce.


How about making a clone of your snapshot, setting the mountpoint of the clone 
to none, taking a snapshot of the unmounted clone, and then zfs sending that?
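
Something along these lines - untested, and the dataset names (tank/data, 
backup/data_copy) are just placeholders for your own layout:

# zfs clone tank/data@backup tank/data_clone
# zfs set mountpoint=none tank/data_clone
# zfs snapshot tank/data_clone@send
# zfs send tank/data_clone@send | zfs receive backup/data_copy
# zfs destroy -r tank/data_clone

The last step just cleans up the temporary clone (and its snapshot) once the 
send has completed.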


Trev

Alan Burlison wrote:

Alan Burlison wrote:

So how do I tell zfs receive to create the new filesystems in pool3, but 
not actually try to mount them?


This is even more of an issue with ZFS root.  As far as I can tell it's 
impossible to recursively back up all the filesystems in a root pool 
because of this: one filesystem in the root pool will *always* be 
mounted as /, and when zfs receive creates the copied filesystem for / 
it also tries to mount it as /, and fails.


NAME                             USED  AVAIL  REFER  MOUNTPOINT
solaris                         6.89G  12.7G    30K  /solaris
solaris/ROOT                    6.88G  12.7G    18K  /solaris/ROOT
solaris/ROOT/onnv_92            65.8M  12.7G  3.98G  /
solaris/ROOT/onnv_94            2.76G  12.7G  3.99G  /
solaris/ROOT/test               4.06G  12.7G  3.98G  /
solaris/ROOT/[EMAIL PROTECTED]   296K      -  3.98G  -

# zfs snapshot -r [EMAIL PROTECTED]
# zfs create -o mountpoint=none -o canmount=off backup/fire/solaris
# zfs send -R [EMAIL PROTECTED] | zfs receive -Fdv backup/fire/solaris
receiving full stream of [EMAIL PROTECTED] into backup/fire/[EMAIL PROTECTED]
received 30.5KB stream in 1 seconds (30.5KB/sec)
receiving full stream of solaris/[EMAIL PROTECTED] into 
backup/fire/solaris/[EMAIL PROTECTED]
received 13.6KB stream in 1 seconds (13.6KB/sec)
receiving full stream of solaris/ROOT/[EMAIL PROTECTED] into 
backup/fire/solaris/ROOT/[EMAIL PROTECTED]
cannot mount '/': directory is not empty

Shouldn't there be a flag (-n?) for use with receive -R that prevents 
the newly-copied filesystems from being mounted?  Or am I missing 
something obvious?








Re: [zfs-discuss] ZFS and SAN

2008-07-27 Thread Brad
> - On a Sun Cluster, LUNs are seen on both nodes. Can
> we prevent mistakes like creating a pool on already
> assigned LUNs? For example, Veritas wants a "force"
> flag. With ZFS I can do:
> node1: zpool create X lun1 lun2
> node2: zpool create Y lun1 lun2
> and then the results are unexpected, but pool X will
> never switch again ;-) resource and zone are dead.

For our iSCSI SAN, we use iSNS to put LUNs into separate
discovery domains (default + one domain per host). So as part
of creating or expanding a pool, we first move the LUNs into the
appropriate host's domain. The create would fail on node2 because
it wouldn't have visibility of the LUNs. Would that address your issue?
 
 


Re: [zfs-discuss] Replicating ZFS filesystems with non-standard mount points

2008-07-27 Thread Alan Burlison
Alan Burlison wrote:

> So how do I tell zfs receive to create the new filesystems in pool3, but 
> not actually try to mount them?

This is even more of an issue with ZFS root.  As far as I can tell it's 
impossible to recursively back up all the filesystems in a root pool 
because of this: one filesystem in the root pool will *always* be 
mounted as /, and when zfs receive creates the copied filesystem for / 
it also tries to mount it as /, and fails.

NAME                             USED  AVAIL  REFER  MOUNTPOINT
solaris                         6.89G  12.7G    30K  /solaris
solaris/ROOT                    6.88G  12.7G    18K  /solaris/ROOT
solaris/ROOT/onnv_92            65.8M  12.7G  3.98G  /
solaris/ROOT/onnv_94            2.76G  12.7G  3.99G  /
solaris/ROOT/test               4.06G  12.7G  3.98G  /
solaris/ROOT/[EMAIL PROTECTED]   296K      -  3.98G  -

# zfs snapshot -r [EMAIL PROTECTED]
# zfs create -o mountpoint=none -o canmount=off backup/fire/solaris
# zfs send -R [EMAIL PROTECTED] | zfs receive -Fdv backup/fire/solaris
receiving full stream of [EMAIL PROTECTED] into backup/fire/[EMAIL PROTECTED]
received 30.5KB stream in 1 seconds (30.5KB/sec)
receiving full stream of solaris/[EMAIL PROTECTED] into 
backup/fire/solaris/[EMAIL PROTECTED]
received 13.6KB stream in 1 seconds (13.6KB/sec)
receiving full stream of solaris/ROOT/[EMAIL PROTECTED] into 
backup/fire/solaris/ROOT/[EMAIL PROTECTED]
cannot mount '/': directory is not empty

Shouldn't there be a flag (-n?) for use with receive -R that prevents 
the newly-copied filesystems from being mounted?  Or am I missing 
something obvious?

-- 
Alan Burlison
--


Re: [zfs-discuss] zpool replace not working

2008-07-27 Thread Marc Bevand
It looks like you *think* you are trying to add the new drive, when you are in 
fact re-adding the old (failing) one. A new drive should never show up as 
ONLINE in a pool with no action on your part, if only because it contains no 
partition table and no vdev label with the right pool GUID.
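
A quick way to check what is actually on the attached disk is to dump its vdev 
labels (using the c2d1 device name from your output; p0 covers the whole disk):

  $ zdb -l /dev/rdsk/c2d1p0

A brand new drive should show no valid labels, whereas the old one should print 
labels carrying the tank pool name and GUID.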

If I am right, try to add the other drive.

If I am wrong, then you somehow managed to confuse ZFS. You can prevent ZFS 
from thinking c2d1 is already part of the pool by deleting the partition table 
on it:
  $ dd if=/dev/zero of=/dev/rdsk/c2d1p0 bs=512 count=1
  $ zpool import
  (it should show you the pool is now ready to be imported)
  $ zpool import tank
  $ zpool replace tank c2d1

At this point it should be resilvering...

-marc



[zfs-discuss] zpool replace not working

2008-07-27 Thread Breandan Dezendorf
I had a drive fail in my home fileserver - I've replaced the drive, 
but I can't make the system see it properly.  I'm running Nevada b85, 
with 5 750GB drives in a raidz1 pool named "tank", booting off a 
separate 80 GB SATA drive I had lying around.

Without the new drive attached, I simply get the expected "UNAVAIL" 
message for the drive in zpool status and the pool imports just fine. 
However, with the new drive attached (to the same controller port, as 
I'm using all 6 of my ports) I get either "invalid vdev specification", 
or the pool shows up as "UNAVAIL" (even though c2d1 shows up fine) and I 
can't import the pool to run "zpool replace -f tank c2d1".

Without drive attached:
bash-3.2# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver completed after 0h0m with 0 errors on Sat Jul 26 21:18:07 2008
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          raidz1    DEGRADED     0     0     0
            c1d1    ONLINE       0     0     0
            c2d0    ONLINE       0     0     0
            c2d1    UNAVAIL      0     0     0  cannot open
            c3d0    ONLINE       0     0     0
            c4d0    ONLINE       0     0     0


With drive attached:
bash-3.2# zpool import
  pool: tank
    id: 3049365411720608557
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        tank        UNAVAIL  insufficient replicas
          raidz1    UNAVAIL  corrupted data
            c1d1    ONLINE
            c2d0    ONLINE
            c2d1    ONLINE
            c3d0    ONLINE
            c4d0    ONLINE


Suggestions?  Do I need to upgrade to a newer release of Nevada?  Do I  
need to pre-format the new drive in a fancy way?  I've got the system  
powered off for the time being, as I'm uncomfortable running without  
parity in place.

thanks,
Breandan Dezendorf
--
Network Systems Engineer
American University of Sharjah
  [EMAIL PROTECTED]
--





Re: [zfs-discuss] zfs, raidz, spare and jbod

2008-07-27 Thread James C. McPherson
Claus Guttesen wrote:
...

>>> Jul 25 13:15:00 malene arcmsr: [ID 419778 kern.notice] arcmsr0: scsi
>>> id=1 lun=3 ccb='0xff02e0ca0800' outstanding command timeout
>>> Jul 25 13:15:00 malene arcmsr: [ID 610198 kern.notice] arcmsr0: scsi
>>> id=1 lun=3 fatal error on target, device was gone
>> The command timed out because your system configuration was unexpectedly
>> changed in a manner which arcmsr doesn't support.
> 
> Are there alternative JBOD-capable SAS controllers in the same range
> as the ARC-1680 that are compatible with Solaris? I chose the
> ARC-1680 since it's well-supported on FreeBSD and Solaris.

I don't know - quite probably :) Have a look at the HCL for
Solaris 10, Solaris Express and OpenSolaris 2008.05 -

http://www.sun.com/bigadmin/hcl/
http://www.sun.com/bigadmin/hcl/data/sx/
http://www.sun.com/bigadmin/hcl/data/os/

>>> /usr/sbin/zpool status
>>>   pool: ef1
>>>  state: DEGRADED
>>> status: One or more devices are faulted in response to persistent errors.
>>>         Sufficient replicas exist for the pool to continue functioning in a
>>>         degraded state.
>>> action: Replace the faulted device, or use 'zpool clear' to mark the device
>>>         repaired.
>>>  scrub: resilver in progress, 0.02% done, 5606h29m to go
>>> config:
>>>
>>>         NAME            STATE     READ WRITE CKSUM
>>>         ef1             DEGRADED     0     0     0
>>>           raidz2        DEGRADED     0     0     0
>>>             spare       ONLINE       0     0     0
>>>               c3t0d0p0  ONLINE       0     0     0
>>>               c3t1d2p0  ONLINE       0     0     0
>>>             c3t0d1p0    ONLINE       0     0     0
>>>             c3t0d2p0    ONLINE       0     0     0
>>>             c3t0d0p0    FAULTED     35 1.61K     0  too many errors
>>>             c3t0d4p0    ONLINE       0     0     0
>>>             c3t0d5p0    DEGRADED     0     0    34  too many errors
>>>             c3t0d6p0    ONLINE       0     0     0
>>>             c3t0d7p0    ONLINE       0     0     0
>>>             c3t1d0p0    ONLINE       0     0     0
>>>             c3t1d1p0    ONLINE       0     0     0
>>>         spares
>>>           c3t1d2p0      INUSE     currently in use
>>>
>>> errors: No known data errors
>> a double disk failure while resilvering - not a good state for your
>> pool to be in.
> 
> The degraded disk came after I pulled the first disk and was not intended. :-)

That's usually the case :)


>> Can you wait for the resilver to complete? Every minute that goes
>> by tends to decrease the estimate on how long remains.
> 
> The resilver had approx. three hours remaining when the second disk
> was marked as degraded. After that the resilver process stopped, and
> so did access to the raidz2 pool.

I think that's probably to be expected.

>> In addition, why are you using p0 devices rather than GPT-labelled
>> disks (or whole-disk s0 slices) ?
> 
> My ignorance. I'm a fairly seasoned FreeBSD administrator and had
> previously used da0, da1, da2 etc. when I defined a similar raidz2 on
> FreeBSD. But when I installed Solaris I initially saw only LUN 0 on
> targets 0 and 1, so I tried the devices that I saw. And the p0 device
> in /dev/dsk was the first to respond to my zpool create command. :^)

Not to worry - every OS handles things a little differently in
that area.
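
For what it's worth, handing zpool the bare disk names (no p0 or s0 suffix) is 
what gets you whole-disk, EFI-labelled vdevs. An illustrative sketch only - the 
device list below is abbreviated, not your full set of disks:

# zpool create ef1 raidz2 c3t0d0 c3t0d1 c3t0d2 c3t0d4 c3t0d5 spare c3t1d2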

> Modifying /kernel/drv/sd.conf made all the LUNs visible.

Yes - by default the Areca will only present targets, not any
LUNs underneath, so the sd.conf modification is necessary. I'm working
on getting that fixed.
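
For reference, the sd.conf additions are the traditional per-LUN entries, 
roughly as sketched below; the target/LUN numbers are examples only (target 1 
matches the "id=1" in the arcmsr messages quoted above), and they take effect 
after a reconfiguration boot or 'update_drv -f sd':

name="sd" class="scsi" target=1 lun=1;
name="sd" class="scsi" target=1 lun=2;
name="sd" class="scsi" target=1 lun=3;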




James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog