Re: [zfs-discuss] [zfs/zpool] hang at boot

2010-05-29 Thread Erik Trimble

On 5/28/2010 1:24 PM, schatten wrote:

Hi,

whenever I create a new zfs my PC hangs at boot. Basically where the login 
screen should appear. After booting from livecd and removing the zfs the boot 
works again.
This also happened when I created a new zpool for the other half of my HDD.
Any idea why? How to solve it?
   


Schatten,

You need to be a bit more specific.

Describe exactly what you are doing, step by step, especially how your 
disks are laid out - is the boot disk a single OpenSolaris partition, or 
are you sharing the disk with other OSes?


Do you mean that you have an already-working b134 install on a disk, and 
that you're trying to add another zfs filesystem to the rpool? Or are 
you trying to add another disk/partition as a whole new zpool?   Or 
something else?


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] [zfs/zpool] hang at boot

2010-05-29 Thread schatten
Okay.

I had/have a running snv_134 install on one half of my disk. I created a zfs 
filesystem (zfs create rpool/VB) for my VirtualBox, then ran zfs set 
mountpoint=/export/home/schatten/VirtulBox rpool/VB. After a reboot it hangs 
right before the login screen should appear.
I removed the zfs filesystem with an OSOL live CD, and booting works again.

Then I tried to add the other half of my disk: first I formatted it, then ran 
something like zpool create c5d1p0 (not sure of the exact command, but zpool 
list showed the other half as up and running). Reboot, same hang as above.


Re: [zfs-discuss] [zfs/zpool] hang at boot

2010-05-29 Thread schatten
I should note that all of it works: I have access to the ZFS/zpool while 
running OSOL, and I can create files and such in the newly created zfs 
filesystem, but the reboot hangs. It looks like the reboot itself has a flaw.
Also, the reboot is not a real reboot: 2009.06 had a reboot that powered the 
PC off, while with snv_134 only the OS is restarted - when I do a reboot, 
OSOL brings me right back to the kernel info message, hostname and so on...
Maybe that is related.


Re: [zfs-discuss] [zfs/zpool] hang at boot

2010-05-29 Thread schatten
Upsala.
And another note: a shutdown brings the same result - it hangs before the login 
screen. So it happens no matter whether I do a reboot or a power cycle.
I also can't revert to OSOL 2009.06, as my hardware is not recognized: 2009.06 
won't find my two SLI graphics cards.


Re: [zfs-discuss] [zfs/zpool] hang at boot

2010-05-29 Thread Erik Trimble

On 5/29/2010 12:22 AM, schatten wrote:

Okay.

I had/have a running snv_134 install on one half of my disk. I created a zfs 
filesystem (zfs create rpool/VB) for my VirtualBox, then ran zfs set 
mountpoint=/export/home/schatten/VirtulBox rpool/VB. After a reboot it hangs 
right before the login screen should appear.
I removed the zfs filesystem with an OSOL live CD, and booting works again.

Then I tried to add the other half of my disk: first I formatted it, then ran 
something like zpool create c5d1p0 (not sure of the exact command, but zpool 
list showed the other half as up and running). Reboot, same hang as above.


OK, let me get this straight:

(1)  Your boot disk has a Solaris fdisk partition that takes up 50% of 
the actual disk space.
(2)  Inside that fdisk partition, you have b134 installed, with the 
zpool being the default 'rpool'.
(3)  The following zfs filesystems exist:
rpool/export
rpool/export/home
rpool/export/home/schatten
(4)  You do a 'zfs create rpool/VB'.
(5)  You then do 'zfs set mountpoint=/export/home/schatten/VB rpool/VB'.
(6)  Everything works fine until you reboot the system, after which it 
hangs before displaying the GDM login screen.


Right?



Also, you are NOT going to be able to use the 2nd fdisk partition on 
your boot drive - OpenSolaris only recognizes 1 Solaris fdisk partition 
per drive at this point.  It will recognize more than one Solaris 
*slice* inside an fdisk partition, though.
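
If you do want a second pool on that same drive, a rough sketch (the device 
and slice names below are only placeholders - adjust to your actual layout) 
would be to carve an unused slice out of the existing Solaris fdisk partition 
and hand that slice to ZFS:

  # format
      (select the boot disk, use the "partition" menu to define an unused
       slice, e.g. slice 7, making sure it does not overlap the rpool slice)
  # zpool create datapool c5d0s7
  # zpool status datapool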



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



[zfs-discuss] Zfs mirror boot hang at boot

2010-05-29 Thread Matt Connolly
Hi,

I'm running snv_134 on a 64-bit x86 motherboard, with 2 SATA drives. The zpool 
rpool uses the whole disk of each drive. I've installed grub on both disks, and 
mirroring seems to be working great.

I just started testing what happens when a drive fails. I kicked off some 
activities and unplugged one of the drives while it was running, the system 
kept running, and zpool status indicated that one drive was removed. Awesome. I 
plugged it back in, and it recovered perfectly.

But with one of the drives unplugged, the system hangs at boot. On both drives 
(with the other unplugged) grub loads, and the system starts to boot. However, 
it gets stuck at the "Hostname: Vault" line and never gets to "reading ZFS 
config" like it would on a normal boot.

If I reconnect both drives then booting continues correctly.

If I detach a drive from the pool, then the system also correctly boots off a 
single connected drive. However, reattaching the 2nd drive causes a whole 
resilver to occur.

Is this a bug? Or is there something else you need to do to mark the drive as 
offline first? It would be a shame if you had to do that before rebooting - it 
would make it very hard to recover if the drive was physically dead...
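
For reference, what I'd expect to have to do for a *planned* removal (device 
names below are made up) is something like:

  # zpool offline rpool c1t1d0s0
      ... pull the disk, swap it, plug the replacement or the same disk back in ...
  # zpool online rpool c1t1d0s0
  # zpool status rpool

which should only resilver what changed while the disk was offline. But that 
obviously doesn't help when a drive simply dies, which is why the hang at boot 
worries me.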

Thanks,
Matt


Re: [zfs-discuss] Zfs mirror boot hang at boot

2010-05-29 Thread Brandon High
On Sat, May 29, 2010 at 12:54 AM, Matt Connolly
matt.connolly...@gmail.com wrote:
 But with one of the drives unplugged, the system hangs at boot. On both 
 drives (with the other unplugged) grub loads, and the system starts to boot. 
 However, it gets stuck at the Hostname: Vault line and never gets to 
 reading ZFS config like it would on a normal boot.

It's a known bug in b134.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] [zfs/zpool] hang at boot

2010-05-29 Thread Erik Trimble

On 5/29/2010 12:48 AM, schatten wrote:

Yep, that is correct. The rpool also has stuff like swap and 1-2 other 
mountpoints I forgot - just the default installation layout.
I am really not sure if I did something wrong or if there is a bug. But if it 
is a bug, why am I the only one seeing it?

Hmm

Can you try this when you create the VB filesystem:

mkdir -p /export/home/schatten/VB
zfs create -o mountpoint=/export/home/schatten/VB  rpool/VB

and let me know if it hangs as before?

I tried what you are doing on a machine running b118, and it works fine.



  OpenSolaris only recognizes 1 Solaris fdisk partition per drive at
  this point.

 So I can create a zpool on my other HDD and OSOL will see and work with it?

Yes. You can either use the whole drive, or create fdisk partitions and give 
ZFS one of the fdisk partitions. Either way, ZFS will happily use the other 
hard drive.
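
For example (assuming the second disk shows up as c5d1 - substitute whatever 
your system actually reports):

  # zpool create datapool c5d1        (whole disk; ZFS puts an EFI label on it)

or, if you want to keep an fdisk layout on it:

  # fdisk /dev/rdsk/c5d1p0            (create a Solaris fdisk partition)
  # zpool create datapool c5d1p1      (hand that fdisk partition to ZFS)

Just don't point ZFS at the second fdisk partition of your *boot* disk, for 
the reason above.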


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] nfs share of nested zfs directories?

2010-05-29 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Cassandra Pugh
 
    I was wondering if there is a special option to share out a set of
 nested
    directories?  Currently if I share out a directory with
 /pool/mydir1/mydir2
    on a system, mydir1 shows up, and I can see mydir2, but nothing in
 mydir2.
    mydir1 and mydir2 are each a zfs filesystem, each shared with the
 proper
    sharenfs permissions.
    Did I miss a browse or traverse option somewhere?

My understanding is thus:

If you set the sharenfs property, then the property is inherited by child
filesystems, and consequently automatically exported.
However, if you use the dfstab, you're doing it yourself manually, and the
child filesystems are not automatically exported.

Furthermore ... Exporting is only half of the problem.  There is still the
question of mounting.

I don't know exactly how it works, but my understanding is that 
Solaris/OpenSolaris NFS clients automatically follow nested exports. Linux 
clients are a different matter.
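
A minimal sketch of the sharenfs route (the pool/dataset names are just 
examples):

  # zfs set sharenfs=rw pool/mydir1
  # zfs get -r sharenfs pool/mydir1     (mydir2 should show the value as inherited)

On a Solaris/OpenSolaris client the nested filesystem is then typically 
crossed into automatically; on a Linux client you may still need to mount 
/pool/mydir1/mydir2 explicitly.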



Re: [zfs-discuss] zfs send/recv reliability

2010-05-29 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Gregory J. Benscoter
 
 After looking through the archives I haven’t been able to assess the
 reliability of a backup procedure which employs zfs send and recv.

If there's data corruption in the zfs send datastream, then the whole
datastream is lost.

If you are piping your zfs send into zfs receive then there is no
problem.  It's ok to do this via ssh, mbuffer, etc, provided that you're not
storing the zfs send datastream expecting to receive it later.  If you're
receiving it immediately, and there is any data corruption, the zfs receive
will fail, and you'll know immediately that there was something wrong.  If
you're not storing the data stream for later, you will not have bad data
sitting around undetected giving you a false sense of security.
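
Concretely, the pipe-into-receive pattern looks something like this (host and 
dataset names are placeholders):

  # zfs snapshot -r tank/data@backup1
  # zfs send -R tank/data@backup1 | ssh backuphost zfs receive -F backuppool/data

If anything in the stream gets corrupted in transit, the receive fails on the 
spot and nothing bad lands on the destination.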

There are two reasons why they say zfs send is not a backup solution.  The
issue above is one of them.  The other issue is:  You cannot restore a
subset of the filesystem.  You can only restore the *entire* filesystem.


 Currently I’m attempting to create a script that will allow me to write
 a zfs stream to a tape via tar like below.

Despite what I've said above, there are people who do it anyway.  The logic 
runs something like: "I do a full backup every week.  I am 99% certain I'll 
never need it, and if I do, I am 99% certain the latest tape will be good.  
And if I'm wrong, then I'm 99% certain the one-week-older tape will be good..."
Couple this with "This is not the only form of backup I'm doing..."  AKA, 
some people are willing to take the calculated risk of tapes possibly 
corrupting data.


     # zfs send -R p...@something | tar -c  /dev/tape

Hmmm...  In the above, your data must all fit on a single tape.  In fact,
why use tar at all?  Just skip tar and write the stream straight to tape.  My
experience is that performance that way is terrible, though.  Perhaps mbuffer
would solve that?  I never tried.

If your whole data stream will fit on a single tape, consider backing up to
external hard drive instead (or in addition.)  The cool thing about having a
backup on hard drive is (a) no restore time necessary; just mount it and use
it.  (b) yes, you can extract a subset of the filesystem.  (c) You've
already done the zfs receive so you are already sure the data is good.
You can see the filesystem, so you *really* know the data is good.  (d) if
you run out of space on the disk, you can just add more devices to the
external pool.  ;-)  But you've got to keep the group together.

The bad thing about backup to hard drive:  If it's an external drive, it's
easy to accidentally knock out the power, which would make the filesystem
disappear and therefore the system is likely to hang.  So if you're using an
external disk, you want to attach it to a non-critical system, and pipe the
data over ssh or mbuffer or something.  Also, hard drives don't have the
same shelf life or physical-impact survival rate that tapes have.  And if
you're going to be writing once and archiving permanently, then the cost
per GB might be a factor too.


 I'm primarily concerned with the possibility of a bit flop. If this
 occurs, will the stream be lost? Or will the file that the bit flop
 occurred in be the only degraded file? Lastly, how does the reliability
 of this plan compare to more traditional backup tools like tar, cpio,
 etc.?

The advantage of zfs send is that you can do incrementals, which require
zero time to calculate.  You only need enough time to transfer the number of
bytes that have changed.  For example, I have a filesystem which takes 20
hrs to fully write to external media.  It takes 6 hours just to walk the
tree (rsync, tar, find, etc.) scanning for files that have changed and
should therefore be copied for a tar-style or cpio-style incremental
backup.  When I use zfs send instead, the total incremental process takes
only about 7 minutes on average - though of course that varies linearly
with how much data has changed.
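
The incremental version of the earlier example looks roughly like this 
(snapshot names are placeholders):

  # zfs snapshot -r tank/data@tuesday
  # zfs send -R -i @monday tank/data@tuesday | ssh backuphost zfs receive -F backuppool/data

Only the blocks that changed between @monday and @tuesday cross the wire, 
which is why it finishes in minutes instead of hours.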

The advantage of tar, cpio, etc, is that they can write to tape, without
people telling you not to, as I have done above regarding zfs send to
tape.



Re: [zfs-discuss] don't mount a zpool on boot

2010-05-29 Thread Kees Nuyt
On Thu, 20 May 2010 11:53:17 -0700, John Andrunas
j...@andrunas.net wrote:

Can I make a pool not mount on boot?  I seem to recall reading
somewhere how to do it, but can't seem to find it now.

As Tomas said, export the pool before shutdown.
If you have a pool which causes unexpected trouble at boot
time and you have no opportunity to export it, you can:
- boot from a live CD or do a failsafe boot,
- 'zpool import -R' the root pool in e.g. /a,
- move /a/etc/zfs/zpool.cache out of the way,
- init 6 for a normal boot.

This way only the root pool will be imported.
You can import any other pools afterwards.
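
In command form (the /a mount point and pool name are only examples):

  # zpool import -R /a rpool
  # mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.bad
  # init 6
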
-- 
  (  Kees Nuyt
  )
c[_]


Re: [zfs-discuss] don't mount a zpool on boot

2010-05-29 Thread Dick Hoogendijk

On Sat, 29 May 2010 20:34:54 +0200, Kees Nuyt k.n...@zonnet.nl wrote:


On Thu, 20 May 2010 11:53:17 -0700, John Andrunas
j...@andrunas.net wrote:


Can I make a pool not mount on boot?  I seem to recall reading
somewhere how to do it, but can't seem to find it now.


As Tomas said, export the pool before shutdown.


Why don't you set the canmount=noauto option on the zfs dataset?
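
For example (dataset name is only illustrative):

  # zfs set canmount=noauto tank/bigdata
  # zfs mount tank/bigdata        (mount it by hand whenever you want it)

Note this only keeps the filesystem from mounting at boot; the pool itself is 
still imported.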

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | OpenSolaris 2010.03 b134
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread morris hooten
I have 6 zfs pools, and after rebooting (init 6) the vpath device path names 
have changed for some unknown reason. But I can't detach, remove and reattach 
to the new device names... Any help, please!

pjde43m01  -  -  -  -  FAULTED  -
pjde43m02  -  -  -  -  FAULTED  -
pjde43m03  -  -  -  -  FAULTED  -
poas43m01  -  -  -  -  FAULTED  -
poas43m02  -  -  -  -  FAULTED  -
poas43m03  -  -  -  -  FAULTED  -


One pool listed below as example

 pool: poas43m01
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
poas43m01   UNAVAIL  0 0 0  insufficient replicas
  vpath4c   UNAVAIL  0 0 0  cannot open


  
before

30. vpath1a IBM-2145- cyl 8190 alt 2 hd 64 sec 256
  /pseudo/vpat...@1:1
  31. vpath2a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@2:2
  32. vpath3a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@3:3
  33. vpath4a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@4:4
  34. vpath5a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@5:5
  35. vpath6a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@6:6
  36. vpath7a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@7:7


after
  
30. vpath1a IBM-2145- cyl 8190 alt 2 hd 64 sec 256
  /pseudo/vpat...@1:1
  31. vpath8a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@8:8
  32. vpath9a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@9:9
  33. vpath10a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@10:10
  34. vpath11a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@11:11
  35. vpath12a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@12:12
  36. vpath13a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@13:13




{usbderp...@root} zpool detach poas43m03 vpath2c
cannot open 'poas43m03': pool is unavailable


Re: [zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread Ragnar Sundblad

On 30 May 2010, at 01.53, morris hooten wrote:

 I have 6 zfs pools, and after rebooting (init 6) the vpath device path names 
 have changed for some unknown reason. But I can't detach, remove and reattach 
 to the new device names... Any help, please!
 
 [...]

I have never seen /pseudo devices - is this Veritas, or what is it?
Is it even Solaris?

Just a wild guess:
If the disk devices are in a directory (file system?) called /pseudo,
zpool won't find them there, as it will only look in /dev/dsk by
default.
You could try 'zpool import -d /pseudo' to have it look there instead.

/ragge



Re: [zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread Mark Musante

Can you find the devices in /dev/rdsk?  I see there is a path in /pseudo at 
least, but the zpool import command only looks in /dev.  One thing you can try 
is doing this:

# mkdir /tmpdev
# ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a

And then see if 'zpool import -d /tmpdev' finds the pool.
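
If that shows the pool, the import itself would then be along the lines of 
(pool name taken from your listing):

  # zpool import -d /tmpdev poas43m01
  # zpool status poas43m01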


On 29 May, 2010, at 19.53, morris hooten wrote:

 I have 6 zfs pools, and after rebooting (init 6) the vpath device path names 
 have changed for some unknown reason. But I can't detach, remove and reattach 
 to the new device names... Any help, please!
 
 [...]



Re: [zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread Richard Elling
Also, the zpool.cache may be out of date.  To clear its entries, run

  zpool export poas43m01

and ignore any errors.  Then run

  zpool import

and see if the pool is shown as importable, perhaps with new device names.
If not, then try the 'zpool import -d' option that Mark described.
 -- richard

On May 29, 2010, at 4:53 PM, morris hooten wrote:

 I have 6 zfs pools, and after rebooting (init 6) the vpath device path names 
 have changed for some unknown reason. But I can't detach, remove and reattach 
 to the new device names... Any help, please!
 
 [...]



-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/






Re: [zfs-discuss] zfs send/recv reliability

2010-05-29 Thread Richard Elling
On May 28, 2010, at 10:35 AM, Bob Friesenhahn wrote:
 On Fri, 28 May 2010, Gregory J. Benscoter wrote:
 I'm primarily concerned with the possibility of a bit flop. If this 
 occurs, will the stream be lost? Or will the file that the bit flop occurred 
 in be the only degraded file? Lastly, how does the reliability of this plan 
 compare to more traditional backup tools like tar, cpio, etc.?
 
 The whole stream will be rejected if a single bit is flopped.  Tar and cpio 
 will happily barge on through the error.

... without reporting the error.  Silent errors can be worse than detected
errors :-(
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/






Re: [zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread Erik Trimble
Are the indicated devices actually under /pseudo, or are they really 
under /devices/pseudo?


Also, have you tried a 'devfsadm -C' to re-configure the /dev links? 
This might allow the system to recognize the new vpath devices...



-Erik



On 5/29/2010 4:53 PM, morris hooten wrote:

I have 6 zfs pools, and after rebooting (init 6) the vpath device path names 
have changed for some unknown reason. But I can't detach, remove and reattach 
to the new device names... Any help, please!

[...]



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
