Re: [zfs-discuss] Zpool import not working

2010-06-13 Thread zfsnoob4
Thank you. The -D option works.

And yes, now I feel a lot more confident about playing around with the FS. I'm 
planning on moving an existing RAID1 NTFS setup to ZFS, but since I'm on a 
budget I only have three drives in total to work with. I want to make sure I 
know what I'm doing before I mess around with anything.

Also, I can confirm that the cache flush setting is not ALWAYS needed for the 
import. I have OpenSolaris build 134 in VirtualBox, and I didn't enable cache 
flushing, yet after destroying the pool the import still worked correctly with 
the -D option. I emphasize always because if you are writing to the disk while 
you destroy it, things may not go so well; I haven't tested this.

Thanks for your help.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool import not working

2010-06-12 Thread zfsnoob4
Thanks, that works, but only when I do a proper export first.

If I export the pool then I can import with:
zpool import -d /
(test files are located in /)

but if I destroy the pool, then I can no longer import it back, even though the 
files are still there. Is this normal?


Thanks for your help.


Re: [zfs-discuss] Zpool import not working

2010-06-12 Thread Mark Musante

I'm guessing that the VirtualBox VM is ignoring write cache flushes.  See this 
for more info:
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661

On 12 Jun, 2010, at 5.30, zfsnoob4 wrote:

> Thanks, that works, but only when I do a proper export first.
>
> If I export the pool then I can import with:
> zpool import -d /
> (test files are located in /)
>
> but if I destroy the pool, then I can no longer import it back, even though
> the files are still there. Is this normal?
>
> Thanks for your help.



Re: [zfs-discuss] Zpool import not working

2010-06-12 Thread Neil Perrin

On 06/12/10 17:13, zfsnoob4 wrote:

> Thanks. As I discovered from that post, VB does not have cache flushing
> enabled by default; IgnoreFlush must be explicitly set to 0:
>
> VBoxManage setextradata VMNAME \
> VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush 0
>
> where VMNAME is the name of your virtual machine.
>
> Although I tried that and it returned with no output (indicating it worked),
> it still won't detect a pool that has been destroyed. Is there any way to
> detect if flushes are working from inside the OS? Maybe a command that tells
> you if cache flushing is enabled?
>
> Thanks.

You also need the -D flag. I could successfully import. This was 
running the latest bits:


: trasimene ; mkdir /pf
: trasimene ; mkfile 100m /pf/a /pf/b /pf/c
: trasimene ; zpool create whirl /pf/a /pf/b log /pf/c
: trasimene ; zpool destroy whirl
: trasimene ; zpool import -D -d /pf
 pool: whirl
   id: 1406684148029707587
state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

   whirl   ONLINE
 /pf/a ONLINE
 /pf/b ONLINE
   logs
 /pf/c ONLINE
: trasimene ; zpool import -D -d /pf whirl
: trasimene ; zpool status whirl
 pool: whirl
state: ONLINE
scan: none requested
config:

   NAMESTATE READ WRITE CKSUM
   whirl   ONLINE   0 0 0
 /pf/a ONLINE   0 0 0
 /pf/b ONLINE   0 0 0
   logs
 /pf/c ONLINE   0 0 0

errors: No known data errors
: trasimene ;


It would, of course, have been easier if you'd been using real devices
but I understand you want to experiment first...


[zfs-discuss] Zpool import not working

2010-06-11 Thread zfsnoob4
Hey,

I'm running some tests right now before setting up my server. I'm running 
Nexenta Core 3.02 (RC2, based on OpenSolaris build 134, I believe) in 
VirtualBox.

To do the test, I'm creating three empty files and then making a raidz pool 
out of them:
mkfile -n 1g /foo
mkfile -n 1g /foo1
mkfile -n 1g /foo2

Then I make a zpool:
zpool create testpool raidz /foo /foo1 /foo2

Now I destroy the pool and attempt to restore it:
zpool destroy testpool

But when I try to list available imports, the list is empty:
zpool import -D
returns nothing.

zpool import testpool
also returns nothing.

Even if I export the pool instead (that is, before destroying it):
zpool export testpool

I see it disappear from the zpool list, but then I can't import it (the 
commands return nothing).

Is this due to the fact that I'm using test files instead of real drives?

Thanks.


Re: [zfs-discuss] Zpool import not working

2010-06-11 Thread Neil Perrin

On 06/11/10 22:07, zfsnoob4 wrote:

> [...]
> Is this due to the fact that I'm using test files instead of real drives?

- Yes.

By default, zpool import looks in /dev/dsk. You need to specify the directory
(using -d dir) if your pool devices are located elsewhere. See the zpool man
page.
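For anyone following along, the whole file-backed test cycle looks roughly like this. This is a sketch, not a canonical recipe: the /var/tmp/zfs-test path and the truncate fallback for mkfile are my own choices, and it needs a ZFS-capable system, so it skips itself elsewhere.

```shell
#!/bin/sh
# Sketch of the file-backed pool test cycle. The directory below is an
# arbitrary example; pick anywhere with a few GB free.
dir=/var/tmp/zfs-test

if command -v zpool >/dev/null 2>&1; then
    mkdir -p "$dir"
    for f in d0 d1 d2; do
        # mkfile exists on Solaris-family systems; fall back to truncate
        mkfile -n 1g "$dir/$f" 2>/dev/null || truncate -s 1g "$dir/$f"
    done
    zpool create testpool raidz "$dir/d0" "$dir/d1" "$dir/d2"
    zpool export testpool
    zpool import -d "$dir"            # lists exported pools found in $dir
    zpool import -d "$dir" testpool   # re-import by name
    zpool destroy testpool
    zpool import -D -d "$dir"         # -D is required to see destroyed pools
    zpool import -D -d "$dir" testpool
    zpool destroy testpool            # clean up
else
    echo "zpool not found; skipping"
fi
```

Note that without -d the destroyed pool is invisible even with -D, because the scan only covers /dev/dsk.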

Neil.


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-10-20 Thread Ross
If you have any backups of your boot volume, I found that the pool can be 
mounted on boot provided it's still listed in your /etc/zfs/zpool.cache file.  
I've moved to OpenSolaris now purely so I can take snapshots of my boot volume 
and backup that file.

The relevant bug that needs fixing is this one, but I've no idea how long it 
might take before that fix goes in.
http://bugs.opensolaris.org/view_bug.do?bug_id=6733267


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-10-18 Thread James
Any updates on this ?
I created a pool in 5/08, then added a slog device, which sadly failed.
I can no longer import the pool; it gives "cannot import 'mypool': one or more 
devices is currently unavailable".
I have tried it with the latest OpenSolaris pre-release (2008.11, based on 
Nevada build 99) and still no luck.

Please advise,
James


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-06 Thread Ross Smith

Hmm... got a bit more information for you to add to that bug I think.
 
Zpool import also doesn't work if you have mirrored log devices and either one 
of them is offline.
 
I created two ramdisks with:
# ramdiskadm -a rc-pool-zil-1 256m
# ramdiskadm -a rc-pool-zil-2 256m
 
And added them to the pool with:
# zpool add rc-pool log mirror /dev/ramdisk/rc-pool-zil-1 
/dev/ramdisk/rc-pool-zil-2
 
I can reboot fine, the pool imports ok without the ZIL and I have a script that 
recreates the ramdisks and adds them back to the pool:

#!/sbin/sh
state=$1
case $state in
'start')
   echo 'Starting Ramdisks'
   /usr/sbin/ramdiskadm -a rc-pool-zil-1 256m
   /usr/sbin/ramdiskadm -a rc-pool-zil-2 256m
   echo 'Attaching to ZFS ZIL'
   /usr/sbin/zpool replace test /dev/ramdisk/rc-pool-zil-1
   /usr/sbin/zpool replace test /dev/ramdisk/rc-pool-zil-2
   ;;
'stop')
   ;;
esac
 
However, if I export the pool, and delete one ramdisk to check that the 
mirroring works fine, the import fails:
# zpool export rc-pool
# ramdiskadm -d rc-pool-zil-1
# zpool import rc-pool
cannot import 'rc-pool': one or more devices is currently unavailable
 
Ross
> Date: Mon, 4 Aug 2008 10:42:43 -0600
> From: [EMAIL PROTECTED]
> Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool...
> [...]


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-06 Thread Neil Perrin
Ross,

Thanks, I have updated the bug with this info.

Neil.

Ross Smith wrote:
 Hmm... got a bit more information for you to add to that bug I think.
  
 Zpool import also doesn't work if you have mirrored log devices and 
 either one of them is offline.
  
 I created two ramdisks with:
 # ramdiskadm -a rc-pool-zil-1 256m
 # ramdiskadm -a rc-pool-zil-2 256m
  
 And added them to the pool with:
 # zpool add rc-pool log mirror /dev/ramdisk/rc-pool-zil-1 
 /dev/ramdisk/rc-pool-zil-2
  
 I can reboot fine, the pool imports ok without the ZIL and I have a 
 script that recreates the ramdisks and adds them back to the pool:
 #!/sbin/sh
 state=$1
 case $state in
 'start')
echo 'Starting Ramdisks'
/usr/sbin/ramdiskadm -a rc-pool-zil-1 256m
/usr/sbin/ramdiskadm -a rc-pool-zil-2 256m
echo 'Attaching to ZFS ZIL'
/usr/sbin/zpool replace test /dev/ramdisk/rc-pool-zil-1
/usr/sbin/zpool replace test /dev/ramdisk/rc-pool-zil-2
;;
 'stop')
;;
 esac
  
 However, if I export the pool, and delete one ramdisk to check that the 
 mirroring works fine, the import fails:
 # zpool export rc-pool
 # ramdiskadm -d rc-pool-zil-1
 # zpool import rc-pool
 cannot import 'rc-pool': one or more devices is currently unavailable
  
 Ross
 
 
   Date: Mon, 4 Aug 2008 10:42:43 -0600
   From: [EMAIL PROTECTED]
   Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool...
   To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
   CC: zfs-discuss@opensolaris.org
  
  
  
   Richard Elling wrote:
Ross wrote:
I'm trying to import a pool I just exported but I can't, even -f 
 doesn't help. Every time I try I'm getting an error:
cannot import 'rc-pool': one or more devices is currently 
 unavailable
   
Now I suspect the reason it's not happy is that the pool used to 
 have a ZIL :)
   
   
Correct. What you want is CR 6707530, log device failure needs some 
 work
http://bugs.opensolaris.org/view_bug.do?bug_id=6707530
which Neil has been working on, scheduled for b96.
  
   Actually no. That CR mentioned the problem and talks about splitting out
   the bug, as it's really a separate problem. I've just done that and 
 here's
   the new CR which probably won't be visible immediately to you:
  
   6733267 Allow a pool to be imported with a missing slog
  
   Here's the Description:
  
   ---
   This CR is being broken out from 6707530 log device failure needs 
 some work
  
   When Separate Intent logs (slogs) were designed they were given equal 
 status in the pool device tree.
   This was because they can contain committed changes to the pool.
   So if one is missing it is assumed to be important to the integrity 
 of the
   application(s) that wanted the data committed synchronously, and thus
   a pool cannot be imported with a missing slog.
   However, we do allow a pool to be missing a slog on boot up if
   it's in the /etc/zfs/zpool.cache file. So this sends a mixed message.
  
   We should allow a pool to be imported without a slog if -f is used
   and to not import without -f but perhaps with a better error message.
  
   It's the guidsum check that actually rejects imports with missing 
 devices.
   We could have a separate guidsum for the main pool devices (non 
 slog/cache).
   ---
  
 
 
 


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-05 Thread Ross Smith

Just a thought, before I go and wipe this zpool, is there any way to manually 
recreate the /etc/zfs/zpool.cache file?
 
Ross

> Date: Mon, 4 Aug 2008 10:42:43 -0600
> From: [EMAIL PROTECTED]
> Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool...
> [...]


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-05 Thread Richard Elling
Ross Smith wrote:
 Just a thought, before I go and wipe this zpool, is there any way to 
 manually recreate the /etc/zfs/zpool.cache file?

Do you have a copy in a snapshot?  ZFS for root is awesome!
 -- richard

  
 Ross

> Date: Mon, 4 Aug 2008 10:42:43 -0600
> From: [EMAIL PROTECTED]
> Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool...
> [...]


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-05 Thread Ross Smith

No, but that's a great idea!  I'm on a UFS root at the moment, will have a look 
at using ZFS next time I re-install.
> Date: Tue, 5 Aug 2008 07:59:35 -0700
> From: [EMAIL PROTECTED]
> Subject: Re: [zfs-discuss] Zpool import not working - I broke my pool...
>
> Do you have a copy in a snapshot? ZFS for root is awesome!
> -- richard
> [...]


[zfs-discuss] Zpool import not working - I broke my pool...

2008-08-04 Thread Ross
I'm trying to import a pool I just exported but I can't, even -f doesn't help.  
Every time I try I'm getting an error:
cannot import 'rc-pool':  one or more devices is currently unavailable

Now I suspect the reason it's not happy is that the pool used to have a ZIL :)

However I know the pool works fine without the ZIL as my ZIL was a ramdisk and 
the server's been rebooted a good few times since doing that.  There have been 
no problems mounting the pool and each time after booting if I wanted a ramdisk 
ZIL again I just created a new one and ran zpool replace to add it to the pool.

The reason I exported the pool was because my ramdisk was called 'test' and I 
wanted to replace it with one called 'rc-pool-zil'.  I didn't have enough RAM 
to have two disks online at once so to save a reboot I exported the pool, 
deleted the 'test' ramdisk and created my new one.

The only problem is that I can't import the pool any more.  I'm beginning to 
think I should have rebooted instead...

The output of the relevant commands is:

# zpool import
  pool: rc-pool
id: 11547678520047091246
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

rc-pool UNAVAIL  missing device
  mirrorONLINE
c1t1d0  ONLINE
c2t0d0  ONLINE
c1t2d0  ONLINE
  mirrorDEGRADED
c2t1d0  UNAVAIL  cannot open
c1t3d0  ONLINE
c2t2d0  ONLINE
  mirrorONLINE
c1t4d0  ONLINE
c2t3d0  ONLINE
c1t5d0  ONLINE
  mirrorONLINE
c2t4d0  ONLINE
c1t6d0  ONLINE
c2t5d0  ONLINE
  mirrorONLINE
c1t7d0  ONLINE
c2t6d0  ONLINE
c2t7d0  ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.

# zpool import -f rc-pool
cannot import 'rc-pool': one or more devices is currently unavailable

Even after a reboot I get the same error.  Creating a ramdisk again with the 
original name doesn't seem to do anything either.  Checking the Sun error 
message it says a top level device isn't available and the pool can't be 
opened.  Well, the only thing it could possibly be is the ZIL, which seems odd 
since ZFS could certainly mount the pool fine without it on boot.  Can ZFS 
really not import a pool if the ZIL is missing?

If so, doesn't that make this a potential issue when people do start using 
separate media for the ZIL?  If the ZIL hardware fails for any reason, do you 
lose access to your entire pool?  What if you're using battery-backed NVRAM 
but power is off long enough to wipe the device?  What if a server fails and 
you need to move the disks to a new chassis?

Can anybody help me get this pool back online?

thanks,

Ross
 
 


Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-04 Thread Richard Elling
Ross wrote:
 I'm trying to import a pool I just exported but I can't, even -f doesn't 
 help.  Every time I try I'm getting an error:
 cannot import 'rc-pool':  one or more devices is currently unavailable

 Now I suspect the reason it's not happy is that the pool used to have a ZIL :)
   

Correct.  What you want is CR 6707530, log device failure needs some work
http://bugs.opensolaris.org/view_bug.do?bug_id=6707530
which Neil has been working on, scheduled for b96.

 However I know the pool works fine without the ZIL as my ZIL was a ramdisk 
 and the server's been rebooted a good few times since doing that.  There have 
 been no problems mounting the pool and each time after booting if I wanted a 
 ramdisk ZIL again I just created a new one and ran zpool replace to add it to 
 the pool.

 The reason I exported the pool was because my ramdisk was called 'test' and I 
 wanted to replace it with one called 'rc-pool-zil'.  I didn't have enough RAM 
 to have two disks online at once so to save a reboot I exported the pool, 
 deleted the 'test' ramdisk and created my new one.

 The only problem is that I can't import the pool any more.  I'm beginning to 
 think I should have rebooted instead...
   

It still would have failed.

 The output of the relevant commands is:

 # zpool import
   pool: rc-pool
 id: 11547678520047091246
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
   devices and try again.
see: http://www.sun.com/msg/ZFS-8000-6X
 config:

   rc-pool UNAVAIL  missing device
 mirrorONLINE
   c1t1d0  ONLINE
   c2t0d0  ONLINE
   c1t2d0  ONLINE
 mirrorDEGRADED
   c2t1d0  UNAVAIL  cannot open
   c1t3d0  ONLINE
   c2t2d0  ONLINE
 mirrorONLINE
   c1t4d0  ONLINE
   c2t3d0  ONLINE
   c1t5d0  ONLINE
 mirrorONLINE
   c2t4d0  ONLINE
   c1t6d0  ONLINE
   c2t5d0  ONLINE
 mirrorONLINE
   c1t7d0  ONLINE
   c2t6d0  ONLINE
   c2t7d0  ONLINE

   Additional devices are known to be part of this pool, though their
   exact configuration cannot be determined.

 # zpool import -f rc-pool
 cannot import 'rc-pool': one or more devices is currently unavailable

 Even after a reboot I get the same error.  Creating a ramdisk again with the 
 original name doesn't seem to do anything either.  Checking the Sun error 
 message it says a top level device isn't available and the pool can't be 
 opened.  Well, the only thing it could possibly be is the ZIL, which seems 
 odd since ZFS could certainly mount the pool fine without it on boot.  Can 
 ZFS really not import a pool if the ZIL is missing?
   

Yes.  Pending the above fix.  Prior to the fix, the separate ZIL log
was considered to be another top-level vdev.  This is why it refuses
to import, as it believes it is missing a top-level vdev instead of a
(potentially empty) ZIL.

 If so, doesn't that make this a potential issue when people do start using 
 separate media for the ZIL?  If the ZIL hardware fails for any reason, do you 
 lose access to your entire pool?  What if you're using battery-backed NVRAM 
 but power is off long enough to wipe the device?  What if a server fails and 
 you need to move the disks to a new chassis?

 Can anybody help me get this pool back online?
   

If you look in the archives, there was a way to work around this
discussed a few months ago.  It isn't pretty, and the real fix is in the
above CR.
 -- richard




Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-08-04 Thread Neil Perrin


Richard Elling wrote:
 Ross wrote:
 I'm trying to import a pool I just exported but I can't, even -f doesn't 
 help.  Every time I try I'm getting an error:
 cannot import 'rc-pool':  one or more devices is currently unavailable

 Now I suspect the reason it's not happy is that the pool used to have a ZIL 
 :)
   
 
 Correct.  What you want is CR 6707530, log device failure needs some work
 http://bugs.opensolaris.org/view_bug.do?bug_id=6707530
 which Neil has been working on, scheduled for b96.

Actually no. That CR mentioned the problem and talks about splitting out
the bug, as it's really a separate problem. I've just done that and here's
the new CR which probably won't be visible immediately to you:

6733267 Allow a pool to be imported with a missing slog

Here's the Description:

---
This CR is being broken out from 6707530 log device failure needs some work

When Separate Intent logs (slogs) were designed they were given equal status in 
the pool device tree.
This was because they can contain committed changes to the pool.
So if one is missing it is assumed to be important to the integrity of the
application(s) that wanted the data committed synchronously, and thus
a pool cannot be imported with a missing slog.
However, we do allow a pool to be missing a slog on boot up if
it's in the /etc/zfs/zpool.cache file. So this sends a mixed message.

We should allow a pool to be imported without a slog if -f is used
and to not import without -f but perhaps with a better error message.

It's the guidsum check that actually rejects imports with missing devices.
We could have a separate guidsum for the main pool devices (non slog/cache).
---
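To make that guidsum point concrete, here is a toy shell model. It is purely illustrative, with made-up GUID values, and is not ZFS's actual label code: the label effectively records the sum of all top-level vdev GUIDs, slog included, so recomputing the sum without the missing slog's GUID no longer matches and the import is rejected.

```shell
#!/bin/sh
# Toy model of the guidsum check described above (illustrative only).
# A real label stores a sum over every top-level vdev GUID, slog included.
main_guids="1111 2222 3333"   # made-up GUIDs for the data vdevs
slog_guid=9999                # made-up GUID for the slog

guid_sum() {
    total=0
    for g in "$@"; do total=$((total + g)); done
    echo "$total"
}

recorded=$(guid_sum $main_guids $slog_guid)  # computed at pool creation
present=$(guid_sum $main_guids)              # slog device now missing

if [ "$recorded" -ne "$present" ]; then
    echo "guidsum mismatch: refusing import"   # this branch is taken
fi
```

The split suggested above (a separate guidsum for the non-slog/cache devices) would amount to also recording the sum over main_guids alone, so a missing slog no longer invalidates the main-pool check.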
