Re: [zfs-discuss] 2 servers, 1 ZFS filesystem and corruptions

2008-01-28 Thread Niksa Franceschi
Thanks for all information.

We'll try to escalate the issue :)
 
 


Re: [zfs-discuss] 2 servers, 1 ZFS filesystem and corruptions

2008-01-25 Thread Niksa Franceschi
Hi, 

the pool wasn't exported.
server1 was rebooted (with ZFS on it).
During the reboot the pool was released, and I could import it on server2
(which I did).
However, when server1 came back up it imported the pool and mounted the ZFS
filesystems even though they were already imported and mounted on server2.

As I said, what is interesting is that while both servers are up, I cannot
import the pool on one server if it is already imported on the other.
However, when a server boots it somehow skips the check for whether the same
pool is already imported on the other server, which in the end leads to the
same pool being imported on both servers, and to corruption.
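
Roughly, the sequence looks like this, as far as I can reconstruct it (the
pool name 'tank' is just a placeholder):

  server1# reboot                (pool is released, but stays listed in
                                  server1's /etc/zfs/zpool.cache)
  server2# zpool import tank     (succeeds while server1 is down, no -f needed)
  ... server1 boots, reads its stale zpool.cache and silently re-imports tank ...
  result: tank is imported and mounted on both servers at once -> corruption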
 
 


Re: [zfs-discuss] 2 servers, 1 ZFS filesystem and corruptions

2008-01-25 Thread Robert Milkowski
Hello Niksa,

Friday, January 25, 2008, 9:27:17 AM, you wrote:

NF Hi,

NF I have this setup:
NF 2x SUN V440 servers with FC adapters, with Solaris 10u4 installed.

NF Both servers see one LUN on XP storage.
NF A ZFS filesystem was created on that LUN (on server1).

NF If I export that ZFS filesystem on server1, I can import it on server2, and 
vice-versa.

NF If I have the pool imported on server1 and try to import it on
NF server2, it will fail (which is the correct behavior).

NF However, if I export the filesystem on server1, import it on server2
NF and reboot server1 - after the reboot, server1 will import the same ZFS
NF filesystem that is at that point mounted on server2, and I get
NF corruption since both systems have the same ZFS FS mounted at the same time!

NF Is there any way to avoid such behavior - as this issue only arises at
server reboot?

If the pool was exported, it shouldn't have been imported again at boot.
Are you sure you actually exported it and didn't just unmount it?

Another feature possibly useful to you is 'zpool import -R ..'.
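
For example, something along these lines (the pool name is illustrative only):

  server2# zpool export tank
  server1# zpool import -R / tank

If I remember correctly, a pool imported with an alternate root (-R) is not
recorded in /etc/zfs/zpool.cache, so it will not be auto-imported on the next
boot - which should avoid the double-import you are seeing.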


-- 
Best regards,
 Robert Milkowski   mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] 2 servers, 1 ZFS filesystem and corruptions

2008-01-25 Thread Victor Latushkin
Niksa Franceschi wrote:
 Hi, 
 
 the pool wasn't exported.
 server1 was rebooted (with ZFS on it).
 During the reboot the pool was released, and I could import it on server2
 (which I did).
 However, when server1 came back up it imported the pool and mounted the ZFS
 filesystems even though they were already imported and mounted on server2.

 As I said, what is interesting is that while both servers are up, I cannot
 import the pool on one server if it is already imported on the other.
 However, when a server boots it somehow skips the check for whether the same
 pool is already imported on the other server, which in the end leads to the
 same pool being imported on both servers, and to corruption.

You need the fix for
   6282725 hostname/hostid should be stored in the label

It is available in the latest Nevada bits, but not yet in a Solaris 10 update.

For more information please see the following link:

http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
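
Once a pool has been imported on bits that include that fix, the owning host
should be visible in the on-disk label. A quick way to check is something like
the following (the device path and exact field layout are from memory, so
treat them as an assumption):

  # zdb -l /dev/dsk/c1t0d0s0 | egrep 'hostid|hostname'
      hostid: 2213189780
      hostname: 'server1'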

Hth,
Victor


Re: [zfs-discuss] 2 servers, 1 ZFS filesystem and corruptions

2008-01-25 Thread eric kustarz

On Jan 25, 2008, at 6:06 AM, Niksa Franceschi wrote:

 Yes, the link explains quite well the issue we have.
 The only difference is that server1 can be manually rebooted, and while
 it's still down I can import the ZFS pool on server2 even without the -f
 option, and yet server1 still mounts it at the same time once it is booted up.

 Just one question, though.
 Is there any ETA for when this fix may be available as an official
 Solaris 10 patch?

The current ETA is an early build of s10u6.  We hope to have patches  
available before the full update 6.

If you have a support contract, feel free to escalate.

eric



Re: [zfs-discuss] 2 servers, 1 ZFS filesystem and corruptions

2008-01-25 Thread Niksa Franceschi
Yes, the link explains quite well the issue we have.
The only difference is that server1 can be manually rebooted, and while it's
still down I can import the ZFS pool on server2 even without the -f option,
and yet server1 still mounts it at the same time once it is booted up.
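
As a crude check (the pool name 'tank' is just a placeholder, and I am
assuming the default cache file location), I believe one can see whether
server1 will grab the pool again at boot like this:

  server1# strings /etc/zfs/zpool.cache | grep tank
  tank

If the pool name shows up in the cache file, it will be imported automatically
during the next boot regardless of where it is currently imported.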

Just one question, though.
Is there any ETA for when this fix may be available as an official Solaris 10 patch?
 
 