[zfs-discuss] Re: Backup of ZFS Filesystem with ACL 4

2007-03-26 Thread Ayaz Anjum
Hi !

I currently have an NFS server on UFS, over 5 TB with a lot of ACLs, which I was 
planning to migrate to ZFS. But since backup of ZFS with ACLs is not possible, I 
am very disappointed at not being able to go ahead with replacing UFS with ZFS.

Ayaz
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Backup of ZFS Filesystem with ACL 4

2007-03-23 Thread Ayaz Anjum
Hi guys!

Please share your experience on how to back up ZFS with ACLs using commercially 
available backup software. Has anyone tested backup of ZFS with ACLs using 
Tivoli (TSM)?

thanks

Ayaz
 
 


Re: Re[2]: [zfs-discuss] writes lost with zfs !

2007-03-11 Thread Ayaz Anjum
Hi!

I have some concerns here. From my experience in the past, touching a 
file (doing some I/O) would cause the UFS filesystem to fail over, whereas 
ZFS did not. Why is the behaviour of ZFS different from UFS? Is this 
not compromising data integrity?

thanks

Ayaz







From: Robert Milkowski [EMAIL PROTECTED]
To: Manoj Joseph [EMAIL PROTECTED], Ayaz Anjum [EMAIL PROTECTED], zfs-discuss@opensolaris.org
Subject: Re[2]: [zfs-discuss] writes lost with zfs !
Date: 03/08/2007 02:34:20 PM

Hello Manoj,

Thursday, March 8, 2007, 7:10:57 AM, you wrote:

MJ> Ayaz Anjum wrote:

>> 2. with zfs mounted on one cluster node, i created a file and keeps it
>> updating every second, then i removed the fc cable, the writes are still
>> continuing to the file system, after 10 seconds i have put back the fc
>> cable and my writes continues, no failover of zfs happens.
>>
>> seems that all IO are going to some cache. Any suggestions on whts going
>> wrong over here and whts the solution to this.

MJ> I don't know for sure. But my guess is, if you do a fsync after the
MJ> writes and wait for the fsync to complete, then you might get some
MJ> action. fsync should fail. zfs could panic the node. If it does, you
MJ> will see a failover.

Exactly.
Files must be opened with O_DSYNC, or fsync()/fdsync() should be used.
If you don't, writes are expected to be buffered and written to disk
later, which in your case is bound to fail.

If you want to guarantee that when your application writes something
it's on stable storage, then use the proper semantics shown above.


-- 
Best regards,
 Robert                     mailto:[EMAIL PROTECTED]
                            http://milek.blogspot.com
--
 

Confidentiality Notice: This e-mail and any attachments are confidential to 
the addressee and may also be privileged. If you are not the addressee of 
this e-mail, you may not copy, forward, disclose or otherwise use it in any 
way whatsoever. If you have received this e-mail by mistake, please e-mail 
the sender by replying to this message, and delete the original and any 
printout thereof.


Re: Re[2]: [zfs-discuss] writes lost with zfs !

2007-03-11 Thread Ayaz Anjum
Hi!

Well, as per my original post, I created a ZFS filesystem as part of Sun 
Cluster HAStoragePlus, and then disconnected the FC cable. Since there was 
no active I/O, the failure of the disk was not detected. I then touched a 
file in the ZFS filesystem and it went fine; only after that, when I did a 
sync, did the node panic and the ZFS filesystem fail over to the other 
node. On the other node, the file I touched is not there in the same ZFS 
filesystem, hence I am saying that data is lost. I am planning to deploy 
ZFS in a production NFS environment with over 2 TB of data where users are 
constantly updating files. Hence my concerns about data integrity. Please 
explain.

thanks

Ayaz Anjum




From: Darren Dunham [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
Date: 03/12/2007 05:45 AM
To: zfs-discuss@opensolaris.org
Subject: Re: Re[2]: [zfs-discuss] writes lost with zfs !

> I have some concerns here, from my experience in the past, touching a
> file (doing some IO) will cause the ufs filesystem to failover, unlike
> zfs where it did not ! Why the behaviour of zfs different than ufs ?

UFS always does synchronous metadata updates, so a 'touch' that creates
a file is going to require a metadata write.

ZFS writes may not necessarily hit the disk until a transaction group
flush.

 is not this compromising data integrity ?

It should not.  Is there a scenario that you are worried about?

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 











[zfs-discuss] writes lost with zfs !

2007-03-07 Thread Ayaz Anjum
Hi!

I have tested the following scenario.

I created a ZFS filesystem as part of HAStoragePlus in Sun Cluster 3.2, 
Solaris 11/06. Currently I have only one FC HBA per server.

1. There is no I/O to the ZFS mountpoint. I disconnected the FC cable. 
The filesystem on ZFS still shows as mounted (because of no I/O to the 
filesystem). I touched a file; still OK. Only when I did a sync did the 
node panic and the ZFS filesystem fail over to the other cluster node. 
However, the file which I touched is lost.

2. With ZFS mounted on one cluster node, I created a file and kept 
updating it every second. Then I removed the FC cable; the writes still 
continued to the filesystem. After 10 seconds I put the FC cable back and 
my writes continued; no failover of ZFS happened.

It seems that all I/O is going to some cache. Any suggestions on what is 
going wrong here and what the solution is?

thanks


Ayaz Anjum







