Ayaz Anjum and others,

        I think once you move to NFS over TCP in a client-server
        environment, the chance of lost data is significantly higher
        than with just disconnecting a cable.

        Scenario: before a client flushes a delayed write from its
        volatile DRAM client cache, the client reboots;

        and/or an asynchronous or delayed write is done, no error is
        returned on the write, and the error is missed on the close
        because the programmer didn't perform an fsync on the fd
        before the close and/or didn't expect that a close might fail
        (see the C sketch after these scenarios);

        and/or the TCP connection is lost and the data is never
        transferred.
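
        For the second scenario, a minimal defensive sketch in C
        (file name and payload are hypothetical, error handling
        abbreviated) flushes with fsync(2) and checks both its
        return value and close(2)'s:

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            /* Hypothetical file on an NFS mount. */
            int fd = open("/mnt/nfs/data.log",
                          O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd < 0) { perror("open"); exit(1); }

            /* An asynchronous write can "succeed" here while the
             * data still sits in the client's volatile cache. */
            if (write(fd, "record\n", 7) != 7) { perror("write"); exit(1); }

            /* Push cached pages to the server and wait for the
             * commit; a deferred write error often surfaces here. */
            if (fsync(fd) < 0) { perror("fsync"); exit(1); }

            /* close() can fail too, so check its return value. */
            if (close(fd) < 0) { perror("close"); exit(1); }
            return 0;
        }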

        Thus, I know of very few FSs that can guarantee against data
        loss. What most modern FSs try to prevent is data corruption
        and FS corruption.

        However, I am surprised that you seem to indicate that no
        hardware indication was present that some form of hardware
        degradation/failure had occurred.

        Mitchell Erblich
        ----------------

On 11-Mar-07, at 11:12 PM, Ayaz Anjum wrote:

>
> Hi!
>
> Well, as per my actual post, I created a ZFS file system as part of
> Sun Cluster HAStoragePlus, and then disconnected the FC cable. Since
> there was no active I/O, the disk failure was not detected. Then I
> touched a file in the ZFS file system and it went fine; only after
> that, when I did a sync, did the node panic and the ZFS file system
> fail over to the other node. On the other node, the file I touched
> is not there in the same ZFS file system, hence I am saying that
> data is lost. I am planning to deploy ZFS in a production NFS
> environment with above 2 TB of data where users are constantly
> updating files. Hence my concerns about data integrity.

I believe Robert and Darren have offered sufficient explanations: You  
cannot be assured of committed data unless you've sync'd it. You are  
only risking data loss if your users and/or applications assume data  
is committed without seeing a completed sync, which would be a design  
error. This applies to any filesystem.
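
For illustration, here is a minimal C sketch (paths are hypothetical)
of what "seeing a completed sync" means for a file creation like the
touch in the test above: fsync the new file, then fsync its parent
directory so the directory entry itself is committed:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create the file, as 'touch' would (path is made up). */
        int fd = open("/pool/fs/marker", O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open file"); exit(1); }
        if (fsync(fd) < 0) { perror("fsync file"); exit(1); }
        if (close(fd) < 0) { perror("close file"); exit(1); }

        /* Commit the new directory entry as well by fsync'ing the
         * parent directory.  Until some such sync completes, the
         * creation may exist only in memory and can be lost on a
         * panic or failover. */
        int dfd = open("/pool/fs", O_RDONLY);
        if (dfd < 0) { perror("open dir"); exit(1); }
        if (fsync(dfd) < 0) { perror("fsync dir"); exit(1); }
        if (close(dfd) < 0) { perror("close dir"); exit(1); }
        return 0;
    }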

--Toby

> Please explain.
>
> thanks
>
> Ayaz Anjum
>
>
>
> Darren Dunham <[EMAIL PROTECTED]>
> Sent by: [EMAIL PROTECTED]
> 03/12/2007 05:45 AM
>
> To: zfs-discuss@opensolaris.org
> Subject: Re: Re[2]: [zfs-discuss] writes lost with zfs !
>
>
> > I have some concerns here. From my experience in the past,
> > touching a file (doing some I/O) would cause the UFS file system
> > to fail over, unlike ZFS, where it did not! Why is the behaviour
> > of ZFS different from UFS?
>
> UFS always does synchronous metadata updates.  So a 'touch' that  
> creates
> a file is going to require a metadata write.
>
> ZFS writes may not necessarily hit the disk until a transaction group
> flush.
>
> > is not this compromising data integrity ?
>
> It should not.  Is there a scenario that you are worried about?
>
> -- 
> Darren Dunham                                  [EMAIL PROTECTED]
> Senior Technical Consultant    TAOS            http://www.taos.com/
> Got some Dr Pepper?                     San Francisco, CA bay area
>         < This line left intentionally blank to confuse you. >
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
