A layout is a guarantee of ownership for the portion of the file it
covers.  No conflicting file access can occur while the layout is
granted, so if conflicting access is needed, the layout must be
recalled.  In addition, the layout must be recalled if something
happens in the cluster that would invalidate access to that part of
the file: the loss of a node causing data to move to a backup node,
cluster optimization relocating the data, storage nodes partitioning
and making the data unavailable, etc.
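
To make those triggers concrete, here is a minimal C sketch of how an
FSAL might classify cluster events into recall decisions.  The enum
and helper names are hypothetical, for illustration only; they are
not the actual nfs-ganesha API.

    /* Hypothetical classification of cluster events; illustrative
     * only, not the real nfs-ganesha FSAL interface. */
    #include <stdbool.h>

    enum cluster_event {
        EVENT_CONFLICTING_ACCESS,  /* another access conflicts with the grant */
        EVENT_NODE_LOST,           /* data failed over to a backup node */
        EVENT_DATA_REBALANCED,     /* cluster optimization moved the data */
        EVENT_STORAGE_PARTITIONED, /* data temporarily unreachable */
    };

    /* Every event above invalidates the ownership guarantee, so the
     * layout covering the affected range must be recalled. */
    static bool must_recall_layout(enum cluster_event ev)
    {
        switch (ev) {
        case EVENT_CONFLICTING_ACCESS:
        case EVENT_NODE_LOST:
        case EVENT_DATA_REBALANCED:
        case EVENT_STORAGE_PARTITIONED:
            return true;
        }
        return false;
    }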

It's probably best to recall layouts via an upcall.  FSAL_VFS is not
the best model to follow here, since VFS is not a clustered
filesystem.
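
As a rough sketch of the upcall shape (again with hypothetical names;
the real nfs-ganesha upcall vector is different), the FSAL's
cluster-event thread would just queue the recall and let the protocol
layer send CB_LAYOUTRECALL to the client asynchronously:

    /* Hypothetical upcall-style recall; illustrative only. */
    #include <stdint.h>
    #include <stdio.h>

    struct recall_request {
        uint64_t offset;   /* start of the granted range */
        uint64_t length;   /* bytes whose layout is now invalid */
    };

    /* Called from the FSAL's cluster-event thread.  It only queues
     * the recall; the protocol layer later sends CB_LAYOUTRECALL to
     * the client and waits for the LAYOUTRETURN. */
    static void queue_layoutrecall(const struct recall_request *req)
    {
        /* Real code would enqueue to the upcall thread, not print. */
        printf("recall layout [%llu, %llu)\n",
               (unsigned long long)req->offset,
               (unsigned long long)(req->offset + req->length));
    }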

Dan

On Thu, Sep 24, 2015 at 5:07 AM, Jiffin Tony Thottan
<jthot...@redhat.com> wrote:
> Hi all,
>
> Currently I am trying to add support for LAYOUTRECALL in FSAL_GLUSTER,
> so I looked through the other FSAL implementations and RFC 5661 once
> again. As far as I understand, it is a notification sent from the
> M.D.S. to the client demanding the layouts back. First I tried to
> figure out the scenarios in which a layoutrecall is useful, and the
> following came to mind (I may be wrong; please also help me find more):
>
> 1.) While I/O is being performed, the layout of the file changes due to
> a gluster-internal process.
>
> 2.) Two clients perform I/O on the same file based on layouts provided
> by two different M.D.S.es. [Currently FSAL_GLUSTER provides a layout
> for the entire file, because the entire file is located on one storage
> device.]
>
> 3.) When a brick is detached from the storage pool in gluster.
>
> But on second thought, is it necessary to have LAYOUTRECALL at all? A
> layout grants a client permission to perform I/O, but it does not
> guarantee that only that client can perform I/O on the file. And the
> fact that LAYOUTRECALL is commented out in FSAL_CEPH increases my
> doubt.
>
> And one more question: FSAL_GPFS introduced LAYOUTRECALL as part of
> the UPCALL thread, and FSAL_VFS as part of a callback thread. Which
> would be better: handling it as part of the UPCALL thread, or
> separately using another thread?
>
> Please correct me if anything mentioned above is wrong.
>
> With Regards and Thanks,
> Jiffin
>

