Thanks for the clarification, Dan. So the protocol doesn't mandate that
I/O be blocked while the layouts are being recalled.

-Soumya

On 10/13/2015 06:28 PM, Daniel Gryniewicz wrote:
> Block layouts are completely exclusive, since they're intended to hold
> filesystems, which cannot deal with competing changes anywhere in the
> block device.  File and Object layouts aren't exclusive (and the
> kernel didn't assume they were, last I checked), and depend on locks or
> delegations to provide exclusivity within a file.
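
A minimal sketch of how I read that distinction, using made-up types (this is
not the RFC 5661 encoding or the Ganesha SAL API): block layouts tolerate no
overlapping sharing, while file/object layouts can be shared and leave
exclusivity to locks or delegations.

#include <stdbool.h>

enum layout_type { LAYOUT_FILE, LAYOUT_BLOCK, LAYOUT_OBJECT };

struct layout_seg {
        enum layout_type type;
        unsigned long offset;
        unsigned long length;
};

static bool ranges_overlap(const struct layout_seg *a,
                           const struct layout_seg *b)
{
        return a->offset < b->offset + b->length &&
               b->offset < a->offset + a->length;
}

/* Returns true if granting 'req' would require recalling 'held' first. */
static bool layout_conflicts(const struct layout_seg *held,
                             const struct layout_seg *req)
{
        if (!ranges_overlap(held, req))
                return false;
        if (held->type == LAYOUT_BLOCK || req->type == LAYOUT_BLOCK)
                return true;   /* block layouts are completely exclusive */
        return false;          /* file/object: locks/delegations decide */
}

In a real server the non-block case would of course still consult the lock and
delegation state rather than simply allowing the overlap.
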
>
> As to layout handling, whether a layout is exclusive like a delegation
> depends on the layout, which is FSAL-specific.  In addition, unlike
> delegations or locks, the actual I/O does not pass through the
> MDS, so the ganesha core cannot stop that I/O.  It is up to the FSAL
> to fence the I/O when necessary, telling the DSs that the I/O must be
> blocked.  So we have to let the FSAL handle it.  (Note, this would
> also apply to blocklayout, if we had it; it's a property of the "p" in
> pNFS.)
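
If I understand correctly, the responsibilities split roughly as in the sketch
below; the helper names are hypothetical and are not the real Ganesha
FSAL/upcall interface. Since the I/O goes straight to the DSs, the FSAL first
fences it there and only then asks the core to send CB_LAYOUTRECALL to the
client.

#include <stdbool.h>
#include <stdio.h>

struct layout_range {
        unsigned long offset;
        unsigned long length;
};

/* Hypothetical hook into the cluster: tell the DSs to reject further I/O
   covered by the layout until the client returns it. */
static bool ds_fence_io(const struct layout_range *r)
{
        printf("fence I/O on DSs for [%lu, %lu)\n",
               r->offset, r->offset + r->length);
        return true;
}

/* Hypothetical hook into the MDS core: queue a CB_LAYOUTRECALL. */
static void core_queue_layoutrecall(const struct layout_range *r)
{
        printf("queue CB_LAYOUTRECALL for [%lu, %lu)\n",
               r->offset, r->offset + r->length);
}

/* What an FSAL might do when the backend invalidates part of a file. */
static void fsal_recall_layout(const struct layout_range *r)
{
        if (ds_fence_io(r))                 /* stop direct I/O first */
                core_queue_layoutrecall(r); /* then recall the layout */
}

int main(void)
{
        struct layout_range r = { .offset = 0, .length = 1UL << 20 };

        fsal_recall_layout(&r);
        return 0;
}
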
>
> Dan
>
> On Tue, Oct 13, 2015 at 2:43 AM, Soumya Koduri <skod...@redhat.com> wrote:
>> Hi Dan/Matt,
>>
>> At the recent LinuxCon Europe conference, Christoph Hellwig gave a talk
>> on "A Simple, and Scalable pNFS Server For Linux" [1].
>>
>> During that talk, he mentioned that for kernel-NFS the LayoutRecall
>> semantics and logic are the same as for DelegRecall (at least for block and
>> object-type layouts, and probably for file layouts too), i.e., for any
>> conflicting access, the I/O needs to be blocked until the layouts are recalled.
>>
>> But I guess in NFS-Ganesha we do not block the I/O at the moment while
>> recalling layouts. We would like to know whether we need to follow kernel-NFS
>> here, or whether it is left to the NFS server implementation to decide when to
>> recall layouts, and why we chose not to block the I/O.
>>
>> Also, we seem to be leaving it to the FSALs to handle/recall the layouts,
>> unlike locks/delegations, whose conflicts are checked in the common SAL layer
>> itself. Is there any particular reason behind that? Do we leave the decision
>> of when to recall a layout and/or block the conflicting I/O to the FSALs?
>>
>> [1]
>> http://events.linuxfoundation.org/events/linuxcon-europe/program/schedule
>>
>> Thanks,
>> Soumya
>>
>>
>> On 09/24/2015 06:21 PM, Daniel Gryniewicz wrote:
>>>
>>> A layout is a guarantee of ownership for the portion of the file it
>>> covers.  No other conflicting file access can be done while the
>>> layout is granted.  So, if conflicting access is needed, the layout
>>> must be recalled.  In addition, if something happens in the cluster
>>> that would invalidate that access to that part of the file (such as
>>> the loss of a node causing data to move to a backup node, cluster
>>> optimization moving the location of the data, storage nodes
>>> partitioning making the data unavailable, etc.), the layout must be
>>> recalled.
>>>
>>> It's probably best to recall layouts via an upcall.  VFS is not the
>>> best model to follow here, since it's not a clustered filesystem.
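
A minimal sketch of that upcall approach, with made-up names (not the actual
fsal_up interface): the clustered backend queues a recall event
asynchronously, and a dedicated upcall thread drains the queue and issues a
CB_LAYOUTRECALL for each event.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct recall_event {
        unsigned long offset;
        unsigned long length;
        struct recall_event *next;
};

static struct recall_event *recall_queue;
static pthread_mutex_t recall_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t recall_cond = PTHREAD_COND_INITIALIZER;

/* Called by the backend, e.g. when data moves between storage nodes. */
void upcall_layoutrecall(unsigned long offset, unsigned long length)
{
        struct recall_event *ev = malloc(sizeof(*ev));

        if (ev == NULL)
                return;
        ev->offset = offset;
        ev->length = length;
        pthread_mutex_lock(&recall_lock);
        ev->next = recall_queue;
        recall_queue = ev;
        pthread_cond_signal(&recall_cond);
        pthread_mutex_unlock(&recall_lock);
}

/* Upcall thread: turns backend events into CB_LAYOUTRECALLs. */
void *upcall_thread(void *arg)
{
        (void)arg;
        for (;;) {
                struct recall_event *ev;

                pthread_mutex_lock(&recall_lock);
                while (recall_queue == NULL)
                        pthread_cond_wait(&recall_cond, &recall_lock);
                ev = recall_queue;
                recall_queue = ev->next;
                pthread_mutex_unlock(&recall_lock);

                printf("CB_LAYOUTRECALL for [%lu, %lu)\n",
                       ev->offset, ev->offset + ev->length);
                free(ev);
        }
        return NULL;
}
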
>>>
>>> Dan
>>>
>>> On Thu, Sep 24, 2015 at 5:07 AM, Jiffin Tony Thottan
>>> <jthot...@redhat.com> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> Currently I am trying to add support for LAYOUTRECALL in FSAL_GLUSTER,
>>>> so I looked through the other
>>>> FSAL implementations and RFC 5661 once again. As far as I understand, it is
>>>> a notification sent from
>>>> the MDS to the client demanding the layouts back. First I tried to figure out
>>>> scenarios in which LAYOUTRECALL
>>>> is useful, and the following came to mind (maybe I am wrong; please also
>>>> help me find more):
>>>>
>>>> 1.) While I/O is being performed, the layout of the file changes due to a
>>>> gluster-internal process.
>>>>
>>>> 2.) Two clients perform I/O on the same file based on layouts provided by
>>>> two different MDSes.
>>>> [Currently FSAL_GLUSTER provides a layout for the entire file because the
>>>> entire file is located on one storage device.]
>>>>
>>>> 3.) A brick is detached from the storage pool in gluster.
>>>>
>>>> But on second thought, is it necessary to have LAYOUTRECALL at all? A layout
>>>> grants a client permission
>>>> to perform I/O, but it does not guarantee that only that
>>>> client can perform I/O on the file.
>>>> And seeing LAYOUTRECALL commented out in FSAL_CEPH increases my doubt.
>>>>
>>>> And one more question: FSAL_GPFS introduced LAYOUTRECALL as part of the
>>>> UPCALL thread and FSAL_VFS
>>>> as part of a callback thread. Which approach is better: handling it as
>>>> part of the UPCALL thread, or
>>>> separately using another thread?
>>>>
>>>> Please correct me if anything mentioned above is wrong.
>>>>
>>>> With Regards and Thanks,
>>>> Jiffin
>>>>
>>>>
>>
