Re: [Nfs-ganesha-devel] Race between fsal_find_fd() and 'open downgrade'

2018-01-10 Thread Pradeep
Thanks Frank. I will try it out. Any concerns about holding the lock
across system calls (even though it is a read lock)?
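
For reference, the pattern in question would look roughly like this (a
minimal sketch; the fd_lock field and its placement are illustrative,
not necessarily what the patch uses):

/* vfs_write2(): hold the lock for read across the actual I/O so the
 * fd cannot be closed out from under us. */
PTHREAD_RWLOCK_rdlock(&myself->fd_lock);
status = find_fd(&my_fd, obj_hdl, bypass, state, openflags,
		 &has_lock, &closefd, false);
if (!FSAL_IS_ERROR(status))
	nb_written = pwrite(my_fd, buffer, buffer_size, offset);
PTHREAD_RWLOCK_unlock(&myself->fd_lock);

/* vfs_reopen2(): take the same lock for write before swapping fds,
 * so the swap waits for in-flight I/O to drain. */
PTHREAD_RWLOCK_wrlock(&myself->fd_lock);
vfs_close_my_fd(my_share_fd);
*my_share_fd = fd;
PTHREAD_RWLOCK_unlock(&myself->fd_lock);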

On 1/10/18, Frank Filz wrote:
> We have a patch under review for FSAL_GPFS for that issue; we are waiting for
> the submitter to extend the patch to cover the other FSALs as well. If you
> want to verify that the fix would work for your case, it should be easy to
> take it and do the same thing in FSAL_VFS.
>
> https://review.gerrithub.io/#/c/390141/
>
> As soon as this patch is merged into V2.6, it will be slated for backport to
> V2.5-stable.
>
> Frank
>
>> -----Original Message-----
>> From: Pradeep [mailto:pradeep.tho...@gmail.com]
>> Sent: Wednesday, January 10, 2018 8:06 PM
>> To: nfs-ganesha-devel 
>> Subject: [Nfs-ganesha-devel] Race between fsal_find_fd() and 'open
>> downgrade'
>>
>> Hello,
>>
>> I'm seeing a write failure because another thread in ganesha doing an 'open
>> downgrade' closed the FD that find_fd() returned. Any suggestions on how to
>> fix the race?
>>
>> vfs_reopen2()
>> {...
>> status = vfs_open_my_fd(myself, openflags, posix_flags, my_fd);
>>
>> if (!FSAL_IS_ERROR(status)) {
>> /* Close the existing file descriptor and copy the new
>>  * one over.
>>  */
>> vfs_close_my_fd(my_share_fd);
>> *my_share_fd = fd;
>>
>> vfs_write2()
>> {
>> ..
>> status = find_fd(&my_fd, obj_hdl, bypass, state, openflags,
>>  &has_lock, &closefd, false);
>>
>> if (FSAL_IS_ERROR(status)) {
>> LogDebug(COMPONENT_FSAL,
>>  "find_fd failed %s", msg_fsal_err(status.major));
>> goto out;
>> }
>>
>> fsal_set_credentials(op_ctx->creds);
>>
>> nb_written = pwrite(my_fd, buffer, buffer_size, offset);
>>
>>


Re: [Nfs-ganesha-devel] Race between fsal_find_fd() and 'open downgrade'

2018-01-10 Thread Frank Filz
We have a patch under review for FSAL_GPFS for that issue; we are waiting for
the submitter to extend the patch to cover the other FSALs as well. If you
want to verify that the fix would work for your case, it should be easy to
take it and do the same thing in FSAL_VFS.

https://review.gerrithub.io/#/c/390141/

As soon as this patch is merged into V2.6, it will be slated for backport to
V2.5-stable.

Frank

> -----Original Message-----
> From: Pradeep [mailto:pradeep.tho...@gmail.com]
> Sent: Wednesday, January 10, 2018 8:06 PM
> To: nfs-ganesha-devel 
> Subject: [Nfs-ganesha-devel] Race between fsal_find_fd() and 'open
> downgrade'
> 
> Hello,
> 
> I'm seeing a write failure because another thread in ganesha doing an 'open
> downgrade' closed the FD that find_fd() returned. Any suggestions on how to
> fix the race?
> 
> vfs_reopen2()
> {...
> status = vfs_open_my_fd(myself, openflags, posix_flags, my_fd);
> 
> if (!FSAL_IS_ERROR(status)) {
> /* Close the existing file descriptor and copy the new
>  * one over.
>  */
> vfs_close_my_fd(my_share_fd);
> *my_share_fd = fd;
> 
> vfs_write2()
> {
> ..
> status = find_fd(&my_fd, obj_hdl, bypass, state, openflags,
>  &has_lock, &closefd, false);
> 
> if (FSAL_IS_ERROR(status)) {
> LogDebug(COMPONENT_FSAL,
>  "find_fd failed %s", msg_fsal_err(status.major));
> goto out;
> }
> 
> fsal_set_credentials(op_ctx->creds);
> 
> nb_written = pwrite(my_fd, buffer, buffer_size, offset);
> 


[Nfs-ganesha-devel] Race between fsal_find_fd() and 'open downgrade'

2018-01-10 Thread Pradeep
Hello,

I'm seeing a write failure because another thread in ganesha doing an 'open
downgrade' closed the FD that find_fd() returned. Any suggestions on how to
fix the race?

vfs_reopen2()
{...
status = vfs_open_my_fd(myself, openflags, posix_flags, my_fd);

if (!FSAL_IS_ERROR(status)) {
/* Close the existing file descriptor and copy the new
 * one over.
 */
vfs_close_my_fd(my_share_fd);
*my_share_fd = fd;

vfs_write2()
{
..
status = find_fd(&my_fd, obj_hdl, bypass, state, openflags,
 &has_lock, &closefd, false);

if (FSAL_IS_ERROR(status)) {
LogDebug(COMPONENT_FSAL,
 "find_fd failed %s", msg_fsal_err(status.major));
goto out;
}

fsal_set_credentials(op_ctx->creds);

nb_written = pwrite(my_fd, buffer, buffer_size, offset);
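
To spell out the interleaving that loses the fd (an annotated sketch of
the two paths above):

/* Thread A: vfs_write2()            Thread B: vfs_reopen2()
 *
 * find_fd() returns my_fd
 *                                   vfs_open_my_fd() -> new fd
 *                                   vfs_close_my_fd(my_share_fd)
 *                                     (closes the fd Thread A is
 *                                      about to use)
 * pwrite(my_fd, ...) fails with
 * EBADF -- or worse, the fd number
 * has already been recycled and the
 * write lands on the wrong file.
 */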



Re: [Nfs-ganesha-devel] Possibility of converting FSAL_VFS and FSAL_XFS to use stackable FSALS

2018-01-10 Thread Frank Filz
> Hi Frank, others,
> 
> Frank Filz wrote on Tue, Jan 09, 2018 at 03:19:46PM -0800:
> > All in all, these functions are not a great fit for the FSAL API so
> > I'm not sure it would be a good solution. Forcing some of the
> > functions into FSAL methods would require some code duplication that
> > loses some of the advantage of the mechanism. There would also be a
> > question of how things like the file descriptors and fsal_filesystems
> are shared between the main FSAL_VFS and the underlying stacked FSAL.
> 
> Actually, we were thinking of making LUSTRE a stackable FSAL on top of VFS as
> well, so when this was brought up on the phone I thought it was a good idea.
> But now that I'm reading this, I have a more basic question: is there still
> any use for the XFS FSAL?
> Same question for PANFS, I guess; could we tell if anyone is still using that?
> The last commit that was specific to PANFS was a license change in 2015...
> There have been many likely-untested changes (multi-fd, mdcache...) since...
> 
> 
> Historically, XFS exposed handle functions through libhandle before the
> kernel VFS did. Now it looks like both Linux and FreeBSD have native
> variants of those calls, so either way we only have one FSAL to build, with
> slightly different guts that can be abstracted transparently, as we
> currently do.
> If it's too much effort to maintain XFS and PANFS and they aren't
> used/tested anymore, my vote would be to just rip them out.

If we don't need to support older kernels or seamless migration (no client
unmount), XFS is unnecessary, and in fact carries the danger of being less
tested. Really, though, it uses almost the same mechanism we use for FreeBSD:
swapping out the system calls, dealing with fsid differences, and handling
differences in exactly how the handle is managed.

PANFS should be dead; Panasas chose to go with the FreeBSD kernel NFS server
for their NFS implementation.

I'm not even sure we haven't managed to break something on FreeBSD since I
last ran it. I know the folks at iXsystems were exploring Ganesha, but they
have dropped out of active exploration.

> Regarding LUSTRE, I hadn't given it much practical thought, but I think
> stacking will work for us. We'd be using pure VFS calls and mostly just adding
> a few hooks around open. The only trick will be getting the subfsal to give us
> the opened fd somehow so we can do additional checks on it, but if we enforce
> the stacking order the way mdcache does (i.e. LUSTRE MUST be above VFS), then
> it should be possible.

For such intimate stacking, I see no issue with the stacked FSAL peeking into
the underlying FSAL's structures (the fsal_obj_handle, and the file-descriptor
extension to state_t, for what you need). In this case, stacking is being used
to create the equivalent of C++ inheritance (though the way MDCACHE stacks on
top would not be so easy to do with C++ inheritance...).
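
As a sketch of what that inheritance-style layout could look like (the
struct and field names here are hypothetical, loosely modeled on the
existing VFS handle; this is not existing code):

/* Hypothetical sketch only: a LUSTRE handle that wraps the VFS handle
 * and reaches into it, much like a derived class would.  Assumes the
 * VFS handle keeps its global fd at u.file.fd as in current FSAL_VFS. */
struct lustre_fsal_obj_handle {
	struct fsal_obj_handle obj_handle;      /* our own public handle */
	struct vfs_fsal_obj_handle *sub_handle; /* underlying VFS handle */
};

/* A hook around open could then inspect the fd the sub-FSAL opened. */
static fsal_status_t lustre_check_opened_fd(struct lustre_fsal_obj_handle *hdl)
{
	int fd = hdl->sub_handle->u.file.fd.fd; /* peek into VFS state */

	/* ... LUSTRE-specific checks on fd would go here ... */
	return fsalstat(ERR_FSAL_NO_ERROR, 0);
}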

> There is one point, though: we'd ideally want mdcache > lustre > vfs, so we'd
> have to do the stacking manually from C code and not let folks do the
> stacking in their config.

Yea, when we do things like this, we need to manage the stacking carefully.

Frank

