Hi,

A similar issue was reported on the nfs-ganesha GitHub tracker [1]. As 
mentioned there, an upcall thread (actively polling in a loop) is spawned 
for every export, which is likely what is consuming the CPU. A couple of 
optimizations are needed here -

* Make this behavior optional by checking the value of the existing 
'clustered' config option. Currently that option only takes effect for 
clusters with node-IDs defined, which is not the case for Gluster; that 
needs to be fixed first (a sample setting is sketched after this list).

or

* Instead of continuously polling, define and register a callback routine 
to receive notifications from the backend cluster. This seems the better 
approach, since not all upcalls may be tied to the clustered configuration 
(a rough sketch of this also follows the list).
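
For the first option, the toggle would presumably live in ganesha.conf. A 
minimal sketch, assuming the existing 'Clustered' knob sits under 
NFS_CORE_PARAM (the block name and default here are my assumption, to be 
confirmed against the shipped config templates):

    NFS_CORE_PARAM {
            # Assumption: once the node-ID limitation is fixed, setting
            # this to false on a standalone Gluster setup would skip the
            # per-export upcall polling threads.
            Clustered = false;
    }

For the second option, below is a very rough, self-contained C sketch of 
the control flow we would want: the FSAL registers a handler once and only 
runs when the backend delivers a notification, instead of every export 
spinning in its own polling thread. backend_register_upcall() and the 
upcall_event structure are stand-ins made up for illustration; they are 
not existing gfapi or Ganesha interfaces.

    /* Sketch only: stubs stand in for the Gluster notification source. */
    #include <stdio.h>

    /* Hypothetical upcall event handed to the FSAL. */
    struct upcall_event {
            const char *export_path;
            int         reason;          /* e.g. cache invalidation */
    };

    /* Hypothetical callback type the FSAL would register. */
    typedef void (*upcall_cbk)(const struct upcall_event *ev, void *priv);

    static upcall_cbk registered_cbk;
    static void *registered_priv;

    /* Stub for the backend's "register a callback" entry point.  In the
     * real fix this would be exposed by the cluster backend and called
     * from its notification path, replacing the per-export poller. */
    static void backend_register_upcall(upcall_cbk cbk, void *priv)
    {
            registered_cbk = cbk;
            registered_priv = priv;
    }

    /* FSAL-side handler: runs only when an event actually arrives, so no
     * thread burns CPU looping between events. */
    static void fsal_upcall_handler(const struct upcall_event *ev, void *priv)
    {
            (void)priv;
            printf("upcall on %s, reason %d\n", ev->export_path, ev->reason);
    }

    int main(void)
    {
            backend_register_upcall(fsal_upcall_handler, NULL);

            /* Simulate the backend delivering one notification. */
            struct upcall_event ev = { "/gv0", 1 };
            registered_cbk(&ev, registered_priv);
            return 0;
    }

The point of the sketch is only the shape of the flow (register once, 
react on delivery); the real work is wiring such a hook through gfapi and 
the FSAL_GLUSTER upcall code.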

We just haven't had time to work on these items yet. Please feel free to 
file bugs against the nfs-ganesha/FSAL_GLUSTER Bugzilla component and 
assign them to me. I will try to target this for the Ganesha 2.5 release.

Thanks,
Soumya


[1] https://github.com/nfs-ganesha/nfs-ganesha/issues/124

On 11/03/2016 05:30 PM, Nathan Madpsy wrote:
> (I don't have the message ID of the original so this will be a new thread.)
>
> Original Message:
> https://www.gluster.org/pipermail/gluster-users.old/2016-October/028892.html
>
>
> I too am seeing ~10% CPU usage per Gluster export when using Ganesha NFS. 
> This occurs straight after the process starts and the export does not need to 
> be mounted to see the problem.
>
> Ubuntu 16.04
>
> gluster 3.8.5-ubuntu1~xenial1
>
> nfs-ganesha 2.4.1-ubuntu1~xenial5
>
> I confirmed (as expected, given this is a userspace NFS server after all) 
> that the CPU time is not spent in sys, so strace is of no use here.
>
>
> My (test) Ganesha config is as follows:
>
> EXPORT {
>         Export_Id = 2;
>         Path = /gv0;
>         Access_type = RW;
>         Squash = No_root_squash;
>         Disable_ACL = TRUE;
>         Pseudo = /gv0;
>         Protocols = "4";
>         SecType = "sys";
>         FSAL {
>                 name = GLUSTER;
>                 hostname = localhost;
>                 volume = gv0;
>         }
> }
>
>
> --
> Nathan
>
