[Gluster-devel] Fwd: New Defects reported by Coverity Scan for gluster/glusterfs

2018-10-12 Thread Atin Mukherjee
Write-behind related changes introduced new defects.

-- Forwarded message -
From: 
Date: Fri, 12 Oct 2018 at 20:43
Subject: New Defects reported by Coverity Scan for gluster/glusterfs
To: 


Hi,

Please find the latest report on new defect(s) introduced to
gluster/glusterfs found with Coverity Scan.

2 new defect(s) introduced to gluster/glusterfs found with Coverity Scan.
3 defect(s), reported by Coverity Scan earlier, were marked fixed in the
recent build analyzed by Coverity Scan.

New defect(s) Reported-by: Coverity Scan
Showing 2 of 2 defect(s)


** CID 1396102:  Null pointer dereferences  (NULL_RETURNS)
/xlators/performance/write-behind/src/write-behind.c: 2474 in
wb_mark_readdirp_start()



*** CID 1396102:  Null pointer dereferences  (NULL_RETURNS)
/xlators/performance/write-behind/src/write-behind.c: 2474 in
wb_mark_readdirp_start()
2468 wb_mark_readdirp_start(xlator_t *this, inode_t *directory)
2469 {
2470 wb_inode_t *wb_directory_inode = NULL;
2471
2472 wb_directory_inode = wb_inode_create(this, directory);
2473
>>> CID 1396102:  Null pointer dereferences  (NULL_RETURNS)
>>> Dereferencing a null pointer "wb_directory_inode".
2474 if (!wb_directory_inode->lock.spinlock)
2475 return;
2476
2477 LOCK(&wb_directory_inode->lock);
2478 {
2479 GF_ATOMIC_INC(wb_directory_inode->readdirps);

** CID 1396101:  Null pointer dereferences  (NULL_RETURNS)
/xlators/performance/write-behind/src/write-behind.c: 2494 in
wb_mark_readdirp_end()



*** CID 1396101:  Null pointer dereferences  (NULL_RETURNS)
/xlators/performance/write-behind/src/write-behind.c: 2494 in
wb_mark_readdirp_end()
2488 {
2489 wb_inode_t *wb_directory_inode = NULL, *wb_inode = NULL, *tmp = NULL;
2490 int readdirps = 0;
2491
2492 wb_directory_inode = wb_inode_ctx_get(this, directory);
2493
>>> CID 1396101:  Null pointer dereferences  (NULL_RETURNS)
>>> Dereferencing a null pointer "wb_directory_inode".
2494 if (!wb_directory_inode->lock.spinlock)
2495 return;
2496
2497 LOCK(&wb_directory_inode->lock);
2498 {
2499 readdirps = GF_ATOMIC_DEC(wb_directory_inode->readdirps);
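
Both CIDs flag the same pattern: the return value of wb_inode_create() (or
wb_inode_ctx_get()) is dereferenced without a NULL check. A minimal sketch of
the kind of guard that would address both defects, assuming those helpers can
return NULL on failure (this is not the actual upstream patch):

```
/* Sketch only, not the upstream fix: bail out early if the wb_inode_t
 * could not be created/fetched, before touching its members. */
void
wb_mark_readdirp_start(xlator_t *this, inode_t *directory)
{
    wb_inode_t *wb_directory_inode = wb_inode_create(this, directory);

    /* wb_inode_create() may return NULL (e.g. on allocation failure) */
    if (!wb_directory_inode || !wb_directory_inode->lock.spinlock)
        return;

    LOCK(&wb_directory_inode->lock);
    {
        GF_ATOMIC_INC(wb_directory_inode->readdirps);
    }
    UNLOCK(&wb_directory_inode->lock);
}
```

The same `!wb_directory_inode` guard applies to wb_mark_readdirp_end(), where
the pointer comes from wb_inode_ctx_get().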



To view the defects in Coverity Scan visit,
https://u2389337.ct.sendgrid.net/wf/click?upn=08onrYu34A-2BWcWUl-2F-2BfV0V05UPxvVjWch-2Bd2MGckcRZBK54bFWohdObZ6wlkeK264nDC24cnLwH4MTOSDXRjQcO27-2F6DmQXPB4g4Mz-2BEJJ0-3D_MGdSxOtVesORpvKsy8XkEUz8gK23WuwInCh-2FVRcDCRGI3dzUd2Ukeqo7jOkDVtDwdofsVY7aGvZQg7zRE31MpIpZfuKb72GMUDqgUubcYrIu5oXcyupFTk-2BbhUXFdLHUSfe4AbOPNeG8BbDwGUW1v07zqQu8VKIaMFyP-2BoYbiYsfmt7-2FPg8uG5gutfCHZL61I0rptYdI3rhGJ6h55uDbGL4twf-2Fi-2F-2FuWXuVz4tE-2BiLw-3D

  To manage Coverity Scan email notifications for "
atin.mukherje...@gmail.com", click
https://u2389337.ct.sendgrid.net/wf/click?upn=08onrYu34A-2BWcWUl-2F-2BfV0V05UPxvVjWch-2Bd2MGckcRbVDbis712qZDP-2FA8y06Nq4F4Na18V6TzekbRgLfnxbftCtNrSI0AdVE2H7Oze59ZO0QossEy3LBj8V8EoFBmLcCGWfAfPSpkvjpvSyEnHW4SE-2Fd5u6fIUaVdSUke9RseU-3D_MGdSxOtVesORpvKsy8XkEUz8gK23WuwInCh-2FVRcDCRGI3dzUd2Ukeqo7jOkDVtDwdofsVY7aGvZQg7zRE31MpHd29gt9gjsKh3qRlC2RNFOu5d1QLlY3kA1t3-2BZa7JxqLa9L0-2FbeQCY21g0-2BWD9nj7BVPc7SCSBZSdLtNp0BxH2zEpj2wcPymqs8Yua6j-2FpBNb5CGYrqE-2F1elotYuozHtizG6MZ7T8-2FFr6hkCGYystU-3D

-- 
--Atin

[Gluster-devel] Gluster Monitoring using Prometheus - Status Update

2018-10-12 Thread Aravinda
## Quick start:

```
cd $GOPATH/src/github.com/gluster
git clone https://github.com/gluster/gluster-prometheus.git
cd gluster-prometheus
PREFIX=/usr make
PREFIX=/usr make install

# Enable and start the service
systemctl enable gluster-exporter
systemctl start gluster-exporter
```
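
Once the service is running, the exporter can be sanity-checked by fetching
its metrics endpoint over HTTP; the port below is an assumption, so check the
exporter configuration for the actual listen port:

```
curl http://localhost:9713/metrics
```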

Note: By default the exporter collects metrics using `glusterd` and
the `gluster` CLI. Configure `/etc/gluster-exporter/global.conf` to use
it with glusterd2.
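
For illustration only, switching to glusterd2 might look roughly like this;
the key names below are assumptions (based on the configuration support added
in pull/24), so verify them against the `global.conf` shipped by `make install`:

```
# /etc/gluster-exporter/global.conf -- illustrative sketch; the key names
# are assumptions, check the installed file for the real ones.
[globals]
gluster-mgmt = "glusterd2"
gd2-rest-endpoint = "http://127.0.0.1:24007"
```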


## Completed

- All the supported metrics now work with both `glusterd` and
  `glusterd2`. Volume info from glusterd is upgraded to include
  sub-volume details and to match glusterd2's volume info. This
  also makes it easy to capture sub-volume related metrics such as
  sub-volume utilization.
  (https://github.com/gluster/gluster-prometheus/pull/35)

- Configuration support was added for glusterd/glusterd2 related
  settings. By default the exporter collects metrics from `glusterd`;
  update the configuration file (`/etc/gluster-exporter/global.conf`)
  to change this.
  (https://github.com/gluster/gluster-prometheus/pull/24)

- All metrics collectors are enabled by default; individual metrics can
  be disabled by updating the `/etc/gluster-exporter/collectors.conf`
  file and restarting `gluster-exporter`
  (https://github.com/gluster/gluster-prometheus/pull/24)

- `gluster-exporter` can be managed as a `systemd` service. Once
  installed, it can be enabled and started using `systemctl enable
  gluster-exporter` and `systemctl start gluster-exporter`
  (https://github.com/gluster/gluster-prometheus/pull/37)
  
- Installation and setup instructions are updated in the README file.
  (https://github.com/gluster/gluster-prometheus/pull/40 and
  https://github.com/gluster/gluster-prometheus/pull/35)

- `pkg/glusterutils` package is introduced, which collects the
  required information from both `glusterd` and `glusterd2`. Metrics
  collectors need not worry about handling it for `glusterd` and
  `glusterd2`. For example, `glusterutils.VolumeInfo` internally
  handles glusterd/glusterd2 based on the configuration and provides
  a uniform interface to metrics
  collectors. (https://github.com/gluster/gluster-prometheus/pull/35)
  

## In-progress

- RPM generation scripts - Currently the Prometheus exporter can only be
  installed from source (`PREFIX=/usr make` and `PREFIX=/usr
  make install`). An RPM spec file helps to generate the RPM and to
  integrate with GCS and with centos-ci.
  (https://github.com/gluster/gluster-prometheus/pull/26)
  
- Understanding Prometheus Operator - A POC project was started to try
  out the Prometheus Operator. Theoretically, the Prometheus Operator can
  detect the pods/containers which are annotated with `prometheus.io/scrape:
  "true"` (see the sketch after this list). A custom `Dockerfile` was
  created to experiment with the Prometheus Operator until the RPM spec
  file related changes merge and the RPM is included in the official
  Gluster container.
  (https://github.com/gluster/gluster-prometheus/pull/48)

- Gluster interface - Code is refactored to support the glusterd and
  glusterd2 compatibility feature more easily.
  (https://github.com/gluster/gluster-prometheus/pull/47)

- Ongoing metrics collectors - Volume count and brick disk I/O related
  metrics PRs are under review.
  (https://github.com/gluster/gluster-prometheus/pull/22 and
  https://github.com/gluster/gluster-prometheus/pull/15)

- A PR related to selecting the leader node/peer is under review. This
  feature will become the foundation for sending cluster-level metrics
  only from the leader node.
  (https://github.com/gluster/gluster-prometheus/pull/38)
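
As referenced in the Prometheus Operator item above, a hedged sketch of what
the scrape annotations on a pod could look like; the port and image name are
assumptions rather than values taken from the project:

```
apiVersion: v1
kind: Pod
metadata:
  name: gluster-server
  annotations:
    prometheus.io/scrape: "true"   # the annotation mentioned above
    prometheus.io/port: "9713"     # assumed exporter port -- verify locally
spec:
  containers:
  - name: gluster-exporter
    image: gluster/gluster-prometheus:latest   # hypothetical image name
```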



Install and Usage guide:
https://github.com/gluster/gluster-prometheus/blob/master/README.adoc

Project repo: https://github.com/gluster/gluster-prometheus

-- 
regards
Aravinda
(on behalf of gluster-prometheus Team)



Re: [Gluster-devel] [NFS-Ganesha-Devel] Re: Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-12 Thread Soumya Koduri



On 10/12/18 5:55 PM, Kinglong Mee wrote:
> On 2018/10/12 14:34, Soumya Koduri wrote:
>> On 10/12/18 7:22 AM, Kinglong Mee wrote:
>>> On 2018/10/11 19:09, Soumya Koduri wrote:
>>>> NFS-Ganesha's md-cache layer already does extensive caching of attributes and
>>>> ACLs of each file looked upon. Do you see any additional benefit with turning
>>>> on gluster md-cache as well?  More replies inline..
>>>
>>> Yes, I think.
>>>
>>> The logic is different between md-cache and ganesha's cache:
>>> Ganesha's caching of xattr data depends on a timeout; on timeout, ganesha gets
>>> it from the back-end glusterfs.
>>> md-cache depends on a timeout too, but md-cache can delay the timeout for
>>> some cases.
>>
>> Could you please list out which xattrs fall into that category. AFAIK, like
>> iatt, even all xattrs are invalidated post timeout in the md-cache xlator.
>
> The iatt's expire time is ->ia_time + timeout; the xattr's expire time is
> ->xa_time + timeout.
> Most FOP replies (read/write/truncate...) contain the postattr incidentally
> and update ->ia_time.
> But ->xa_time is not updated the way ->ia_time is; currently it is only
> updated in mdc_lookup_cbk.
>
> I will add a case that updates ->xa_time when updating ->ia_time, if the
> file's mtime/ctime does not change.
> And I will let the replies of stat/fstat/setattr/fsetattr/getxattr/fgetxattr
> contain the xattrs incidentally and update ->xa_time too.

+1

>> By turning on both these caches, the process shall consume more memory and
>> CPUs to store and invalidate the same set of attributes at two different
>> layers, right? Do you see much better performance when compared to that cost?
>
> I think md-cache support for caching virtual ACLs is a functional update;
> ganesha can use it via the newly added option, or not.
>
> If a smaller timeout is set for ganesha's cache, it times out frequently,
> but md-cache can return the cached xattr.
>
> I do not have comparison data between them.
> After testing, we can choose the better combination:
> 1. md-cache on and ganesha's cache on;
> 2. md-cache on but ganesha's cache off;
> 3. md-cache off but ganesha's cache on.

Yes. Please share your findings if you get to test these combinations.

Thanks,
Soumya

Re: [Gluster-devel] OUTREACHY (Extend Python and Go bindings for libgfapi library in GlusterFS)

2018-10-12 Thread Prashanth Pai
Hi Anshul,

You'll find all the details on the project page:
https://www.outreachy.org/communities/cfp/gluster/project/extend-python-and-go-bindings-for-libgfapi-library/

You'll have to complete the application period tasks before taking up issues
from the project.
Once you are done with the application period tasks, please send the
details (screenshots, programs) to one or more of the co-mentors.

On Fri, Oct 12, 2018 at 5:35 PM Anshul tale  wrote:

> Hi,
>
> I would like to work on the project "Extend Python and Go bindings for
> libgfapi library in GlusterFS". Can you please give me some issues to work
> on?
>
> Thanks
> Anshul Tale
>


-- 
- Prashanth Pai

Re: [Gluster-devel] [NFS-Ganesha-Devel] Re: Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-12 Thread Kinglong Mee
On 2018/10/12 14:34, Soumya Koduri wrote:
> On 10/12/18 7:22 AM, Kinglong Mee wrote:
>> On 2018/10/11 19:09, Soumya Koduri wrote:
>>> NFS-Ganesha's md-cache layer already does extensive caching of attributes 
>>> and ACLs of each file looked upon. Do you see any additional benefit with 
>>> turning on gluster md-cache as well?  More replies inline..
>>
>> Yes, I think.
>>
>> The logic is different between md-cache and ganesha's cache:
>> Ganesha's caching of xattr data depends on a timeout; on timeout, ganesha gets it
>> from the back-end glusterfs.
>> md-cache depends on a timeout too, but md-cache can delay the timeout
>> for some cases.
> 
> Could you please list out which xattrs fall into that category. AFAIK, like 
> iatt, even all xattrs are invalidated post timeout in md-cache xlator.

The iatt's expire time is ->ia_time + timeout; the xattr's expire time is
->xa_time + timeout.
Most FOP replies (read/write/truncate...) contain the postattr incidentally
and update ->ia_time.
But ->xa_time is not updated the way ->ia_time is; currently it is only updated
in mdc_lookup_cbk.

I will add a case that updates ->xa_time when updating ->ia_time, if the file's
mtime/ctime does not change.
And I will let the replies of stat/fstat/setattr/fsetattr/getxattr/fgetxattr
contain the xattrs incidentally and update ->xa_time too.
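
[A hypothetical sketch of the expiry logic described above; the struct and
helper names are illustrative stand-ins, not the actual md-cache symbols:]

```
#include <stdbool.h>
#include <time.h>

/* Illustrative stand-in for the md-cache per-inode context. */
struct mdc_ctx {
    time_t ia_time; /* last iatt refresh: bumped by most FOP replies */
    time_t xa_time; /* last xattr refresh: today only bumped in mdc_lookup_cbk */
};

/* iatt is served from cache until ia_time + timeout. */
static bool
mdc_iatt_valid(const struct mdc_ctx *ctx, time_t timeout)
{
    return time(NULL) < ctx->ia_time + timeout;
}

/* xattrs are served from cache until xa_time + timeout; the proposal is
 * to also bump xa_time wherever ia_time is bumped, provided the file's
 * mtime/ctime did not change. */
static bool
mdc_xatt_valid(const struct mdc_ctx *ctx, time_t timeout)
{
    return time(NULL) < ctx->xa_time + timeout;
}
```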

> By turning on both these caches, the process shall consume more memory and
> CPUs to store and invalidate the same set of attributes at two different
> layers, right? Do you see much better performance when compared to that cost?

I think md-cache support for caching virtual ACLs is a functional update;
ganesha can use it via the newly added option, or not.

If a smaller timeout is set for ganesha's cache, it times out frequently,
but md-cache can return the cached xattr.

I do not have comparison data between them.
After testing, we can choose the better combination:
1. md-cache on and ganesha's cache on;
2. md-cache on but ganesha's cache off;
3. md-cache off but ganesha's cache on.

thanks,
Kinglong Mee

> 
> Thanks,
> Soumya
> 
>>
>>>
>>> On 10/11/18 7:47 AM, Kinglong Mee wrote:
>>>> Cc nfs-ganesha,
>>>>
>>>> Md-cache has an option "cache-posix-acl" that controls caching of posix ACLs
>>>> ("system.posix_acl_access"/"system.posix_acl_default") and virtual
>>>> glusterfs ACLs
>>>> ("glusterfs.posix.acl"/"glusterfs.posix.default_acl") now.
>>>>
>>>> But _posix_xattr_get_set does not fill virtual glusterfs ACLs when handling
>>>> lookup requests.
>>>> So, md-cache caches bad virtual glusterfs ACLs.
>>>>
>>>> After I turned on the "cache-posix-acl" option to cache ACLs at md-cache,
>>>> the nfs client got many EIO errors.
>>>>
>>>> https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/427305
>>>>
>>>> There are two choices for caching virtual glusterfs ACLs in md-cache:
>>>> 1. Cache it separately as posix ACLs (a new option, maybe
>>>> "cache-glusterfs-acl", is added);
>>>>    And make sure _posix_xattr_get_set fills them when handling lookup requests.

>>>
>>> I am not sure if posix layer can handle it. Virtual xattrs are in-memory 
>>> and not stored on disk. They are converted to/from posix-acl in posix-acl 
>>> xlator. So FWIU, posix-acl xlator should handle setting these attributes as 
>>> part of LOOKUP response if needed. Same shall apply for any virtual xattr 
>>> cached in md-cache. Request Poornima to comment.
>>
>> Posix-acl can handle it correctly now.
>>
>>>
>>> At a time, any gfapi consumer would use either posix-acl or virtual 
>>> glusterfs ACLs. So having two options to selectively choose which one of 
>>> them to cache sounds better to me instead of unnecessarily storing two 
>>> different representations of the same ACL.
>>
>> Makes sense.
>> I will add another option for virtual glusterfs ACLs in md-cache.
>>
>> thanks,
>> Kinglong Mee
>>
>>>
>>> Thanks,
>>> Soumya
>>>
>>>> 2. Does not cache it; only cache posix ACLs;
>>>>    If gfapi requests it, md-cache looks up the corresponding posix ACL in the
>>>>    cache; if it exists, it builds the virtual glusterfs ACL locally and returns
>>>>    it to gfapi; otherwise, it sends the request to glusterfsd.
>>>>
>>>> Virtual glusterfs ACLs are another format of posix ACLs; they are larger
>>>> than posix ACLs,
>>>> and always exist no matter whether the real posix ACL exists or not.
>>>
>>>> So, I'd prefer #2.
>>>> Any comments are welcome.
>>>>
>>>> thanks,
>>>> Kinglong Mee


[Gluster-devel] OUTREACHY (Extend Python and Go bindings for libgfapi library in GlusterFS)

2018-10-12 Thread Anshul tale
Hi,

I would like to work on the project "Extend Python and Go bindings for
libgfapi library in GlusterFS". Can you please give me some issues to work
on?

Thanks
Anshul Tale

[Gluster-devel] FOSDEM Call for Participation: Software Defined Storage devroom

2018-10-12 Thread Niels de Vos
CfP for the Software Defined Storage devroom at FOSDEM 2019 (Brussels,
Belgium, February 3rd).

FOSDEM is a free software event that offers open source communities a
place to meet, share ideas and collaborate. It is renowned for being
highly developer-oriented and brings together 8000+ participants from
all over the world. It is held in the city of Brussels (Belgium).

FOSDEM 2019 will take place during the weekend of February 2nd-3rd 2019.
More details about the event can be found at http://fosdem.org/

** Call For Participation

The Software Defined Storage devroom will go into its third round for
talks around Open Source Software Defined Storage projects, management
tools and real world deployments.

Presentation topics could include but are not limited to:

- Your work on a SDS project like Ceph, Gluster, OpenEBS or LizardFS

- Your work on or with SDS related projects like SWIFT or Container
  Storage Interface

- Management tools for SDS deployments

- Monitoring tools for SDS clusters

** Important dates:

- Nov 25th 2018:  submission deadline for talk proposals
- Dec 17th 2018:  announcement of the final schedule
- Feb  3rd 2019:  Software Defined Storage dev room

Talk proposals will be reviewed by a steering committee:
- Niels de Vos (Gluster Developer - Red Hat)
- Jan Fajerski (Ceph Developer - SUSE)
- other volunteers TBA

Use the FOSDEM 'pentabarf' tool to submit your proposal:
https://penta.fosdem.org/submission/FOSDEM19

- If necessary, create a Pentabarf account and activate it.
  Please reuse your account from previous years if you have already
  created it.

- In the "Person" section, provide First name, Last name
  (in the "General" tab), Email (in the "Contact" tab) and Bio
  ("Abstract" field in the "Description" tab).

- Submit a proposal by clicking on "Create event".

- Important! Select the "Software Defined Storage devroom" track (on the
  "General" tab).

- Provide the title of your talk ("Event title" in the "General" tab).

- Provide a description of the subject of the talk and the intended
  audience (in the "Abstract" field of the "Description" tab)

- Provide a rough outline of the talk or goals of the session (a short
  list of bullet points covering topics that will be discussed) in the
  "Full description" field in the "Description" tab

- Provide an expected length of your talk in the "Duration" field. Please
  budget at least 10 minutes of discussion into your proposal and allow
  5 minutes for the handover to the next presenter.
  Suggested talk lengths are 20+10 and 45+15 minutes.

** Recording of talks

The FOSDEM organizers plan to have live streaming and recording fully
working, both for remote/later viewing of talks, and so that people can
watch streams in the hallways when rooms are full. This requires
speakers to consent to being recorded and streamed. If you plan to be a
speaker, please understand that by doing so you implicitly give consent
for your talk to be recorded and streamed. The recordings will be
published under the same license as all FOSDEM content (CC-BY).

Hope to hear from you soon! And please forward this announcement.

If you have any further questions, please write to the mailing list at
storage-devr...@lists.fosdem.org and we will try to answer as soon as
possible.

Thanks!


Re: [Gluster-devel] [NFS-Ganesha-Devel] Re: Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-12 Thread Soumya Koduri



On 10/12/18 12:04 PM, Soumya Koduri wrote:
>>> 1. Cache it separately as posix ACLs (a new option, maybe
>>> "cache-glusterfs-acl", is added);
>>>    And make sure _posix_xattr_get_set fills them when handling lookup requests.
>>
>> I am not sure if the posix layer can handle it. Virtual xattrs are
>> in-memory and not stored on disk. They are converted to/from posix-acl
>> in the posix-acl xlator. So FWIU, the posix-acl xlator should handle setting
>> these attributes as part of the LOOKUP response if needed. The same shall
>> apply for any virtual xattr cached in md-cache. Request Poornima to
>> comment.
>
> Posix-acl can handle it correctly now.

Okay.

>> At a time, any gfapi consumer would use either posix-acl or virtual
>> glusterfs ACLs. So having two options to selectively choose which one
>> of them to cache sounds better to me instead of unnecessarily storing
>> two different representations of the same ACL.
>
> Makes sense.
> I will add another option for virtual glusterfs ACLs in md-cache.

Cool, thanks!

-Soumya

> thanks,
> Kinglong Mee


Re: [Gluster-devel] [NFS-Ganesha-Devel] Re: Problems about cache virtual glusterfs ACLs for ganesha in md-cache

2018-10-12 Thread Soumya Koduri



On 10/12/18 7:22 AM, Kinglong Mee wrote:
> On 2018/10/11 19:09, Soumya Koduri wrote:
>> NFS-Ganesha's md-cache layer already does extensive caching of attributes and
>> ACLs of each file looked upon. Do you see any additional benefit with turning
>> on gluster md-cache as well?  More replies inline..
>
> Yes, I think.
>
> The logic is different between md-cache and ganesha's cache:
> Ganesha's caching of xattr data depends on a timeout; on timeout, ganesha gets
> it from the back-end glusterfs.
> md-cache depends on a timeout too, but md-cache can delay the timeout for
> some cases.

Could you please list out which xattrs fall into that category. AFAIK,
like iatt, even all xattrs are invalidated post timeout in the md-cache xlator.

By turning on both these caches, the process shall consume more memory
and CPUs to store and invalidate the same set of attributes at two different
layers, right? Do you see much better performance when compared to that
cost?

Thanks,
Soumya

>> On 10/11/18 7:47 AM, Kinglong Mee wrote:
>>> Cc nfs-ganesha,
>>>
>>> Md-cache has an option "cache-posix-acl" that controls caching of posix ACLs
>>> ("system.posix_acl_access"/"system.posix_acl_default") and virtual glusterfs
>>> ACLs
>>> ("glusterfs.posix.acl"/"glusterfs.posix.default_acl") now.
>>>
>>> But _posix_xattr_get_set does not fill virtual glusterfs ACLs when handling
>>> lookup requests.
>>> So, md-cache caches bad virtual glusterfs ACLs.
>>>
>>> After I turned on the "cache-posix-acl" option to cache ACLs at md-cache,
>>> the nfs client got many EIO errors.
>>>
>>> https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/427305
>>>
>>> There are two choices for caching virtual glusterfs ACLs in md-cache:
>>> 1. Cache it separately as posix ACLs (a new option, maybe
>>> "cache-glusterfs-acl", is added);
>>>    And make sure _posix_xattr_get_set fills them when handling lookup requests.
>>
>> I am not sure if the posix layer can handle it. Virtual xattrs are in-memory
>> and not stored on disk. They are converted to/from posix-acl in the posix-acl
>> xlator. So FWIU, the posix-acl xlator should handle setting these attributes
>> as part of the LOOKUP response if needed. The same shall apply for any virtual
>> xattr cached in md-cache. Request Poornima to comment.
>
> Posix-acl can handle it correctly now.
>
>> At a time, any gfapi consumer would use either posix-acl or virtual glusterfs
>> ACLs. So having two options to selectively choose which one of them to cache
>> sounds better to me instead of unnecessarily storing two different
>> representations of the same ACL.
>
> Makes sense.
> I will add another option for virtual glusterfs ACLs in md-cache.
>
> thanks,
> Kinglong Mee
>
>> Thanks,
>> Soumya
>
>>> 2. Does not cache it; only cache posix ACLs;
>>>    If gfapi requests it, md-cache looks up the corresponding posix ACL in the
>>>    cache; if it exists, it builds the virtual glusterfs ACL locally and returns
>>>    it to gfapi; otherwise, it sends the request to glusterfsd.
>>>
>>> Virtual glusterfs ACLs are another format of posix ACLs; they are larger than
>>> posix ACLs, and always exist no matter whether the real posix ACL exists or not.
>>>
>>> So, I'd prefer #2.
>>> Any comments are welcome.
>>>
>>> thanks,
>>> Kinglong Mee







___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel